Strange transaction-id behaviour? (was Re: [GENERAL] Two updates problem)
Yuri B. Lukyanov wrote:
> I have table: and function: But this thing don't work:
>   UPDATE test SET text2='test' WHERE id = (SELECT test1());
>   (rows affected: 0)
> Why? There is two updates on the same row, but work only first update
> (in the function). Maybe it's bug?

Hmm - PostgreSQL has a transaction counter that is used to track which rows your current command can see. I think the function is incrementing the transaction ID of the row your main update is trying to access. So, after getting our "2" to compare "id" to, there is no matching row *visible to the original transaction ID* - it finds no matches and does no update.

I'm not sure it's sensible to have the update in the WHERE clause - I don't know that you can depend on how many times that function will be called.

On the other hand, I wouldn't like to say this is the right behaviour - I'm cc:ing this to the hackers list so they can take a look at it.

PS - I used the following to test:

  BEGIN;

  CREATE TABLE foo (a int4, b text);
  INSERT INTO foo VALUES (1,'aaa');
  INSERT INTO foo VALUES (2,'bbb');
  CREATE TABLE bar (a int4, b text);
  INSERT INTO bar VALUES (1,'ccc');
  INSERT INTO bar VALUES (2,'ddd');

  CREATE FUNCTION foo_func() RETURNS int4 AS '
  BEGIN
    UPDATE foo SET b = b || ''X'' WHERE a = 2;
    UPDATE bar SET b = b || ''X'' WHERE a = 2;
    RETURN 2;
  END;
  ' LANGUAGE plpgsql;

  -- UPDATE foo SET b = b || 'Y' WHERE a <= (SELECT foo_func());
  UPDATE foo SET b = b || 'Y' WHERE a <= (SELECT foo_func());

  SELECT * FROM foo;
  SELECT * FROM bar;

  ROLLBACK;

-- 
  Richard Huxton
  Archonet Ltd
Re: [GENERAL] how do you set foreign keys in pgaccess?
Hi, Jeremy,

Have you tried pgAdmin III? It's much easier to use.

To input a script, you can use psql:

  psql -h HOSTNAME -d DBNAME -f FILENAME -U USERNAME

Changyu

--- jeremy ` <[EMAIL PROTECTED]> wrote:
> I am a bit of a newbie to postgres, but I managed to install 8.0.4 on
> my windows box and it mostly appears to be working fine. I can set a
> primary key constraint, but when I try to set the foreign key it
> requires a 'reference' - but there is nothing there to choose from.
>
> I also have another query: how do you export the database tables and
> relationships etc? I found a pg_dump command in help and I am guessing
> this is what I need, but where do I enter this command in pgaccess?
>
> Also, how do you input an existing SQL script into postgres?
>
> Thanks for any help.
> Jeremy
Re: [GENERAL] how do you set foreign keys in pgaccess?
jeremy ` wrote:
> I am a bit of a newbie to postgres, but I managed to install 8.0.4 on
> my windows box and it mostly appears to be working fine. I can set a
> primary key constraint, but when I try to set the foreign key it
> requires a 'reference' - but there is nothing there to choose from.
>
> I also have another query: how do you export the database tables and
> relationships etc? I found a pg_dump command in help and I am guessing
> this is what I need, but where do I enter this command in pgaccess?
>
> Also, how do you input an existing SQL script into postgres?

Well, pg_dump and psql are command-line tools that are very flexible parts of your toolkit. They are described quite extensively in the manuals, and these should have been installed as HTML on your system.

Also - are you sure you're using pgaccess and not pgadmin? Both are graphical tools for PG, but I thought pgadmin was installed by default on Windows.

-- 
  Richard Huxton
  Archonet Ltd
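As a rough sketch of how the two tools fit together (the database names "mydb" and "newdb" here are just placeholders):

  # export one database as a plain SQL script
  pg_dump -h HOSTNAME -U USERNAME mydb > mydb.sql

  # replay that script into another (already created) database
  psql -h HOSTNAME -U USERNAME -d newdb -f mydb.sql

Both commands are run from a shell or command prompt, not from inside pgaccess.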
Re: [GENERAL] Backup Compatibility between minor versions.
Thanks for the response Doug.

I am frightened to upgrade the linux machine to 8.0.3 at the moment because the server is live, and I want to make sure that 8.0.3 will fix it. I have extracted the relevant parts of the restore as follows:

1. The restore command:

  pg_restore.exe -i -h 10.202.6.195 -p 5432 -U postgres -d export -l "backup9.backup"

2. The first error in the output:

  pg_restore: creating FUNCTION dex_init(text)
  pg_restore: [archiver (db)] Error while PROCESSING TOC:
  pg_restore: [archiver (db)] Error from TOC entry 25; 1255 72303 FUNCTION dex_init(text) postgres
  pg_restore: [archiver (db)] could not execute query: ERROR: unsafe use of INTERNAL pseudo-type
  DETAIL: A function returning "internal" must have at least one "internal" argument.
  Command was: CREATE FUNCTION dex_init(text) RETURNS internal AS '$libdir/tsearch2', 'dex_init' LANGUAGE c;

This was installed to a clean database on XP with no tsearch2 installed. To me it looks as if there has been some modification in the use of INTERNAL from 8.0.1 to 8.0.3. FYI, here is a dump of that tsearch2 function:

  CREATE OR REPLACE FUNCTION dex_init(text)
    RETURNS internal
    AS '$libdir/tsearch2', 'dex_init'
    LANGUAGE 'c' VOLATILE;
  ALTER FUNCTION dex_init(text) OWNER TO postgres;

Is it possible to change that function to be compatible with both 8.0.1 and 8.0.3 - or am I looking in the wrong place?

Howard
www.selestial.com

Doug wrote:
> I would suggest upgrading to 8.0.3 on the linux machine, just to make
> sure you get all pg_dump bugfixes, then retrying the dump/restore.
> AFAIK there's no reason it shouldn't work. If you still have trouble,
> post the exact dump and restore commands you're using and the exact
> error message you get.
>
> -Doug
Re: [GENERAL] vulnerability/SSL
On Wed, 2005-06-08 at 10:00 -0700, dong changyu wrote:
> Hi,
> A possible countermeasure on Windows platform, inspired by Magnus.
> Thanks ;)
> First we remove the passphrase from the key file, making it plain.
> Windows provides a feature, "encrypted file system" (EFS), which
> provides transparent encryption/decryption. We can log on using the
> account we run Postgres with and encrypt the plaintext key file. Then
> we log on using another non-admin account and start Postgres using the
> "runas" service. Therefore the file is encrypted; only the Postgres
> account and the recovery agent (built-in administrator by default) can
> read/modify it. The file will remain encrypted when restored from
> backup.
> I've tested it on my computer and it works.
>
> cheers,
> Changyu

You mean that every process that runs as "postgres" has the ability to read the file _without typing any password_? Or when you start PostgreSQL it prompts for one? Can "administrator" read it _without knowing the password_?

I may be missing something, but what's the difference with a file like this:

  -r-------- 1 postgres postgres 50 Jan 15 21:15 akey

in any Unix system? Only "postgres" and "root" can read it.

How about backups? Does the backup process (I assume it runs as administrator) store the key in cleartext?

.TM.
-- 
Marco Colombo
Technical Manager
ESI s.r.l.
[EMAIL PROTECTED]
Re: [GENERAL] Backup Compatibility between minor versions.
Interestingly, this is the latest tsearch2 function that ships with 8.0.3 - note the slightly different signature to the one below. It looks like the compatibility issue is caused by this. I shall try to modify the function in the 8.0.1 database, then try to restore it to an 8.0.3 server and see what happens!

Howard Cole
www.selestial.com

  CREATE OR REPLACE FUNCTION dex_init(internal)
    RETURNS internal
    AS '$libdir/tsearch2', 'dex_init'
    LANGUAGE 'c' VOLATILE;
  ALTER FUNCTION dex_init(internal) OWNER TO postgres;

Howard Cole wrote:
> CREATE OR REPLACE FUNCTION dex_init(text)
>   RETURNS internal
>   AS '$libdir/tsearch2', 'dex_init'
>   LANGUAGE 'c' VOLATILE;
> ALTER FUNCTION dex_init(text) OWNER TO postgres;
>
> Is it possible to change that function to be compatible with both
> 8.0.1 and 8.0.3 - or am I looking in the wrong place?
Re: [GENERAL] postgres and ggobi/xgvis
On Jun 8, 2005, at 7:39 PM, Hrishikesh Deshmukh wrote:
> Hi All,
> How easy or difficult is it to get ggobi/xgvis working with
> postgresql?! Is it possible to write a query and send the output
> straight to ggobi/xgvis without much work? Any pointers.

I would think that you would need to construct the input file for ggobi from your query output. There are many ways to do this, I would suppose, but if I were to do it, I would make a perl script that accepts some parameters, does the SQL query, and outputs the results with appropriate formatting to a file. Then call ggobi with that file as input. Pretty simple, but you still have to do the work of determining the file format for ggobi and generating that file format on your own.

As an alternative, you could use a system like R (http://www.r-project.org) that has plugins allowing both postgres access and access to ggobi.

Sean
Re: [GENERAL] vulnerability/SSL
Hi Marco,

The problem I described in the first mail is that, for some unknown reason, if you save the server.key file with a passphrase, you will be prompted to enter the passphrase every time you start the server AND every time a client makes a connection. That effectively forbids us from using a passphrase to protect the key file, so the key file has to be saved in plaintext, without encryption.

EFS is a feature provided by Windows which will encrypt any selected file using a symmetric algorithm; the symmetric key is encrypted with the user's public key and the recovery agent's public key, and the encrypted key is saved within the file header. Thus only the user and the recovery agent can decrypt it. Another user cannot even open it (but can delete it). So we can ensure no one else can read or modify it. Decryption is transparent to users and applications: the operating system does it automatically if it can find the appropriate private key.

The difference between this and

  -r-------- 1 postgres postgres 50 Jan 15 21:15

is that the file is encrypted using EFS, while the latter remains plaintext. When you back up the file, it remains encrypted. If you restore the file to a file system which doesn't support EFS (non-NTFS), it will corrupt; otherwise it will remain encrypted.

cheers,
Changyu

--- Marco Colombo <[EMAIL PROTECTED]> wrote:
> You mean that every process that runs as "postgres" has the ability to
> read the file _without typing any password_? Or when you start
> PostgreSQL it prompts for one? Can "administrator" read it _without
> knowing the password_?
>
> I may be missing something, but what's the difference with a file like
> this:
>
>   -r-------- 1 postgres postgres 50 Jan 15 21:15 akey
>
> in any Unix system? Only "postgres" and "root" can read it.
>
> How about backups? Does the backup process (I assume it runs as
> administrator) store the key in cleartext?
>
> .TM.
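For reference, stripping the passphrase from an existing key is a one-liner with the openssl command-line tool (a sketch - the filenames are placeholders, and the plain copy should be given restrictive permissions immediately):

  # prompts once for the old passphrase, writes an unencrypted copy
  openssl rsa -in server.key -out server-plain.key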
[GENERAL] Postgre "idle" process using 100% CPU
Hi,

I am using PostgreSQL version 8.0.1 on Gentoo Linux, and from time to time a postgres process that is marked as idle - "postgres: user db IP(34079) idle" - starts using 100% CPU. There is nothing in the logs, so I don't have a clue what could be the problem.

Regards,
Jernej Kos.

-- 
Jernej Kos <[EMAIL PROTECTED]>
JWeb-Network
Re: [GENERAL] vulnerability/SSL
On Thu, 2005-06-09 at 02:59 -0700, Changyu Dong wrote:
> Hi Marco,
> The problem I described in the first mail is that, for some unknown
> reason, if you save the server.key file with a passphrase, you will be
> prompted to enter the passphrase every time you start the server AND
> every time a client makes a connection, which actually forbids us from
> using a passphrase to protect the key file; therefore the key file has
> to be saved in plaintext without encryption.
> EFS is a feature provided by Windows which will encrypt any selected
> file using a symmetric algorithm; the symmetric key is encrypted with
> the user's public key and the recovery agent's public key, and the
> encrypted key is saved within the file header.

As long as the 'postgres' user has access to it w/o typing any password, that's only a detail. Unless someone physically steals your disk, the fact it's stored encrypted is irrelevant. The only thing that matters is who can access it, and how.

> Thus only the user and the recovery agent can decrypt it. Another user
> cannot even open it (but can delete it). So we can ensure no one else
> can read or modify it.

That's how the permission bits work in Unix. No need to encrypt the file; we know permission bits actually work as expected under Unix. In this case encryption adds no extra level of security on a running system.

> Decryption is transparent to users and applications. The operating
> system does it automatically if it can find the appropriate private
> key. The difference between this and
>   -r-------- 1 postgres postgres 50 Jan 15 21:15
> is that the file is encrypted using EFS, while the latter remains
> plaintext.

I fail to see the difference. On Windows, the 'postgres' user can read it without a password. 'Administrator' has access to it, too.

On Unix, with 400 permissions, the 'postgres' user can read it without a password. 'root' has access to it, too.

> When you back up the file, it remains encrypted.

Then the backup is useless. If the secret key of the user 'postgres' is lost (and it can be, since it is stored elsewhere - I think buried somewhere where 'Administrator' can find it, maybe in the user profile), you'll never recover the content of the file.

> If you restore the file to a file system which doesn't support EFS
> (non-NTFS), it will corrupt; otherwise it will remain encrypted.

Now THAT puzzles me a lot. I can imagine it being restored in plain. I can imagine it being restored encrypted. I have no way to justify the file contents being lost only because of restoring it on FAT.

Anyway, that's not the point here.

The point is: on Windows, if someone breaks in your 'postgres' account, he can read the key. If someone breaks in your 'administrator' account, he can read the key. But other users cannot read it.

This level of protection is exactly the same provided by the 400 permissions above under Unix. If someone breaks in the 'postgres' account, he can read the key. If someone breaks in the 'root' account, he can read the key. But other users cannot read it.

I fail to see any difference in the above scenarios.

Encrypting the key in the .pem file (as supported by openssl) is completely different! No one, ever, can access it w/o knowing the password. That's why it takes the operator to type the password in. Also backups are safe, and just as useful as the file itself: they can be restored everywhere. If someone forgets the password, the contents are lost, but that's true for the file itself. The backup is just what you expect it to be: a copy. You restore it, and get a _working_ copy of the file, on every filesystem. The .pem key can even be sent by email, as is (since it's base64 encoded).

The daemon should ask for the password only once; we agree on that.

Storing the key encrypted (in the openssl sense) doesn't help much against root, if he's able to trick the operator into typing the password again. If you're able to avoid that - that is, you're in a highly secure environment with operators trained _not_ to type the password in "just to have the server restarted" - .pem encryption adds a lot to your security.

The EFS encryption as you described it adds nothing but a false sense of security (and the ability to use some more buzzwords). The level of protection is just the same as a Unix file with the right permissions. The key point here is that both the 'postgres' user and 'administrator' have _transparent_ access to the file contents. No password required.

.TM.
-- 
Marco Colombo
Technical Manager
ESI s.r.l.
[EMAIL PROTECTED]
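For comparison, the Unix-permissions setup Marco describes amounts to nothing more than this (the path is illustrative):

  chown postgres:postgres /var/lib/pgsql/data/server.key
  chmod 400 /var/lib/pgsql/data/server.key   # owner read-only; everyone else denied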
[GENERAL] Problem with linked tables on PG8
Hello!

I have an Access database using linked tables on PG 8.0.3 (on Windows 2003 Server). I use pgODBC 8.00.01.01.

Some tables appear as if deleted! In Access I see:

  Categ : Table
  vt_cat      vt_libcat
  #Supprimé   #Supprimé
  #Supprimé   #Supprimé
  #Supprimé   #Supprimé
  ...

(#Supprimé means "deleted" in French.)

I tried deleting the link and recreating it - same problem! I tried a VACUUM FULL ANALYSE on the database - same problem!

How can I solve this problem? Why does it happen?

Many thanks for all your explanations.

Regards,
Luc
Re: [GENERAL] vulnerability/SSL
> The EFS encryption as you described it adds nothing but a false sense
> of security (and the ability to use some more buzzwords). The level of
> protection is just the same as a Unix file with the right permissions.
> The key point here is that both the 'postgres' user and 'administrator'
> have _transparent_ access to the file contents. No password required.

While most of what you wrote is definitely correct, you missed a few things about EFS.

1) Administrator does not necessarily have *transparent* access. It's only the user's access that is transparent.

2) It is quite possible to remove the administrator recovery key. This can be used to protect *against* administrators reading the file. You do *not* need to have *any* recovery key.

2b) It's even so that in Windows XP (and I think 2003), if the machine is *not* a member of a domain, there *is* no default recovery key. In a domain, it's the domain admin's key, or whatever is configured in your domain policy. In 2000, it's the local admin that first logs on to the box.

3) The recommended practice is to have the recovery key only available off-line, locked in a separate building with half an army defending it. Or something like that. At least put it on a smartcard that nobody can access without going through lots and lots of checks on who they are.

So it does offer a bit of extra security. "Just" to protect the key used to set up the SSL sessions, I'm not sure it's worth it. Because again, if they hack your admin account, they can get to your files *without* going through the SSL stream.

//Magnus
Re: [GENERAL] vulnerability/SSL
--- Marco Colombo <[EMAIL PROTECTED]> wrote:
> As long as the 'postgres' user has access to it w/o typing any
> password, that's only a detail. Unless someone physically steals your
> disk, the fact it's stored encrypted is irrelevant. The only thing
> that matters is who can access it, and how.
>
> That's how the permission bits work in Unix. No need to encrypt the
> file; we know permission bits actually work as expected under Unix. In
> this case encryption adds no extra level of security on a running
> system.
>
> I fail to see the difference. On Windows, the 'postgres' user can read
> it without a password. 'Administrator' has access to it, too.
>
> On Unix, with 400 permissions, the 'postgres' user can read it without
> a password. 'root' has access to it, too.

Then how about restoring it from a backup to another system? In this way the permissions are bypassed, but EFS still works.

> Then the backup is useless. If the secret key of the user 'postgres'
> is lost (and it can be, since it is stored elsewhere - I think buried
> somewhere where 'Administrator' can find it, maybe in the user
> profile), you'll never recover the content of the file.

Right, but the user's private key can be exported into a password-protected pem file.

> Now THAT puzzles me a lot. I can imagine it being restored in plain. I
> can imagine it being restored encrypted. I have no way to justify the
> file contents being lost only because of restoring it on FAT.

If the encrypted file could be restored in plain to FAT, it would be useless: anyone could back it up and then restore it to decrypt the file. And the file is not lost - you can still restore it from the backup. It's not so hard to find an NTFS partition; Postgres requires NTFS to run.

> Anyway, that's not the point here.
>
> The point is: on Windows, if someone breaks in your 'postgres'
> account, he can read the key. If someone breaks in your
> 'administrator' account, he can read the key. But other users cannot
> read it.
>
> This level of protection is exactly the same provided by the 400
> permissions above under Unix. If someone breaks in the 'postgres'
> account, he can read the key. If someone breaks in the 'root' account,
> he can read the key. But other users cannot read it.
>
> I fail to see any difference in the above scenarios.

If an intruder can break the postgres or root account, he can read everything, as has been discussed - not only the key but also the data files. So in this situation, it's useless to protect the key only.

> Encrypting the key in the .pem file (as supported by openssl) is
> completely different! No one, ever, can access it w/o knowing the
> password. That's why it takes the operator to type the password in.
> Also backups are safe, and just as useful as the file itself: they can
> be restored everywhere. If someone forgets the password, the contents
> are lost, but that's true for the file itself. The backup is just what
> you expect it to be: a copy. You restore it, and get a _working_ copy
> of the file, on every filesystem. The .pem key can even be sent by
> email, as is (since it's base64 encoded).

Yes, the .pem file can be kept for distribution and backup, but the working copy has to be plain.

> The daemon should ask for the password only once; we agree on that.

Yes, that's the ultimate solution. Then we could use an encrypted key without any outside mechanism.

> Storing the key encrypted (in the openssl sense) doesn't help much
> against root, if he's able to trick the operator into typing the
> password again. If you're able to avoid that - that is, you're in a
> highly secure environment with operators trained _not_ to type the
> password in "just to have the server restarted" - .pem encryption adds
> a lot to your security.

I'm not sure, but Windows is beginning to support smart card logon, so no password would need to be stored.

> The EFS encryption as you described it adds nothing but a false sense
> of security (and the ability to use some more buzzwords). The level of
> protection is just the same as a Unix file with the right permissions.
> The key point here is that both the 'postgres' user and 'administrator'
> have _transparent_ access to the file contents. No password required.

At least it makes it impossible to restore the plain key from a backup.

cheers,
Changyu
Re: [GENERAL] vulnerability/SSL
Hi Magnus,

You are right. My description is based on Windows 2000, which is the weakest one.

Having the recovery key available only off-line is good practice. And if you don't want a recovery agent, backing up the user's private key is also appropriate. It can be done without effort - you don't need an army or something like that :)

cheers,
Changyu

--- Magnus Hagander <[EMAIL PROTECTED]> wrote:
> While most of what you wrote is definitely correct, you missed a few
> things about EFS.
>
> 1) Administrator does not necessarily have *transparent* access. It's
> only the user's access that is transparent.
>
> 2) It is quite possible to remove the administrator recovery key. This
> can be used to protect *against* administrators reading the file. You
> do *not* need to have *any* recovery key.
>
> 2b) It's even so that in Windows XP (and I think 2003), if the machine
> is *not* a member of a domain, there *is* no default recovery key. In
> a domain, it's the domain admin's key, or whatever is configured in
> your domain policy. In 2000, it's the local admin that first logs on
> to the box.
>
> 3) The recommended practice is to have the recovery key only available
> off-line, locked in a separate building with half an army defending
> it. Or something like that. At least put it on a smartcard that nobody
> can access without going through lots and lots of checks on who they
> are.
>
> So it does offer a bit of extra security. "Just" to protect the key
> used to set up the SSL sessions, I'm not sure it's worth it. Because
> again, if they hack your admin account, they can get to your files
> *without* going through the SSL stream.
>
> //Magnus
Re: [GENERAL] vulnerability/SSL
On Thu, 2005-06-09 at 13:54 +0200, Magnus Hagander wrote:
> > The EFS encryption as you described it adds nothing but a false
> > sense of security (and the ability to use some more buzzwords). The
> > level of protection is just the same as a Unix file with the right
> > permissions.
> > The key point here is that both the 'postgres' user and
> > 'administrator' have _transparent_ access to the file contents. No
> > password required.
>
> While most of what you wrote is definitely correct, you missed a few
> things about EFS.

[...stuff on EFS deleted...]

I agree on that. I'm not saying EFS is a bad idea; I'm only considering the server key case. Anyway, protecting against a malicious superuser is the subject of a chapter of its own. There are many ways for a superuser to hijack a user account. I'm not a Windows guy, but with a similar setup under Unix, I'd do:

  su - user -c "cp encrypted_file /tmp/unencrypted"

or anything equivalent. That is, at C level, open(2) the file, read(2) it (the system gives me the plaintext contents), write(2) it to another file.

The OP said he starts PostgreSQL with 'runas', which I gather is the Windows relative of 'su' (or 'cron'?). If Administrator can run programs as the user, he can read the key file. That's the big difference.

> So it does offer a bit of extra security. "Just" to protect the key
> used to set up the SSL sessions, I'm not sure it's worth it. Because
> again, if they hack your admin account, they can get to your files
> *without* going through the SSL stream.

I think you're missing two points:

1) The purpose of getting the key is _not_ to be able to decrypt the SSL stream (even if that's a nice consequence). The worst you can do is _sign_ things with the key - that is, impersonate the server, mount man-in-the-middle attacks, and so on, without anyone noticing it for a while. The whole point of encrypting the key is that a compromised key is worse than a compromised system (and way more likely to go unnoticed).

2) There's no need for the intruder to escalate to administrator powers. If they break the 'postgres' account, they read the key. Even if they break the server at SQL level, i.e. they gain superuser for the database, they may be able to read it with a COPY SQL command, since it's likely the key is accessible to the server. (I'm not claiming the latter is feasible - just that in theory that's all they need.)

The problem reported by the OP is a real one, since it prevents the usage of an encrypted key. Not a big one, but still one. The "solution" the OP posted later is not solving the problem at all.

BTW, even when the key is encrypted (the .pem way), the cleartext copy must stay in the server memory space (possibly in a locked area). It may be available to root ("debugging" the server), the user, or the process itself of course, if "they" manage to execute arbitrary code. There's no way to make it 100% safe.

.TM.
-- 
Marco Colombo
Technical Manager
ESI s.r.l.
[EMAIL PROTECTED]
Re: [GENERAL] vulnerability/SSL
> I agree on that. I'm not saying EFS is a bad idea; I'm only
> considering the server key case. Anyway, protecting against a
> malicious superuser is the subject of a chapter of its own. There are
> many ways for a superuser to hijack a user account. I'm not a Windows
> guy, but with a similar setup under Unix, I'd do:
>
>   su - user -c "cp encrypted_file /tmp/unencrypted"
>
> or anything equivalent. That is, at C level, open(2) the file, read(2)
> it (the system gives me the plaintext contents), write(2) it to
> another file.
> The OP said he starts PostgreSQL with 'runas', which I gather is the
> Windows relative of 'su' (or 'cron'?). If Administrator can run
> programs as the user, he can read the key file. That's the big
> difference.

Yes, that is correct - runas is similar to su. But in order to do "runas", you need the service account's password. Once you are "root" on a Unix system, you can do "su - user" *without* the password. That's a big difference. (You can also use the postgres account's smartcard, if you are using smartcard logins, but the deal is that you need *something* that is normally private to the account - even if you are an administrator.)

> 1) The purpose of getting the key is _not_ to be able to decrypt the
> SSL stream (even if that's a nice consequence). The worst you can do
> is _sign_ things with the key - that is, impersonate the server, mount
> man-in-the-middle attacks, and so on, without anyone noticing it for a
> while. The whole point of encrypting the key is that a compromised key
> is worse than a compromised system (and way more likely to go
> unnoticed).

Correct, I was looking only at the issue of decrypting the stream. Well, if you have admin on the server, you can replace the whole postmaster binary. Which means game over no matter what.

> 2) There's no need for the intruder to escalate to administrator
> powers. If they break the 'postgres' account, they read the key.

That is the difference, though. Replacing the binary would require admin privileges, not just postgres privileges. (Depending on the installation, of course, but at least the default install on win32 will explicitly deny the service account permission to change its own binaries.)

> The problem reported by the OP is a real one, since it prevents the
> usage of an encrypted key. Not a big one, but still one. The
> "solution" the OP posted later is not solving the problem at all.

Agreed.

> BTW, even when the key is encrypted (the .pem way), the cleartext copy
> must stay in the server memory space (possibly in a locked area). It
> may be available to root ("debugging" the server), the user, or the
> process itself of course, if "they" manage to execute arbitrary code.
> There's no way to make it 100% safe.

If they have the postgres superuser, and can somehow send/receive files (COPY for example), they can load a user-defined C function and just read the server key from there. Sure, it's slightly more complicated than reading the file, but not really all that much.

I guess we could read in the password ourselves and drop it in our shared memory segment to pass to subprocesses - though that means they can get to the password easier as well. Assuming OpenSSL has the APIs for that; I haven't checked. I'm unconvinced it makes enough of a difference to be worthwhile, though.

(BTW, am I correct in reading this as a problem that only appears on win32, because of the exec nature of the backend? Or does it show up on Unix as well?)

//Magnus
Re: [GENERAL] vulnerability/SSL
I don't know, I just tested it on win32.

Changyu

--- Magnus Hagander <[EMAIL PROTECTED]> wrote:
> (BTW, am I correct in reading this as a problem that only appears on
> win32, because of the exec nature of the backend? Or does it show up
> on Unix as well?)
>
> //Magnus
Re: [GENERAL] vulnerability/SSL
On Thu, 2005-06-09 at 05:21 -0700, Changyu Dong wrote:
> --- Marco Colombo <[EMAIL PROTECTED]> wrote:
> > As long as the 'postgres' user has access to it w/o typing any
> > password, that's only a detail. Unless someone physically steals
> > your disk, the fact it's stored encrypted is irrelevant. The only
> > thing that matters is who can access it, and how.
> >
> > That's how the permission bits work in Unix. No need to encrypt the
> > file; we know permission bits actually work as expected under Unix.
> > In this case encryption adds no extra level of security on a running
> > system.
> >
> > I fail to see the difference. On Windows, the 'postgres' user can
> > read it without a password. 'Administrator' has access to it, too.
> >
> > On Unix, with 400 permissions, the 'postgres' user can read it
> > without a password. 'root' has access to it, too.
>
> Then how about restoring it from a backup to another system? In this
> way the permissions are bypassed, but EFS still works.

Either the Windows backup contains the private key of the user or not.

If not, the backup is incomplete and useless (to get the file contents). You may get other files from it, but that's not the point. You may just not include the key file in _that_ backup.

If you have two backups, one "normal" and another "safe", just put the keyfile on the safe one, along with the other private keys. You can do the same under Unix, of course.

If your single backup contains the user's private key, EFS is bypassed as well.

This is going offtopic. The EFS approach is no different from any encrypted filesystem, nothing new under the sun. It shares the weakness of any system that lets you access the data at runtime w/o password.

> Right, but the user's private key can be exported into a
> password-protected pem file.

Save the server key in the same way then. Put the server key and the user key together.

[...]

> If an intruder can break the postgres or root account, he can read
> everything, as has been discussed - not only the key but also the data
> files. So in this situation, it's useless to protect the key only.

Yes, it has been discussed: the purpose of the key is not protecting the data, but protecting your identity. If the key is compromised, they can impersonate you. Generally, this is much bigger a damage. They can create fake data, _signed by you_.

[...]

> Yes, the .pem file can be kept for distribution and backup, but the
> working copy has to be plain.
>
> > The daemon should ask for the password only once; we agree on that.
>
> Yes, that's the ultimate solution. Then we could use an encrypted key
> without any outside mechanism.

We agree on that. That's the _only_ solution if you want that kind of security.

> I'm not sure, but Windows is beginning to support smart card logon, so
> no password would need to be stored.

That changes nothing. Somehow the key has to be given, unencrypted, to the server. Be it an operator typing a password, or inserting a smartcard, a patched server can store the key in cleartext anywhere. You have to teach your operators to think twice before performing anything that lets the server access the key. With your solution, you're letting the server access the key automatically.

> At least it makes it impossible to restore the plain key from a
> backup.

The safety of that backup lies only in its incompleteness.

If you include the user key in the same backup, there's no security. If you don't include the user key (and thus create an incomplete backup), it's easier not to include the server key either, and put it in the same place you put the user key. They are both "private keys".

Including a useless copy of the server key encrypted with the user key (stored elsewhere) is just a perverse way to gain nothing. But I agree that sometimes perverse systems make perverse things look natural. :-)

.TM.
Re: [GENERAL] vulnerability/SSL
On Thu, 2005-06-09 at 15:04 +0200, Magnus Hagander wrote:
[...]
> Yes, that is correct - runas is similar to su. But in order to do
> "runas", you need the service account's password. Once you are "root"
> on a Unix system, you can do "su - user" *without* the password.
> That's a big difference.
> (You can also use the postgres account's smartcard, if you are using
> smartcard logins, but the deal is that you need *something* that is
> normally private to the account - even if you are an administrator.)

Is that at application level or system level? You know I can install a patched su that asks root for passwords as well, but the problem is with the seteuid() system call, not su. You can (with SELinux) limit root powers a lot, but that's not the point.

[...]
> I guess we could read in the password ourselves and drop it in our
> shared memory segment to pass to subprocesses - though that means they
> can get to the password easier as well. Assuming OpenSSL has the APIs
> for that; I haven't checked. I'm unconvinced it makes enough of a
> difference to be worthwhile, though.
> (BTW, am I correct in reading this as a problem that only appears on
> win32, because of the exec nature of the backend? Or does it show up
> on Unix as well?)

Is the Unix version much different? I think the postmaster just forks and execs the backends. But aren't connections handled by the postmaster? All the SSL thing should happen before the fork, I think. Is the Windows model different? Do backends handle SSL negotiation?

.TM.
-- 
Marco Colombo
Technical Manager
ESI s.r.l.
[EMAIL PROTECTED]
Re: [GENERAL] vulnerability/SSL
> > Yes, that is correct - runas is similar to su. But in order to do
> > "runas", you need the service account's password. Once you are
> > "root" on a Unix system, you can do "su - user" *without* the
> > password. That's a big difference.
> > (You can also use the postgres account's smartcard, if you are using
> > smartcard logins, but the deal is that you need *something* that is
> > normally private to the account - even if you are an administrator.)
>
> Is that at application level or system level? You know I can install a
> patched su that asks root for passwords as well, but the problem is
> with the seteuid() system call, not su. You can (with SELinux) limit
> root powers a lot, but that's not the point.

I think it's at the system level. At least there is no API to do it.

> > I guess we could read in the password ourselves and drop it in our
> > shared memory segment to pass to subprocesses - though that means
> > they can get to the password easier as well. Assuming OpenSSL has
> > the APIs for that; I haven't checked. I'm unconvinced it makes
> > enough of a difference to be worthwhile, though.
> > (BTW, am I correct in reading this as a problem that only appears on
> > win32, because of the exec nature of the backend? Or does it show up
> > on Unix as well?)
>
> Is the Unix version much different? I think the postmaster just forks
> and execs the backends.

It forks. It doesn't exec. As such, it inherits all the memory from the postmaster.

> But aren't connections handled by the postmaster? All the SSL thing
> should happen before the fork, I think. Is the Windows model
> different? Do backends handle SSL negotiation?

On win32 we do fork() + exec(), and the OpenSSL library requires a new initialization, including reading the key. I should say I'm speaking from memory of stuff I looked at a long time ago, though - it could probably do with someone looking through exactly how it is now ;-)

IIRC, the problem is that the openssl structs contain function pointers, which are not guaranteed to point to the same thing in the child.

//Magnus
Re: [GENERAL] vulnerability/SSL
--- Marco Colombo <[EMAIL PROTECTED]> wrote:
> Either the Windows backup contains the private key of the user or not.
>
> If not, the backup is incomplete and useless (to get the file
> contents). You may get other files from it, but that's not the point.
> You may just not include the key file in _that_ backup.
>
> If you have two backups, one "normal" and another "safe", just put the
> keyfile on the safe one, along with the other private keys. You can do
> the same under Unix, of course.
>
> If your single backup contains the user's private key, EFS is bypassed
> as well.

I don't think we should include everything in ONE backup. Incremental and differential backups are often used, but they need several backups to restore the system. And as I have said, the user's private key can be exported in a standard pem file, encrypted by a password known only to him (not using EFS - just the same as any encrypted pem file). If you don't know the password, how could you bypass EFS?

> This is going offtopic. The EFS approach is no different from any
> encrypted filesystem, nothing new under the sun. It shares the
> weakness of any system that lets you access the data at runtime w/o
> password.
>
> Save the server key in the same way then. Put the server key and the
> user key together.

That is not a good idea; you would have to encrypt the server's key and delete the plain key before you back up.

> Yes, it has been discussed: the purpose of the key is not protecting
> the data, but protecting your identity. If the key is compromised,
> they can impersonate you. Generally, this is much bigger a damage.
> They can create fake data, _signed by you_.

But this key is only used for the SSL of my postgres, so even if it is compromised, the only way for the intruder to use it is to decrypt and forge data between the client and postgres. If he can access the data directly, why would he bother?

> That changes nothing. Somehow the key has to be given, unencrypted, to
> the server. Be it an operator typing a password, or inserting a
> smartcard, a patched server can store the key in cleartext anywhere.
> You have to teach your operators to think twice before performing
> anything that lets the server access the key. With your solution,
> you're letting the server access the key automatically.

That's also a problem with an encrypted pem.

> The safety of that backup lies only in its incompleteness.
>
> If you include the user key in the same backup, there's no security.
> If you don't include the user key (and thus create an incomplete
> backup), it's easier not to include the server key either, and put it
> in the same place you put the user key. They are both "private keys".
>
> Including a useless copy of the server key encrypted with the user key
> (stored elsewhere) is just a perverse way to gain nothing.
> But I agree that sometimes perverse systems make perverse things look
> natural. :-)

By your logic, all encryption algorithms are "perverse", because they rely on incompleteness: you can never include the key itself in the encrypted data; you always need to keep something secret.

cheers,
Changyu
Re: [GENERAL] deadlocks in multiple-triggers environment
On Wed, Jun 08, 2005 at 05:45:45PM +0200, hubert depesz lubaczewski wrote:
> hi
> i have a situation where i have multiple tables, and multiple triggers
> on all of them.
> at least 1 or 2 triggers on at least 4 different tables do updates to
> the main cache table.

Do say, are there foreign keys on those tables?

If there are, that may explain the deadlocks. This is a known problem, fixed in the development version, for which there is no complete solution on current releases (except for dropping the foreign keys completely).

One workaround that may reduce the occurrence of deadlocks is to make the constraints INITIALLY DEFERRED, so that the checks happen as late as possible in the transaction - see the sketch below.

-- 
Alvaro Herrera ()
Thou shalt check the array bounds of all strings (indeed, all arrays),
for surely where thou typest "foo" someone someday shall type
"supercalifragilisticexpialidocious" (5th Commandment for C programmers)
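A minimal sketch of that workaround (the table and column names are invented for illustration); note that a foreign key must be declared DEFERRABLE for INITIALLY DEFERRED to apply:

  CREATE TABLE child (
      id        int PRIMARY KEY,
      parent_id int NOT NULL
          REFERENCES parent (id)
          DEFERRABLE INITIALLY DEFERRED   -- FK check runs at commit, not per statement
  );

  -- or, for an existing table, add the constraint explicitly:
  ALTER TABLE child
      ADD CONSTRAINT child_parent_fk
      FOREIGN KEY (parent_id) REFERENCES parent (id)
      DEFERRABLE INITIALLY DEFERRED;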
[GENERAL] Propagating conditions into a query
Dear All,

I have a number of complex views for which the typical use is to select exactly one row by id, e.g. "select * from V where id=nnn". Some of these selects run orders of magnitude faster than others. Looking at the output of "explain analyse", it seems that in the fast cases the "id=nnn" condition is passed down to the lower-level operations, while in the slower cases the entire view is created and then filtered using the condition as a final step.

I am trying to narrow down what types of query I can use in the views to avoid the poor performance. Here are a couple of things that I have noticed:

- One query had a "distinct on (id)" at the top level. This was only to cope with an obscure case where what is normally a one-to-one join could return multiple rows. Removing the "distinct" and discarding the duplicate rows in the calling code means that the "where id=nnn" is now applied as a condition for an index scan where it previously wasn't, reducing execution time by two orders of magnitude. But I can't see a reason why the "id=nnn" condition couldn't have been used inside the query, even in the presence of the "distinct" clause.

- In another case I have a LEFT OUTER JOIN which can be made much faster by instead using a normal JOIN. Unfortunately a normal JOIN doesn't do what I want, but I can't see why the condition is propagated into the JOIN but not the LEFT OUTER JOIN. Here is an outline of the query:

    D left outer join (M join G on (M.g=G.id)) on (D.id=M.b) where D.id=nnn

  That does index scans on M and G and a merge join to create the complete "M join G" table. On the other hand, if I do

    D join (M join G on (M.g=G.id)) on (D.id=M.b) where D.id=nnn

  then it does conditional index scans on D.id=nnn and M.b=nnn and a nested-loop join returning one row, followed by a conditional index scan on G. This is an order of magnitude faster.

I don't think this is a problem with statistics; the row-count estimates are all reasonable. I imagine that the restriction is something missing in the query optimiser. Can I rewrite this query somehow? Is there anything else I can do about it?

This is with 7.4.2.

Cheers, Phil.
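One rewrite that may be worth trying (a sketch based on the outline above, not tested against 7.4): since the WHERE clause forces D.id = nnn and the join condition requires D.id = M.b, the extra qualification M.b = nnn is implied and can be added to the ON clause by hand without changing the result, handing the planner an index condition on M directly:

  D left outer join (M join G on (M.g=G.id))
      on (D.id=M.b and M.b=nnn)
  where D.id=nnn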
Re: [GENERAL] deadlocks in multiple-triggers environment
[snip]
> Do say, are there foreign keys on those tables?
>
> If there are, that may explain the deadlocks. This is a known problem,
> fixed in the development version, for which there is no complete
> solution on current releases.

Wow, that's good news :-)

Time to drop that nasty patch we're using... I'm not on the developers list; I guess it was discussed there. I'm curious what kind of solution it is - a special foreign key thing, or were row-level shared locks implemented?

[snip]

Cheers,
Csaba.
[GENERAL] "returning" in postgresql request
Hi,

I am trying to port an application from an Oracle database to a solution with PostgreSQL. The Oracle system exists, and we use a statement which returns an int into a variable nb via "RETURNING nb_lock INTO nb":

  UPDATE xdb_ancestors_lock SET nb_lock=nb_lock+1
  WHERE doc_id=? AND ele_id=?
  RETURNING nb_lock INTO nb;

I'd like to know if there is an equivalent solution under PostgreSQL, or if I'm obliged to do a SELECT before my UPDATE.

Best regards,
Nico
Re: [GENERAL] deadlocks in multiple-triggers environment
On Thu, Jun 09, 2005 at 04:26:44PM +0200, Csaba Nagy wrote:
> Wow, that's good news :-)
> Time to drop that nasty patch we're using... I'm not on the developers
> list; I guess it was discussed there. I'm curious what kind of
> solution it is - a special foreign key thing, or were row-level shared
> locks implemented?

Shared row locks. It'd be nice if you could give them a try to get some feedback ... (we are a couple of months from beta though)

The development group is always looking for ways to annoy users by forcing them to upgrade to newer versions ;-)

-- 
Alvaro Herrera ()
Bob [Floyd] used to say that he was planning to get a Ph.D. by the "green
stamp method," namely by saving envelopes addressed to him as 'Dr. Floyd'.
After collecting 500 such letters, he mused, a university somewhere in
Arizona would probably grant him a degree. (Don Knuth)
Re: [GENERAL] "returning" in postgresql request
GIROIRE, Nicolas (COFRAMI) wrote:
> I am trying to port an application from an Oracle database to a
> solution with PostgreSQL. The Oracle system exists, and we use a
> statement which returns an int into a variable nb via "RETURNING
> nb_lock INTO nb":
>
>   UPDATE xdb_ancestors_lock SET nb_lock=nb_lock+1
>   WHERE doc_id=? AND ele_id=?
>   RETURNING nb_lock INTO nb;
>
> I'd like to know if there is an equivalent solution under PostgreSQL,
> or if I'm obliged to do a SELECT before my UPDATE.

I think something similar has been discussed, but there's nothing like it at the moment. You'll have to SELECT FOR UPDATE; UPDATE.

-- 
  Richard Huxton
  Archonet Ltd
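A minimal sketch of that pattern wrapped in a plpgsql function (the function and parameter names are invented for illustration; SELECT ... FOR UPDATE locks the row, so the value read stays consistent with the UPDATE that follows):

  CREATE OR REPLACE FUNCTION bump_lock(p_doc int, p_ele int) RETURNS int AS '
  DECLARE
      nb int;
  BEGIN
      -- lock the row and read the current counter
      SELECT nb_lock INTO nb FROM xdb_ancestors_lock
          WHERE doc_id = p_doc AND ele_id = p_ele FOR UPDATE;

      UPDATE xdb_ancestors_lock SET nb_lock = nb_lock + 1
          WHERE doc_id = p_doc AND ele_id = p_ele;

      RETURN nb + 1;  -- the value Oracle''s RETURNING clause would have produced
  END;
  ' LANGUAGE plpgsql;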
Re: [GENERAL] vulnerability/SSL
On Thu, 2005-06-09 at 15:53 +0200, Magnus Hagander wrote:
[...]
> > Is the Unix version much different? I think the postmaster just
> > forks and execs the backends.
>
> It forks. It doesn't exec. As such, it inherits all the memory from
> the postmaster.

Oh, I need to check it out then. I thought the 'postmaster' executable forks and execs 'postgres', and then they share memory via shmat(2). But now I see they are the same executable - weird how I've never noticed that before.

> On win32 we do fork() + exec(), and the OpenSSL library requires a new
> initialization, including reading the key. I should say I'm speaking
> from memory of stuff I looked at a long time ago, though - it could
> probably do with someone looking through exactly how it is now ;-)
>
> IIRC, the problem is that the openssl structs contain function
> pointers, which are not guaranteed to point to the same thing in the
> child.

Ouch, I see the problem. You do need to pass the unencrypted key around then, assuming OpenSSL supports such a thing. Now I also see it's useless to set up the OpenSSL context only to destroy it with exec, so it has to be the child that handles it.

.TM.
-- 
Marco Colombo
Technical Manager
ESI s.r.l.
[EMAIL PROTECTED]
Re: [HACKERS] Strange transaction-id behaviour? (was Re: [GENERAL] Two updates problem)
Richard Huxton writes: > I'm not sure it's sensible to have the update in the WHERE clause - I > don't know that you can depend on how many times that function will be > called. It's absolutely not very sensible to do that ... note the warnings in http://www.postgresql.org/docs/8.0/static/sql-expressions.html#SYNTAX-EXPRESS-EVAL We have no way to enforce "no functions with side effects in WHERE", but you're not going to get any sympathy at all if you break that rule. > On the other hand, I wouldn't like to say this is the right behaviour - > I'm cc:ing this to the hackers list so they can take a look at it. It is intentional. A given command can only see/update row versions produced by earlier commands --- without this rule, you have the "Halloween problem" that an UPDATE can see (and try to update) its own output rows, leading to an infinite loop. Actually the rule is "you can see row versions produced by commands started earlier than your own command" (cmin < current cid), which means there is another risk involved in this sort of programming: if the function looks at the contents of the table being updated by the outer UPDATE, it will see the partially completed effects of the UPDATE. While I suppose that's exactly what Yuri was after ;-), it's generally considered a bad thing, because there is no guarantee as to the order in which rows are updated, and thus no predictability as to exactly what intermediate states the function will see. As of PG 8.0, things are set up so that this only applies to functions marked VOLATILE; if a function is marked STABLE or IMMUTABLE then it runs with the same cid as the calling query, and therefore it does *not* see any partial effects of that query. Confused yet? ;-) regards, tom lane ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
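A small sketch of the 8.0 rule Tom describes, with hypothetical names: declared STABLE, the function runs with the same command ID as the calling query, so calls made from within an UPDATE on t cannot see that UPDATE's partial effects; declared VOLATILE, they can.

    CREATE FUNCTION t_count() RETURNS bigint AS '
    DECLARE
        n bigint;
    BEGIN
        SELECT count(*) INTO n FROM t;
        RETURN n;
    END;
    ' LANGUAGE plpgsql STABLE;  -- VOLATILE here would expose intermediate states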
[GENERAL] Starting PostgreSQL on WinXP is not working
Hi, I've been working with PostgreSQL for a long time (about three years), but always on Linux boxes. Recently, I had to install PostgreSQL on a WinXP machine! The installation works fine, although the service did not start at the end of the installation! The installation was done on WinXP SP1, as a service! My problem is that I installed with admin rights, but created the user postgres for the system and user postgres for the database. The system user postgres is supposed to be used to start the service, but I think that is not happening; the system tries to start the service with the user that logged in to the system (the admin). If I try to start the service with a user without admin privileges it gives me this error: 2005-06-08 15:14:07 NOTICE: Unknown win32 socket error code: 10106 2005-06-08 15:14:07 LOG: could not create IPv4 socket: Invalid argument 2005-06-08 15:14:07 WARNING: could not create listen socket for "localhost" 2005-06-08 15:14:07 FATAL: no socket created for listening I googled a lot and didn't find anything about this kind of error. So, I really need some help! Thanks a lot!! ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Backup Compatibility between minor versions.
Howard Cole <[EMAIL PROTECTED]> writes: > Interestingly, this is the latest tsearch2 function that ships with > 8.0.3 - note the slightly different syntax to the one below. It looks > like the compatibility issue is caused by this. Read the 8.0.3 release notes ... http://www.postgresql.org/docs/8.0/static/release.html#RELEASE-8-0-3 regards, tom lane ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [GENERAL] "returning" in postgresql request
On Thursday, 2005-06-09 at 16:30 +0200, GIROIRE, Nicolas (COFRAMI) wrote: > Hi, > > I'm trying to port an application from an Oracle database to > PostgreSQL. The Oracle system exists, and we use a statement that > returns an int into a variable nb via "returning nb_lock into nb": > > UPDATE xdb_ancestors_lock SET nb_lock=nb_lock+1 WHERE doc_id=? AND > ele_id=? returning nb_lock INTO nb; > > I'd like to know if there is an equivalent solution under > PostgreSQL, or if I'm obliged to do a select before my update. > > Best regards, > > Nico > Looks like you really want: UPDATE xdb_ancestors_lock SET nb_lock=nextval('nb_lock_sequence') WHERE doc_id=? AND ele_id=?; SELECT currval('nb_lock_sequence'); if you created a sequence and want to assign just another free key. If not, you need SELECT ... FOR UPDATE instead. -- Tino Wildenhain <[EMAIL PROTECTED]> ---(end of broadcast)--- TIP 8: explain analyze is your friend
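A fuller sketch of Tino's sequence approach (assuming a dedicated sequence is acceptable as the source of nb_lock values):

    CREATE SEQUENCE nb_lock_sequence;

    UPDATE xdb_ancestors_lock
       SET nb_lock = nextval('nb_lock_sequence')
     WHERE doc_id = 1 AND ele_id = 2;

    -- currval() returns the value nextval() just assigned in this session,
    -- so this is safe under concurrency
    SELECT currval('nb_lock_sequence');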
Re: [GENERAL] Postgre "idle" process using 100% CPU
Jernej Kos <[EMAIL PROTECTED]> writes: > i am using postgresql version 8.0.1 on Gentoo Linux and from time to time a > postgres process that is marked as idle - "postgres: user db IP(34079) idle" > - starts using 100% CPU. There is nothing in the logs, so i don't have a clue > what could be the problem. For a long time? I'd expect it to go busy on receiving a command somewhat before changing the PS status, because command parsing happens first (else it can't know what to set the status to ...). If you are in the habit of sending enormously complex SQL commands then maybe this state would last long enough to notice. If you don't think that's it, maybe you could attach to the busy backend with gdb and get a stack trace to find out what it's doing? regards, tom lane ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
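A sketch of grabbing the stack trace Tom asks for (the PID is hypothetical; take it from ps, picking the backend that is spinning):

    $ gdb -p 12345
    (gdb) bt        # print the backtrace
    (gdb) detach
    (gdb) quit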
Re: [GENERAL] database auto-commit
IIRC there is no autocommit in postgresql itself, so the autocommit is probably from whatever connection library/method you're using. Note that PSQL does have an autocommit option, but I don't see how that would affect this case. On Tue, May 31, 2005 at 03:12:26PM +0200, FERREIRA, William (COFRAMI) wrote: > > hi > > I'm writing a j2ee application working with postgresql and I've met a problem > with autocommit > from my j2ee application i call a perl function and i get an error : > in french : Les Large Objects ne devraient pas être utilisés en mode > auto-commit. > in english : Large Objects should not be used in auto-commit mode. > > and i didn't find how to disable autocommit. > > Thanks in advance > > William > > This mail has originated outside your organization, either from an external > partner or the Global Internet. Keep this in mind if you answer this message. > > ---(end of broadcast)--- > TIP 2: you can get off all lists at once with the unregister command > (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED]) > -- Jim C. Nasby, Database Consultant [EMAIL PROTECTED] Give your computer some brain candy! www.distributed.net Team #1828 Windows: "Where do you want to go today?" Linux: "Where do you want to go tomorrow?" FreeBSD: "Are you guys coming, or what?" ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
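Since the poster is on J2EE, the switch most likely belongs in the JDBC layer; a minimal sketch (the URL, user, and password are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class LoTransactionDemo {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver");      // load the JDBC driver
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb",  // placeholder URL
                    "user", "password");                 // placeholder credentials
            conn.setAutoCommit(false);  // large objects must run inside a transaction
            // ... perform large-object work here ...
            conn.commit();
            conn.close();
        }
    }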
Re: [GENERAL] "returning" in postgresql request
> > I'm trying to port an application from an Oracle database to PostgreSQL. > > ... > > UPDATE xdb_ancestors_lock SET nb_lock=nb_lock+1 WHERE doc_id=? AND > > > ele_id=? returning nb_lock INTO nb; > Looks like you really want: > > UPDATE xdb_ancestors_lock SET nb_lock=nextval('nb_lock_sequence') WHERE > doc_id=? AND ele_id=?; > SELECT currval('nb_lock_sequence'); We have similar code in our Oracle-but-hopefully-soon-to-be-PostgreSQL apps. However, in our case the sequence generator is used in an insert trigger to populate a column. So, although I could use "currval" after the insert to see what the trigger used, that would force the currval-invoking code to know the internals of the insert trigger. This is a bit of an abstraction violation, I think. ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
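One way to contain that abstraction violation, sketched with hypothetical names: pair the trigger with a helper function that lives alongside it, so only one piece of code knows which sequence the trigger uses.

    CREATE FUNCTION insert_thing(p_name text) RETURNS integer AS '
    BEGIN
        INSERT INTO things (name) VALUES (p_name);  -- the insert trigger fills things.id
        RETURN currval(''things_id_seq'');          -- sequence knowledge stays here
    END;
    ' LANGUAGE plpgsql;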
Re: [GENERAL] Propogating conditions into a query
Phil Endecott <[EMAIL PROTECTED]> writes: > I have a number of complex views for which the typical use is to select > exactly one row by id, e.g. "select * from V where id=nnn". Some of > these selects run orders of magnitude faster than others. Looking at > the output of "explain analyse" it seems that in the fast cases the > "id=nnn" condition is passed down to the lower-level operations, while > in the slower cases the entire view is created and then filtered using > the condition as a final step. > I am trying to narrow down what types of query I can use in the views to > avoid the poor performance. When in doubt, use the source ;-) ... most sorts of things like this are pretty well commented, if you can find the relevant code. In this case what you are wanting is that the view subquery either get "pulled up" into the calling query, or that conditions from the calling query get "pushed down" into the subquery. The former transformation is done in src/backend/optimizer/prep/prepjointree.c, and the principal conditions are checked in is_simple_subquery():

    /*
     * Can't currently pull up a query with setops. Maybe after querytree
     * redesign...
     */
    if (subquery->setOperations)
        return false;

    /*
     * Can't pull up a subquery involving grouping, aggregation, sorting,
     * or limiting.
     */
    if (subquery->hasAggs ||
        subquery->groupClause ||
        subquery->havingQual ||
        subquery->sortClause ||
        subquery->distinctClause ||
        subquery->limitOffset ||
        subquery->limitCount)
        return false;

    /*
     * Don't pull up a subquery that has any set-returning functions in
     * its targetlist. Otherwise we might well wind up inserting
     * set-returning functions into places where they mustn't go, such as
     * quals of higher queries.
     */
    if (expression_returns_set((Node *) subquery->targetList))
        return false;

    /*
     * Hack: don't try to pull up a subquery with an empty jointree.
     * query_planner() will correctly generate a Result plan for a
     * jointree that's totally empty, but I don't think the right things
     * happen if an empty FromExpr appears lower down in a jointree. Not
     * worth working hard on this, just to collapse SubqueryScan/Result
     * into Result...
     */
    if (subquery->jointree->fromlist == NIL)
        return false;

    return true;

The push-down optimization is done in src/backend/optimizer/path/allpaths.c, and that makes tests on both the subquery involved and the qualification condition to be pushed down:

    /*
     * subquery_is_pushdown_safe - is a subquery safe for pushing down quals?
     *
     * subquery is the particular component query being checked. topquery
     * is the top component of a set-operations tree (the same Query if no
     * set-op is involved).
     *
     * Conditions checked here:
     *
     * 1. If the subquery has a LIMIT clause, we must not push down any quals,
     * since that could change the set of rows returned.
     *
     * 2. If the subquery contains EXCEPT or EXCEPT ALL set ops we cannot push
     * quals into it, because that would change the results.
     *
     * 3. For subqueries using UNION/UNION ALL/INTERSECT/INTERSECT ALL, we can
     * push quals into each component query, but the quals can only reference
     * subquery columns that suffer no type coercions in the set operation.
     * Otherwise there are possible semantic gotchas. So, we check the
     * component queries to see if any of them have different output types;
     * differentTypes[k] is set true if column k has different type in any
     * component.
     */

    /*
     * qual_is_pushdown_safe - is a particular qual safe to push down?
     *
     * qual is a restriction clause applying to the given subquery (whose RTE
     * has index rti in the parent query).
     *
     * Conditions checked here:
     *
     * 1. The qual must not contain any subselects (mainly because I'm not sure
     * it will work correctly: sublinks will already have been transformed into
     * subplans in the qual, but not in the subquery).
     *
     * 2. The qual must not refer to any subquery output columns that were
     * found to have inconsistent types across a set operation tree by
     * subquery_is_pushdown_safe().
     *
     * 3. If the subquery uses DISTINCT ON, we must not push down any quals that
     * refer to non-DISTINCT output columns, because that could change the set
     * of rows returned. This condition is vacuous for DISTINCT, because then
     * there are no non-DISTINCT output columns, but unfortunately it's fairly
     * expensive to tell the difference between DISTINCT and DISTINCT ON in the
     * parsetree representation. It's cheaper to just make sure all the Vars
     * in the qual refer to DISTINCT columns.
     *
     * 4. We must not push down any quals that refer to subselect outputs that
     * return sets, else we'd introduce functions-returning-sets into the
     * subquery's WHERE/HAVING quals.
     */

regards, tom lane ---(end of broadcast)
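An illustration of the rules above, sketched with hypothetical tables a and b: the plain-join view can be pulled up, so the condition reaches the base-table scans; the DISTINCT ON view can be neither pulled up (distinctClause) nor have a qual on a non-DISTINCT column pushed into it.

    CREATE VIEW v_fast AS
        SELECT a.id, b.val FROM a JOIN b ON b.a_id = a.id;

    CREATE VIEW v_slow AS
        SELECT DISTINCT ON (b.val) a.id, b.val
          FROM a JOIN b ON b.a_id = a.id;

    EXPLAIN SELECT * FROM v_fast WHERE id = 42;  -- condition reaches the scan on a
    EXPLAIN SELECT * FROM v_slow WHERE id = 42;  -- id is not a DISTINCT ON column,
                                                 -- so the filter is applied last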
Re: [GENERAL] Starting PostgreSQL on WinXP is not working
> Hi, I've been working with PostgreSQL for a long time (about > three years), but always on Linux boxes. Recently, I had to > install PostgreSQL on a WinXP machine! > The installation works fine, although the service > did not start at the end of the installation! The > installation was done on WinXP SP1, as a service! > My problem is that I installed with admin rights, but > created the user postgres for the system and user postgres > for the database. The system user postgres is > supposed to be used to start the service, but I think that is > not happening; the system tries to start the service with the > user that logged in to the system (the admin). If I try to start > the service with a user without admin privileges it gives me > this error: > > 2005-06-08 15:14:07 NOTICE: Unknown win32 socket error code: 10106 > 2005-06-08 15:14:07 LOG: could not create IPv4 socket: > Invalid argument > 2005-06-08 15:14:07 WARNING: could not create listen socket > for "localhost" > 2005-06-08 15:14:07 FATAL: no socket created for listening > > I googled a lot and didn't find anything about this kind of > error. So, I really need some help! > Thanks a lot!! 10106 is "The requested service provider could not be loaded or initialized.". It looks like your TCP stack is broken somehow. I'd look into firewall and antivirus software. Either it has some blocking functions that return weird results, or it's just broken in general. //Magnus ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] Pb with linked tables on PG8
Hello. I have experienced the same problem. It seems to be a common problem with Access connecting to an ODBC data source. It seems that Access has some problems determining the primary key... You should not use textual fields as a primary key. Instead, add some bigserial (integer autoincrement) field into your tables and make that field the primary key. Then relink your tables. This will solve your problem. Also, it is good to add a timestamp field to tables. Also, be aware that your table names should not be too long, because if they are long you will have problems with relinking. Access would not relink correctly (presuming that you use DSN-less linking and relink on each startup). Bye. Zlatko For your information, this is the explanation from MSDN: " ACC: "#Deleted" Errors with Linked ODBC Tables Article ID: 128809 Last Review: May 6, 2003 Revision: 1.0 This article was previously published under Q128809 SYMPTOMS When you retrieve, insert, or update records in a linked ODBC table, each field in a record contains the "#Deleted" error message. When you retrieve, insert, or update records using code, you receive the error message "Record is deleted." CAUSE The Microsoft Jet database engine is designed around a keyset-driven model. This means that data is retrieved, inserted, and updated based on key values (in the case of a linked ODBC table, the unique index of a table). After Microsoft Access performs an insert or an update of a linked ODBC table, it uses a Where criteria to select the record again to verify the insert or update. The Where criteria is based on the unique index. Although numerous factors can cause the select not to return any records, most often the cause is that the key value Microsoft Access has cached is not the same as the actual key value on the ODBC table. Other possible causes are as follows: Having an update or insert trigger on the table, modifying the key value. Basing the unique index on a float value. Using a fixed-length text field that may be padded on the server with the correct amount of spaces. Having a linked ODBC table containing Null values in any of the fields making up the unique index. These factors do not directly cause the "#Deleted" error message. Instead, they cause Microsoft Access to go to the next step in maintaining the key values, which is to select the record again, this time with the criteria based on all the other fields in the record. If this step returns more than one record, Microsoft Access returns the "#Deleted" message because it does not have a reliable key value to work with. If you close and re-open the table or choose Show All Records from the Records menu, the "#Deleted" errors are removed. Microsoft Access uses a similar process to retrieve records from a linked ODBC table. First, it retrieves the key values and then the rest of the fields that match the key values. If Microsoft Access is not able to find that value again when it tries to find the rest of the record, it assumes that the record is deleted. RESOLUTION The following are some strategies that you can use to avoid this behavior: Avoid entering records that are exactly the same except for the unique index. Avoid an update that triggers updates of both the unique index and another field. 
Do not use a Float field as a unique index or as part of a unique index because of the inherent rounding problems of this data type. Do all the updates and inserts by using SQL pass-through queries so that you know exactly what is sent to the ODBC data source. Retrieve records with an SQL pass-through query. An SQL pass-through query is not updateable, and therefore does not cause "#Delete" errors. Avoid storing Null values within any field making up the unique index of your linked ODBC table. MORE INFORMATION Note: In Microsoft Access 2.0, linked tables were called attached tables. Steps to Reproduce Behavior 1. Open the sample database Northwind.mdb (or NWIND.MDB in Microsoft Access 2.0) 2. Use the Upsizing Tools to upsize the Shippers table. NOTE: This table contains an AutoNumber field (or Counter field in Microsoft Access 2.0) that is translated on SQL Server by the Upsizing Tools into a trigger that emulates a counter. 3. Open the linked Shippers table and enter a new
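A sketch of Zlatko's advice for an existing table (names hypothetical; the explicit sequence keeps each step valid on PostgreSQL 8.0):

    CREATE SEQUENCE mytable_id_seq;
    ALTER TABLE mytable ADD COLUMN id bigint;
    UPDATE mytable SET id = nextval('mytable_id_seq');   -- number existing rows
    ALTER TABLE mytable ALTER COLUMN id SET DEFAULT nextval('mytable_id_seq');
    ALTER TABLE mytable ADD PRIMARY KEY (id);
    ALTER TABLE mytable ADD COLUMN last_change timestamp DEFAULT now();

Then relink the tables from Access so it picks up the new key.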
Re: [GENERAL] postgreSQL 7.3.8, pg_dump not able to find large o
> We've been getting errors similar to the following (the specific large > object that is "missing" is different every time) during our nightly > pg_dump: > > pg_dump: dumpBlobs(): could not open large object: ERROR: > inv_open: large > object 48217896 not found > After doing a bunch of testing and experimenting, we're pretty sure that the problem we were having is due to the large objects being deleted while the pg_dump was running. The entire pg_dump process takes about 2 hours, with about 1 1/2 hours of that spent on blobs. My question is this: How are other PostgreSQL users with constant large object insertions/deletions handling their backup process? (And is this something that I missed in documentation somewhere?) Is this a problem that is handled differently in PostgreSQL 8? Thanks, -ron ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [GENERAL] Propogating conditions into a query
Tom Lane wrote: Phil Endecott <[EMAIL PROTECTED]> writes: I have a number of complex views for which the typical use is to select exactly one row by id, e.g. "select * from V where id=nnn". Some of these selects run orders of magnitude faster than others. Looking at the output of "explain analyse" it seems that in the fast cases the "id=nnn" condition is passed down to the lower-level operations, while in the slower cases the entire view is created and then filtered using the condition as a final step. When in doubt, use the source ;-) ... most sorts of things like this are pretty well commented, if you can find the relevant code. Good plan. * 3. If the subquery uses DISTINCT ON, we must not push down any quals that * refer to non-DISTINCT output columns, because that could change the set * of rows returned. OK, so this is why my query with DISTINCT ON was failing. I can fix that. I don't see anything in there about LEFT OUTER JOIN though. Any ideas? --Phil. ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Propogating conditions into a query
Phil Endecott <[EMAIL PROTECTED]> writes: > I don't see anything in there about LEFT OUTER JOIN though. Any ideas? Oh, I missed that part of your message. Hmm, I think the issue is that in >> D join (M join G on (M.g=G.id)) on (D.id=M.b) where D.id=nnn the planner deduces M.b=nnn by transitivity, but when the join is an outer join it can't make the same deduction. [ thinks some more... ] If we distinguished conditions that hold below the join from those that hold above it, we could deduce that M.b=nnn can be enforced below the join even though it might not be true above it. There's no such mechanism in existence now, though. A possible workaround is to generate your query like D left join (M join G on (M.g=G.id)) on (D.id=M.b AND M.b=nnn) where D.id=nnn but I don't know how practical that is for you. regards, tom lane ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Propogating conditions into a query
Tom Lane wrote: Phil Endecott <[EMAIL PROTECTED]> writes: D join (M join G on (M.g=G.id)) on (D.id=M.b) where D.id=nnn A possible workaround is to generate your query like D left join (M join G on (M.g=G.id)) on (D.id=M.b AND M.b=nnn) where D.id=nnn I don't suppose it would work if I did D left join (M join G on (M.g=G.id)) on (D.id=M.b) where (D.id=nnn AND (M.b=nnn or M.b IS NULL)) would it? Otherwise it breaks the view, and makes the calling code rather more messy. --Phil. ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] postgreSQL 7.3.8, pg_dump not able to find large o
Ron Snyder <[EMAIL PROTECTED]> writes: > We've been getting errors similar to the following (the specific large > object that is "missing" is different every time) during our nightly > pg_dump: > > pg_dump: dumpBlobs(): could not open large object: ERROR: > inv_open: large object 48217896 not found > After doing a bunch of testing and experimenting, we're pretty sure that the > problem we were having is due to the large objects being deleted while the > pg_dump was running. Sounds plausible. I proposed years ago that we ought to fix large objects to be MVCC-compliant: http://archives.postgresql.org/pgsql-hackers/2002-05/msg00875.php but the issue seems to have fallen through the cracks. (It was hard to fix at the time because there wasn't any easy way for the LO functions to lay their hands on a suitable snapshot, but as of 8.0 I think ActiveSnapshot would work.) > Is this a problem that is handled differently in PostgreSQL 8? Nope. I'm feeling a strong urge to go fix it for 8.1 though. The question from the previous mail still stands: would anybody's applications be broken if we change the MVCC behavior of large objects? regards, tom lane ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [GENERAL] postgreSQL 7.3.8, pg_dump not able to find large o
Nope. I'm feeling a strong urge to go fix it for 8.1 though. The question from the previous mail still stands: would anybody's applications be broken if we change the MVCC behavior of large objects? Could you provide an instance where it might? I had always assumed (I know, never assume) that large objects were MVCC safe. All of our applications that work with binary data always use Large Objects. Sincerely, Joshua D. Drake regards, tom lane ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED]) -- Your PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240 PostgreSQL Replication, Consulting, Custom Programming, 24x7 support Managed Services, Shared and Dedicated Hosting Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/ ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] postgreSQL 7.3.8, pg_dump not able to find large o
"Joshua D. Drake" <[EMAIL PROTECTED]> writes: >> The question from the previous mail still stands: would anybody's >> applications be broken if we change the MVCC behavior of large objects? > Could you provide an instance where it might? I had always assumed (I > know, never assume) that large objects were MVCC safe. All of our > applications that work with binary data always use Large Objects. It seems likely that such a change would fix more things than it broke, in any case. I'm just giving people a fair chance to object ;-) regards, tom lane ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [GENERAL] Postgresql fails to start? (Update)
Just an update . . . I tried upgrading the kernel to 2.6.11.11 (kernel.org) but was unsuccessful (due to an i2c and gcc 4.0 issue). So I settled for the one from the fedora development repository. But that didn't help; upgraded other stuff too, like selinux-policy-targeted and initscripts. Beginning to pull my hair out at this point. Tried relabeling the filesystem per the selinux faq. Used fixfiles restore, which looked promising because now ls -Z on my /home actually showed some stuff. Now the big test:

    # touch /home/postgres/pgstartup.log
    # ls -Z /home/postgres/pgstartup.log
    -rw-r--r-- root root /home/postgres/pgstartup.log
    # chown postgres:postgres /home/postgres/pgstartup.log
    # chmod go-rwx /home/postgres/pgstartup.log
    # chcon -u system_u -r object_r -t postgresql_log_t /home/postgres/pgstartup.log
    chcon: can't apply partial context to unlabeled file /home/postgres/pgstartup.log
    # ls -Z /home/postgres/pgstartup.log
    -rw------- postgres postgres /home/postgres/pgstartup.log

Argh . . . But found this odd thing while googling:

    # touch /home/postgres/pgstartup.log
    # ls -Z /home/postgres/pgstartup.log
    -rw-r--r-- root root /home/postgres/pgstartup.log
    # chown postgres:postgres /home/postgres/pgstartup.log
    # chmod go-rwx /home/postgres/pgstartup.log
    # chcon system_u:object_r:postgresql_log_t /home/postgres/pgstartup.log
    # ls -Z /home/postgres/pgstartup.log
    -rw------- postgres postgres system_u:object_r:postgresql_log_t /home/postgres/pgstartup.log

Voila, it works. I kind of have to move on now. Since this is just a development box, this will do for now. When the final Fedora Core 4 comes out I will install that and see what happens. Thanks very much again for the great help. dianne
[GENERAL] monetary data
I was just curious how you guys implement storage / calculation of monetary data in postgresql. The docs say to use the numeric data type, but I'm curious what precision is typically defined for storing monetary data in the numeric data type. Thanks for any info... ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] monetary data
John Browne wrote: I was just curious how you guys implement storage / calculation of monetary data in postgresql. The docs say to use the numeric data type, but I'm curious what precision is typically defined for storing monetary data in the numeric data type. We use numeric(10,2) Sincerely, Joshua D. Drake Thanks for any info... ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org -- Your PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240 PostgreSQL Replication, Consulting, Custom Programming, 24x7 support Managed Services, Shared and Dedicated Hosting Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/ ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
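A sketch of that convention (table name hypothetical); numeric(10,2) stores values up to 99,999,999.99 exactly, avoiding binary floating-point rounding:

    CREATE TABLE invoice (
        id     serial PRIMARY KEY,
        amount numeric(10,2) NOT NULL   -- exact decimal: 8 integer digits + 2 decimals
    );
    INSERT INTO invoice (amount) VALUES (19.99);
    SELECT sum(amount) FROM invoice;    -- exact arithmetic, no float drift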
Re: [GENERAL] ERROR: cannot cast type text to bit varying
> It's curious to me that the following is fine: > > beginning of output= > test=# select '1001'::bit varying; > varbit > > 1001 > (1 row) > end of output Okay, I guess I'm not so curious, thanks to http://www.postgresql.org/docs/8.0/static/sql-expressions.html#SQL-SYNTAX-TYPE-CASTS, which tells me: "A cast applied to an unadorned string literal represents the initial assignment of a type to a literal constant value, and so it will succeed for any type" But, I still have my initial problem: > test=# select translate ('YNNY', 'YN', '10')::bit varying & translate > ('NYYN', 'YN', '10')::bit varying; > ERROR: cannot cast type text to bit varying Ideas? ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[GENERAL] ERROR: cannot cast type text to bit varying
I'm trying to interpret strings of Y's and N's as bit vectors and perform bitwise ops on them. It's not working: beginning of output= test=# select version (); version -- PostgreSQL 8.0.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.3 20030502 (Red Hat Linux 3.2.3-49) (1 row) test=# select translate ('YNNY', 'YN', '10')::bit varying & translate ('NYYN', 'YN', '10')::bit varying; ERROR: cannot cast type text to bit varying end of output= It's curious to me that the following is fine: beginning of output= test=# select '1001'::bit varying; varbit 1001 (1 row) end of output= As what data type is the literal '1001' being treated, that it can be cast to bit varying, while text data (e.g. the result of "translate") cannot be so cast? ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Postgresql fails to start? (Update)
Dianne Yumul <[EMAIL PROTECTED]> writes: > # chcon -u system_u -r object_r -t postgresql_log_t > /home/postgres/pgstartup.log > chcon: can't apply partial context to unlabeled file > /home/postgres/pgstartup.log > [but this works:] > # chcon system_u:object_r:postgresql_log_t /home/postgres/pgstartup.log That's pretty interesting --- I'd say it's a flat-out bug in the FC4 version of chcon. Filed as https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=159973 I'm not sure whether the quickest fix from a Red Hat POV is to fix chcon or just alter the initscript for Postgres. But certainly your best bet to have something working today is to edit the initscript. Or of course you can just manually create the file with the right context ... regards, tom lane ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
[GENERAL] Version Control?
Hi- Any general tips on using version control (CVS, SVN) while doing database design? My thought was to do a text-mode dump (including populated code tables) from PGAdmin. How do people do this? -- Peter Fein [EMAIL PROTECTED] 773-575-0694 Basically, if you're not a utopianist, you're a schmuck. -J. Feldman ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: **SPAM** Re: [GENERAL] Pb with linked tables on PG8
Many thanks for this very useful information. Luc - Original Message - From: Zlatko Matic To: Ets ROLLAND ; pgsql-general@postgresql.org Sent: Thursday, June 09, 2005 6:52 PM Subject: **SPAM** Re: [GENERAL] Pb with linked tables on PG8 [snip]
Re: [GENERAL] Version Control?
Up until the database goes into production, keep files schema.sql (table creation), views.sql, functions.sql, triggers.sql, and trigfunctions.sql in cvs/svn. Afterwards any changes to the schema are in change01.sql, change02.sql, ... The change scripts hold the alter table statements for schema changes. They must be cumulative. Ideally you'd have a corresponding undochange01.sql, but that is icing. Never let anyone change the database without creating the appropriate change script. --elein On Thu, Jun 09, 2005 at 03:25:14PM -0500, Peter Fein wrote: > Hi- > > Any general tips on using version control (CVS, SVN) while doing > database design? My thought was to do a text-mode dump (including > populated code tables) from PGAdmin. > > How do people do this? > > -- > Peter Fein [EMAIL PROTECTED] 773-575-0694 > > Basically, if you're not a utopianist, you're a schmuck. -J. Feldman > > ---(end of broadcast)--- > TIP 5: Have you checked our extensive FAQ? > > http://www.postgresql.org/docs/faq > ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
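A sketch of what one change script in elein's scheme might hold (the table and column are hypothetical):

    -- change01.sql: add billing address support
    ALTER TABLE customers ADD COLUMN billing_addr text;
    UPDATE customers SET billing_addr = addr WHERE billing_addr IS NULL;

    -- undochange01.sql, the optional matching undo script:
    -- ALTER TABLE customers DROP COLUMN billing_addr;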
Re: [GENERAL] Version Control?
Peter Fein wrote: Hi- Any general tips on using version control (CVS, SVN) while doing database design? My thought was to do a text-mode dump (including populated code tables) from PGAdmin. How do people do this? Currently we just store a dump of the data structure. However, what I think is really needed is a specialist diff tool which works out the commands needed to move from one schema to another. That would be *extremely* useful, but would also probably require a separate implementation for each database backend. There's also the problem of configuration data: you might want some rows to be version-controlled too because they contain lookup validation data which is vital to the operation of the system, rather than being user data. It's an interesting problem which I've not yet seen a usable solution to. I've been looking for an OSS project to either start or contribute to. Maybe that's one itch that I might consider scratching... -- Russ. ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] Version Control?
How would you handle the migration of the data with these user scripts? Dump it to a temp table? On 6/9/05, elein <[EMAIL PROTECTED]> wrote: > Up until the database goes into production, > keep files: schema.sql (table creation), > views.sql, functions.sql triggers.sql trigfunctions.sql > in cvs/svn. > > Afterwards any changes to the schema are in > change01.sql, change02.sql,... > > The change scripts hold the alter table statements > for schema changes. They must be cumulative. > Ideally you'd have corresponding undochange01.sql > but that is icing. > > Never let anyone change the database without creating > the appropriate change script. > > --elein > > On Thu, Jun 09, 2005 at 03:25:14PM -0500, Peter Fein wrote: > > Hi- > > > > Any general tips on using version control (CVS, SVN) while doing > > database design? My thought was to do a text-mode dump (including > > populated code tables) from PGAdmin. > > > > How do people do this? > > > > -- > > Peter Fein [EMAIL PROTECTED] 773-575-0694 > > > > Basically, if you're not a utopianist, you're a schmuck. -J. Feldman > > > > ---(end of broadcast)--- > > TIP 5: Have you checked our extensive FAQ? > > > >http://www.postgresql.org/docs/faq > > > > ---(end of broadcast)--- > TIP 6: Have you searched our list archives? > >http://archives.postgresql.org > ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] Version Control?
Russ Brown wrote on 09.06.2005 23:12: Currently we just store a dump of the data structure. However, what I think is really needed is a specialist diff tool which works out the commands needed to move from one schema to another. That would be *extremely* useful, but would also probably require a separate implementation for each database backend. I have just implemented such a feature in my SQL Tool. It will output the difference between two schemas as an XML file which in turn can be transformed in the correct SQL scripts. The tool can be downloaded from http://www.sql-workbench.net (You are looking for the WbDiff command) An (very basic) XSLT to transform the XML output into a PG SQL script is also available there (look in the XSLT section). You might want to use the latest development build because I have done some tweaks and enhancements to the output. This is a first implementation (and mainly tested with Oracle). If you have feedback or suggestions on how to improve it, feel free to contact me at: support (at) sql (dash) workbench (dot) net Best regards Thomas ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] CREATE TEMP TABLE AS SELECT/ GET DIAGNOSTICS ROW_COUNT
Alvaro Herrera <[EMAIL PROTECTED]> writes: > On Tue, May 31, 2005 at 03:43:56PM -0400, Tom Lane wrote: >> OK, next question: is this a bug fix we should back-patch into 7.4, >> or just change it in HEAD? > I guess apply only in HEAD, and provide the patch for MLikharev so he > can solve his immediate problem. Done. Here is the patch (against CVS tip, but it should apply with some fuzz in 8.0 or 7.4). regards, tom lane

    *** src/backend/executor/spi.c.orig    Fri May  6 15:59:49 2005
    --- src/backend/executor/spi.c Thu Jun  9 16:59:37 2005
    ***************
    *** 1497,1502 ****
    --- 1497,1503 ----
      _SPI_pquery(QueryDesc *queryDesc, long tcount)
      {
          int         operation = queryDesc->operation;
    +     CommandDest origDest = queryDesc->dest->mydest;
          int         res;
          Oid         save_lastoid;
    ***************
    *** 1548,1554 ****
          ExecutorEnd(queryDesc);

    !     if (queryDesc->dest->mydest == SPI)
          {
              SPI_processed = _SPI_current->processed;
              SPI_lastoid = save_lastoid;
    --- 1549,1556 ----
          ExecutorEnd(queryDesc);

    !     /* Test origDest here so that SPI_processed gets set in SELINTO case */
    !     if (origDest == SPI)
          {
              SPI_processed = _SPI_current->processed;
              SPI_lastoid = save_lastoid;

---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] CREATE TEMP TABLE AS SELECT/ GET DIAGNOSTICS ROW_COUNT
Thanks. [snip] ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] Version Control?
Thomas Kellerer wrote: Russ Brown wrote on 09.06.2005 23:12: Currently we just store a dump of the data structure. However, what I think is really needed is a specialist diff tool which works out the commands needed to move from one schema to another. That would be *extremely* useful, but would also probably require a separate implementation for each database backend. I have just implemented such a feature in my SQL Tool. It will output the difference between two schemas as an XML file which in turn can be transformed in the correct SQL scripts. The tool can be downloaded from http://www.sql-workbench.net (You are looking for the WbDiff command) Very interesting. I'll have a closer look at this tomorrow morning at work. Thanks! -- Russ. ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] Now() function
Windows XP SP2, Java SDK V1.4.2_08, JDBC driver 7.4.216.jdbc3. When I use now() in an UPDATE it gives me a very odd value in the database. This is what PGAdminIII shows: 2005-06-09 13:52:46.259715 I am not expecting the decimal seconds. I am getting an out-of-range error in Java when I read the column. I am porting from a V7.1 server over to 7.4. Yes, I will soon move it to 8, but I have a working 7.4 server now. ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[GENERAL] Pushing limit into subqueries of a union
Dear Experts, Here is another "how can I rewrite this to go faster" idea. I have two tables T1 and T2 and a view V that is the UNION ALL of T1 and T2. The tables have an editdate field, and I want to get the n most recently changed rows: select * from V order by editdate desc limit 40; This seems to unconditionally read the whole of T1 and T2, so it is slow. T1 and T2 both have indexes on the editdate attribute, so if I write (select * from T1 order by editdate desc limit 40) union all (select * from T2 order by editdate desc limit 40) order by editdate desc limit 40; I get the same results, about 1000 times faster. I presume that PostgreSQL doesn't try to push the limit clause into the subqueries of a UNION ALL in this way. I believe it is safe, isn't it? Cheers, Phil. ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Version Control?
PG Lightning Admin also has function version control. http://www.amsoftwaredesign.com Russ Brown wrote: Thomas Kellerer wrote: Russ Brown wrote on 09.06.2005 23:12: Currently we just store a dump of the data structure. However, what I think is really needed is a specialist diff tool which works out the commands needed to move from one schema to another. That would be *extremely* useful, but would also probably require a separate implementation for each database backend. I have just implemented such a feature in my SQL Tool. It will output the difference between two schemas as an XML file which in turn can be transformed in the correct SQL scripts. The tool can be downloaded from http://www.sql-workbench.net (You are looking for the WbDiff command) Very interesting. I'll have a closer look at this tomorrow morning at work. Thanks! ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] ERROR: cannot cast type text to bit varying
On Thu, 2005-06-09 at 20:05 +0000, Matt Miller wrote: > I'm trying to interpret strings of Y's and N's as bit vectors and > perform bitwise ops on them. Well, I ended up writing a bunch of code to accomplish what I initially thought would be just some casting and bitops on built-in types. I really thought that the bit string types would help me, but I just couldn't get them to work. Here is my code in case I've reinvented the wheel and anyone wants to make fun of me:

    create or replace FUNCTION flagset2num (flagset varchar)
    returns integer AS $$
    --
    -- interpret a flagset (a string of Y's and N's) as a bit
    -- vector and convert it into a number
    --
    DECLARE
      l_flagset varchar (32) := trim (flagset);
      len integer := length (l_flagset);
      ret_val integer;
    BEGIN
      ret_val := 0;
      for i in reverse len .. 1 loop
        if (substr (l_flagset, i, 1) = 'Y') then
          ret_val := ret_val + power (2, len - i);
        end if;
      end loop;
      return ret_val;
    END;
    $$ language plpgsql;

    create or replace FUNCTION num2flagset (num integer, flagset_len integer)
    returns varchar AS $$
    --
    -- interpret a decimal number as a bit vector and convert it
    -- into a flagset (a string of Y's and N's) of the specified
    -- length
    --
    DECLARE
      ret_val varchar (32) := '';
      hex_num varchar (8);
      nibble varchar (4);
    BEGIN
      --
      -- use built-in to convert decimal number to hex string,
      -- easing conversion to binary
      --
      hex_num := to_hex (num);
      --
      -- convert hex string to binary string using digit
      -- substitution
      --
      for i in 1 .. length (hex_num) loop
        nibble := case (substr (hex_num, i, 1))
          when '0' then '0000'
          when '1' then '0001'
          when '2' then '0010'
          when '3' then '0011'
          when '4' then '0100'
          when '5' then '0101'
          when '6' then '0110'
          when '7' then '0111'
          when '8' then '1000'
          when '9' then '1001'
          when 'a' then '1010'
          when 'b' then '1011'
          when 'c' then '1100'
          when 'd' then '1101'
          when 'e' then '1110'
          when 'f' then '1111'
        end;
        ret_val := ret_val || nibble;
      end loop;
      --
      -- convert string of binary digits to a flagset of the
      -- requested length: pad short values on the left with 0's,
      -- and keep only the rightmost flagset_len digits
      --
      if length (ret_val) < flagset_len then
        ret_val := lpad (ret_val, flagset_len, '0');
      end if;
      return translate (substr (ret_val, length (ret_val) - flagset_len + 1),
                        '01', 'NY');
    END;
    $$ language plpgsql;

    create or replace FUNCTION flagset_bitand (
      flagset1 varchar,
      flagset2 varchar
    ) returns varchar AS $$
    --
    -- perform bitwise "and" on flagsets, returning a flagset
    --
    BEGIN
      return num2flagset (flagset2num (flagset1) & flagset2num (flagset2),
                          length (flagset1));
    END;
    $$ language plpgsql;

    select flagset_bitand ('NYYN', 'NNYY');

---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Version Control?
On Thu, Jun 09, 2005 at 10:12:25PM +0100, Russ Brown wrote: > Peter Fein wrote: > >Hi- > > > >Any general tips on using version control (CVS, SVN) while doing > >database design? My thought was to do a text-mode dump (including > >populated code tables) from PGAdmin. > > > >How do people do this? > > > > Currently we just store a dump of the data structure. However, what I > think is really needed is a specialist diff tool which works out the > commands needed to move from one schema to another. That would be > *extremely* useful, but would also probably require a separate > implementation for each database backend. I've spent quite a lot of time thinking about this and it's certainly not trivial, and probably _hard_. If anyone has even half-functional code to do it I'd be interested in looking at it. > It's an interesting problem which I've not yet seen a usable solution > to. I've been looking for an OSS project to either start or contribute > to. Maybe that's one itch that I might consider scratching... Yeah... Cheers, Steve ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Version Control?
On Thu, Jun 09, 2005 at 04:16:46PM -0500, John Browne wrote: > How would you handle the migration of the data with these user > scripts? Dump it to a temp table? > [snip] If your scripts are correct, you should be able to load your base scripts and apply each change script in order and have the result be the exact same database schema. If they are not, checkpoint with a schema dump and start again with the change scripts. Of course getting the scripts wrong is against the point of the whole exercise, but it is not easy and requires vigilance. --elein ===== [EMAIL PROTECTED] Varlena, LLC www.varlena.com (510)655-2584(o) (510)543-6079(c) PostgreSQL Consulting, Support & Training PostgreSQL General Bits http://www.varlena.com/GeneralBits/ ===== I have always depended on the [QA] of strangers. ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] ERROR: cannot cast type text to bit varying
On Thu, Jun 09, 2005 at 10:18:27PM +0000, Matt Miller wrote: > On Thu, 2005-06-09 at 20:05 +0000, Matt Miller wrote: > > I'm trying to interpret strings of Y's and N's as bit vectors and > > perform bitwise ops on them. > > Well, I ended up writing a bunch of code to accomplish what I initially > thought would be just some casting and bitops on built-in types. I > really thought that the bit string types would help me, but I just > couldn't get them to work. I imagine you could have done something involving textout() and varbit_in(), like alvherre=# select varbit_in(textout(translate('YYNY', 'YN', '10')), 123::oid, 32); varbit_in --- 1101 (1 fila) The OID parameter is unused, give it anything except NULL. The integer is the maximum length of the resulting varbit field. -- Alvaro Herrera () "La soledad es compañía" ---(end of broadcast)--- TIP 8: explain analyze is your friend
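Completing Matt's original bitwise AND with Alvaro's trick (a sketch; the 123::oid and 32 arguments simply follow Alvaro's example above):

    select varbit_in(textout(translate('YNNY', 'YN', '10')), 123::oid, 32)
         & varbit_in(textout(translate('NYYN', 'YN', '10')), 123::oid, 32);
    -- yields 0000: the two flag strings share no Y positions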
Re: [GENERAL] ERROR: cannot cast type text to bit varying
> > I ended up writing a bunch of code to accomplish what I initially > > thought would be just some casting and bitops on built-in types. > I imagine you could have done something involving textout() and > varbit_in(), like > > alvherre=# select varbit_in(textout(translate('YYNY', 'YN', '10')), 123::oid, > 32); > varbit_in > --- > 1101 > (1 fila) Ah... This is more like it. Thank you. I'll explore this more tomorrow. Are "varbit_in" and "textout" documented? ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Pushing limit into subqueries of a union
Phil Endecott <[EMAIL PROTECTED]> writes: > I presume that PostgreSQL doesn't try to push the limit clause into the > subqueries of a UNION ALL in this way. I believe it is safe, isn't it? Hmm. You don't actually want to push the LIMIT as such into the subplan --- that would create an extra level of plan node that would simply waste time at runtime. What you want is for the subquery to be planned on the assumption that only a small number of tuples will be fetched, so that "fast start" plans are preferred. We have all the mechanism for this, but prepunion.c isn't exploiting it. I'll see what I can do. regards, tom lane ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
[GENERAL] Setting all elements in an Bool[] array to the same value
Hello! Is there any way to set all elements in a long boolean array (bool[]) to the same value? update testbool set "all elements" = false; or so? ;) Any ideas? Thanks /Otto Blomqvist ---(end of broadcast)--- TIP 8: explain analyze is your friend
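There is no single built-in for this in 8.0 as far as I know, but a sketch that rebuilds the array with a subquery (assuming a non-null, one-dimensional column named flags whose subscripts start at 1):

    UPDATE testbool
       SET flags = ARRAY(SELECT false
                           FROM generate_series(1, array_upper(flags, 1)));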
[GENERAL] INHERITS and planning
Greetings! Is there an issue with planning when a large number of INHERITS tables exist? We have 2 base tables, and use INHERITS to partition the data. When we get to around 2000-2200 sub-tables (approx 1000-1100 per base table), planning a SELECT statement on the base table (ie, to search all sub-tables) starts slowing down dramatically (ie, it feels like something exponential OR some kind of in-memory to on-disk transition). I haven't done enough to really plot out the planning times, but definitely around 1600 tables we were getting sub-second plans, and around 2200 we were above 30 seconds. Also, are there any plans to support proper partitioning/binning of data rather than through INHERITS? I know it has been mentioned as upcoming, similar to Oracle. I would like to put in a vote to support "auto-binning", in which a function is called to define the bin. The Oracle model really only supports: (1) explicit partitioning (ie, every new partition must be defined), or (2) hash binning. What we deal with is temporal data, and we would like to bin on the hour or day "automatically", hopefully to support truncating whole bins. This helps us in 2 ways: (1) data deletion is bulk (we currently drop a full inherited table), (2) cancelling a VACUUM/SELECT doesn't take forever while the execution engine finishes "this table" (we have had cancels take 2 hours because the VACUUM was on a very large single table). Regards! Ed ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
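A sketch of how that day-binning is typically approximated with INHERITS today (names hypothetical; some external job has to create each new bin ahead of time):

    CREATE TABLE readings (ts timestamptz NOT NULL, val float8);

    -- one child table per day; the CHECK documents the bin boundaries
    CREATE TABLE readings_2005_06_09 (
        CHECK (ts >= '2005-06-09' AND ts < '2005-06-10')
    ) INHERITS (readings);

    -- bulk deletion of an expired bin is a cheap catalog operation
    DROP TABLE readings_2005_06_03;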
Re: [GENERAL] Now() function
On Jun 10, 2005, at 7:07 AM, David Siebert wrote: When I use now in an update it is giving me a very odd value in the database. This is what PGAdminIII shows 2005-06-09 13:52:46.259715 I am not expecting the decimal seconds. I am getting an out of range error in java when I read the column. If you don't want fractional seconds ever, you can change the column datatype to timestamp(0), which will give you a precision of 0 (no fractional seconds). Changing a column datatype pre-v8.0 involves either (a) adding a new column with the datatype you want, updating the new column to have the data you want, and dropping the old column; or (b) hacking the PostgreSQL system catalog. A short term solution would be to update the column using something like update foo set foo_timestamp = date_trunc(foo_timestamp). http://www.postgresql.org/docs/7.4/interactive/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC You can use date_trunc('second', current_timestamp) in place of now() to make sure that future inserts and updates also truncate fractional seconds if you don't change the column datatype. (current_timestamp is the SQL-spec-compliant spelling of now().) As a side note, it appears you're using timestamp rather than timestamptz. To be on the safe side, you may want to consider using timestamptz, which takes time zones into account as well. Hope this helps. Michael Glaesemann grzm myrealbox com ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
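For reference, a sketch of the pre-8.0 add/copy/drop dance described above, with hypothetical names (8.0 itself can do this in one step with ALTER TABLE ... ALTER COLUMN ... TYPE):

    -- Replace a plain timestamp column with timestamp(0), carrying the
    -- old values over truncated to whole seconds. date_trunc() avoids
    -- the rounding that a bare precision coercion would apply.
    ALTER TABLE foo ADD COLUMN foo_ts_new timestamp(0);
    UPDATE foo SET foo_ts_new = date_trunc('second', foo_timestamp);
    ALTER TABLE foo DROP COLUMN foo_timestamp;
    ALTER TABLE foo RENAME COLUMN foo_ts_new TO foo_timestamp;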
Re: [GENERAL] Now() function
On Jun 10, 2005, at 11:37 AM, Michael Glaesemann wrote:

> A short term solution would be to update the column using something
> like update foo set foo_timestamp = date_trunc(foo_timestamp).

Sorry. That isn't clear (or correct!) Complete example at the bottom of the email.

UPDATE foo SET foo_timestamp = date_trunc('second', foo_timestamp);

http://www.postgresql.org/docs/7.4/interactive/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC

Sorry for any confusion.

Michael Glaesemann grzm myrealbox com

test=# create table foo (foo_id serial not null unique, foo_timestamp timestamptz not null) without oids;
NOTICE:  CREATE TABLE will create implicit sequence "foo_foo_id_seq" for serial column "foo.foo_id"
NOTICE:  CREATE TABLE / UNIQUE will create implicit index "foo_foo_id_key" for table "foo"
CREATE TABLE
test=# insert into foo (foo_timestamp) values (current_timestamp);
INSERT 0 1
test=# insert into foo (foo_timestamp) values (current_timestamp);
INSERT 0 1
test=# insert into foo (foo_timestamp) values (current_timestamp);
INSERT 0 1
test=# insert into foo (foo_timestamp) values (current_timestamp);
INSERT 0 1
test=# select * from foo;
 foo_id |         foo_timestamp
--------+-------------------------------
      1 | 2005-06-10 11:55:48.459675+09
      2 | 2005-06-10 11:55:49.363353+09
      3 | 2005-06-10 11:55:49.951119+09
      4 | 2005-06-10 11:55:50.771325+09
(4 rows)

test=# update foo set foo_timestamp = date_trunc('second', foo_timestamp);
UPDATE 4
test=# select * from foo;
 foo_id |     foo_timestamp
--------+------------------------
      1 | 2005-06-10 11:55:48+09
      2 | 2005-06-10 11:55:49+09
      3 | 2005-06-10 11:55:49+09
      4 | 2005-06-10 11:55:50+09
(4 rows)

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] CPU-intensive autovacuuming
Phil,

If you complete this patch, I'm very interested to see it. I think I'm the person Matthew is talking about who inserted a sleep value. Because of the sheer number of tables involved, even small values of sleep caused pg_autovacuum to iterate too slowly over its table lists to be of use in a production environment (where I still find its behavior preferable to a complicated list of manual vacuums performed in cron).

-- Thomas F. O'Connell Co-Founder, Information Architect Sitening, LLC Strategic Open Source: Open Your i™ http://www.sitening.com/ 110 30th Avenue North, Suite 6 Nashville, TN 37203-6320 615-260-0005

On Jun 7, 2005, at 6:16 AM, Phil Endecott wrote:

Matthew T. O'Connor wrote:

Phil Endecott wrote:
> Could it be that there is some code in autovacuum that is O(n^2) in
> the number of tables? Browsing the code using webcvs, I have found this:

    for (j = 0; j < PQntuples(res); j++)
    {
        tbl_elem = DLGetHead(dbs->table_list);
        while (tbl_elem != NULL)
        {

Have I correctly understood what is going on here?

Indeed you have. I have heard a few similar reports but perhaps none as bad as yours. One person put in a small sleep value so that it doesn't spin so tight. You could also just up the sleep delay so that it doesn't do this work quite so often. No other quick suggestions.

I do wonder why autovacuum is keeping its table list in memory rather than in the database. But given that it is keeping it in memory, I think the real fix is to sort that list (or keep it ordered when building or updating it). It is trivial to also get the query results ordered, and they can then be compared in O(n) time.

I notice various other places where there seem to be nested loops, e.g. in the update_table_list function. I'm not sure if they can be fixed by similar means.

--Phil.

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
[GENERAL] Wrong select results after transaction (HELP PLS)
Hi, Help me please to resolve this problem: Immediately after committing a transaction that writes, say, 90 rows, I select those same 90 rows and get a wrong set back (one or two of them are replaced by unknown data). But after 10-20 seconds, selecting the same 90 rows returns the right result. Tell me please what to do. Thanks and have a nice day! -- Igor mailto:[EMAIL PROTECTED] ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] CREATE TEMP TABLE AS SELECT/ GET DIAGNOSTICS ROW_COUNT
Hi! > Done. Here is the patch (against CVS tip, but it should apply with > some fuzz in 8.0 or 7.4). Is this patch about CREATE TEMP TABLE AS SELECT only, or about SELECT INTO TEMP TABLE as well? -- Best regards Ilja Golshtein ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
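For context, the usage at stake looks like this inside plpgsql. A sketch only: it assumes the patch in question makes CREATE TABLE AS report its row count through GET DIAGNOSTICS, and the table names are invented:

    CREATE FUNCTION snapshot_rows() RETURNS integer AS '
    DECLARE
        n integer;
    BEGIN
        -- With the patch applied, the CREATE TABLE AS command reports
        -- the number of rows inserted, so ROW_COUNT can pick it up here.
        CREATE TEMP TABLE scratch AS SELECT * FROM source_table;
        GET DIAGNOSTICS n = ROW_COUNT;
        RETURN n;
    END;
    ' LANGUAGE plpgsql;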
Re: [GENERAL] INHERITS and planning
Edmund Dengler <[EMAIL PROTECTED]> writes:
> Is there an issue when a large number of INHERITS tables exist for
> planning?

Well, there are a number of issues whenever a single query references a whole lot of tables in any fashion. It's only with Neil Conway's rewrite of the List package in 8.0 that we had any hope of less than O(N^2) behavior for practically any measure of query complexity N. I have been spending some time over the past week or so attacking other O(N^2) behaviors, but it's not a finished project yet.

I tried to reproduce your issue by doing

create table p1 (f1 int, f2 bigint);
create table c0() inherits (p1);
create table c1() inherits (p1);
create table c2() inherits (p1);
...
create table c2298() inherits (p1);
create table c2299() inherits (p1);

and then profiling

select * from p1;

With no data in the tables, of course this is just measuring planning time and executor startup/shutdown overhead. But I suppose that you don't have a whole lot of data in the tables either, because the data fetching stage is surely pretty linear and you'd not be complaining about overhead if there were much data to be read. What I see in the profile is

  %   cumulative   self               self    total
 time   seconds   seconds     calls  s/call   s/call  name
42.04     15.58     15.58      9214    0.00     0.00  list_nth_cell
20.29     23.10      7.52  34524302    0.00     0.00  SHMQueueNext
 8.34     26.19      3.09     29939    0.00     0.00  LockCountMyLocks
 5.64     28.28      2.09   2960617    0.00     0.00  AllocSetAlloc
 2.37     29.16      0.88      2354    0.00     0.00  AllocSetCheck
 2.29     30.01      0.85    302960    0.00     0.00  hash_search
 2.13     30.80      0.79   2902873    0.00     0.00  MemoryContextAlloc

The list_nth operations are all coming from rt_fetch() macros, so we could probably fix that by replacing rangetable Lists by arrays. This seems doable, but also tedious and bug-prone; there are too many places that figure they can randomly add onto rtable lists.

What I'm more interested in at the moment are the next two entries, SHMQueueNext and LockCountMyLocks --- it turns out that almost all the SHMQueueNext calls are coming from LockCountMyLocks, which is invoked during LockAcquire. This is another O(N^2) loop, and it's really a whole lot nastier than the rangetable ones, because it executes with the LockMgrLock held.

I spent a little time trying to see if we could avoid doing LockCountMyLocks altogether, but it didn't look very promising. What I am thinking though is that we could implement LockCountMyLocks as either a scan through all the proclocks attached to the target proc (the current way) or as a scan through all the proclocks attached to the target lock (proclocks are threaded both ways). There is no hard upper bound on the number of locks a proc holds, whereas there is a bound of MaxBackends on the number of procs linked to a lock. (Well, theoretically it could be 2*MaxBackends considering the possibility of session locks, but that could only happen if all the backends are trying to vacuum the same relation.) So it seems like it might be a win to scan over the per-lock list instead. But I'm very unsure about what the *average* case is, instead of the worst case.

I'm also thinking that the shared memory lock structure may be overdesigned now that we've introduced the backend-private LocalLock table --- in particular, it's not clear why we still include transaction IDs in PROCLOCKTAG rather than leaving the backend to track all the different reasons why it wants to hold a lock. If we could get rid of that then LockCountMyLocks reduces to a single PROCLOCK hashtable lookup.

Thoughts?
regards, tom lane ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
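As an aside, the long run of CREATE TABLE statements elided in the test above is easy to generate mechanically. A sketch using a throwaway plpgsql function (my own, not from the original mail):

    -- Create n empty children c0 .. c(n-1) of parent p1 via dynamic SQL.
    CREATE FUNCTION make_children(n integer) RETURNS integer AS '
    DECLARE
        i integer;
    BEGIN
        FOR i IN 0 .. n - 1 LOOP
            EXECUTE ''CREATE TABLE c'' || i || ''() INHERITS (p1)'';
        END LOOP;
        RETURN n;
    END;
    ' LANGUAGE plpgsql;

    SELECT make_children(2300);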
Re: [GENERAL] Version Control?
On 6/9/05, elein <[EMAIL PROTECTED]> wrote: > On Thu, Jun 09, 2005 at 04:16:46PM -0500, John Browne wrote: > > How would you handle the migration of the data with these user > > scripts? Dump it to a temp table? > > If your scripts are correct, you should be able to load > your base scripts and apply each change script in order > and have the result be the exact same database schema. > > If they are not, checkpoint with a schema dump and start > again with the change scripts. Of course getting the > scripts wrong is against the point of the whole exercise, > but it is not easy and requires vigilance. The big complexity for me is that the database schema's state should be stored along with the code that uses it: i.e. in CVS or Subversion or whatever with the code. That way you have a consistent snapshot of your complete system's database at any given point in time (minus the data itself). Developers will need to re-dump the schema whenever they make a change to the database and commit it along with everything else, but that's easily scriptable. Writing individual 'patch' scripts (see the sketch below) is fine for linear development, but breaks down when dealing with a development environment that involves branching. If two branches make changes to the database, each branch's patch file would be written against the original version, which may no longer hold once the other branch's patch has been applied. What is needed is a tool that will compare any two revisions of the schema and generate a patch file that performs the migration. This would obviously have to be pretty damn clever. Among the difficulties would be ensuring that the patch applies changes in the correct order (e.g. adding a column before adding a foreign key that references it). It's hard, but I don't believe it's impossible. -- Russ ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
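To make the patch-script idea concrete, here is a sketch of the kind of numbered change script being discussed. The file name, table names, and the schema_version bookkeeping table are all invented for illustration:

    -- 0007_add_last_login.sql: one linear schema change, applied in order.
    BEGIN;

    ALTER TABLE users ADD COLUMN last_login timestamptz;

    -- Record that this change has been applied, so tooling can tell which
    -- scripts a given database still needs (schema_version is hypothetical).
    UPDATE schema_version SET version = 7;

    COMMIT;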