Re: [GENERAL] High-availability
On Mon, Jun 04, 2007 at 04:21:32PM +0200, Chander Ganesan wrote:
> I think you'll typically find that you can get one or the other -
> synchronous replication, or load balancing...but not both. On the other

Hi,

I am in a very similar position, but I am more failover oriented. I am considering using pgcluster, which should resolve both at the cost of a slight transaction overhead. Does anyone have any experience with this? What problems may I expect in this setup?

Kind regards,
Bohdan
[GENERAL] auditing question
Hello,

I am restricted to version 8.0.7 of PostgreSQL and I am facing two problems while trying to build a generic auditing function. I went through the documentation example (Example 35-3, "A PL/pgSQL Trigger Procedure For Auditing") at http://www.postgresql.org/docs/8.0/interactive/plpgsql-trigger.html and rewrote it into the form below. audit.<table> is a copy of <table> without constraints, and it also inherits some columns from a generic auditing table (user name, when, ...):

CREATE OR REPLACE FUNCTION audit_table() RETURNS trigger AS $$
DECLARE
    _name TEXT;
BEGIN
    -- Get current user
    SELECT INTO _name CURRENT_USER;
    IF TG_OP = 'DELETE' THEN
        EXECUTE 'INSERT INTO audit.' || TG_RELNAME ||
                ' SELECT _name, now(), OLD.*;';
        RETURN OLD;
    ELSIF TG_OP = 'INSERT' THEN
        EXECUTE 'INSERT INTO audit.' || TG_RELNAME ||
                ' SELECT _name, now(), NEW.*;';
        RETURN NEW;
    ...

Binding the procedure to an AFTER INSERT, DELETE, UPDATE trigger gives me this problem:

ERROR: NEW used in query that is not in a rule
CONTEXT: SQL statement "INSERT INTO audit.communities SELECT _name, now(), row(NEW);"

So, am I doing something wrong, or is the example not compatible with my version? I even tried to remove the EXECUTE and limit it to one table only - no help.

I would like to use this approach instead of rules, because I can set SECURITY DEFINER on the procedure and therefore do not need to solve permissions on the audit tables.

Going further, rules and triggers give me the NEW and OLD records at my disposal. Can these records be compared "at once"? Imagine a table with a significant number of attributes, so you do not want to list them all explicitly in a condition - especially when you have several such tables. Even a generic function that takes two records as parameters, compares them and returns a BOOLEAN would be enough.

Thank you for help,
Bohdan
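On 8.0 the EXECUTE problem can be sidestepped by dropping the dynamic SQL and writing one trigger function per audited table, as in the documentation example the message refers to. A minimal sketch for the communities table follows; the audit.communities layout (user name and timestamp first, then the table's own columns) is assumed from the error message, not confirmed by the thread:

CREATE OR REPLACE FUNCTION audit_communities() RETURNS trigger AS $$
BEGIN
    -- no EXECUTE, so OLD/NEW stay in scope; SECURITY DEFINER keeps the
    -- audit tables closed to ordinary users, as intended above
    IF TG_OP = 'DELETE' THEN
        INSERT INTO audit.communities SELECT current_user, now(), OLD.*;
        RETURN OLD;
    ELSE
        INSERT INTO audit.communities SELECT current_user, now(), NEW.*;
        RETURN NEW;
    END IF;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

CREATE TRIGGER communities_audit
    AFTER INSERT OR UPDATE OR DELETE ON communities
    FOR EACH ROW EXECUTE PROCEDURE audit_communities();

The obvious cost is one function and one trigger per table instead of a single generic pair.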
Re: [GENERAL] auditing question - PARTIALLY SOLVED
Hello,

With the help of Bricklen I have found that the problem is the scope of the EXECUTE call. When it is called it seems to be "launched" outside of the trigger's scope, so OLD/NEW are not defined there (the manual suggests this).

The remaining question is how to compare OLD.* and NEW.* in a generic way on 8.0.x.

Regards,
Bohdan

> IF TG_OP = 'DELETE' THEN
>     EXECUTE 'INSERT INTO audit.' || TG_RELNAME ||
>             ' SELECT _name, now(), OLD.*;';
>     RETURN OLD;
>
> and binding the procedure to an AFTER INSERT, DELETE, UPDATE trigger gives me
> this problem:
>
> ERROR: NEW used in query that is not in a rule
> CONTEXT: SQL statement "INSERT INTO audit.communities SELECT _name, now(),
> row(NEW);"
[GENERAL] PAM + Password authentication
Hello,

Can PostgreSQL be configured so that it performs authentication against PAM and, if that fails, then tries the internal mechanism? I would like to migrate to PAM, but I do not want to promote some users to system-wide accounts. So far I am only able to do it one way or the other.

Thank you,
Bohdan
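For reference, pg_hba.conf is matched top to bottom and the first matching line decides the method, so there is no automatic fallback from PAM to the internal passwords; the usual compromise is to list the PAM-backed users explicitly. A sketch, with purely illustrative user names and address range:

# hypothetical pg_hba.conf excerpt -- the first matching line wins
# TYPE  DATABASE  USER        CIDR-ADDRESS   METHOD
host    all       alice,bob   0.0.0.0/0      pam
host    all       all         0.0.0.0/0      md5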
[GENERAL] Password-safe web application with Postgres
Hello,

I have the following problem. A multiuser application does authentication and authorization against PostgreSQL. The frontend is web based and therefore stateless; it connects to the database on every GET/POST. There is also a requirement that the user stays transparently logged in for some period of time.

The easiest way is to store the login credentials in the session. The drawback is that the session is stored in a file, so the credentials are readable there, which I want to avoid. My first step was to hash the password with the same mechanism PostgreSQL uses, but I am not able to pass the hash to the server. I did some research with mighty Google and found this reply by Tom Lane:

"No, you need to put the plain text of the password into the connInfo. Knowing the md5 doesn't prove you know the password."

So the next logical step is to keep sessions in the server's memory rather than in files. A memory dump could compromise it, but this is an acceptable risk.

I would like to ask whether someone has solved this problem in a more elegant way.

Thank you,
Bohdan
Re: [GENERAL] Password-safe web application with Postgres
Hello,

Thank you everyone for the answers. Going through them I realized I forgot to add one thing: the web app is only a frontend, basically a PL/pgSQL launcher, and all changes are audited, so a common shared login is unwelcome.

On Thu, May 15, 2008 at 05:40:49PM +0200, Steve Manes wrote:
> I keep the user's login credentials in a TripleDES-encrypted,
> non-persistent cookie, separate from session data.

This is the approach I am/will be heading for. Given a cookie with the login and password encrypted on the user side, an HTTPS connection, and what was said in previous emails about not storing credentials in cookies - any ideas about weak sides? What if, moreover, parts of the decryption keys are unique to the session and stored in the session on the server?

PS. Apologies for going slightly off-topic, as this is becoming more general than PostgreSQL.

Thank you,
Bohdan
[GENERAL] Vacuuming on heavily changed databases
Hello,

I would like to ask for opinions on vacuuming in general. Imagine you have a single table with 5 fields (one varchar). During a day this table gets:

- approx. 620 000 inserts
- 0 updates
- approx. 620 000 deletes

The table is vacuumed daily, but somehow after several months I got to a size of ~50GB. The result of VACUUM FULL VERBOSE ANALYZE is:

Nonremovable row versions range from 102 to 315 bytes long.
There were 218253801 unused item pointers.
Total free space (including removable row versions) is 4062705 bytes.
4850610 pages are or will become empty, including 0 at the end of the table.
5121624 pages containing 40625563500 free bytes are potential move destinations.
CPU 161.85s/35.51u sec elapsed 1191.17 sec.

This means 80% wasted space that could be reused. Right now I am doing VACUUM FULL, but this requires an exclusive lock. During that time the database is locked, so I am missing "inserts and deletes" ;-)

I would like to avoid this in the future, so I would like to prepare a strategy for next time, or avoid it altogether. Basically I have the following limitations:

1) Sometimes running VACUUM ANALYZE after the deletes does not help; the extra space is not reclaimed. I do not know why this is happening - maybe VACUUM cannot get the lock.
2) A manually invoked VACUUM FULL requires bringing the database offline for a long time.
3) There were suggestions (in the archives) to dump and then restore into a dropped and recreated database, but that still requires downtime.

What would be your strategy for maintaining a database like this? What tweaking of vacuuming can I do so that I do not get those "forgotten" records?

Thank you,
Bohdan
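A sketch of the direction most of the replies take: run a plain (non-FULL) VACUUM far more often than daily, so the space freed by the deletes is reused by the next inserts instead of accumulating. The table name below is an illustrative placeholder:

-- plain VACUUM does not take an exclusive lock, so it can run during production;
-- schedule it e.g. from cron every few minutes on the hot table
VACUUM ANALYZE my_hot_table;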
Re: [GENERAL] Vacuuming on heavily changed databases
On Mon, May 19, 2008 at 04:59:42PM +0200, Harald Armin Massa wrote:
> do not vacuum DAILY. set up autovacuum to run AT LEAST every minute.
> autovacuum will flag the "deleted" rows as to be reusable by next
> insert. Make sure to use 8.3., it's much more easy to setup
> autovacuum then before.

Hello Harald,

Thank you, I will look at that. My problem is that I have to use 8.0.x, but autovacuum should be supported there as well.

Regards,
Bohdan
Re: [GENERAL] Vacuuming on heavily changed databases
On Mon, May 19, 2008 at 08:38:09PM +0200, Scott Marlowe wrote:
> OK. Assuming that the 50G is mostly dead space, there are a few
> possibilities that could be biting you here, but the most likely one
> is that your Free Space Map settings aren't high enough to include all
> the rows that have been deleted since the last vacuum was run. If you
> can't take down the server to change those settings, then running
> vacuum more often will help.
>
> The autovacuum daemon is your friend. Even with the default non
> aggresive settings it comes with, it would have caught this long
> before now.

I can bring the DB down for a short time, but I am stuck with 8.0. I found that autovacuum is part of contrib there, so I will try that.

Thank you all for the opinions.

Regards,
Bohdan
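The Free Space Map settings mentioned in the quoted reply live in postgresql.conf on 8.0-8.3 and need a server restart to change; the values below are only illustrative and should be sized from the totals printed at the end of a database-wide VACUUM VERBOSE:

# illustrative 8.0-era postgresql.conf excerpt (requires restart)
max_fsm_pages = 2000000      # must cover all pages with reusable free space
max_fsm_relations = 1000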
[GENERAL] nested view with outer joins - best practices
Hello,

I have noticed one very strange thing which I would like to discuss with you: outer joins on nested views take much longer than inner ones.

Example:

CREATE VIEW ports_view AS
    SELECT ports.pid, nodes.nname
    FROM ports
    JOIN nodes ON nodes.nid = ports.pnode;

EXPLAIN ANALYZE
SELECT *
FROM services_subints
LEFT JOIN ports_view AS prts ON services_subints.port = prts.pid

http://explain-analyze.info/query_plans/2078-query-plan-811

But if I rewrite the view inline as:

SELECT *
FROM (services_subints
      LEFT JOIN ports AS prts ON services_subints.port = prts.pid)
INNER JOIN nodes AS prn ON prts.pnode = prn.nid

http://explain-analyze.info/query_plans/2079-query-plan-812

If I revert to the original nested view and use an inner join, I get a plan similar to the one above.

My questions are:

1) What are the best practices if I want to use nested views?
2) Will my plans get better with a newer version of PostgreSQL (currently it's 8.0.x)?

Thank you,
Bohdan
Re: [GENERAL] nested view with outer joins - best practices
On Mon, Jun 09, 2008 at 04:41:16PM +0200, Tom Lane wrote:
> 8.0 is incapable of reordering outer joins, which is likely the cause of
> your problem.

Thank you, I will try that.

Bohdan
Re: [GENERAL] ER diagram software
I have done some research recently and found the following candidates:

- DBDesigner4, which is deprecated and replaced by MySQL Workbench. Workbench is OSS, but there is no Linux version yet, and it also has clunky PostgreSQL support.
- Aqua Data Studio (www.aquafold.com). It's a Java app which I have been using for some time already. Originally it had a dual license, free for non-commercial use, but looking at the site they have changed it to more restrictive licensing. The ER diagrams are of acceptable quality (but far from perfect). Additionally, it's not cheap anymore :-(

Regards,
Bohdan

On Tue, Jul 22, 2008 at 12:36:39PM +0200, Brandon Metcalf wrote:
> I've been able to find a couple of packages, but wondering if there is
> a good system out there what will create an ER diagram of an existing
> PostgreSQL DB. Open source would be nice.
>
> Thanks.
>
> --
> Brandon
[GENERAL] Weird pg_ctl behaviour via ssh
Hello,

I am fiddling around with pgpool-II and online recovery. Recovery depends on remotely starting a cluster: I need to ssh into a box, start the cluster (with PITR recovery) and terminate that ssh connection.

If I use the following script:

ssh -T remote "export LD_LIBRARY_PATH=/opt/postgres-8.3.3/lib; nohup /opt/postgres-8.3.3/bin/pg_ctl -w -D /data/pg833-data start > /dev/null 2>&1; exit;"

the script terminates earlier than the DB is up:

/opt/postgres-8.3.3/bin/psql -h remote -p postgres
psql: FATAL: the database system is starting up

which is a problem for pgpool. But if I use the command:

ssh -T remote "export LD_LIBRARY_PATH=/opt/postgres-8.3.3/lib; nohup /opt/postgres-8.3.3/bin/pg_ctl -w -D /data/pg833-data start 2>&1; exit;"

the ssh never terminates, which is, again, a problem for pgpool. The output is shown below. How can I terminate the script at exactly the moment the DB is up?

Thank you,
Bohdan

...
.FATAL: the database system is starting up
.scp: /data/archive_log/0004.history: No such file or directory
could not start server
scp: /data/archive_log/0005.history: No such file or directory
scp: /data/archive_log/0006.history: No such file or directory
scp: /data/archive_log/0007.history: No such file or directory
scp: /data/archive_log/0008.history: No such file or directory
scp: /data/archive_log/0009.history: No such file or directory
scp: /data/archive_log/000A.history: No such file or directory
scp: /data/archive_log/000B.history: No such file or directory
scp: /data/archive_log/000C.history: No such file or directory
scp: /data/archive_log/000D.history: No such file or directory
scp: /data/archive_log/000E.history: No such file or directory
scp: /data/archive_log/000F.history: No such file or directory
scp: /data/archive_log/0010.history: No such file or directory
scp: /data/archive_log/0011.history: No such file or directory
LOG: selected new timeline ID: 17
scp: /data/archive_log/0001.history: No such file or directory
LOG: archive recovery complete
LOG: autovacuum launcher started
LOG: database system is ready to accept connections
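One way to keep the recovery script deterministic is to stop relying on pg_ctl's exit and instead poll until a real connection succeeds, i.e. until the server has finished recovery. A rough sketch, reusing the host and paths from the message (the user name and polling interval are placeholders):

# illustrative only: wait until a normal query works before letting pgpool reattach
until /opt/postgres-8.3.3/bin/psql -h remote -U postgres -c 'SELECT 1' >/dev/null 2>&1
do
    sleep 1
done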
Re: [GENERAL] Weird pg_ctl behaviour via ssh
On Thu, Jul 31, 2008 at 11:24:35AM +0200, Bohdan Linda wrote:
> /opt/postgres-8.3.3/bin/psql -h remote -p postgres
> psql: FATAL: the database system is starting up

I am attaching additional info. The /dev/null part is understandable, but what worries me is that when I query the status of the server via:

ssh -T remote "export LD_LIBRARY_PATH=/opt/postgres-8.3.3/lib; nohup /opt/postgres-8.3.3/bin/pg_ctl -w -D /data/pg833-data status 2>&1"

I get:

pg_ctl: server is running (PID: 14478)
/opt/postgres-8.3.3/bin/postgres -D /data/pg833-data

but psql is still returning:

psql: FATAL: the database system is starting up

Why is there such an inconsistency? How can it be avoided?

Thank you,
Bohdan
Re: [GENERAL] archive_timeout, checkpoint_timeout
Hello,

> If you just want to ship segments to a standby server on a timely basis,
> the setting to tune should be archive_timeout, no?

Just curious - how would the standby DB process the segments?

Regards,
Bohdan
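For context, a warm standby of that era consumes the shipped segments through the restore_command in its recovery.conf: the standby is started from a base backup and keeps asking that command for the next WAL file until recovery ends. A minimal sketch with a placeholder archive path:

# illustrative recovery.conf on the standby
restore_command = 'cp /mnt/archive/%f %p'

On 8.3, contrib/pg_standby can take the place of plain cp so the standby waits for segments that have not arrived yet.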
Re: [GENERAL] Getting cozy with weekly PITR
pg_start_backup forces a checkpoint, so the base backup taken after it is a complete copy of the DB. Unless you want to keep week 1's files archived, there is no need to keep them.

Regards,
Bohdan

On Mon, Sep 22, 2008 at 09:41:47AM +0200, Joey K. wrote:
> During week 2, after the base backup, can we remove week 1's base and WAL
> files?
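A compact sketch of the weekly cycle under discussion (the label and the copy step are illustrative): once the new base backup has completed, the previous week's base backup and the WAL segments older than it are only needed if that older restore point must remain available.

-- illustrative weekly base backup, run while the cluster stays online
SELECT pg_start_backup('weekly_base');
-- copy the data directory with tar/rsync here
SELECT pg_stop_backup();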
Re: [GENERAL] Postgresql replication
I have a slightly off-topic question: is this an issue only for PostgreSQL, or are there other database solutions which perform well when doing this kind of replication across the world?

Regards,
Bohdan

On Thu, Aug 25, 2005 at 09:01:49AM +0200, William Yu wrote:
> It provides pseudo relief if all your servers are in the same building.
> Having a front-end pgpool connector pointing to servers across the world
> is not workable -- performance ends up being completely decrepit due to
> the high latency.
>
> Which is the problem we face. Great, you've got multiple servers for
> failover. Too bad it doesn't do much good if your building gets hit by
> fire/earthquake/hurricane/etc.
[GENERAL] detection of VACUUM in progress
Hello,

Is there any way to detect a running VACUUM command by reading the pg_* tables? The idea is to detect when a table is not accessible due to maintenance. The approach of explicitly setting a flag in a status table is not very convenient, because I also want to cover ad-hoc launches of this command.

Regards,
Bohdan
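A sketch of the closest thing to this in the system views, assuming stats_command_string is enabled so that current_query is populated (the column names are the 8.0-era ones); as the follow-up below points out, the snapshot can lag slightly, so this is a hint rather than a guarantee:

-- list backends that currently appear to be running a VACUUM
SELECT procpid, usename, current_query
FROM pg_stat_activity
WHERE current_query ILIKE 'VACUUM%';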
Re: [GENERAL] detection of VACUUM in progress
On Tue, Aug 30, 2005 at 06:07:24PM +0200, Michael Fuhr wrote:
> tables, and a VACUUM might start or complete immediately after you
> issue the query but before you read the results). This method is
> therefore unreliable.

I intend to do the VACUUM FULL during quiet hours, so the chance of hitting exactly the moment when a VACUUM has started but is not yet reflected in the tables is quite small. And even if it happened, it would very likely affect only one user, who could get around it by hitting the "refresh" button.

> What problem are you trying to solve? If we knew what you're really
> trying to do then we might be able to make suggestions.

I have a database which gets around 240 000 new rows each day, and about the same number is deleted each day. The table has around 8M rows on average, and a simple query takes about 70s to complete (V210, 1x UltraSPARC-IIIi). As this time is quite high, I need to "defragment" the database on a daily basis.

These queries are visualized in a web application. My problem is how to make the web application aware that maintenance (VACUUM FULL) is in progress while the database itself is not down. I really would not like to do it via an extra status table, because it may happen that someone runs VACUUM FULL ad hoc, in good faith, and forgets to update the status table.
Re: [GENERAL] detection of VACUUM in progress
> From the postgresql manual
> http://www.postgresql.org/docs/8.0/interactive/maintenance.html :
> "The standard form of VACUUM is best used with the goal of maintaining
> a fairly level steady-state usage of disk space. If you need to return
> disk space to the operating system you can use VACUUM FULL -- but what's
> the point of releasing disk space that will only have to be allocated
> again soon? Moderately frequent standard VACUUM runs are a better
> approach than infrequent VACUUM FULL runs for maintaining
> heavily-updated tables."
>
> From this I conclude that an ordinary VACUUM is sufficent to your
> purpose cause you insert/delete almost the same amount of data daily.
>
> But then again I can be mistaken so if anyone can back me up here or
> throw the manual on me will be nice ;P

If I only vacuum the table, the dead space can be reused by new rows - that is fine. The problem is that a SELECT on such a table needs more pages to be read from the I/O (it will also read the space left by deleted rows), so the SELECT will last a bit longer.

Regards,
Bohdan
[GENERAL] REVOKE question
Hello,

I have encountered a (for me) weird thing. When dropping a user, the database does not forget his permissions; after he is recreated, he has the original permissions again. I use an approach of dropping and recreating all users when recreating the database environment, to avoid any unwanted or temporary changes to permissions.

Is there any way to revoke all permissions for a user, on any object type, in any schema in the database? I think this is essential for securing the access control of users. I tried to look in the docs, but found nothing about it.

Thank you,
Bohdan
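For later readers: on PostgreSQL 8.2 and above this is usually handled with the commands sketched below before dropping the role (role names are placeholders); on older releases each grant has to be revoked object by object.

-- illustrative, 8.2+ only: strip ownerships and granted privileges in the
-- current database, then drop the role
REASSIGN OWNED BY appuser TO postgres;
DROP OWNED BY appuser;
DROP USER appuser;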
[GENERAL] Access management for DB project.
Hi,

I have started thinking about some security access management. Basically, imagine this scenario with three kinds of users:

1) A writer does only inserts, into a black hole.
2) A reader does only reports on the inserted data and cannot modify or add anything.
3) A maintainer can run a task on the data, but cannot read or add anything directly. The task itself has to have read/write access to the tables.

The first two types are easily solvable, but with the third type I have a problem. I created the task in PL/pgSQL and granted a user permission to execute it, but revoked all his rights on the tables. Logically, the task failed. The task sits in a different schema, but operates on tables in another schema.

How would you solve this?

Regards,
Bohdan
[GENERAL] Partial commit within the transaction
Hello,

I have read that the 7.x versions had an "all or nothing" model for transactions. So I upgraded to version 8 and would like to do the following: PL/pgSQL code does time-intensive data manipulation plus database vacuuming. This data manipulation is split logically into several steps, and after each step I would like to write a message to a status table saying what the procedure is currently doing.

Currently I pass the information to the table via an INSERT, but this is also the limitation: I would like to allow another user to see the progress of the currently running PL/pgSQL procedure, but no insert is committed until the procedure ends.

How can this be solved?

Regards,
Bohdan
[EMAIL PROTECTED]: Re: [GENERAL] Access management for DB project.]
Thanks guys, that was exactly what I was looking for.

B.

--- Begin Message ---

On 8/9/05 11:08 am, "Bohdan Linda" <[EMAIL PROTECTED]> wrote:

> Hi,
>
> I started thinking of some security access management. Basically imagine
> this scenario according users:
>
> 1) Writer does only inserts to black hole.
>
> 2) Reader does only reports on inserted data, cannot modify or add
> anything
>
> 3) Maintainer can run a task on the data, but cannot read or add anything.
> The task has to have read/write access to the tables.
>
> The first 2 types are easily solvable, but with the third type I have
> problem. I have created task in plpgsql, I granted permissions to an user
> to execute the task, but revoked on him all rights to tables. Logically
> task failed.

You could create the function with SECURITY DEFINER; that way the function will have the permissions of the user that created it, as opposed to the user that runs it:

CREATE my_func(int) RETURNS int SECURITY DEFINER AS '.

--- End Message ---
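A slightly fuller sketch of the SECURITY DEFINER approach from the forwarded reply; the schema, table and role names are illustrative, not taken from the thread:

-- the function runs with its owner's rights, so the maintainer role needs
-- nothing beyond EXECUTE on it (plus USAGE on the schema that holds it)
CREATE OR REPLACE FUNCTION maintenance.run_task() RETURNS void AS $$
BEGIN
    UPDATE data.measurements SET processed = true WHERE processed = false;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

REVOKE ALL ON FUNCTION maintenance.run_task() FROM PUBLIC;
GRANT USAGE ON SCHEMA maintenance TO maintainer;
GRANT EXECUTE ON FUNCTION maintenance.run_task() TO maintainer;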
Re: [GENERAL] Partial commit within the transaction
On Thu, Sep 08, 2005 at 02:53:47PM +0200, Michael Fuhr wrote:
> One way would be to use contrib/dblink to open another connection
> to the database so the status messages could be inserted in a
> separate transaction.

This could do the trick for the logging; even writing a package that does all the work should not be hard. But what if you want to flush something already processed to the DB? Consider that you are doing massive updates/deletes, again in logical blocks. You, as a programmer, may decide: "OK, so far I am done, and even if I crash I want to preserve these changes." It has happened to me that the DB aborted processing such huge updates with an out-of-memory message.

Would calling a stored procedure from a stored procedure solve this? Or, if the parent procedure is not committed, will the called procedure not commit either?
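For the status-message half of the question, a minimal sketch of the contrib/dblink idea from the quoted reply, as it would be called inside the long-running PL/pgSQL procedure (the connection string, table and message text are placeholders):

-- the INSERT travels over a second connection, so it commits at once and
-- stays visible even if the outer transaction later aborts
PERFORM dblink_connect('logcon', 'dbname=mydb');
PERFORM dblink_exec('logcon',
    'INSERT INTO job_status (step, logged_at) VALUES (''step 1 done'', now())');
PERFORM dblink_disconnect('logcon');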
Re: [GENERAL] Partial commit within the transaction
On Thu, Sep 08, 2005 at 04:35:51PM +0200, Michael Fuhr wrote:
> On Thu, Sep 08, 2005 at 03:39:50PM +0200, Bohdan Linda wrote:
> commit it now." You have to do some extra bookkeeping and you can't
> commit several prepared transactions atomically (as far as I know),
> but that's one way you could make changes durable without actually
> committing them until later.

In the case of such durable transactions, would they be released from memory? That is, could the transaction be more respectful of the hardware when processing too much data? And what about nested transactions - are they planned?

This point is connected to my previous question about secured access to stored procedures. If I move part of the database logic to the client, I will have to introduce parameters to the procedures, which may be potentially abusable.

If I use dblink from server to server (both are the same), is there some performance penalty? How big?

Regards,
Bohdan
[GENERAL] pgclient hostbased authentication
Hello,

May I ask which IP is checked against a pg_hba.conf IP entry in a NAT environment? Could it be that the psql client packs the client's IP address into the authentication data?

Regards,
Bohdan
Re: [GENERAL] pgclient hostbased authentication
> No. Why? Describe your problem.

I got a response like the one below when connecting to a server in a completely different network than 172.x.x.x:

org.postgresql.util.PSQLException: Connection rejected: FATAL: no pg_hba.conf entry for host "172.x.x.x", user "XxXxXx", database "yYyYyY", SSL off

Regards,
Bohdan
Re: [GENERAL] Securing Postgres
On Thu, Oct 06, 2005 at 11:57:32AM +0200, Martijn van Oosterhout wrote:
> This is the bit that's been bugging me this whole thread. Who owns the
> data? I've had to help people out with programs where they could type
> data in but couldn't get the reports they wanted out. Furtunatly,
> Access's access control is, uh, simplistic and I created the reports
> they needed.
>
> If someone tried to sell me a system where I couldn't even get in table
> format the raw info I had entered, I'd tell them to go away. Like you
> say, the data is way more important that whatever program you're using.

It is not that easy. For example, most (if not all) of the world's manufacturers of SDH or DWDM technologies know that the data is very important. They know that at a certain time customers will want access to *their* raw data, so they all have (mostly limited) interfaces to their EMSes. But these have to be purchased for incredible amounts of money. In such cases the telcos cannot say "go away".

Regards,
Bohdan
[GENERAL] SHA1 authentication
Hello all,

I would like to use password authentication for PostgreSQL users for remote backup purposes. I don't like the idea of storing a cleartext password on the system. From the documentation I have learnt that passwords can be stored hashed with the md5 and crypt methods. But we know that MD5 is a rather weak hash, so I am asking: is there any feasible way to use SHA-1 instead of MD5?

Cheers,
Bohdan
Re: [GENERAL] SHA1 authentication
Thank you for the explanation.

Cheers,
Bohdan
[GENERAL] Little Offtopic: Database frontends
Hello,

I am sorry for this little off-topic post, but recently I was looking for a mature DB frontend. I am a licensed user of Aquafold DataStudio, and before I start spending more money on further licenses, I would like to ask whether there are similar frontends out there meeting three restrictions: open source, multiple DB backends, and a Linux port. There were a few references here in the past, but most of them were of the commercial license type.

The ones I know:

- ADS - non-free for commercial use.
- TOra - seems to me a dying GPL project; it looks very unstable right now and full of hacks.
- SQuirreL SQL - missing stored procedure support, but seems interesting.

What are your favourites?

Thank you,
Bohdan