Re: [GENERAL] why use SCHEMA? any real-world examples?
Hi,

Citing Miles Keaton <[EMAIL PROTECTED]>:
> I just noticed PostgreSQL's schemas for my first time.
> (http://www.postgresql.org/docs/current/static/ddl-schemas.html)
>
> I Googled around, but couldn't find any articles describing WHY or
> WHEN to use schemas in database design.

When your data are too similar to be split into two databases, but at the
same time too different to fit into common tables, schemas come in handy to
keep your database tidy.

> Since the manual says HOW, could anyone here who has used schemas take
> a minute to describe to a newbie like me why you did?

We ran agricultural experiments here at our institute: field experiments
outside and pot experiments in the greenhouse. The data collected were
mostly the same for both sets of experiments (plant nutrient content,
growth parameters like shoot length...), but the number of samples taken
per measured parameter differed considerably (e.g. for the pot experiments,
nutrient data was available from each and every plant, whereas in the field
we took samples only from a subset of plants). So the two sets of
experiments did not fit into one clean, normalized relational model. On the
other hand, it was quite desirable to have all the data in one database, to
be able to run queries across both datasets from a single connection.
Schemas provided a nice way to keep this all clean and simple.

> What benefits did they offer you?

Clean, logical organization of one project's data in one database.

> Any drawbacks?

If you have tables with the same name in several schemas, only the ones in
the schema that comes first in the search path are shown by \dt in psql.
Not a major problem, but keep it in mind when designing the database.

Regards,
Daniel

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
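[A minimal sketch of the layout described above; the schema, table and
column names here are hypothetical placeholders, not taken from the actual
experiment databases:]

```sql
-- Two schemas in one database, one per experiment type.
CREATE SCHEMA field;
CREATE SCHEMA pot;

-- Tables may share a name across schemas while differing in structure.
CREATE TABLE field.nutrient_sample (
    plot_id   integer,
    plant_ids integer[],   -- only a subset of plants was sampled
    n_content numeric
);
CREATE TABLE pot.nutrient_sample (
    pot_id    integer,
    plant_id  integer,     -- every single plant was sampled
    n_content numeric
);

-- One connection can still query across both datasets:
SELECT 'field' AS source, avg(n_content) FROM field.nutrient_sample
UNION ALL
SELECT 'pot', avg(n_content) FROM pot.nutrient_sample;
```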
Re: [GENERAL] rules
Hi,

Citing Jamie Deppeler <[EMAIL PROTECTED]>:
> sample sql query
>
> INSERT INTO schema.table2
> ("field1", "field2", "field3", "field4", "field5",
>  "field6", "field7", "field8", "field9",)
> VALUES
> (SELECT
>    table1."name", table1.notes, table1.hrs, table1.days,
>    table3.value, table31.value, table4.ratename, table4.maxhrs,
>    table4.appliesafter, table4.rate, table5.value, table5."name"
>  FROM (inner joins)
>  WHERE primary = number)
>
> which i would like to make into a rule if possible

First of all, you should fix your query. Problems I see:

- You're inserting more values than you specified fields (and the field
  list ends with a stray comma).
- On PostgreSQL you don't have to use the
  insert into foo (...) values (select ...)
  construct; you can instead say:
  insert into foo (...) select ...

For the rule stuff: do you want it to happen on update, on delete or on
insert? You should be clear about that.

You might want to read Bruce Momjian's book "PostgreSQL - Introduction and
Concepts" (available in your book store or on the web - I leave it up to
you to google it or to order it). It's a really fine book for beginners and
helped me a lot to learn how to write rules and do other stuff with
PostgreSQL.

Regards,
Daniel
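[A hedged sketch of the corrected INSERT ... SELECT shape and of a rule
built on it; the table and column names are the poster's placeholders, the
three-column field list and the id filter are invented for illustration:]

```sql
-- Corrected shape: INSERT ... SELECT, with matching column counts,
-- no VALUES keyword and no trailing comma.
INSERT INTO schema.table2 ("field1", "field2", "field3")
SELECT t1."name", t1.notes, t1.hrs
FROM table1 t1
WHERE t1.id = 42;

-- One possible rule (here: on insert, one of the three events he must
-- choose between) that performs the copy automatically.
CREATE RULE copy_on_insert AS
ON INSERT TO table1
DO ALSO
    INSERT INTO schema.table2 ("field1", "field2", "field3")
    VALUES (NEW."name", NEW.notes, NEW.hrs);
```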
Re: [GENERAL] change natural column order
Hi, Joolz,

You already got quite a few answers saying that the frontend is probably
not properly designed if it relies on a certain column ordering. I agree
completely with that. However, your question got me curious, and I've dug
around a bit in the system tables. You might be interested in my findings,
see below.

Citing Joolz <[EMAIL PROTECTED]>:
> I agree. Only I think this wouldn't require new functionality, I
> have a gut feeling that this is possible as it is. Now only find out
> how :)
>
> I'll have a look at the system tables (that's where the answer must
> be) but maybe someone who has done this can save me the time...

If you do:

set search_path=information_schema;
\d columns

and look at the columns defined, you'll find one called ordinal_position,
which incidentally corresponds to the position of the columns on output. If
you dig a bit further and look at the definition of the columns view,
you'll find that this column comes from the column attnum in pg_attribute.

As database superuser, you can actually change the values of attnum.
However, doing so results in:

ERROR: invalid memory alloc request size 4294967295

on queries against the tables for which you changed attnum. So:

1.) Obviously PostgreSQL does not like it at all (at least on my platform,
    which is OpenBSD 3.6).
2.) I wouldn't risk messing with a system table, which I can only write to
    as superuser, without completely understanding what's happening behind
    the scenes (at least not for production use).
3.) Changing that behaviour is probably a lot more work than changing the
    frontend.

Regards,
Daniel
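[For reference, the lookup described above can be sketched as the following
queries; 'my_table' is a hypothetical table name:]

```sql
-- Output order as reported by the information schema:
SELECT column_name, ordinal_position
FROM information_schema.columns
WHERE table_name = 'my_table'
ORDER BY ordinal_position;

-- The underlying system catalog column the columns view is built from
-- (attnum > 0 skips system columns like oid and ctid):
SELECT attname, attnum
FROM pg_attribute
WHERE attrelid = 'my_table'::regclass AND attnum > 0
ORDER BY attnum;
```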
Re: [GENERAL] When to encrypt
Hi,

Citing Derek Fountain <[EMAIL PROTECTED]>:
> Indeed, but I'm still interested in the general answer.

There is no general answer. It depends on how deep in trouble you are if
the data is compromised.

> The server I have been looking at was hopelessly insecure and SQL
> injection is only one of its problems. There were several other
> ways in! Assume, for example, an attacker
> can write his own script directly into the website document tree. In
> this case prepared queries don't help protect what's in the database.
> The attacker can use them himself if he likes!

A chain of security measures is only as strong as its weakest link.
Whether cryptography will help you in this case depends very much on the
level of system access an attacker can gain and on the encryption scheme
you use.

If an attacker can gain root, it is quite probable that your cryptographic
keys will be compromised (because he will very probably be able to read
physical memory), so cryptography will not help you at all.

If an attacker cannot gain root, it depends on whether you use encryption
at the file system level or at record level in the database. File system
level encryption does not help much against attacks from the network on a
running system, because the file system will very probably be mounted, and
thus readable. Record level encryption might help, depending on how it is
implemented (when you implement it, ask yourself: are the keys/passwords
floating around between database server, web server and client app
unreadable by eavesdroppers at all stages of processing?).

> Given this type of mess, having logins, passwords, credit card info and the
> like encrypted in the DB will add another layer of protection. The question
> is, do people normally add this layer, just in case.

In general, I would, if there were medical/payment/other personal data in
the database. The country I live in has quite strict regulations concerning
the protection of people's private data... (which is a good thing IMHO.
Anyway, just to make the point that this is more than a purely technical
matter ;-) Legal matters and economics play a role here, too.)

But the discussion above and the conclusion below should show you that
there's a bunch of problems elsewhere which cannot be solved just by using
cryptography.

> or do they assume that
> all the previous layers will do the job?

The key thing is to find the weakest layer and strengthen it. The strongest
security measure does no good if an attacker can easily bypass it by
gaining higher-level system access through breaking another (weaker)
security layer. From your description of the problem, I would conclude that
your client's app needs fixing elsewhere first.

Regards,
Daniel
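[Record-level encryption as discussed above can be sketched with the
contrib pgcrypto module, assuming it is installed; the table, the key
literal and the cipher choice are illustrative, and keeping the key in the
SQL text like this is exactly the kind of key-handling weakness the post
warns about:]

```sql
-- contrib/pgcrypto provides encrypt()/decrypt() for symmetric ciphers.
CREATE TABLE customer (
    id      serial PRIMARY KEY,
    name    text,
    card_no bytea              -- stores ciphertext, not plaintext
);

INSERT INTO customer (name, card_no)
VALUES ('Alice',
        encrypt('4111111111111111'::bytea, 'secret-key'::bytea, 'aes'));

-- Anyone reading the table (or a dump of it) sees only ciphertext;
-- decryption requires presenting the key again:
SELECT name, decrypt(card_no, 'secret-key'::bytea, 'aes') AS card_no
FROM customer;
```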
Re: [GENERAL] When to encrypt
Hi,

Citing Greg Stark <[EMAIL PROTECTED]>:
> Martijn van Oosterhout <[EMAIL PROTECTED]> writes:
> > Actually, hard disk encryption is useful for one thing: so if somebody
> > kills the power and takes the hard disk/computer, the data is safe.
> > While it's running it's vulnerable though...
>
> Where do you plan to keep the key?

Well, where do you plan to keep the key for your encrypted backup tapes, as
you suggested in another post in this thread? ;-) That's pretty much the
same problem.

Anyway, there are a bunch of solutions to this problem. All the good ones
require manual intervention (key entry, not necessarily by hand) when the
encrypted partition is brought from the unmounted into the mounted state,
and they rely on a certain person or a group of people being trusted.
Problem one (manual intervention) will not be a problem at all if the data
is really valuable. Problem two (trust) is more difficult: the more you
distribute a single key across different people and media, the less trust
you need in every single person, but the more difficult it becomes to
conveniently access the data.

Regards,
Daniel
Re: [GENERAL] Connect to Postgres 7.4 via ODBC
Hi,

Citing Nadia Kunkov <[EMAIL PROTECTED]>:
> Locally I have no problems connecting to the database and running psql.
> I need to set up an ODBC connection from my Windows box.
> Here is what I've done:
>
> 1. In postgresql.conf
>    tcpip_socket = true
>    (By the way, does this eliminate the need to start postgres with the
>    -i option, see item 3 below?)

Yes.

> 2. In pg_hba.conf
>
>    local all all trust
>    host  all all 10.1.9.0  255.255.255.0   trust
>    host  all all 127.0.0.1 255.255.255.255 trust

Looks good to me. Mine looks like this, and it works:

local all all trust
host  all all 127.0.0.1  255.255.255.255 trust
host  all all 10.195.0.0 255.255.0.0     md5

> 3. In /etc/rc.d/init.d/postgresql
>
>    su -l postgres -s /bin/sh -c "/usr/bin/pg_ctl -D $PGDATA -o '-i' -p
>    /usr/bin/postmaster start > /dev/null 2>&1"
>
>    Note, I do have the -i option.

Turn on logging (e.g. by using the -l option to pg_ctl) and change the
line in postgresql.conf which says

#log_connections = false

into

log_connections = true

Try to connect and look whether something shows up in your logfiles.

> 4. I ran netstat -na | grep 5432 and it tells me that postgres is
>    listening on that port.

Something filtering in between? Logging connections as described above
might give you a clue what's happening.

Regards,
Daniel
Re: [GENERAL] handing created and updated fields
Hi,

Citing "Jim C. Nasby" <[EMAIL PROTECTED]>:
> ON INSERT: force created and updated to be current_timestamp
> ON UPDATE: deny updates to created. force updated to be set to
> current_timestamp
[snip]
> Does anyone have an example of the best way to handle this scenario?

Something along the lines of the following should work (but test it first
anyway: though I have copied parts of this from the definitions in one of
my databases here, I have made modifications to fit your specific task, so
typos/errors might have sneaked in):

create function update_trigger() returns trigger as '
begin
    new.created := old.created;
    new.updated := CURRENT_TIMESTAMP;
    return new;
end;
' language 'plpgsql';

create trigger update_trigger
    BEFORE UPDATE ON your_table_name
    FOR EACH ROW EXECUTE PROCEDURE update_trigger();

create function insert_trigger() returns trigger as '
begin
    new.created := CURRENT_TIMESTAMP;
    new.updated := CURRENT_TIMESTAMP;
    return new;
end;
' language 'plpgsql';

create trigger insert_trigger
    BEFORE INSERT ON your_table_name
    FOR EACH ROW EXECUTE PROCEDURE insert_trigger();

HTH,
Regards,
Daniel
Re: [GENERAL] Statically linking against libpq
Hi,

Looks to me like you have to add -lkrb5 and -lssl to your link flags as
well, and perhaps some others, too. You can use nm in conjunction with grep
to find which libraries define the missing symbols; on OpenBSD like so:

cd /usr/lib && nm -o * | grep krb5_free_context

but that might differ on Fedora Core 3.

Regards,
Daniel

Citing Matthew Metnetsky <[EMAIL PROTECTED]>:
> On Mon, 2005-01-17 at 14:10 -0500, Matthew Metnetsky wrote:
> > The library is currently linked against libpq like
> > so: `gcc -shared -lpq`. It compiles and runs great as long as people
> > have libpq installed. I'm trying to find a way to remove this
> > dependency, so I googled around and found that `-shared` needed to be
> > `-static` and to add `-lcrypt` also. However, the linker is now failing
> > and tossing me lots of "undefined reference" messages to things like
> > "krb5_free_context" (I've attached the full log).
> >
> > I'd appreciate any suggestions on getting a statically linked library
> > against libpq. I'm currently compiling on a Fedora Core 3 machine with
> > gcc-2.95.3 against the postgresql-libs-7.4.6-1.FC3.2 package.
[snipped]
> /usr/local/gcc-2.95.3/bin/gcc -static -ggdb3 -Wall -Wno-unknown-pragmas -DOPT_TYPE=\"debugging\" -DCVAR_DEBUG=1 debug.nix/api_info.o debug.nix/commands_meta.o debug.nix/dllapi.o debug.nix/metamod.o debug.nix/engine_api.o debug.nix/h_export.o debug.nix/linkent.o debug.nix/linkgame.o debug.nix/log_meta.o debug.nix/cmn.o debug.nix/cmn_player.o debug.nix/cmn_usermsg.o debug.nix/cmn_events.o debug.nix/cmn_team.o debug.nix/osdep.o debug.nix/sdk_util.o debug.nix/studioapi.o debug.nix/support_meta.o debug.nix/avl.o -o debug.nix/cmn_i386.so -ldl -lm -lpq -lcrypt
> debug.nix/metamod.o(.gnu.linkonce.t.DLOPEN(char const *)+0x21): In function `DLOPEN(char const *)':
> /home/mimetnet/Projects/CMN/hlds-plugin/src/osdep.h:133: warning: Using 'dlopen' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
> /usr/lib/libpq.a(thread.o)(.text+0x6b): In function `pqGetpwuid':
> /usr/src/build/498045-i386/BUILD/postgresql-7.4.6/src/interfaces/libpq/thread.c:113: warning: Using 'getpwuid_r' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
> /usr/lib/libpq.a(ip.o)(.text+0x9e): In function `getaddrinfo_all':
> /usr/src/build/498045-i386/BUILD/postgresql-7.4.6/src/interfaces/libpq/ip.c:81: warning: Using 'getaddrinfo' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
> /usr/lib/crt1.o(.text+0x18): In function `_start':
> : undefined reference to `main'
> /usr/lib/libpq.a(fe-misc.o)(.text+0x8ae): In function `pqSocketCheck':
> /usr/src/build/498045-i386/BUILD/postgresql-7.4.6/src/interfaces/libpq/fe-misc.c:973: undefined reference to `SSL_pending'
> /usr/lib/libpq.a(fe-secure.o)(.text+0xd9): In function `load_dh_file':
> /usr/src/build/498045-i386/BUILD/postgresql-7.4.6/src/interfaces/libpq/fe-secure.c:575: undefined reference to `PEM_read_DHparams'
> /usr/lib/libpq.a(fe-secure.o)(.text+0xef):/usr/src/build/498045-i386/BUILD/postgresql-7.4.6/src/interfaces/libpq/fe-secure.c:580: undefined reference to `DH_size'
> /usr/lib/libpq.a(fe-secure.o)(.text+0x11d):/usr/src/build/498045-i386/BUILD/postgresql-7.4.6/src/interfaces/libpq/fe-secure.c:586: undefined reference to `DH_check'
> /usr/lib/libpq.a(fe-secure.o)(.text+0x15e): In function `load_dh_buffer':
> /usr/src/build/498045-i386/BUILD/postgresql-7.4.6/src/interfaces/libpq/fe-secure.c:611: undefined reference to `BIO_new_mem_buf'
> /usr/lib/libpq.a(fe-secure.o)(.text+0x186):/usr/src/build/498045-i386/BUILD/postgresql-7.4.6/src/interfaces/libpq/fe-secure.c:614: undefined reference to `PEM_read_bio_DHparams'
> /usr/lib/libpq.a(fe-secure.o)(.text+0x190):/usr/src/build/498045-i386/BUILD/postgresql-7.4.6/src/interfaces/libpq/fe-secure.c:615: undefined reference to `BIO_free'
> /usr/lib/libpq.a(fe-secure.o)(.text+0x1e7): In function `tmp_dh_cb':
> /usr/src/build/498045-i386/BUILD/postgresql-7.4.6/src/interfaces/libpq/fe-secure.c:684: undefined reference to `DH_size'
> /usr/lib/libpq.a(fe-secure.o)(.text+0x20e):/usr/src/build/498045-i386/BUILD/postgresql-7.4.6/src/interfaces/libpq/fe-secure.c:685: undefined reference to `DH_generate_parameters'
> /usr/lib/libpq.a(fe-secure.o)(.text+0x362): In function `pqsecure_destroy':
> /usr/src/build/498045-i386/BUILD/postgresql-7.4.6/src/interfaces/libpq/fe-secure.c:883: undefined reference to `SSL_CTX_free'
> /usr/lib/libpq.a(fe-secure.o)(.text+0x3aa): In function `close_SSL':
> /usr/src/build/498045-i386/BUILD/postgresql-7.4.6/src/interfaces/libpq/fe-secure.c:1003: undefined reference to `SSL_shutdown'
> /usr/lib/libpq.a(fe-secure.o)(.text+0x3b8):/usr/src/build/498045-i386/BUILD/postgresql-7.4.6/src/interfaces/libpq/fe-secure.c:1004: undef
[GENERAL] Howto cite postgresql in scientific publications?
Hi,

Is there an official way to cite PostgreSQL in scientific publications?
This might sound somewhat strange to some of you, but e.g. the R project
(http://www.r-project.org, an open source environment for statistical
computing, as a side note) has a FAQ entry like this:

2.8 Citing R

To cite R in publications, use

@Manual{,
  title        = {R: A language and environment for statistical computing},
  author       = {{R Development Core Team}},
  organization = {R Foundation for Statistical Computing},
  address      = {Vienna, Austria},
  year         = 2004,
  note         = {3-900051-07-0},
  url          = {http://www.R-project.org}
}

Citation strings (or BibTeX entries) for R and R packages can also be
obtained by citation().

Is there something similarly "official" for PostgreSQL?

Regards,
Daniel
Re: [GENERAL] /usr/sbin/useradd is needed by postgresql-server-8.0.1-PGDG
Hi,

Citing Rey Arqueza <[EMAIL PROTECTED]>:
> After running: rpm -i ./postgresql-server-8.0.1-1PGDG.i686.rpm, I get
> the following error:
>
> error: Failed dependencies:
> /usr/sbin/useradd is needed by postgresql-server-8.0.1-PGDG
>
> I would guess that /usr/sbin needs to be in my path? If I guessed
> correctly, how do I do that?

If you're using a Bourne shell/Korn shell derivative (bash, ksh...):

export PATH=$PATH:/usr/sbin

If you're using a C shell derivative (csh, tcsh...):

setenv PATH $PATH':/usr/sbin'

> If this is not the case, then what could possibly cause
> this error message?

Does /usr/sbin/useradd exist? If not, you could be missing the rpm which
contains it.

Regards,
Daniel
Re: [GENERAL] Pass parameters to SQL script
Hi,

Citing Fuchs Clemens <[EMAIL PROTECTED]>:
> Is it possible to pass parameters to a SQL script, which I launch via
> the psql shell?

Yes.

> In Oracle it works like that:
>
> sqlplus myscript.sql myschema
>
> and within the script I can use the parameter like this:
>
> CONCAT .
> CREATE TABLE &1..test (name VARCHAR2(100));
>
> Is there a counterpart for PostgreSQL?

Call psql like this to set a variable named your_variable_name to
my_table:

psql -v your_variable_name=my_table

To expand the variable your_variable_name to its value (my_table in this
case) in the SQL script, precede its name with a colon, like so:

select * from :your_variable_name;

which will expand to:

select * from my_table;

The psql manpage has more info on all this.

Regards,
Daniel
Re: libpq: passwords WAS: [GENERAL] scripting & psql issues
Hi,

Citing Tom Lane <[EMAIL PROTECTED]>:
> Daniel Martini <[EMAIL PROTECTED]> writes:
> > Now how would this work, if it were possible to send hashed passwords
> > from libpq:
> > user sends username/password, this gets hashed by the cgi, then the
> > hashed value is sent by libpq. Session id is generated and stored
> > together with the hashed password in the mapping. Now attacker gets
> > hold of the mapping. Assuming he only has access as the user the cgi
> > is running as, he would not have gained anything (except having
> > compromised the current sessions, which is less bad than having all
> > passwords in cleartext), as he only has the hashed passwords (a brute
> > force attack on the hashed values would be possible, but that is at
> > least additional effort for the attacker). If he had root, he could
> > install a backdoor allowing him to use the hashed passwords, but a
> > compromise like this is much more easily detected than a compromise
> > based on spied passwords.
>
> What backdoor? AFAICS you are proposing that we add a *front* door for
> use of hashed passwords. Maybe the attacker won't know what the
> original cleartext was, but that adds zero security as far as exploits
> against the database go. If the webserver can log in with it, so can he.

No, he can't: only if he is able to install a program on the webserver to
actually log in with a hashed password. If he wants to log in over the cgi,
this won't work, because the hashed value he gained by reading the mapping
will get hashed again, which produces a wrong value. Direct logins to the
database from his machine won't work either, because the database only
allows connections from the webserver.

Regards,
Daniel
Re: libpq: passwords WAS: [GENERAL] scripting & psql issues
Hi Tom,

Thanks a lot, that was the kind of clarification I needed.

On Fri, Aug 20, 2004 at 10:32:59AM -0400, Tom Lane wrote:
> Daniel Martini <[EMAIL PROTECTED]> writes:
> > No he can't:
> > Only if he is able to install a program on the webserver to
> > actually login with a hashed password. If he wants to log in over the
> > cgi, this won't work, because the hashed value he gained by reading the
> > mapping will get hashed again and this will produce a wrong value.
>
> True, but if he can break into the webserver's memory, most likely he can
> do that too.

With a plain cgi, I probably can't store the data in memory, as the cgi
process dies as soon as the request is handled (or am I wrong here?). It
would have to be file-based, and a file might be read without being root
(as the cgi has to read it as well and will not be running as root).
Anyway, the solution you proposed below is much better than hashed
passwords in libpq:

> You could
> for instance keep each user/password pair encrypted in the webserver's
> memory, using a separate key for each session that is not stored in that
> memory (perhaps it is in the cookie you use to identify the session).
> Or for that matter, never keep the user/password pair in webserver
> memory at all, but instead in the cookie (again encrypted, but this time
> the webserver holds the key).

That looks to me like the most secure way to keep the database server from
being taken quickly as well if the webserver is compromised, as passwords
are nowhere stored in cleartext and can be encrypted quite strongly and
flexibly (e.g. with different keys for every session, etc.).

> The fact that the password is hashed in a particular way is an
> implementation detail that's subject to change, so I don't wish to
> expose it as part of the API without darn good reason. I don't see
> a darn good reason here...

Agreed. Keep libpq simple. I learned that there are better workarounds for
my original problem than adding this to the libpq API.

Regards,
Daniel
Re: [GENERAL] pg_dump in stand alone backend
Hi,

Citing Ulrich Wisser <[EMAIL PROTECTED]>:
> I would like to stop the postmaster every night and run
>
> vacuum
> pg_dump
> reindex
>
> in the stand-alone backend.
>
> Vacuum and reindex seem to be quite easy, as I can set up a small script
> with both commands. But what about pg_dump? That seems "somewhat" more
> complex.

What exactly is your problem with putting pg_dump in a (shell) script as
well? In the simplest case you could use something like:

#!/bin/sh
psql --command vacuum your_database
cd /somewhere/with/write/access
pg_dump your_database > dump_`date +%Y%m%d%H%M%S`.bak
psql --command 'reindex whatever_you_want_to_reindex' your_database

which will vacuum, dump to a file named dump_timewhendumpoccurred.bak and
reindex whatever_you_want_to_reindex. All kinds of stuff could be added
(like mailing command output to you, uploading the backup to another
machine for storage, etc.), but the above script does basically what you
were asking for.

Regards,
Daniel
Re: [GENERAL] postgres "on in the internet"
Hi,

Citing Paul Tillotson <[EMAIL PROTECTED]>:
> At my company we are looking at deploying clients for our client/server
> app outside our firewall, which will then require our postgres box to be
> internet-accessible.
>
> Does anyone out there have experience with this or recommended best
> practices?
> We have been looking at either
> (a) tunnelling everything over ssh, or

This, implementing a VPN, or using the builtin SSL support in PostgreSQL is
the way to go, because *everything* that goes through the channel will be
encrypted. With SSL you have the additional benefit of being able to
guarantee the identity of the server. The information in the source code
distribution in src/backend/libpq/README.SSL might prove useful for making
your decision.

> (b) just making sure that users have "strong" passwords and
> requiring "md5" authentication in pg_hba.conf.

Too weak, IMHO. md5 is there to protect the password data stored in the
database, not the password transmission (someone already mentioned TCP
replay attacks).

Regards,
Daniel
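[A hedged sketch of what the builtin SSL variant looks like in the config
files; the address range is a placeholder, and the server must have been
built with SSL support and supplied with a key/certificate pair:]

```
# postgresql.conf: turn SSL on
# (expects server.key and server.crt in the data directory)
ssl = true

# pg_hba.conf: accept only SSL-wrapped connections from the outside,
# with md5-hashed password exchange on top of the encrypted channel.
hostssl  all  all  0.0.0.0  0.0.0.0  md5
```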