[GENERAL] Postgres & Commercial
> Hi all,
>
> I don't think we're going to be at war with mSQL or MySQL. They're our
> pals in this crusade. The way of free and open source is really great.
> No one here works for any benefit, but we still need to succeed. Here,
> success is measured as the number of users running Postgres, which some
> think is not enough. The problem is that our direction is not quite
> clear. What are we going to be? A lot of gorgeous features have been
> added to Postgres while we still need to comply with some standards. I
> don't think those magnificent features are bad, but they're not
> essential. When a developer wants to pick a DBMS, what does he have to
> consider? A lot of unseen features are interesting, but what for, if
> you aren't going to use them? IMHO, we should build up a stronger
> foundation first: make it comply with all of SQL92, and improve the
> other cool features later. For me, MySQL has found its place - easy web
> connections where a true DBMS is not needed. Given our background here,
> the target should be "the alternative for those who want a TRUE DBMS".
>
> Cho
Re: [GENERAL] How to know when to vacuum
OK, so there's been quite a bit of traffic about vacuuming databases, as well as more than one suggestion on how to do it. But there really hasn't been an answer to the question of how to know when to vacuum.

I now vacuum the databases every night, but this seems somewhat inefficient... I know that some of the more active databases could use some extra vacuuming, but which ones? How often? If there is a way to ask Postgres whether a specific database could use a vacuuming, I will be happy to write something to automate it, but I see no way to do so.

Anyone have any suggestions?

Thanks, Chris
Re: [GENERAL] Postgres vs commercial products
Hi,

let me add one thing here ... (if this comes through a second time, I missed the first one ...)

Wouldn't it be possible to get some backing from one of the commercial Linux distribution makers? Like Redhat supports Gnome, or S.u.S.e supports XFree? What about GNU? Anyone listening?

This would be a major leap for PostgreSQL if someone financed two months of Vadim, for example ...

Ciao
Ulrich

Ciao
Das Boersenspielteam.
---
http://www.boersenspiel.de
Das Boersenspiel im Internet
*Realitaetsnah* *Kostenlos* *Ueber 20.000 Spieler*
---
Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on Linux]
> > > > Let's remove the "I don't want to think" utilities like
> > > > {create,destroy}{db,user} and force DBAs to actually use the
> > > > *proper* functions.
>
> IMHO (actually, make that IMVeryHO) this is probably a bad idea... We
> should just update the man pages to detail the SQL that can be used
> instead of each command. It doesn't hurt anything or anyone to leave
> the programs as they are, and they can even be helpful to people
> writing scripts to automate management of their servers.

Man pages already updated.

--
Bruce Momjian                   | 830 Blythe Avenue
[EMAIL PROTECTED]               | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup.    | (610) 853-3000(h)
Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on Linux]
> On Sun, 26 Jul 1998, Herouth Maoz wrote:
>
> > At 23:36 +0300 on 24/7/98, Richard Lynch wrote:
> >
> > > Well, let's try crontab -e and mess around to find out the answers
> > > to these questions. Or at least to see if I can just ignore this
> > > whole MAILTO thing for now. I can always do crontab -r and blow it
> > > away.
> >
> > I nearly fell off my chair laughing... You know, on my personal TODO
> > list there's a little item "Write crons for VACUUM", which has been
> > there for months, and I keep postponing it.
>
> I'm another one of those people... so I wrote a quick & dirty C program
> to vacuum ALL my databases. I just mailed it to Marc Fournier
> ([EMAIL PROTECTED]) because I didn't know what to do with it, but if
> anyone else wants it, just drop me a line directly.

Here is a little shell script to do it:

psql -l -A -q -t | cut -d"|" -f1 |
while read DATABASE
do
    psql -q -c vacuum $DATABASE
done
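Taking that one-liner a step further, here is a hedged variant - a sketch, not a blessed tool. The name extraction is factored into a function so it can be checked on its own, the database name is quoted, and VACUUM ANALYZE is used so the optimizer statistics get refreshed at the same time. It assumes `psql -l -A -q -t` prints one `name|owner|encoding` line per database.

```shell
#!/bin/sh
# Sketch of a vacuum-all loop; assumes `psql -l -A -q -t` emits
# one "name|owner|encoding" line per database.

list_db_names () {
    # Keep the first |-separated field of each non-empty line
    cut -d"|" -f1 | grep -v '^$'
}

# Only touch the server when invoked with --run, so the script can be
# sourced or sanity-checked without a live postmaster.
if [ "${1-}" = "--run" ]; then
    psql -l -A -q -t | list_db_names |
    while read DB
    do
        psql -q -c "vacuum analyze" "$DB"
    done
fi
```

Run it as the postgres superuser (e.g. `vacuum_all.sh --run` from cron) so every database is accessible.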
Re: [GENERAL] Postgres vs commercial products
At 20:20 +0300 on 27/7/98, Chris Johnson wrote:

> Come on - be reasonable... The person who asked if you would be willing
> to pay some money to get the development of features you want was not
> suggesting that you would have a special version of PostgreSQL. Any
> additions made would wind up back in Postgres itself, for everyone to
> use.
>
> Now, since you have avoided the question posed by that other person, I
> will ask again: what feature or features were you looking for?

He didn't ask me what the features were. I'm quite willing to specify them. Yes, I'm quite aware that many of them are in the pipeline... But hey, if I install Oracle/Informix, I'd have them all *now*, tested and debugged by many users before me.

Functionality:

- Row-level locking.
- Transaction isolation levels.
- PL/SQL - triggers not in C.
- Inner/outer joins.
- Standard referential integrity.
- BLOBs with less hassle.

Administration:

- Convenient tools for backing up, including scheduling of backups.
  Same for vacuum and any other periodic maintenance.
- Backups that include BLOBs.
- Replication.
- Support for raw devices (my sysadmin prefers them).

Those are just off the top of my head (I didn't have a chance to ask my sysadmin at length).

Herouth

--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma
Re: [GENERAL] Can I store bitmap graphics in a table?
At 11:06 +0300 on 28/7/98, Henning Hucke wrote:

> > [... Storing graphics in a postgresql db ...]
> > You can't. There is a limit on the tuple size, restricting it to 8k.
> > If you could guarantee that your images are no more than, say, 1K -
> > I'd say you can uuencode them or translate to hex, and put them in a
> > text column. A byte array is another option. But in the 50k to 100k
> > range, it's large objects.
>
> Even without having tried it, I am rather sure that you are wrong. The
> 8k barrier applies, AFAIK, only to the normal datatypes like text and
> so on. AFAIK the LO interface is capable of storing big graphics in
> the database.

Well, he asked for a way to do it as a normal datatype, without using the (awkward) lo interface. Of course you can do it with LOs.

Herouth

--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma
Re: [GENERAL] Postgres vs commercial products
On Tue, 28 Jul 1998, Herouth Maoz wrote:

> He didn't ask me what the features were. I'm quite willing to specify
> them. Yes, I'm quite aware that many of them are in the pipeline... But
> hey, if I install Oracle/Informix, I'd have them all *now*, tested and
> debugged by many users before me.
>
> - Convenient tools for backing up, including scheduling of backups.
>   Same for vacuum and any other periodic maintenance.

Want to give me pointers to these? We use Oracle here, and the SOP from before I got here was to shut down Oracle, back up the system, and then start Oracle up again. We're moving our backups to a centralized system, using Oracle modules to do it, but from our DBA's perusal of the Oracle documentation, there is nothing "convenient" about setting it up...

> - Support for raw devices (my sysadmin prefers it).

This one I just checked on, and Oracle does still appear to recommend using raw devices, as they claim it can be up to 50% faster... but how would one implement this in PostgreSQL? So far, I believe you are the only one asking for it, so don't hold your breath on it ever getting done - but I'm curious, unless you wanted to implement it yourself...

Under Oracle, it's easy, since you 'pre-size' your tables when you create your databases, and the tables will not grow beyond that size. With PostgreSQL, there are no bounds, except the hard disk, on how large your tables will grow. I don't know how Oracle implements this, as we don't use raw devices here...

The other problem with trying to implement raw devices - and, granted, I could be over-complicating it - is: how do you implement it across X operating systems running on Y platforms? Doesn't each of them access drives differently? And, in some cases, multiply that by two for IDE vs SCSI... or... vs...?
Re: [GENERAL] Postgres vs commercial products
At 14:46 +0300 on 28/7/98, The Hermit Hacker wrote:

> On Tue, 28 Jul 1998, Herouth Maoz wrote:
>
> > He didn't ask me what the features were. I'm quite willing to specify
> > them. Yes, I'm quite aware that many of them are in the pipeline...
> > But hey, if I install Oracle/Informix, I'd have them all *now*,
> > tested and debugged by many users before me.
> >
> > - Convenient tools for backing up, including scheduling of backups.
> >   Same for vacuum and any other periodic maintenance.
>
> Want to give me pointers to these? We use Oracle here, and the SOP from
> before I got here was to shut down Oracle, back up the system, and then
> start Oracle up again. We're moving our backups to a centralized
> system, using Oracle modules to do it, but from our DBA's perusal of
> the Oracle documentation, there is nothing "convenient" about setting
> it up...

Personally, I don't know. All I know is that our computer center backs up everything, even things that don't need backing up. Since we don't use Oracle yet, I can only give you hearsay - namely, that most people seem to be satisfied with Oracle's backup facilities.

> > - Support for raw devices (my sysadmin prefers it).
>
> This one I just checked on, and Oracle does still appear to recommend
> using raw devices, as they claim it can be up to 50% faster... but how
> would one implement this in PostgreSQL? So far, I believe you are the
> only one asking for it, so don't hold your breath on it ever getting
> done - but I'm curious, unless you wanted to implement it yourself...

I didn't. That was the whole point, wasn't it? I expect nothing from a free database. Whatever is given is great. PostgreSQL is far better featured than the other freebies, and that's why I use it. Is functionality missing? Tough luck for me. But if there is functionality that I miss, and a commercial product offers it, then Postgres is no match. The only thing that stands in its favor is the price.
So, eventually, you weigh functionality, reliability, cross-product compatibility, on-site support, localization and training - against price. Oracle is a much better, more mature RDBMS. Informix, too. Years will pass before Postgres catches up - if ever. For its price, it's great. Trying to compete against the commercial products on the merit of functionality is not serious.

On the other issue - raw devices: the reason they are supposed to work better is that they cut out the overhead of the system's library calls and implementation. That makes sense. There was a claim that the system's library calls are optimized for the given hardware, and therefore make a better solution. The truth, however, is that these calls and data structures are optimized to handle relatively small files. This makes sense, if I recall correctly how inodes work (three levels of indirection for large files, wasn't it?).

True, raw devices require that the programmer be able to anticipate which devices will be used. But Oracle runs on almost every platform, does it not? So it's possible to do.

Herouth

--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma
Re: [GENERAL] Postgres vs commercial products
> The other problem with trying to implement raw devices - and, granted,
> I could be over-complicating it - is: how do you implement it across X
> operating systems running on Y platforms? Doesn't each of them access
> drives differently? And, in some cases, multiply that by two for IDE
> vs SCSI... or... vs...?

That is the real problem. I have to assume that whenever you start implementing raw devices, you move far away from DBMS design toward OS design. Perhaps you don't have to write the process-control part of an OS, but (I mean this jokingly) one must, in one's arrogance, think one can access the hard drives more efficiently than the OS can. Hey, maybe some can do this, and have enough knowledge about drives and controllers to pull it off, but making it portable is going to be a _lot_ of work.

Not that it couldn't be done, or that it wouldn't eventually be a good idea (though I am not certain about that)... james
Re: [GENERAL] How to know when to vacuum
On Tue, 28 Jul 1998, Chris Johnson wrote:

> OK, so there's been quite a bit of traffic about vacuuming databases,
> as well as more than one suggestion on how to do it. But there really
> hasn't been an answer to the question of how to know when to vacuum.
>
> I now vacuum the databases every night, but this seems somewhat
> inefficient... I know that some of the more active databases could use
> some extra vacuuming, but which ones? How often? If there is a way to
> ask Postgres whether a specific database could use a vacuuming, I will
> be happy to write something to automate it, but I see no way to do so.
>
> Anyone have any suggestions?

There are two reasons, that I can think of, to run vacuum:

1. update the statistics used by the optimizer
2. clean up "fragmentation"

1. The optimizer decides whether or not to use indices, and which ones it uses, based on a myriad of values, one of which is the statistics that vacuum generates. I.e., if the table is small, it might be faster to just do a sequential scan vs. using an index. As such, a vacuum should be performed after a large number of inserts, deletes, or updates, so that the optimizer has reasonably accurate numbers to work with. VACUUM ANALYZE can be used for this one, and, in the future, will hopefully not lock down the database while it's being performed.

2. The server currently doesn't "reuse" deleted rows, but just keeps appending new ones to the end. Running a straight VACUUM will perform a de-fragmentation by essentially re-writing the table and then performing the equivalent of an 'ftruncate()' at the end to shrink the table size back down again. The only time you should have to do a full VACUUM is after a massive number of DELETEs on a table... and, hopefully, the requirement for that will decrease over time too, as there has been talk of adding functionality to reuse deleted rows.
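To put the above on a schedule, something like the following crontab entries would do. The times, database names, and the choice of nightly vs. weekly are illustrative only; tune them to how write-heavy each database actually is.

```shell
# Illustrative crontab (edit with `crontab -e` as the postgres user).
# Busy database: nightly VACUUM ANALYZE (reclaims space + refreshes stats).
15 3 * * *   psql -q -c "vacuum analyze" busy_db
# Quiet database: a weekly pass is plenty.
30 3 * * 0   psql -q -c "vacuum analyze" quiet_db
```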
[GENERAL] row oids as "foreign keys" in other tables ?
I'm relatively new to Postgres and I've had a couple of questions for a while now. This post made me worry about them again:

> 2. The server currently doesn't "reuse" deleted rows, but just keeps
>    appending new ones to the end. Running a straight VACUUM will
>    perform a de-fragmentation by essentially re-writing the table and
>    then performing the equivalent of an 'ftruncate()' at the end to
>    shrink the table size back down again. The only time you should
>    have to do a full VACUUM is after a massive number of DELETEs on a
>    table... and, hopefully, the requirement for that will decrease
>    over time too, as there has been talk of adding functionality to
>    reuse deleted rows.

I started to make a database and I wanted to simulate foreign keys somehow, so I decided to simply insert the oid of a row in one table into the "foreign key" column of another table. For example:

create table concert (
    day_of_show  date,
    venue        text
);

create table song (
    song_name     text,
    author_fname  text,
    author_lname  text
);

create table concert_song (
    concert_oid  oid,
    song_oid     oid
);

Then I have a perl script that does my inserts, so that whenever I insert a concert and the songs played, I take the appropriate row oids from concert and song and insert them into concert_song.

You say that vacuum "re-writes" the database. Does it alter row oids??? If so, my scheme completely corrupts my database whenever I do a vacuum, since the row oids in concert and song would change, but my inserted values would remain the same in concert_song, right?

If vacuum does not alter row oids, then I have another question: how does Postgres re-use oids? I've seen the numbers grow and grow, but despite deletes, etc., I have never seen a lower oid get re-used. How does this work?

Thanks a bunch, Matt
Re: [GENERAL] How to know when to vacuum
Yes, but... does Postgres maintain some statistics that could be queried to determine whether vacuuming would be helpful? For case 1, I would need to know how many records were added since the last vacuum, relative to the total number of records in each table. For case 2, I guess you really only need to know how many records have been deleted. Any way to get that information?

Chris

On Tue, 28 Jul 1998, The Hermit Hacker wrote:

> There are two reasons, that I can think of, to run vacuum:
>
> 1. update the statistics used by the optimizer
> 2. clean up "fragmentation"
>
> 1. The optimizer decides whether or not to use indices, and which ones
>    it uses, based on a myriad of values, one of which is the
>    statistics that vacuum generates. I.e., if the table is small, it
>    might be faster to just do a sequential scan vs. using an index. As
>    such, a vacuum should be performed after a large number of inserts,
>    deletes, or updates, so that the optimizer has reasonably accurate
>    numbers to work with. VACUUM ANALYZE can be used for this one, and,
>    in the future, will hopefully not lock down the database while it's
>    being performed.
>
> 2. The server currently doesn't "reuse" deleted rows, but just keeps
>    appending new ones to the end. Running a straight VACUUM will
>    perform a de-fragmentation by essentially re-writing the table and
>    then performing the equivalent of an 'ftruncate()' at the end to
>    shrink the table size back down again. The only time you should
>    have to do a full VACUUM is after a massive number of DELETEs on a
>    table... and, hopefully, the requirement for that will decrease
>    over time too, as there has been talk of adding functionality to
>    reuse deleted rows.
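One partial, hedged answer to the question above: VACUUM writes its estimates into the pg_class system catalog (the reltuples and relpages columns), so you can at least inspect what the optimizer currently believes about each table and eyeball which numbers look stale. What threshold counts as "needs a vacuum" is a judgment call; this sketch only surfaces the numbers.

```shell
#!/bin/sh
# Sketch: dump the optimizer's current per-table estimates from pg_class.
# reltuples/relpages are only refreshed by VACUUM, so numbers that look
# wildly stale on a busy table suggest it is due for one.

stats_query () {
    echo "SELECT relname, reltuples, relpages
          FROM pg_class
          WHERE relkind = 'r' AND relname !~ '^pg_'
          ORDER BY reltuples DESC;"
}

# Usage: table_stats.sh <database>
if [ $# -eq 1 ]; then
    stats_query | psql -q "$1"
fi
```

Note this only tells you what the last vacuum recorded, not how much has changed since; the delta bookkeeping Chris asks for would still have to live in your own scripts.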
Re: [GENERAL] Postgres vs commercial products
On Tue, 28 Jul 1998, Boersenspielteam wrote:

> Hi,
>
> let me add one thing here ... (if this comes through a second time, I
> missed the first one ...)
>
> Wouldn't it be possible to get some backing from one of the commercial
> Linux distribution makers? Like Redhat supports Gnome, or S.u.S.e
> supports XFree? What about GNU? Anyone listening?

RedHat supports Gnome vs. KDE because Gnome is GPLd. GNU would require GPLd source code. We fall under Berkeley's copyright, not the GPL's copyleft, and we're quite happy with that...

> This would be a major leap for PostgreSQL if someone financed two
> months of Vadim, for example ...

As for '2 months' of Vadim... it's more difficult for one person or company to pay for '2 months' of his time (or Bruce's, or Thomas', or Byron's, or...) than it would be for 10 people to share that financing... I'm working on something that might make that a reality; hopefully I'll have something to "announce" early next week...
Re: [GENERAL] Postgres vs commercial products
> That is the real problem. I have to assume that whenever you start
> implementing raw devices, you move far away from DBMS design toward OS
> design. Perhaps you don't have to write the process-control part of an
> OS, but (I mean this jokingly) one must, in one's arrogance, think one
> can access the hard drives more efficiently than the OS can. Hey,
> maybe some can do this, and have enough knowledge about drives and
> controllers to pull it off, but making it portable is going to be a
> _lot_ of work.
>
> Not that it couldn't be done, or that it wouldn't eventually be a good
> idea (though I am not certain about that)... james

This reminds me so much of the multithreading issue Sybase faced. When they were adding support for multithreading, every Unix had a different level of support, so they ended up adding their own threading engine.

I, for one, don't see raw device support in Postgres as a "critical" missing feature; to my thinking, there are other things that are more important. Though for something like an RDBMS, where each person's needs can be so different, it truly is YMMV.
Re: [GENERAL] row oids as "foreign keys" in other tables ?
Matt McClure wrote:

> You say that vacuum "re-writes" the database. Does it alter row oids???

No.

> If so, my scheme completely corrupts my database whenever I do a
> vacuum, since the row oids in concert and song would change, but my
> inserted values would remain the same in concert_song, right?
>
> If vacuum does not alter row oids, then I have another question: how
> does Postgres re-use oids? I've seen the numbers grow and grow, but
> despite deletes, etc., I have never seen a lower oid get re-used. How
> does this work?

It doesn't.

Vadim