[GENERAL] insert continue on error
Hello everybody. I'm trying to migrate our data mine from an MS Access backend to a PostgreSQL backend, and I'm bumping into problems with the conversion. I've got the basic db and tables set up. I want to transfer data from one table to another with INSERT INTO ... SELECT FROM. The from table and into table both have the same field names, but the types for the first are all text, and the second are specific (boolean, numeric(10,2), etc.), with CAST and custom conversion functions doing the translation. However, every once in a while a record will contain data that I didn't foresee or that wasn't cast properly, and the record will bounce. This of course aborts the entire process. What I'd like to know is if there's a way to have PostgreSQL still insert the other rows, and either bounce the bad row to another table, or log the error in another table/file. I'm sure there's a way to do this, but I'm still pretty new to PostgreSQL. TIA for any assistance you can give me. ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
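There is no built-in "continue on error" for INSERT ... SELECT in PostgreSQL of this era, but a PL/pgSQL function can loop over the source rows and trap each failed cast in its own subtransaction, diverting the bad row to a side table. A minimal sketch only — the tables (member_import, member, bad_rows) and columns here are hypothetical stand-ins for the real schema:

```sql
-- Sketch: loop row-by-row, trap cast failures per row instead of
-- aborting the whole load. Assumes hypothetical tables member_import
-- (all text), member (typed), and bad_rows (all text plus an err column).
CREATE OR REPLACE FUNCTION load_member() RETURNS void AS $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT * FROM member_import LOOP
        BEGIN
            INSERT INTO member (acct_num, balance, is_active)
            VALUES (CAST(r.acct_num AS integer),
                    CAST(r.balance AS numeric(10,2)),
                    CAST(r.is_active AS boolean));
        EXCEPTION WHEN others THEN
            -- bad row: log it (with the error text) and keep going
            INSERT INTO bad_rows (acct_num, balance, is_active, err)
            VALUES (r.acct_num, r.balance, r.is_active, SQLERRM);
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

Each row's EXCEPTION block runs in a subtransaction, so one bad cast no longer kills the rest. It is markedly slower than a single set-based INSERT ... SELECT, but for a one-time migration that's usually an acceptable trade.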
[GENERAL] find out data types using sql or php
Hey guys. This is probably a n00b question, but here goes anyway. I have a set of csv files that I COPY to a number of import tables (same field layout as live tables, but with all datatypes 'text'), then use an INSERT INTO ... SELECT FROM statement to transfer the rows over to the live table (at times filtering out, at times not). Unfortunately, if any of the data is not type perfect (letters in number fields, etc.) then the entire query bombs and nothing gets loaded. What I'd like to do is add a field is_ok and then use sql or php (or whatever else, if there's an easier way) to determine the field datatype (text, numeric, bool, etc.) and then use some regex or something along those lines to attempt to verify that the data is good, and then mark the is_ok field (obviously a bool) as true, and use is_ok = TRUE in the insert/select statement. Can somebody give me a push in the right direction?
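For the validation step itself, PostgreSQL's regex match operator (~) can do the per-type checks directly in SQL, without a round trip through PHP. A rough sketch, assuming a hypothetical import table member_import with text columns acct_num and open_date — the patterns are deliberately simple and would need tightening for real data:

```sql
-- Add the flag to the (hypothetical) all-text import table.
ALTER TABLE member_import ADD COLUMN is_ok boolean DEFAULT FALSE;

-- Mark rows whose fields look castable. Bracketed character classes
-- avoid backslash-escaping headaches in string literals.
UPDATE member_import
SET is_ok = (acct_num  ~ '^[0-9]+([.][0-9]+)?$'                -- numeric?
         AND open_date ~ '^[0-9]{4}-[0-9]{2}-[0-9]{2}$');      -- ISO date?

-- Then load only the rows that passed:
-- INSERT INTO member SELECT ... FROM member_import WHERE is_ok = TRUE;
```

A regex pass proves the text *looks* castable, not that the cast will succeed (e.g. month 13 matches the date pattern), so it's a filter, not a guarantee.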
Re: [GENERAL] find out data types using sql or php
Replying to yourself is so depressing... Anyway, I managed to google myself into a solution, I just wanted to share it with the list in case anybody else was interested. Using the INFORMATION SCHEMA and a query like

SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'table';

I get results similar to

    column_name    | data_type
-------------------+-----------
 atm_acct_mess     | text
 atm_acct_num      | numeric
 atm_acct_tp1      | text
 atm_acct_tp2      | text
 atm_acct_tp3      | text
 atm_acct_tp4      | text
 atm_acct_tp5      | text
 atm_acct_tp6      | text
 atm_acct1_stcd    | text
 atm_acct2_stcd    | text
 atm_acct3_stcd    | text
 atm_acct4_stcd    | text
 atm_acct5_stcd    | text
 atm_acct6_stcd    | text
 atm_atm/ach_cd    | integer
 atm_atm/ach_id    | numeric
 atm_atm/ach_tp    | integer
 atm_cn_num        | integer
 atm_date_opened   | date
 atm_id1           | text
 atm_id2           | text
 atm_id3           | text
 atm_id4           | text
 atm_id5           | text
 atm_id6           | text
 atm_last_act_date | date
 atm_next_rec      | integer
 atm_stat_cd       | integer
 atm_trn_acct_id   | text
 atm_trn_acct_num  | numeric
 atm_trn_acct_tp   | text
 atm_trn_cn_num    | integer
 atm_trn_date      | date
 atm_trn_reg_e     | integer
 atm_trn_term_id   | text
 atm_trn_trace     | text
 atm_trn_trn_num   | integer
(37 rows)

which I can then of course parse with php and do some testing from there. I hope this helps somebody, I know I could have used this information about 20 minutes ago :-)

Tom Hart wrote: [snip]
Re: [GENERAL] php and postgres - too many queries too fast?
andy wrote:

Tom Hart wrote: [snip] OK, enough of the background, here's my issue. For some lovely reason, even though my script reports running an UPDATE query 1563 times (out of 1566 rows), only 316 rows have is_ok set to TRUE. I've tried a few times, changing this and that, and it always updates those 316 rows (no reason for this, the data is actually really good when it comes in, I'm just trying to build an extra layer of assuredness). Of particular note, I'm trying to keep the script fairly uniform and work for all our tables, so the UPDATE statement looks something like UPDATE table SET is_ok = 'TRUE' WHERE var1 = value1 AND var2 = value2 AND var3 = value3. Thomas R. Hart II [EMAIL PROTECTED]

Have you run one of these queries via psql or something other than php? I doubt it's a "too many too fast" thing. I'd guess a logic error someplace. Why 1563 queries? Can you get the rows modified per query? If your table looks like:

 var1 | var2 | var3
------+------+------
 a    | b    | a
 a    | b    | c

would you fire off two queries like:

UPDATE table SET is_ok = 'TRUE' WHERE var1 = 'a' AND var2 = 'b' AND var3 = 'a';
UPDATE table SET is_ok = 'TRUE' WHERE var1 = 'a' AND var2 = 'b' AND var3 = 'c';

If so, do you generate the updates on the fly? -Andy

I thought I had run the queries through psql (it's been a long one :-) and when I tried to verify, I was able to find my problem (yes, I am an idiot). It turns out that for some reason it didn't like to UPDATE when I was using a text type field (specifically an empty text field) in the WHERE clause. To remedy this, I instructed PHP to not use a field in the WHERE clause if the destination type was 'text', and now we're working beautifully (2.405 seconds to run the script through 1566 rows, running updates on 1563 of them). Now I just need to figure out what's going on with those 3 rogue rows.
Sorry I hadn't checked all the bases thoroughly, but now they definitely are belong to us. Thanks for the help and have a good night.
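A likely culprit for the "empty text field" symptom: if those columns were actually NULL rather than empty strings, a WHERE clause of the form col = '' (or col = anything) never matches, because any comparison with NULL yields NULL rather than TRUE. A small sketch of the difference, using a throwaway table:

```sql
CREATE TEMP TABLE t (var1 text, is_ok boolean DEFAULT FALSE);
INSERT INTO t (var1) VALUES (NULL);

-- Matches 0 rows: NULL = '' evaluates to NULL, not TRUE.
UPDATE t SET is_ok = TRUE WHERE var1 = '';

-- Matches the row: IS NOT DISTINCT FROM treats two NULLs as equal.
UPDATE t SET is_ok = TRUE WHERE var1 IS NOT DISTINCT FROM NULL;
```

If the PHP script builds col = value clauses for every column, a single NULL field is enough to make the whole AND chain non-matching, which would explain rows that were "updated" with zero rows affected.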
[GENERAL] php and postgres - too many queries too fast?
Hey everybody. I'm running postgresql 8.2 on a windows 2k3 server machine. I have a table (two tables actually, in table and table_import format). The _import table has all text type fields, while the main table has datatypes defined. I wrote a PHP script that checks the various type fields (e.g. integer is a number, date is a date, etc.) and sets a bool flag is_ok to true for the row if the data all checks out. The script gets a dump of the data from a SELECT * statement, then takes each row, verifies the various data fields (successfully, I've tested) and sets a variable baddata. At the end of the checking, if baddata still equals 0, then it crafts an UPDATE statement to change the value of is_ok. There is a relatively small number of rows (~1500, small time to you guys I'm sure), and the script runs fairly fast. OK, enough of the background, here's my issue. For some lovely reason, even though my script reports running an UPDATE query 1563 times (out of 1566 rows), only 316 rows have is_ok set to TRUE. I've tried a few times, changing this and that, and it always updates those 316 rows (no reason for this, the data is actually really good when it comes in, I'm just trying to build an extra layer of assuredness). Of particular note, I'm trying to keep the script fairly uniform and work for all our tables, so the UPDATE statement looks something like UPDATE table SET is_ok = 'TRUE' WHERE var1 = value1 AND var2 = value2 AND var3 = value3. for every field in the record (I would have it base it on the primary key, but the field names and locations are different for each table). Is it possible that I'm trying to run too many queries at once (or rather in rapid succession)? I've tried encapsulating the queries in a BEGIN .. COMMIT transaction which improved my speed quite a bit, but it's still updating only those rows.
I know that it's entirely possible that the problem lies in the PHP, or the network, or the web server configuration, or the moon phase, but is there anything here that jumps out at anybody as a possible cause? TIA Thomas R. Hart II [EMAIL PROTECTED]
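Rather than matching on every column (which is fragile, as the follow-ups in this thread show), one sturdier sketch is to tag each import row with its own synthetic key right after the COPY, and have the script update by that key. Table name and the literal key value below are hypothetical:

```sql
-- Give every row of the (hypothetical) import table a synthetic key.
-- On older releases you may need to create the sequence by hand instead
-- of using the serial shorthand in ADD COLUMN.
ALTER TABLE member_import ADD COLUMN row_id serial;

-- PHP then updates by key instead of by all-columns equality:
UPDATE member_import SET is_ok = TRUE WHERE row_id = 42;  -- 42 is a placeholder
```

This also keeps the script uniform across tables with different natural keys, since every import table gets the same row_id column.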
[GENERAL] pg_dumpall and authentication
I'm sure you guys have heard this about 100 times, and I've done some research on Google and found out some things, but I still have a couple questions. As I'm sure you may have guessed from the subject, I'm trying to schedule (under windows) pg_dumpall to run each night/morning/full moon/whatever. The hitch in this is that it asks for a password for each database as it dumps it. I know I can use the PGPASSWORD environment variable, or a ~/.pgpass file. What I'm wondering is what's considered 'best practice' in practical applications. What solutions do you guys use? Is it worth changing PGPASSFILE to point to a different .pgpass? Thanks in advance for any assistance. Thomas R. Hart II [EMAIL PROTECTED]
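For reference, the password file (pgpass.conf under Windows, ~/.pgpass elsewhere, overridable via PGPASSFILE) is one line per connection, colon-separated, with * allowed as a wildcard in any field. The hostname, user, and password below are placeholders:

```
# hostname:port:database:username:password
localhost:5432:*:backup_user:secret
```

On Unix the file must be chmod 0600 or libpq ignores it; under Windows the equivalent protection is restrictive NTFS permissions on the file.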
Re: [GENERAL] pg_dumpall and authentication
Steve Atkins wrote: On Nov 9, 2007, at 8:52 AM, Tom Hart wrote: I'm sure you guys have heard this about 100 times, and I've done some research on Google and found out some things, but I still have a couple questions. As I'm sure you may have guessed from the subject, I'm trying to schedule (under windows) pg_dumpall to run each night/morning/full moon/whatever. The hitch in this is that it asks for a password for each database as it dumps it. I know I can use the PGPASS environment variable, or a ~/.pgpass file. What I'm wondering is what's considered 'best practice' in practical applications. What solutions do you guys use? Is it worth changing PGPASSFILE to point to a different .pgpass? Any of those approaches should be fine. I'd probably stick with the default pgpass file, just for the sake of whoever may have to maintain it next. I tend to create a unix user just for doing backups and other scheduled maintenance, then give that user access to the database via ident authentication from the local system only. If PG-on-Windows has equivalent functionality that's another approach to consider. Ok, here's what I think is going to work best for me. I'm going to create a user with very little access rights, and then I'm going to set up a scheduled task in windows to run pg_dumpall on the machine that houses the db (and where the backup will be stored). On that machine I'll have a pgpass file that I've placed somewhere where only the dummy backup account can access it and pointed to it with the PGPASSFILE environment variable. I think I'll be able to run a scheduled task associated with that name without giving them login abilities. Sound pretty solid to you guys? BTW, this isn't protecting a ton of data, but the data itself is pretty sensitive, so security is a concern. I appreciate your help in building as solid a system as I can. 
BTW2, I already told somebody today that Windows and security is like a cherry pie with gravy on top, but I don't get a choice here, so try to understand :-)
Re: [GENERAL] automating backups with windows scheduled tasks and pg_dumpall
Tom Lane wrote: Tom Hart <[EMAIL PROTECTED]> writes: Do scheduled tasks have a problem writing to files with > ? I tried an "echo this is a test > test.txt" and that didn't run either. I know zip about Windows, but try using pg_dump's -f switch instead of ">" and see if it gets happier.

Good tip, and something I hadn't tried yet, but still not working. I'm sure the problem lies within strange behavior of scheduled tasks, but they were not super helpful on the microsoft newsgroups, and I was hoping to find somebody that had a working windows-based automated backup "in the wild" if you will. Thanks anyway though, I'll keep trying.
Re: [GENERAL] automating backups with windows scheduled tasks and pg_dumpall
[EMAIL PROTECTED] wrote: One other oddity I ran into recently on one system only, is that a perl program (.pl extension) worked fine from the command-line, double-clicked on, etc. but in the scheduled job, I had to put "perl xxx.pl" for it to work. Maybe if all else fails, try "cmd.exe xxx.bat" or "command.com xxx.bat" or similar in the scheduled task? Susan Cassidy Tiered Data Protection Made Simple http://www.overlandstorage.com/

Thanks for your reply. I just got this running right before I read this email. A shame, because this tip would have helped me precisely. Turns out that on the command line you can just type foobar.bat, but in scheduled tasks you must place a cmd.exe /c before it, instructing Windows to spawn a shell first (why Windows isn't intelligent enough to figure this out on its own, I have no idea). Anyway, if anybody else is curious, I didn't change anything in the batch file I posted previously, but I changed the scheduled task command from "c:\foo\bar\pg_backup.bat" to "cmd.exe /c c:\foo\bar\pg_backup.bat". Hope this can help somebody else avoid the hours of confusion. On a side note, you seem to have a pretty strange email client setup, because your text came through extremely small (almost unreadable) to me, yet your signature came out just fine. Odd...
Re: [GENERAL] automating backups with windows scheduled tasks and pg_dumpall
Martin Gainty wrote: Hi Tom In windows you would need to include the runas command so your bat file needs to execute the command via runas as in runas /user:postgres "bin\pg_dumpall.exe -U foo_postgres" HTH Martin

Actually, schtasks.exe and the windows task scheduler both have this functionality built in. Also, when using runas.exe from a command prompt, there's no way to give it a password, thus rendering it incapable of running unattended in a scheduled task setup. I believe that's why they placed the runas functionality into schtasks.exe actually. Thanks anyway. Thomas R. Hart II [EMAIL PROTECTED]
Re: [GENERAL] automating backups with windows scheduled tasks and pg_dumpall
Tom Hart wrote: [snip] On a side note, you seem to have a pretty strange email client setup, because your text came through extremely small (almost unreadable) to me, yet your signature came out just fine. Odd...

Side note 2: This could very easily have been an issue with my email client as well. I did not intend to sound as if it were an issue with your setup :-)
Re: [GENERAL] "field doesn't exist" even though I'm sure it does
Sam Mason wrote: On Fri, Nov 16, 2007 at 03:13:48PM -0500, Tom Hart wrote:

column "isactive" of relation "membermailingaddress" does not exist

 Table "public.membermailingaddress"
  Column  |  Type   | Modifiers
----------+---------+-----------
 IsActive | boolean |

If you don't enclose an identifier in double quotes (") then PG will make the identifier lowercase before looking for it. This is in an attempt to support case-insensitive general usage, while still allowing case-sensitive usage if you really want it. Sam

Ok, that makes sense. Like I said I created this db to mimic our previous db, but I think the best solution here would be to continue my trend of using all lowercase column names so that I'm not faced with this problem. Thank you for your reply, and you too Joshua. Sorry for such an easy question :-)
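To make the folding rule concrete, a minimal sketch (throwaway table, illustrative names): unquoted identifiers are folded to lowercase, so a mixed-case column created with quotes must be referenced with quotes from then on.

```sql
CREATE TEMP TABLE demo ("IsActive" boolean);

-- Fails: unquoted IsActive is folded to isactive, which doesn't exist.
-- SELECT IsActive FROM demo;

-- Works: the quoted form matches the column exactly as created.
SELECT "IsActive" FROM demo;
```

Creating everything unquoted (and therefore lowercase), as suggested above, sidesteps the problem entirely.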
[GENERAL] "field doesn't exist" even though I'm sure it does
Hey everybody. I've got a strange one today. I'm trying to convert an extremely messy access sql query into something that can be used with our postgresql database (the pgsql db is being built to replace the access db). I had barely begun trying to convert it when I was confronted with this error message:

column "isactive" of relation "membermailingaddress" does not exist

However, a quick \d in psql lets me know that there is indeed a field "isactive" on that specific table. I can't begin to imagine why it would throw an error like this. Here's my psql output, with a simple test query to reproduce the issue, and a \d membermailingaddress afterwards. Anybody have any ideas?

current=# insert into membermailingaddress(
current(# mb_cn_num,
current(# mb_acct_num,
current(# IsActive,
current(# IsMember
current(# )
current-# VALUES
current-# (
current(# 1,
current(# 1500.0,
current(# TRUE,
current(# FALSE
current(# )
current-# ;
ERROR:  column "isactive" of relation "membermailingaddress" does not exist
LINE 4: IsActive,
        ^
current=# \d membermailingaddress
 Table "public.membermailingaddress"
     Column      |     Type     | Modifiers
-----------------+--------------+-----------
 mb_cn_num       | integer      |
 mb_acct_num     | numeric(6,1) |
 IsActive        | boolean      |
 IsMember        | boolean      |
 IsMail          | boolean      |
 IsPromoMail     | boolean      |
 HouseholdID     | numeric      |
 HouseholdSize   | integer      |
 Name1           | text         |
 Name2           | text         |
 Addr1           | text         |
 Addr2           | text         |
 Addr3           | text         |
 City            | text         |
 State           | text         |
 Zip             | text         |
 Zip5            | text         |
 xxPrimaryName   | text         |
 xxJointName     | text         |
 xxHouseholdHash | text         |

current=#

TIA Thomas R. Hart II [EMAIL PROTECTED]
[GENERAL] automating backups with windows scheduled tasks and pg_dumpall
Hello everybody. I'm having a bit of trouble automating pg_dumpall to do nightly backups. I have a batch file whose contents are below:

SET PGPASSFILE=C:\foo\bar\PG_BACKUP\PGPASSFILE\pgpass.conf
"C:\Program Files\PostgreSQL\8.2\bin\pg_dumpall.exe" -U foo_postgres > C:\foo\bar\PG_BACKUP\db.out

Now this file, when I double click on the .bat, works wonderfully. It sets the variable correctly, reads the pw file with no problem, and performs the dump. However, when this same bat is built into a scheduled task (same username, administrator in both cases), even though the task claims to have completed correctly, I have no evidence of it running, and it certainly isn't doing the backup. I'm sure many of you have experience in scheduling backups in this manner (though admittedly a large number of you are probably using cron in *nix). Is there anything that jumps out at you? Do scheduled tasks have a problem writing to files with > ? I tried an "echo this is a test > test.txt" and that didn't run either. What's confusing is that windows is claiming everything ran fine, in under 5 seconds no less (my db isn't that big, but it takes longer than that to dump the entire thing). Any ideas? TIA Thomas R. Hart II [EMAIL PROTECTED]
[GENERAL] convert access sql to postgresql
Hey guys. I have a long piece of sql that I'm trying to take out of an existing Access db and modify to work with Postgresql. I've started trying to convert it, but I've come across a problem that I don't even know how to describe, let alone google. Here's the function:

INSERT INTO MemberMailingAddress (
  mb_cn_num,
  mb_acct_num,
  isactive,
  ismember,
  ismail,
  ispromomail,
  ...
  ...
)
SELECT
  mb_cn_num,
  mb_acct_num,
  mb_stat_cd<>1 as isactive,
  mb_stat_cd=0 as ismember,
  isactive and (mb_mail_cd=0 or mb_mail_cd=1) as ismail,
  ismail and (mb_stat_cd=0 or mb_stat_cd=2) as ispromomail,
  ...
  ...
FROM member
ORDER BY mb_cn_num, mb_acct_num;

Specifically I'm looking at these two lines:

  isactive and (mb_mail_cd=0 or mb_mail_cd=1) as ismail,
  ismail and (mb_stat_cd=0 or mb_stat_cd=2) as ispromomail,

which appear to use other fields it's preparing to insert as variables in the determination of the values of other fields (I told you I couldn't figure out how to explain it). Does anybody have any idea what I'm talking about? I sure don't. Thomas R. Hart II [EMAIL PROTECTED]
Re: [GENERAL] convert access sql to postgresql
Peter Eisentraut wrote: Tom Hart wrote: [snip] I can't tell more without the exact table definitions, but this should work just fine.

The error I'm receiving is

ERROR:  column "isactive" does not exist
LINE 26: isactive and (mb_mail_cd=0 or mb_mail_cd=1) as ismail,
         ^

Here are the table definitions for member (from) and membermailingaddress (to):

current=# \d member
                Table "public.member"
        Column         |     Type      | Modifiers
-----------------------+---------------+-----------
 mb_acct_closed_dt     | date          |
 mb_acct_num           | numeric(6,2)  | not null
 mb_alt_num            | text          |
 mb_backwithhold_perc  | integer       |
 mb_birth_dt           | date          |
 mb_cell_phone         | text          |
 mb_census_tract       | numeric(6,2)  |
 mb_city               | text          |
 mb_cn_num             | integer       | not null
 mb_credit_score       | text          |
 mb_credit_score_date  | date          |
 mb_ctr_cd             | text          |
 mb_drivers_license    | text          |
 mb_email_acct_num     | numeric(6,2)  |
 mb_email_address      | text          |
 mb_email_cn_num       | integer       |
 mb_email_rec          | integer       |
 mb_employ_phone       | text          |
 mb_employer           | text          |
 mb_first_name         | text          |
 mb_flag4              | text          |
 mb_frst_99r_rec       | integer       |
 mb_frst_at_rec        | integer       |
 mb_frst_cl_rec        | integer       |
 mb_frst_ct_rec        | integer       |
 mb_frst_df_rec        | integer       |
 mb_frst_ira_rec       | integer       |
 mb_frst_ln_rec        | integer       |
 mb_frst_sh_rec        | integer       |
 mb_frst_tran_rec      | integer       |
 mb_head_of_household  | text          |
 mb_hire_dt            | date          |
 mb_home_phone         | text          |
 mb_homebank_status    | text          |
 mb_income             | numeric(10,2) |
 mb_jand/or_cd         | text          |
 mb_jfirst_name        | text          |
 mb_jlast_name         | text          |
 mb_jmid_init          | text          |
 mb_join_dt            | date          |
 mb_joint_birth_dt     | date          |
 mb_joint_relationship | text          |
 mb_jssn_num           | text          |
 mb_jsuff              | text          |
 mb_last_name          | text          |
 mb_life_ins_amt       | numeric(10,2) |
 mb_life_save_amt      | numeric(10,2) |
 mb_loc_cd             | text          |
 mb_lst_act_dt         | date          |
 mb_lst_tran_cd        | text          |
 mb_lst_tran_rec       | text          |
 mb_m/f_cd             | text          |
 mb_mail_addr1         | text          |
 mb_mail_addr2         | text          |
 mb_mail_cd            | integer       |
 mb_mail_city          | text          |
 mb_mail_state         | text          |
 mb_mail_zip_cd        | text          |
 mb_master_rec         | integer       |
 mb_mid_init           | text          |
 mb_misc_cd            | text          |
 mb_ofac_verified      | text          |
 mb_payroll_num        | integer       |
 mb_privacy_code       | integer       |
 mb_prn_state          | text          |
 mb_send_estatement    | text          |
 mb_soc_emp_cd         | integer       |
 mb_soc_sec_num        | text          |
 mb_stat_cd            | integer       |
 mb_state_cd           | text          |
 mb_statement_pages    | text          |
 mb_str_addr1          | text          |
 mb_str_addr2          | text          |
 mb_suff               | text          |
 mb_tot_payroll        | numeric(10,2) |
 mb_vru_status         | integer       |
 mb_w9_ver_cd          | integer       |
 mb_withold            | text          |
 mb_zip_cd             | text          |
Indexes:
    "member_pkey" PRIMARY KEY, btree (mb_cn_num, mb_acct_num)

current=# \d membermailingaddress
 Table "public.membermailingaddress"
     Column      |     Type     | Modifiers
-----------------+--------------+-----------
 mb_cn_num       | integer      |
 mb_acct_num     | numeric(6,1) |
 isactive        | boolean      |
 ismember        | boolean      |
 ismail          | boolean      |
 ispromomail     | boolean      |
 householdid     | numeric      |
 householdsize   | integer      |
 name1           | text         |
 name2           | text         |
 addr1           | text         |
 addr2           | text         |
 addr3           | text         |
 city            | text         |
 state           | text         |
 zip             | text         |
 zip5            | text         |
 xxprimaryname   | text         |
 xxjointname     | text         |
 xxhouseholdhash | text         |
Re: [GENERAL] convert access sql to postgresql
Tom Hart wrote: Peter Eisentraut wrote: [snip] I can't tell more without the exact table definitions, but this should work just fine.

The error I'm receiving is

ERROR:  column "isactive" does not exist
LINE 26: isactive and (mb_mail_cd=0 or mb_mail_cd=1) as ismail,
         ^

I've been doing some googling on sql aliases (my sql knowledge is far from impressive) and it appears that column aliases can be great for displaying different column names in your output. However, I was unable to find any information regarding using column aliases as variables, like in the code above. I'm not sure why this works in access, but does postgreSQL support using a column alias like a variable within the query that the alias was defined in? I suppose the lines

  mb_stat_cd<>1 as isactive,
  mb_stat_cd=0 as ismember,
  (isactive and (mb_mail_cd=0 or mb_mail_cd=1)) as ismail,
  (ismail and (mb_stat_cd=0 or mb_stat_cd=2)) as ispromomail

could be written

  mb_stat_cd<>1 as isactive,
  mb_stat_cd=0 as ismember,
  ((mb_stat_cd<>1) and (mb_mail_cd=0 or mb_mail_cd=1)) as ismail,
  (((mb_stat_cd<>1) and (mb_mail_cd=0 or mb_mail_cd=1)) and (mb_stat_cd=0 or mb_stat_cd=2)) as ispromomail

But if postgreSQL does indeed support this, and I can get it to work, I'd like to be able to use the syntax in the first set. TIA Thomas R. Hart II [EMAIL PROTECTED]
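Standard SQL (and PostgreSQL) doesn't let a select-list alias be referenced from within the same select list — that's an Access extension. One middle ground between the two versions above is a derived table (subquery in FROM), which makes the first level of aliases available by name. A sketch against the member/membermailingaddress definitions from this thread, with the column lists abbreviated:

```sql
INSERT INTO membermailingaddress
       (mb_cn_num, mb_acct_num, isactive, ismember, ismail, ispromomail)
SELECT mb_cn_num, mb_acct_num, isactive, ismember,
       isactive AND (mb_mail_cd = 0 OR mb_mail_cd = 1) AS ismail,
       -- ismail itself still can't be referenced at this level, so
       -- ispromomail repeats that expression once:
       (isactive AND (mb_mail_cd = 0 OR mb_mail_cd = 1))
           AND (mb_stat_cd = 0 OR mb_stat_cd = 2)      AS ispromomail
FROM (SELECT mb_cn_num, mb_acct_num, mb_mail_cd, mb_stat_cd,
             mb_stat_cd <> 1 AS isactive,
             mb_stat_cd = 0  AS ismember
      FROM member) AS m
ORDER BY mb_cn_num, mb_acct_num;
```

Each level of nesting makes one more layer of aliases referenceable; wrapping a second derived table around this one would remove the remaining repetition for ismail as well.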
Re: [GENERAL] Tom thinks it's bad code was 8.3 vs 8.2 sql compatibility issue
Joshua D. Drake wrote: in the least. If you post publicly something that is that ugly, then it is going to get critiqued. It is that simple. You don't like it, don't post. I have more than once taken my beatings on this list. It is time for you to either grow a thicker skin or unsubscribe.

Joshua D. Drake
=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997 http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/

I absolutely agree. The fact of the matter is that you came here asking for help, and there are a number of very qualified, very generous people who are not only quite proficient with pgsql but are donating their time to help you (in this case by letting you know your SQL was poorly written), and to thank them with anger and sarcasm is not only ungrateful but quite rude. Not to mention the large number of people who probably took the time to read that post only to find out that they've wasted a (small, admittedly) portion of their day to read your drivel. The two things I've learned from this group are that PostgreSQL is an excellent piece of software, capable of many things, and that the PostgreSQL community is an amazing group, also capable of many things, one of which is selflessly giving their time to help the users of this list. Even the ungrateful ones like you, or the lazy ones that haven't heard of google, or people like me. With a lack of experience and a lack of formal training, this group has managed to help me build a decent data mine solution, something I would have been unable to do without them. Now that I'm done ranting, let me just reiterate.
Thank you to everybody who has helped me, and anybody else coming to this list seeking guidance. I'm sorry that not everybody appreciates it. On a side note, every time I hit reply to try to post back to the list, the reply-to is set to the original sender of the message, not the list address, and since this is opposite behavior from the other mailing lists I participate in, I often forget and send a message straight to the previous poster. If I have sent a long diatribe meant for somebody else to you, I apologize :-)
Re: [GENERAL] convert access sql to postgresql
Shane Ambler wrote: Tom Hart wrote: Hey guys. I have a long piece of sql that I'm trying to take out of an existing Access db and modify to work with Postgresql. I've started trying to convert it, but I've come across a problem that I don't even know how to describe, let alone google. Here's the function INSERT INTO MemberMailingAddress ( mb_cn_num, mb_acct_num, isactive, ismember, ismail, ispromomail, ... ... ) SELECT mb_cn_num, mb_acct_num, mb_stat_cd<>1 as isactive, mb_stat_cd=0 as ismember, isactive and (mb_mail_cd=0 or mb_mail_cd=1) as ismail, ismail and (mb_stat_cd=0 or mb_stat_cd=2) as ispromomail, ... ... FROM member ORDER BY mb_cn_num, mb_acct_num ; Specifically I'm looking at these two lines isactive and (mb_mail_cd=0 or mb_mail_cd=1) as ismail, ismail and (mb_stat_cd=0 or mb_stat_cd=2) as ispromomail, Lets's clarify something - the insert puts data into the MemberMailingAddress table which includes the column isactive The select retrieves and calculates data from the member table. The select calculates a value for the third column that you alias to be named isactive. That's correct. Are you expecting the 5th and 6th column (as ismail - as ispromomail) to calculate from the 3rd and 5th column of the select or from the columns of MemberMailingAddress? It's not that I expect pgSQL to do this, as much as I've seen access behave this way, in which column aliases were able to be used within the same select query in a "variable" type fashion. If you expect the later you need to add a join to the MemberMailingAddress table to get those columns. (or use a sub-select to get the data) If you are only calculating from the member table then you will need to repeat the calculations instead of referring to the alias. And if this is the case how does the insert fit with the issue of the select? I figured I would have to repeat the calculations (I've tried this with a small chunk of the sql with success). 
What I was really looking for was an answer on whether postgreSQL behaved this way, or if this was standard SQL. The answer I've been hearing is no on both counts, just wanted to be sure. Thank you for your reply. Thomas R. Hart II [EMAIL PROTECTED]
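Since standard SQL (and PostgreSQL) won't let one SELECT-list item reference another item's alias, the fix the thread arrives at is to repeat the underlying expressions. Below is a minimal sketch of that approach, using Python's sqlite3 only as a convenient SQL engine; the table and column names mirror the post, and the sample data is made up:

```python
import sqlite3

# Sketch of the "repeat the expression" approach: standard SQL does not
# let one SELECT-list item refer to another item's alias, so each derived
# flag restates the expressions it depends on. Names mirror the post;
# the data is invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE member (mb_stat_cd INTEGER, mb_mail_cd INTEGER)")
conn.executemany("INSERT INTO member VALUES (?, ?)",
                 [(0, 0), (1, 1), (2, 3)])

rows = conn.execute("""
    SELECT mb_stat_cd <> 1                              AS isactive,
           mb_stat_cd = 0                               AS ismember,
           (mb_stat_cd <> 1)
               AND (mb_mail_cd = 0 OR mb_mail_cd = 1)   AS ismail
    FROM member
    ORDER BY mb_stat_cd
""").fetchall()
print(rows)  # the mb_stat_cd = 1 row is inactive, hence not mailable
```

When the repeated expressions get long, the other common route is a sub-select (or, in PostgreSQL 8.4 and later, a CTE) that computes isactive and ismail once so the outer query can reference them by name.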
[GENERAL] best way to handle enum type
Hey everybody. I have a field that, in my earlier mySQL days would have been an enum('q','y','m','c'), and I'm wondering what's the best way to handle this in pgsql. I've googled enough to find out that pgsql doesn't have a built in enum type (it's of course possible that what I read was outdated, please let me know if it was), but I've found a couple popular workarounds, the first being CHECK (favourite_colour IN ('red', 'blue', 'yellow', 'purple')) as well as a suggestion to create another small table containing the possible values and then placing color text references color in the create table sql. Now this field doesn't absolutely have to be an enum, I'm sure I could work with matching the values to numbers and making it an int. What I'm wondering is what's generally considered 'best practice' for this situation. I'd like to have a solid db more than an enum type, what's my best move? ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
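Of the workarounds mentioned, the CHECK constraint is the lightest-weight. A sketch of how it behaves, with SQLite standing in for PostgreSQL purely for illustration (the constraint syntax itself is the same in both):

```python
import sqlite3

# Enum-like column via a CHECK constraint, as described in the post:
# values outside the allowed set are rejected at insert time.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE member_flag (
        flag TEXT NOT NULL CHECK (flag IN ('q', 'y', 'm', 'c'))
    )
""")
conn.execute("INSERT INTO member_flag VALUES ('q')")      # accepted
try:
    conn.execute("INSERT INTO member_flag VALUES ('z')")  # rejected
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The lookup-table alternative (a small table of allowed values plus a foreign key) trades a little setup for the ability to add values without an ALTER TABLE, which is why it is often preferred when the value set may grow.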
[GENERAL] ODBC, access, and joins
Hi again everybody. I've got kind of a weird one today (the more complicated my db gets, the stranger the questions). I've built a data mine backend to replace our access data mine (eww, I know), but we'd like to still be able to use access to get to the tables, and write queries/reports/etc. against them. I've set up the ODBC driver (the "official" driver, from http://pgfoundry.org/projects/psqlodbc/) and it connects and pulls the data out just fine, considerably quicker than access natively. The problem I'm having is that anything defined as a "text" datatype in postgres gets converted to a "memo" datatype in access, and when we attempt to perform a join query on that field access starts complaining about not being able to join on a field with a "memo" type. I think this might be a bug in the odbc driver, but I want to cover my bases. Has anybody else run into this issue, or heard anything about the odbc driver having problems with datatypes in access? There's a couple other weird conversions, like boolean to text (with a number value) but I've been able to manage most of them. Also, I did try ODBCng after I read a couple reports on the performance numbers, but I found it didn't have the same ldap support the official one did, and that's a pretty big necessity for us. Any help would be much appreciated. TIA Thomas R. Hart II [EMAIL PROTECTED]
Re: [GENERAL] best way to handle enum type
Bruce Momjian wrote: Tom Hart wrote: On a side note, I was just reading through the 8.3 changelog, (where I read about the enum datatype) and I noticed this line * Widen the MONEY data type to 64 bits (D'Arcy Cain) This greatly increases the range of supported MONEY values. I may be mistaken, but when I started developing this system (a data mine for a financial institution) I was told that the money datatype was deprecated and should not be used. Is this datatype still being worked on, and would it be viable to use in my development, as it is currently or in preparation for 8.3? Uh, yea, we did discuss this during 8.3 development, and you will see in the 8.3 docs that MONEY is no longer marked as deprecated. http://momjian.us/main/writings/pgsql/sgml/datatype-money.html Basically MONEY had some major limitations but now someone is working on improving it so we probably will keep it. We still have these TODO items for MONEY: * Add locale-aware MONEY type, and support multiple currencies http://archives.postgresql.org/pgsql-general/2005-08/msg01432.php http://archives.postgresql.org/pgsql-hackers/2007-03/msg01181.php * MONEY dumps in a locale-specific format making it difficult to restore to a system with a different locale Ok, that's pretty much what I was told. Our data mine doesn't need to be aware of other locales/monetary formats atm, but given the changes that are happening with the datatype, I think I'll just have to stick with numeric(12,2). Are those TODO items scheduled to be worked on in 8.4 or are they more of a 'someday' item?
Re: [GENERAL] best way to handle enum type
Peter Eisentraut wrote: Tom Hart wrote: Hey everybody. I have a field that, in my earlier mySQL days would have been an enum('q','y','m','c'), and I'm wondering what's the best way to handle this in pgsql. If it's an option, upgrade to 8.3 and use the new enum support. Oops, I think I just got caught not doing my homework :-) Thanks for being nice about it Peter. I don't think I'll be able to convince my supervisor to install a beta while we're still developing the system, but once it becomes more stable (both my system and 8.3) then it's definitely something we'll look at. Thanks for your reply. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] best way to handle enum type
Tom Hart wrote: Peter Eisentraut wrote: Tom Hart wrote: Hey everybody. I have a field that, in my earlier mySQL days would have been an enum('q','y','m','c'), and I'm wondering what's the best way to handle this in pgsql. If it's an option, upgrade to 8.3 and use the new enum support. Oops, I think I just got caught not doing my homework :-) Thanks for being nice about it Peter. I don't think I'll be able to convince my supervisor to install a beta while we're still developing the system, but once it becomes more stable (both my system and 8.3) then it's definitely something we'll look at. Thanks for your reply. On a side note, I was just reading through the 8.3 changelog, (where I read about the enum datatype) and I noticed this line * Widen the MONEY data type to 64 bits (D'Arcy Cain) This greatly increases the range of supported MONEY values. I may be mistaken, but when I started developing this system (a data mine for a financial institution) I was told that the money datatype was deprecated and should not be used. Is this datatype still being worked on, and would it be viable to use in my development, as it is currently or in preparation for 8.3?
[GENERAL] speed up insert query
Hey everybody. I'm trying to speed up a query (not general optimization, one query in particular), and I'm not sure if there's any way to get it to go faster. The query looks like this INSERT INTO transaction ( "tr_acct_num", "tr_acct_typ", "tr_atm_rec", "tr_audit_seq", "tr_branch_cd", "tr_cash_amt", ... "tr_tran_time", "tr_trn_rev_point", "tr_typ", "tr_typ_cd", "atm_trn_reg_e", "dataset" ) SELECT iq_numeric("tr_acct_num"), "tr_acct_typ", iq_numeric("tr_atm_rec"), iq_numeric("tr_audit_seq"), iq_numeric("tr_branch_cd"), iq_numeric("tr_cash_amt"), ... cast("tr_tran_time" as time), iq_numeric("tr_trn_rev_point"), iq_numeric("tr_typ"), iq_numeric("tr_typ_cd"), "atm_trn_reg_e", 0 FROM transaction_import WHERE is_ok = 'TRUE' ; There's not a lot I seem to be able to do about the select portion of this query (index on is_ok, the planner didn't even want to use it), but is there anything I can do to speed up the import? This is the EXPLAIN ANALYZE on the query QUERY PLAN - Seq Scan on transaction_import (cost=0.00..30953.68 rows=69239 width=434) (actual time=0.146..2974.609 rows=68913 loops=1) Filter: is_ok Total runtime: 179091.119 ms (3 rows) The query is inserting ~70,000 rows into a table with ~1.8 million rows already in it. Anybody have any idea how I can keep this query from taking so long? -- Tom Hart IT Specialist Cooperative Federal 723 Westcott St. Syracuse, NY 13210 (315) 471-1116 ext. 202 (315) 476-0567 (fax) ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] speed up insert query
Martin Gainty wrote: 2 things tr_tran_time needs to be already in 'time format' is_ok needs to be indexed (preferably bitmapped index) HTH/ Martin The data is COPY'ed from csv's that our internal software creates, and we don't have control over output format. Is coaxing tr_tran_time into proper time format on the _import table going to be less costly than doing it on the fly in the query? Also, there are a couple more casts in the query (as date). Are casts extremely costly? The iq_numeric function uses regex to determine whether to return a number or null. How costly are regex based functions used like this? I can't see it being more efficient to edit this data while it's in a table with all text fields, no key (have to permit duplicates at this stage), and as of yet no indexes. (As I said I tried an index on is_ok, both a btree and a hash, and the planner seems completely uninterested). Also, I'm sure you've heard this, but the date on your email client is drastically wrong. I appreciate your assistance but I can only imagine that there are quite a few people missing your good advice because they're not looking through the new posts from 2000. Hey everybody. I'm trying to speed up a query (not general optimization, one query in particular), and I'm not sure if there's any way to get it to go faster. The query looks like this INSERT INTO transaction ( "tr_acct_num", "tr_acct_typ", "tr_atm_rec", "tr_audit_seq", "tr_branch_cd", "tr_cash_amt", ... "tr_tran_time", "tr_trn_rev_point", "tr_typ", "tr_typ_cd", "atm_trn_reg_e", "dataset" ) SELECT iq_numeric("tr_acct_num"), "tr_acct_typ", iq_numeric("tr_atm_rec"), iq_numeric("tr_audit_seq"), iq_numeric("tr_branch_cd"), iq_numeric("tr_cash_amt"), ... cast("tr_tran_time" as time), iq_numeric("tr_trn_rev_point"), iq_numeric("tr_typ"), iq_numeric("tr_typ_cd"), "atm_trn_reg_e", 0 FROM transaction_import WHERE is_ok = 'TRUE' ; -- Tom Hart IT Specialist Cooperative Federal 723 Westcott St. Syracuse, NY 13210 (315) 471-1116 ext. 
202 (315) 476-0567 (fax)
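For reference, iq_numeric() is a custom function whose source is not shown in the thread; this Python sketch re-creates only its described behavior (regex test, then return the number or NULL). The exact regex is an assumption:

```python
import re

# Hypothetical re-creation of the poster's iq_numeric() conversion
# helper: return the value as a number when it looks numeric, otherwise
# None (which becomes NULL on insert). The pattern below is a guess at
# what the original PL function's regex might accept.
_NUMERIC_RE = re.compile(r'^[+-]?(\d+(\.\d*)?|\.\d+)$')

def iq_numeric(text):
    if text is None:
        return None
    text = text.strip()
    return float(text) if _NUMERIC_RE.match(text) else None

print(iq_numeric("123.45"), iq_numeric("12a3"))  # 123.45 None
```

A per-value regex like this runs once per field per row, so on a 70,000-row load its cost is usually dwarfed by index maintenance on the target table, which matches the resolution later in this thread.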
Re: [GENERAL] speed up insert query
Tom Hart wrote: Martin Gainty wrote: 2 things tr_tran_time needs to be already in 'time format' is_ok needs to be indexed (preferably bitmapped index) HTH/ Martin The data is COPY'ed from csv's that our internal software creates, and we don't have control over output format. Is coaxing tr_tran_time into proper time format on the _import table going to be less costly than doing it on the fly in the query? Also, there are a couple more casts in the query (as date). Are casts extremely costly? The iq_numeric function uses regex to determine whether to return a number or null. How costly are regex based functions used like this? I can't see it being more efficient to edit this data while it's in a table with all text fields, no key (have to permit duplicates at this stage), and as of yet no indexes. (As I said I tried an index on is_ok, both a btree and a hash, and the planner seems completely uninterested). Also, I'm sure you've heard this, but the date on your email client is drastically wrong. I appreciate your assistance but I can only imagine that there are quite a few people missing your good advice because they're not looking through the new posts from 2000. I found at least one major optimization (or rather de-optimization already in place, if the english language doesn't mind being flexed a bit). My supervisor was playing with indexes on the rather large transaction table. It turns out he had two multi-column indexes that were composed of fields that were already indexed. These two indexes didn't seem to be helping queries against the table much, but removing just those two (there are still 4 or 5 single column indexes) cut my execution time by 70%. That brings it to a much more manageable amount of time. Thanks for your reply. Hope this helps somebody else :-) Hey everybody. I'm trying to speed up a query (not general optimization, one query in particular), and I'm not sure if there's any way to get it to go faster. 
The query looks like this INSERT INTO transaction ( "tr_acct_num", "tr_acct_typ", "tr_atm_rec", "tr_audit_seq", "tr_branch_cd", "tr_cash_amt", ... "tr_tran_time", "tr_trn_rev_point", "tr_typ", "tr_typ_cd", "atm_trn_reg_e", "dataset" ) SELECT iq_numeric("tr_acct_num"), "tr_acct_typ", iq_numeric("tr_atm_rec"), iq_numeric("tr_audit_seq"), iq_numeric("tr_branch_cd"), iq_numeric("tr_cash_amt"), ... cast("tr_tran_time" as time), iq_numeric("tr_trn_rev_point"), iq_numeric("tr_typ"), iq_numeric("tr_typ_cd"), "atm_trn_reg_e", 0 FROM transaction_import WHERE is_ok = 'TRUE' ; -- Tom Hart IT Specialist Cooperative Federal 723 Westcott St. Syracuse, NY 13210 (315) 471-1116 ext. 202 (315) 476-0567 (fax) ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] How to automate password requests?
Marten Lehmann wrote: Hello, I'm trying to automate some postgresql scripts, but I cannot find a way to pass a password directly to commands like pg_dump psql and so on. Even an echo "password" | psql doesn't work, the password prompt of psql is still waiting. mysql has the -p option. What would be the postgresql equivalent? I don't want to enter passwords dozens of times. Regards Marten Read about something called the .pgpass file. There's a way to set a file that contains the password (pgAdmin will create one automatically) that PostgreSQL will look for before it asks for your password. It's stored in ~/ The solution I use is a bat file that redefines an environment variable (PGPASSFILE) that points to the file, then runs pg_dumpall. -- Tom Hart IT Specialist Cooperative Federal 723 Westcott St. Syracuse, NY 13210 (315) 471-1116 ext. 202 (315) 476-0567 (fax)
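For anyone landing here, the .pgpass format (documented in the libpq manual) is one colon-separated entry per line; the host, database, user, and password below are placeholders:

```text
# ~/.pgpass (or the file named by PGPASSFILE) - checked before prompting
# hostname:port:database:username:password   (* works as a wildcard)
192.168.1.10:5432:warehouse:foobar:secret
```

On Unix the file must not be group- or world-readable (chmod 600) or libpq ignores it; on Windows, pointing the PGPASSFILE environment variable at the file from a bat script, as described above, is the usual route.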
Re: [GENERAL] Online Oracle to Postgresql data migration
Josh Harrison wrote: Hi We have an Oracle production database with some terabytes of data. We wanted to migrate that to a Postgresql database (right now a test database, not production). What are the good options to do that? Please advise me on where to look for more information on this topic thanks josh You can start here: http://en.wikipedia.org/wiki/Extract%2C_transform%2C_load -- Tom Hart IT Specialist Cooperative Federal 723 Westcott St. Syracuse, NY 13210 (315) 471-1116 ext. 202 (315) 476-0567 (fax)
Re: [GENERAL] advocacy: drupal and PostgreSQL
Joshua D. Drake wrote: Robert Treat wrote: There's been a big move in the php community to push people towards php5 (one of which was EOL of php4), which has started to pay off. I'd guess that if they wanted to, they could switch to PDO with Drupal 7 and not hurt themselves too much. When I spoke with Dries about this issue one of the big hold backs wasn't PHP 4 but actually MySQL 3. With Drupal 6, MySQL 3 is no longer supported. So they can actually do some nicer stuff (like foreign keys) etc.. I am sure that with PHP5 things will improve as well. Sincerely, Joshua D. Drake Let me just sneak in a quick rant here, from somebody who really doesn't matter. We run drupal for our corporate intranet (currently being built) and we use postgreSQL as the backend. Some of the modules and things don't work perfectly, but drupal supported it and that made me happy enough to work with it. Now after reading this garbage, I'm extremely disappointed. Completely dropping postgresql capability might not affect them too much in the long run, because a large part of their user base is using mySQL, but it would send a message to those of us that believe in choice. I'm afraid that they're choosing the route of convenience over their users, and every time I think about it I want to go looking for replacements. It'd be easier to build drupal to only run on mySQL, but then again it'd be easy to build postgreSQL to only run on linux and forget about the windows users. I know it's not their duty to make drupal work with postgresql, but if they drop it like they're talking about, I'll be making a push here and to everyone I know who uses drupal to switch to another system, whether they're running postgres or not. If drupal 6 absolutely doesn't support postgres, then I'm dropping my drupal 5 install on the spot. This is a cold move drupal, and you should be ashamed. Sorry, I'll end the rant here. 
BTW, I'm a PHP developer who uses postgreSQL almost exclusively and I'm on this list as well as other postgres lists constantly (even if as a reader most of the time). If they have this big of an issue, why not ask for help? -- Tom Hart IT Specialist Cooperative Federal 723 Westcott St. Syracuse, NY 13210 (315) 471-1116 ext. 202 (315) 476-0567 (fax)
Re: [GENERAL] advocacy: drupal and PostgreSQL
Bill Moran wrote: In response to Tom Hart <[EMAIL PROTECTED]>: Let me just sneak in a quick rant here, from somebody who really doesn't matter. We run drupal for our corporate intranet (currently being built) and we use postgreSQL as the backend. Some of the modules and things don't work perfectly, but drupal supported it and that made me happy enough to work with it. Now after reading this garbage, I'm extremely disappointed. Completely dropping postgresql capability might not affect them too largely in the huge run, because a large amount of their user base is using mySQL, but it would send a message to those of us that believe in choice. I'm afraid that they're choosing the route of convenience over their users, and every time I think about it I want to go looking for replacements. I run my personal site on Drupal+PostgreSQL. If Drupal drops PG support, I'll switch the front-end. I'm not switching the back end. I'm also planning a small enterprise that I was originally considering using Drupal for. I'm now more seriously considering Bricolage. However, read on ... It'd be easier to build drupal to only run on mySQL, but then again it'd be easy to build postgreSQL to only run on linux and forget about the windows users. I know it's not their duty to make drupal work with postgresql, but if they drop it like they're talking about, I'll be making a push here and to everyone I know who uses drupal to switch to another system, whether they're running postgres or not. If drupal 6 absolutely doesn't support postgres, then I'm dropping my drupal 5 install on the spot. This is a cold move drupal, and you should be ashamed. I made a post on the drupal-devel list to this effect. I got chewed out for "flaming" Karoly ... who's obviously some big Drupal code guru. Frankly, every time this topic comes up, it's initiated by Karoly, and I've lost patience with the crap, so I unsubscribed. 
If I can get my life back in order, I'll re-subscribe some time in Feb, and hopefully start to do something productive, like contribute testing and patches. Sorry, I'll end the rant here. BTW, I'm a PHP developer who uses postgreSQL almost exclusively and I'm on this list as well as other postgres lists constantly (even if as a reader most of the time). If they have this big of an issue, why not ask for help? If you read through the thread, it's just Karoly and a few other minor players in the Drupal community. Many people have stepped up and said, "I _do_ test on PostgreSQL, so what are you complaining about?" As best I can tell, Karoly writes patches, and when they don't work on PostgreSQL and therefore don't get committed right away, he starts this argument up on the Drupal lists yet again. The guy is a whiner who has a personal axe to grind and seems unable to accept that Drupal wants to run on more than just MySQL. If he loves MySQL so much, he should join a project that only supports MySQL and leave the Drupal people to their work. There's a LOT of effort in the Drupal community to build code abstractions that will make the system database-agnostic, and Karoly's constant whining is simply counterproductive. To a large degree, I think Karoly has blown the situation out of proportion. Look at how it affects _this_ list every time he starts bitching, for example. Is it just Karoly (chx) who has all these things to say about pg? He's just one person on the drupal team. Has anybody else in the core team spoken out on this subject? Let's keep in mind as well that this doesn't only affect pg users but any other database as well that drupal supports or plans on supporting. Drupal is pretty popular, and I expect there are a number of organizations that don't fit in their mold of the "ideal drupal user". I'd almost consider trying to take drupal and create a derivative product and build in the pg and oracle and mssql, etc. 
support myself, but if the drupal team really pulls a messed up move like this, I really don't want to have anything to do with them anymore. It's not that I'm that huge of a pg nut (I used mySQL for a while myself), but any team that can turn its back on that many of its users to make their lives a little easier isn't in it for the right reasons (the advancement of technology, computing as a science, etc.). I am literally astonished that they would even consider telling even 1% of their users "Take off, you're too much work". How many drupal+postgres users are large corporations, or regular donators? What about code contributors? How many people are they looking at pissing off with a move like this? Obviously emotion has gotten the better of me which is why I won
[GENERAL] ascii to utf-8
Hello everybody. I hope your week's going well so far. I built our data mine in postgreSQL around 3 months ago and I've been working with it since. Postgres is great and I'm really enjoying it, but I've hit a bit of a hitch. Originally (and against pgAdmin's good advice, duh!) I set up the database to use ASCII encoding. However we have a large base of Spanish speaking members and services, and we need utf-8 encoding to maintain and support the extended character sets. In my naivety I thought it would be a relatively simple process to convert the db but I've found this to not be the case. I tried doing a dump and restore into a new database with the proper encoding, but pg_restore is getting hung up on one of the tables, our largest by far (~1gb, not huge I know). When I tried pg_restore from a command line (I was using pgAdmin, I know i'm a nub) I received this error. C:\Program Files\PostgreSQL\8.2\bin>pg_restore.exe -i -h 192.168.1.xxx -p 5432 -U foobar -d warehouse_utf8 -a -t "transaction" -v "O:\foo\bar\pg_dump_transaction.backup" pg_restore: connecting to database for restore Password: pg_restore: restoring data for table "transaction" pg_restore: [archiver (db)] Error while PROCESSING TOC: pg_restore: [archiver (db)] Error from TOC entry 1706; 0 17861 TABLE DATA transaction foobar pg_restore: [archiver (db)] COPY failed: ERROR: invalid byte sequence for encoding "UTF8": 0xc52f HINT: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding". CONTEXT: COPY transaction, line 209487 WARNING: errors ignored on restore: 1 I remember reading somewhere recently that I could use iconv to convert the ASCII encoded dump to UTF-8 encoding, but I'm currently on a windows box, and a windows server, so is there an easier way to do this? Also I was thinking perhaps it was possible to do an ETL type setup, where I can SELECT from the ASCII db and INSERT into the UTF-8 db. 
If you haven't gathered yet, I'm pretty much in the dark regarding encoding issues, especially when applied to pg, so any help here would be appreciated. -- Tom Hart IT Specialist Cooperative Federal 723 Westcott St. Syracuse, NY 13210 (315) 471-1116 ext. 202 (315) 476-0567 (fax)
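If the data turns out to be LATIN1 (a common case for Spanish accented text, and one possible reading of the 0xc5 byte in the error), the re-encoding iconv would do can also be done in Python, which runs fine on Windows. A sketch under that assumption, applicable only to plain-text dumps:

```python
# Re-encode LATIN1 bytes as UTF-8, the same job as
#   iconv -f LATIN1 -t UTF-8 dump.sql > dump_utf8.sql
# 0xC5 is the byte pg_restore complained about; interpreted as LATIN1
# it is 'Å', which becomes a perfectly valid UTF-8 sequence.
raw = b"SELECT 'a\xc5/b';"     # stand-in for one line of the dump
text = raw.decode("latin-1")   # every byte value is valid LATIN1
utf8 = text.encode("utf-8")    # now valid UTF-8 (0xC5 -> 0xC3 0x85)
print(utf8)
```

For a whole file, open the source with encoding="latin-1" and the destination with encoding="utf-8" and copy line by line; the file names and the LATIN1 assumption would need to be confirmed against the actual data.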
Re: [GENERAL] Best practices for protect applications agains Sql injection.
pepone.onrez wrote: Hi all I interesting in the protect my applications that use postgresql as is database backend from Sql Injections attacks, can any recommend me best pratices or references to protected postgres from this kind of malicious users. Thanks in advanced José Manuel, Gutíerrez de la Concha Martínez. SQL injection vulnerabilities are a product of the coding, not the database. In a typical sql injection vulnerability, the code (typically PHP or ASP, hopefully PHP) fails to sanitize the input of a parameter to a query (removing ; among other things), but the db is acting properly in such a situation. For example the query "SELECT * FROM users WHERE username = '$username';" is a pretty typical PHP generated query. if $username is input as foobar then the query "SELECT * FROM users WHERE username = 'foobar';" would work as intended. However if the username was "foobar'; DELETE FROM users;" then the query would become "SELECT * FROM users WHERE username = 'foobar'; DELETE FROM users;'" which is a perfectly legal query (except the last ' but it won't make much of a difference) and the db is acting as designed. It is the responsibility of the code to sanitize the input to keep this from happening by removing special characters such as ; and ' so there is no way (AFAIK) to utilize postgresql settings to protect against SQL injection. Check out this page: http://www.acunetix.com/websitesecurity/sql-injection.htm and this page: http://www.acunetix.com/websitesecurity/sql-injection2.htm for more information. -- Tom Hart IT Specialist Cooperative Federal 723 Westcott St. Syracuse, NY 13210 (315) 471-1116 ext. 202 (315) 476-0567 (fax) ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
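The reply above focuses on sanitizing input; the defense generally recommended alongside (or instead of) escaping is parameterized queries, where user input is passed separately from the SQL text and so can never be parsed as SQL. A sketch using Python's sqlite3 placeholders for illustration; PostgreSQL client libraries (e.g. PHP's pg_query_params) provide the same mechanism:

```python
import sqlite3

# With a placeholder, the hostile "username" below is bound as plain
# data: it matches no row and never executes the embedded DELETE.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT)")
conn.execute("INSERT INTO users VALUES ('foobar')")

hostile = "foobar'; DELETE FROM users;--"
rows = conn.execute("SELECT * FROM users WHERE username = ?",
                    (hostile,)).fetchall()
print(rows)                                   # no match: input is just data
print(conn.execute("SELECT count(*) FROM users").fetchone())  # table intact
```

The design point is that the query shape is fixed before any user data arrives, so there is nothing for an attacker's quotes and semicolons to terminate.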
[GENERAL] retry: converting ASCII to UTF-8
I didn't see this come through the first time, so I'm retrying. I apologize if this comes through twice. -- Hello everybody. I hope your week's going well so far. I built our data mine in postgreSQL around 3 months ago and I've been working with it since. Postgres is great and I'm really enjoying it, but I've hit a bit of a hitch. Originally (and against pgAdmin's good advice, duh!) I set up the database to use ASCII encoding. However we have a large base of Spanish speaking members and services, and we need utf-8 encoding to maintain and support the extended character sets. In my naivety I thought it would be a relatively simple process to convert the db but I've found this to not be the case. I tried doing a dump and restore into a new database with the proper encoding, but pg_restore is getting hung up on one of the tables, our largest by far (~1gb, not huge I know). When I tried pg_restore from a command line (I was using pgAdmin, I know i'm a nub) I received this error. C:\Program Files\PostgreSQL\8.2\bin>pg_restore.exe -i -h 192.168.1.xxx -p 5432 -U foobar -d warehouse_utf8 -a -t "transaction" -v "O:\foo\bar\pg_dump_transaction.backup" pg_restore: connecting to database for restore Password: pg_restore: restoring data for table "transaction" pg_restore: [archiver (db)] Error while PROCESSING TOC: pg_restore: [archiver (db)] Error from TOC entry 1706; 0 17861 TABLE DATA transaction foobar pg_restore: [archiver (db)] COPY failed: ERROR: invalid byte sequence for encoding "UTF8": 0xc52f HINT: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding". CONTEXT: COPY transaction, line 209487 WARNING: errors ignored on restore: 1 I remember reading somewhere recently that I could use iconv to convert the ASCII encoded dump to UTF-8 encoding, but I'm currently on a windows box, and a windows server, so is there an easier way to do this? 
Also I was thinking perhaps it was possible to do an ETL type setup, where I can SELECT from the ASCII db and INSERT into the UTF-8 db. If you haven't gathered yet, I'm pretty in the dark regarding encoding issues, especially when applied to pg, so any help here would be appreciated. -- Tom Hart IT Specialist Cooperative Federal 723 Westcott St. Syracuse, NY 13210 (315) 471-1116 ext. 202 (315) 476-0567 (fax) ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] ascii to utf-8
Tommy Gildseth wrote: Tom Hart wrote: Hello everybody. I hope your week's going well so far. I built our data mine in postgreSQL around 3 months ago and I've been working with it since. Postgres is great and I'm really enjoying it, but I've hit a bit of a hitch. Originally (and against pgAdmin's good advice, duh!) I set up the database to use ASCII encoding. However we have a large base of Spanish speaking members and services, and we need utf-8 ...snip snip pg_restore: [archiver (db)] COPY failed: ERROR: invalid byte sequence for encoding "UTF8": 0xc52f HINT: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding". CONTEXT: COPY transaction, line 209487 WARNING: errors ignored on restore: 1 Try editing your dump-file and change the line which reads "SET client_encoding = 'SQL_ASCII';" to "SET client_encoding = 'LATIN1';" I tried making the changes you specified with notepad, wordpad, gVim, vim and emacs and in each case pgAdmin (and pg_restore) complain about the dump header being corrupted. This has been kind of a pain since the file is ~ 65mb and it's difficult to load something that size into a text editor. I also did a head > file, edited the file, and then did head -n -10 >> file, but once again I had no success. Is there an easy way of doing this, or perhaps a different way of solving the problem? -- Tom Hart IT Specialist Cooperative Federal 723 Westcott St. Syracuse, NY 13210 (315) 471-1116 ext. 202 (315) 476-0567 (fax) ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
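One reason editors mangle a dump on save is that they re-encode it or add a byte-order mark; a byte-level replace avoids both. A sketch, assuming a plain-text dump (if the .backup file was made with pg_dump's custom format, it is binary and cannot be edited this way at all, which could also explain the "corrupted header" complaints):

```python
# Swap the client_encoding line while leaving every other byte of the
# dump untouched. Operating on bytes means nothing gets re-encoded.
def fix_encoding_line(data: bytes) -> bytes:
    return data.replace(b"SET client_encoding = 'SQL_ASCII';",
                        b"SET client_encoding = 'LATIN1';", 1)

# Hypothetical first lines of a plain-format dump:
sample = (b"--\n-- PostgreSQL database dump\n--\n"
          b"SET client_encoding = 'SQL_ASCII';\n")
print(fix_encoding_line(sample))
```

For the real file, read it with open(path, "rb"), pass the bytes through this function, and write with open(out, "wb"); the file names here are placeholders.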
Re: [GENERAL] ascii to utf-8
Tommy Gildseth wrote: Tom Hart wrote: Hello everybody. I hope your week's going well so far. I built our data mine in postgreSQL around 3 months ago and I've been working with it since. Postgres is great and I'm really enjoying it, but I've hit a bit of a hitch. Originally (and against pgAdmin's good advice, duh!) I set up the database to use ASCII encoding. However we have a large base of Spanish speaking members and services, and we need utf-8 ...snip snip pg_restore: [archiver (db)] COPY failed: ERROR: invalid byte sequence for encoding "UTF8": 0xc52f HINT: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding". CONTEXT: COPY transaction, line 209487 WARNING: errors ignored on restore: 1 Try editing your dump-file and change the line which reads "SET client_encoding = 'SQL_ASCII';" to "SET client_encoding = 'LATIN1';" Ok, so I figured out that head -n -10 and tail -n +10 are not the same thing, and I've got a decent file now. However when I try the restore I get this pg_restore.exe -i -h 192.168.1.xxx -p 5432 -U foobar -d warehouse_utf8 -a -t "transaction" -v "O:\foo\bar\fixed.backup" pg_restore: [archiver] out of memory Process returned exit code 1. I tried upping some of the memory settings in postgresql.conf. The server has ~2gb of RAM unused, and the file is ~65mb. Anybody have any ideas? Also, it's taking around an hour and a half for a message to go from my computer to being posted on the list. Is there a problem with the mailing list software? Thanks again for any assistance you can give me. -- Tom Hart IT Specialist Cooperative Federal 723 Westcott St. Syracuse, NY 13210 (315) 471-1116 ext. 202 (315) 476-0567 (fax) ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
[Fwd: Re: [GENERAL] retry: converting ASCII to UTF-8]
Martin Gainty wrote:
> Character encoding is implemented at the database level, not the table level.
> http://www.postgresql.org/docs/8.2/interactive/sql-createdatabase.html
>
> CREATE DATABASE name
>     [ [ WITH ] [ OWNER [=] dbowner ]
>            [ TEMPLATE [=] template ]
>            [ ENCODING [=] encoding ]
>            [ TABLESPACE [=] tablespace ]
>            [ CONNECTION LIMIT [=] connlimit ] ]
>
> vs. a table:
>
> CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } ] TABLE table_name (
>     [ { column_name data_type [ DEFAULT default_expr ] [ column_constraint [ ... ] ]
>       | table_constraint
>       | LIKE parent_table [ { INCLUDING | EXCLUDING } { DEFAULTS | CONSTRAINTS } ] ... }
>       [, ... ] ] )
>     [ INHERITS ( parent_table [, ... ] ) ]
>     [ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ]
>     [ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
>     [ TABLESPACE tablespace ]
>
> HTH
> M-
>
> ----- Original Message -----
> From: "Tom Hart" <[EMAIL PROTECTED]>
> To: "Postgres General List"
> Sent: Wednesday, January 23, 2008 3:32 PM
> Subject: [GENERAL] retry: converting ASCII to UTF-8
>
> I think you may have misunderstood. I realize that encoding is a database setting, which is why I originally dumped and recreated the whole db. I tried a full restore with some success, but it kept hanging on a specific table, a very large one. I decided to limit the dump and restore to that table, though the from database is encoded 'SQL_ASCII' and the to database is 'UTF8'. I must have explained it poorly.
>
> My current problem is waiting for the third message to finally post to this list so I can get help with my 'out of memory' error. Thanks for your help anyway :-)

-- Tom Hart, IT Specialist, Cooperative Federal
Re: [GENERAL] ascii to utf-8
Ok, that did it. Thank you for the help.

Tomasz Ostrowski wrote:
> On Wed, 23 Jan 2008, Tom Hart wrote:
>> pg_restore: [archiver (db)] COPY failed: ERROR: invalid byte sequence for encoding "UTF8": 0xc52f
>>> Try editing your dump-file and change the line which reads "SET client_encoding = 'SQL_ASCII';" to "SET client_encoding = 'LATIN1';"
>> I tried making the changes you specified with notepad, wordpad, gVim, vim and emacs and in each case pgAdmin (and pg_restore) complain about the dump header being corrupted.
>
> Try an "-E LATIN1" option in pg_dump. Do you at least know what is the encoding of data in the database?
>
> Regards
> Tometzky

-- Tom Hart, IT Specialist, Cooperative Federal
[GENERAL] postgresql book - practical or something newer?
Hey everybody. I was just informed that our organization has a credit at amazon.com and asked if I had any books I wanted. I've been thinking about getting a postgresql book, and from what I've seen and read Practical PostgreSQL seems to be the standard (as well as co-authored by Joshua Drake, somebody that has helped me many times on this very list), but the fact that it's based on 7.x worries me. I started using postgresql with 8.x on windows, and I'm wondering if this book and its teachings will help me or if I should look at something targeted at 8.x or windows. What do you guys think?

-- Tom Hart, IT Specialist, Cooperative Federal
Re: [GENERAL] postgresql book - practical or something newer?
Joshua D. Drake wrote:
> On Tue, 29 Jan 2008 12:53:25 -0500 Tom Hart <[EMAIL PROTECTED]> wrote:
>> Hey everybody. I was just informed that our organization has a credit at amazon.com and asked if I had any books I wanted. I've been thinking about getting a postgresql book, and from what I've seen and read Practical PostgreSQL seems to be the standard (as well as co-authored by Joshua Drake, somebody that has helped me many times on this very list) but the fact that it's based on 7.x worries me. I started using postgresql with 8.x on windows and I'm wondering if this book and its teachings will help me or if I should look at something targeted at 8.x or windows. What do you guys think?
>
> Practical PostgreSQL is still a good reference, and you can use the free web version as a reference. It lacks a lot of information that is very useful (e.g., the book has zero idea of pg_stat_*). The Korry Douglas book is still reasonably relevant (as it covers 8) and is also a good book.
>
> I find that the best way to get what you need is to read the fine manual from postgresql. Yes, it's massive, unwieldy and in a lot of ways counter-intuitive (to a newbie), but if you have the terminology down you aren't going to find a more comprehensive text. Plus, when you find things that don't quite make sense you can submit a doc patch to make the docs that much better.
>
> Sincerely,
> Joshua D. Drake
>
> The PostgreSQL Company: Since 1997, http://www.commandprompt.com/
> Sales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240
> Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
> PostgreSQL SPI Liaison | SPI Director | PostgreSQL political pundit

I've checked out the docs online, and they've helped me a great deal. I've also read excerpts from the free online version of Practical. I'll stick with my current strategy of online docs/mailing list for now. BTW, thanks for not completely plugging your book.

Have you guys considered authoring another on 8.x?

-- Tom Hart, IT Specialist, Cooperative Federal
Re: [GENERAL] postgresql book - practical or something newer?
Tom Lane wrote:
> "vincent" <[EMAIL PROTECTED]> writes:
>> In the manual yes, but I think there's definitely a need for a howto document, something that demonstrates how to handle typical database functionality in PgSQL. Many of the people I've convinced to start using PostgreSQL spend the first week or so asking me questions on how to do basic things in PostgreSQL. When I say that there's a manual, the complaint usually is what I've noticed myself: the manual is great for looking up individual facts, but your problem may consist of 15 facts and it's up to you to connect the dots.
>
> Surely even a book that's a little out-of-date can serve fine for that kind of introduction?
>
> regards, tom lane

I agree that it would be useful as an introduction, but I have 4 years of mySQL experience (I know, I'm sorry) and I've been working with postgres for the past 3-4 months, during which time I've built a data mine by hand and set up a few different web apps running against it (drupal, openreports, etc.), so I think I'm past the introduction phase. What I was looking for was an intermediate-level (call me presumptuous) book with more performance tips and advanced techniques/functions. Even though this book may have some of that information in it, it's going to be based on 7.x, and the entire thing is available online (as well as the docs, which personally I like).

And on the subject of beginner's documentation, I think I learned a lot more playing/hacking/reading docs/posting here (of course that's always been my preferred learning method) than I would have with a book. Everybody has their own learning style and different things work well for different people. The key here is that when it's up to you to "connect the dots" you learn what the dots are, how they relate to each other, and what each of them is for. That gives you a lot better understanding than "Just run SELECT count(*) FROM a LEFT JOIN...".

Of course that's just my opinion, I could be wrong :-)

-- Tom Hart, IT Specialist, Cooperative Federal
Re: [GENERAL] postgresql book - practical or something newer?
Glyn Astill wrote:
> More documentation would be nice, but surely it's more down to getting the type of user base that writes your average "how to" books? The O'Reilly books seem to cover postgres quite nicely, however I've only had a flick through in shops.
>
> One thing's for sure: 2 months ago I signed up to the most common postgresql and m*sql lists when I was trying to decide what was best for our backend. At the time m*sql was my 1st choice, and it took me less than a day to drop those toys in the street and decide postgresql was the way forward.

I definitely think that the lists are one of the shining stars of postgresql support. I've learned some good reference stuff from online docs/google, but the really tricky questions were only answered here, and amazingly enough, quickly and with good humor. Perhaps what we really need is somebody to comb through the archives looking for common problems or exceptional solutions and compile them into a "book".

-- Tom Hart, IT Specialist, Cooperative Federal
Re: [GENERAL] postgresql book - practical or something newer?
Joshua D. Drake wrote:
> On Wed, 30 Jan 2008 13:55:12 -0500 Tom Hart <[EMAIL PROTECTED]> wrote:
>> I definitely think that the lists are one of the shining stars of postgresql support. [...] Perhaps what we really need is somebody to comb through the archives looking for common problems or exceptional solutions and compile them into a "book".
>
> /me looks hard at Tom Hart... "Yep, looks like a volunteer to me" said Bob.
>
> Joshua D. Drake

/me misses the good old days :-)

I'm definitely willing to participate in this, or maybe just start it and pass it off, but as much as I'd love to put something like this together, I currently have no internet at home (I thought nerds weren't supposed to be dead poor), and doing this all at work wouldn't be my boss's idea of high productivity, though admittedly he is the one who got me into postgreSQL and is definitely open-source friendly. I just don't think he'll want to be paying me wages to create postgreSQL docs.

Is there anybody else out there who is interested in working with me on a project like this? I think it'd be an excellent way to contribute back to the list/community for the assistance we've received here that wouldn't have been given anywhere else (especially not free of charge). My e-mail is [EMAIL PROTECTED] if you're interested.

-- Tom Hart, IT Specialist, Cooperative Federal
[GENERAL] Mailing list archives/docs project
Hello everybody. What started as a question about "Practical PostgreSQL" has ballooned into a project to create another documentation resource, compiled entirely from mailing list archives. While discussing documentation in the general list I realized that the resource I had learned the most from, and that had been the most helpful to me, was the mailing lists themselves. It was from this thought that an idea was born: what if we compile a book of hand-picked mailing list archives to address some of the intermediate/advanced and less-used/documented features of postgresql, along with well-written solutions to not-so-common problems? Well, I've decided to run with it, both to further my own knowledge of postgreSQL and to contribute back to the community in the first way that I thought of.

The purpose of this message is a call for help. I'm looking for anybody willing to help comb the archives for exceptional excerpts or submit their own list questions/experiences. Anybody willing to help in any way is more than welcome, but the first (and arguably most arduous) stage of this project will be the information gathering itself, hence the plea. If anybody is interested in contributing, email me at [EMAIL PROTECTED] and we can go from there.

And of course a big thanks to anybody that takes their time to answer or assist another person. I don't know about anybody being paid to be here, so our thanks is all the compensation we can offer. Sorry it's not a BMW :-)

-- Tom Hart, IT Specialist, Cooperative Federal
Re: [GENERAL] Mailing list archives/docs project
Ivan Sergio Borgonovo wrote:
> On Wed, 30 Jan 2008 16:10:07 -0500 Tom Hart <[EMAIL PROTECTED]> wrote:
>> Hello everybody. What started as a question about "Practical PostgreSQL" has ballooned into a project to create another [snip]
>
> I generally tend to write up stuff that was not clear to me on my website: http://www.webthatworks.it/d1/taxonomy_menu/2/3/10/11 (very little info there at this moment; I was planning to publish some more notes that now are just local txt files).
>
> Other sources of info are:
> http://www.network-theory.co.uk/docs/postgresql/vol1/index.html (and the other volumes)
> http://www.postgresql.org/files/documentation/books/aw_pgsql/15467.html (good tech articles, but hard to spot among all the "non reference/example" material)
> http://people.planetpostgresql.org/xzilla/ (great resource for how-tos)
> http://www.varlena.com/
>
> I downloaded an "annotated postgresql.conf guide for postgres" in pdf, but I can't find the source. I'm a dev, not a DBA, so I generally don't collect info about management and tuning.
>
> BTW nice drupal website. I'm looking for a drupal web designer.

Thanks for the response. The book/document itself will focus on the mailing list archives, but I have forwarded the information on to the team that has assembled so far, to look at the possibility of creating an "appendix" with extra information such as what's above.

Also, I'm glad you like our site :-) The external site was developed by my supervisor, and I'm currently involved in developing our intranet on a separate drupal install. Unfortunately neither of us is very talented :-) but I know there's about 10 billion drupal developers and theme designers out there, so it shouldn't be too hard to find somebody who can help you out.

-- Tom Hart, IT Specialist, Cooperative Federal
Re: [GENERAL] postgresql book - practical or something newer?
Christopher Browne wrote:
> On Jan 31, 2008 4:40 PM, Guy Rouillier <[EMAIL PROTECTED]> wrote:
>> Robert Treat wrote:
>>> Just so you know, I approached O'Reilly about writing a PostgreSQL Cookbook, and they turned it down. They did offer me some other titles, but those don't seem to have gone anywhere.
>>
>> As someone else pointed out in this thread, very much of what you need to know has been previously discussed at one point; the hard part is finding it. What we need is for some of the people with the big brains ;) to come up with some new kind of "hyperbook". That would be the documentation in some form similar to what it is today, but somehow connected to the discussions that happen in the mailing lists. That way, when something really insightful or helpful gets said in the mailing lists, it can get connected to a particular place in the documentation. Then over time, the doc maintainers can take the best of those and incorporate them directly into the docs at the appropriate place.
>
> The trouble is that this is nearly as much trouble as actually writing a book, and doesn't provide a clear incentive for people to put in the effort of making it happen. There's the problem (and it is, to a degree, truly a problem) that the "postgreSQL book" market hasn't been lucrative enough to draw people into writing books. And honestly, it *needs* to be more lucrative. If I'm thinking about alternative uses for my spare time, writing does not appear to be a particularly profitable use. Finding a "poor man's way" to generate a "hyperbook" actually needs much the same sorts of skills and efforts, even though it probably provides those that put in the effort with *fewer* benefits.

Personally I'm surprised that the last couple responses seem to center around not being able to make much money off of it. I agree that it would require some time investment, but so did building PG in the first place. Countless people have already sacrificed hours upon hours of their time with no return on their investment except pride in their work and a better overall product for everybody to use. I'm not a talented enough programmer to contribute to the code, but in this way I can do something to give back to the pg community.

-- Tom Hart, IT Specialist, Cooperative Federal
Re: [GENERAL] postgresql book - practical or something newer?
Joshua D. Drake wrote:
> On Tue, 5 Feb 2008 23:07:37 -0500 (EST) Greg Smith <[EMAIL PROTECTED]> wrote:
>> Can anyone think of another place a community docs wiki could go? I don't have any good web hosting facilities here right now. I just took a look at buying a cheap host somewhere, but I feel it would be inappropriate to host a PostgreSQL documentation wiki on a shared host where the underlying database was *censored*.
>
> CMD will host anything you need.
>
> Joshua D. Drake

So who wants to volunteer to take the lead and set up a wiki on CMD hosting? All we need is one person to get this going, and then the community can have at it.

-- Tom Hart, IT Specialist, Cooperative Federal
Re: [GENERAL] postgresql book - practical or something newer?
Joshua D. Drake wrote:
> On Mon, 11 Feb 2008 18:50:41 -0500 (EST) Greg Smith <[EMAIL PROTECTED]> wrote:
>> I could help out with the initial setup; you could just have somebody internally do the install and let me have an account when it's ready, whatever makes sense for you. I have two articles I can submit as examples of a good format for people to use to push some initial content in there; I may turn those into a template or something. Let me know what I can do to help get this going.
>
> I just got back from SCALE; let me talk to the guys and see which machine this needs to go on, and I will get back with you.
>
> Joshua D. Drake

Are we still waiting on this, or did the discussion move off-list?

-- Tom Hart, IT Specialist, Cooperative Federal
[GENERAL] utf8 issue
Hello everybody. I recently converted my db from ASCII encoding to UTF8 (we have a lot of Spanish-speaking members, and need the extra character support). Everything was working great, but I noticed this error while trying to COPY one of our tables from a csv:

ERROR: invalid byte sequence for encoding "UTF8": 0xb9
SQL state: 22021
Hint: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".
Context: COPY transaction_import, line 59358

Is there anything I can do to look for these sorts of errors and fix them? I'm really not great with character encodings and I'm not sure where to go on this. Any help?

-- Tom Hart, IT Specialist, Cooperative Federal
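[Editor's note: one way to hunt for bytes like that 0xb9 before running COPY is to let iconv validate the file, since a UTF-8 to UTF-8 "conversion" fails on the first invalid sequence. A sketch on a hypothetical import.csv; the latin1 conversion at the end mirrors the fix that eventually worked in this thread:]

```shell
# Build a sample csv containing one byte (0xb9, octal \271) invalid in UTF-8.
printf 'good line\n'     >  import.csv
printf 'bad \271 byte\n' >> import.csv

# A UTF-8 -> UTF-8 conversion is a pure validity check: nonzero exit = bad bytes.
if iconv -f UTF-8 -t UTF-8 import.csv >/dev/null 2>&1; then
    echo "import.csv: clean"
else
    echo "import.csv: invalid UTF-8 found"   # this branch fires here
fi

# If the stray bytes are really latin1 (0xb9 is a superscript 1 there),
# converting the whole file repairs it:
iconv -f ISO-8859-1 -t UTF-8 import.csv > import.utf8.csv
iconv -f UTF-8 -t UTF-8 import.utf8.csv >/dev/null 2>&1 && echo "import.utf8.csv: clean"
```

The repaired file can then be fed to COPY without tripping the encoding check.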
Re: [GENERAL] How to copy tables between databases?
Kynn Jones wrote:
> Is there a simple way to copy a table from one database to another without generating an intermediate dump file? TIA! Kynn

You're looking for ETL: http://en.wikipedia.org/wiki/Extract%2C_transform%2C_load

-- Tom Hart, IT Specialist, Cooperative Federal
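[Editor's note: for the common case of two reachable PostgreSQL databases, the lightest-weight ETL is a plain pipe: pg_dump writes the table to stdout and psql replays it into the target, so no intermediate file ever touches disk. The sketch below only assembles and prints the command, since the database and table names are hypothetical and no server is assumed:]

```shell
TABLE=members      # hypothetical table name
SRC_DB=source_db   # hypothetical source database
DST_DB=target_db   # hypothetical target database

# The real invocation would be the line below; for databases on different
# hosts, add -h/-U options on each side of the pipe:
#   pg_dump --table="$TABLE" "$SRC_DB" | psql "$DST_DB"
echo "pg_dump --table=$TABLE $SRC_DB | psql $DST_DB"
# -> pg_dump --table=members source_db | psql target_db
```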
Re: [GENERAL] utf8 issue
Richard Huxton wrote:
> Tom Hart wrote:
>> Hello everybody. I recently converted my db from ASCII encoding to UTF8 (we have a lot of Spanish-speaking members, and need the extra character support). Everything was working great, but I noticed this error while trying to COPY one of our tables from a csv:
>>
>> ERROR: invalid byte sequence for encoding "UTF8": 0xb9
>
> Not converted, I fear. Why not use latin9 rather than utf8 - that should cover everything and is probably what it's in anyway.

I was specifically requested to use UTF-8, and the work's already been done, so converting again is not my favorite option.

>> Is there anything I can do to look for these sorts of errors and fix them? I'm really not great with character encodings and I'm not sure where to go on this. Any help?
>
> Google a bit for iconv (and postgresql) to get your characters valid.

I already have a php script that does some data scrubbing before the copy. I added this line to the script and things seem to be working better now:

$line = iconv("ISO-8859-1", "UTF-8", $line);

Thanks for the help guys :-)

-- Tom Hart, IT Specialist, Cooperative Federal