Re: [GENERAL] Recommended method for creating file of zeros?
At 2:17a -0400 on 14 Oct 2007, Jason L. Buberel wrote:

> create file '00A4' and fill it with 256k zeros. Is there a quick and
> easy linux-way of creating such a beast?

The tool is 'dd' together with /dev, /dev/zero in this case. The summary of
what you asked:

$ dd if=/dev/zero of=./zblah count=1 bs=256k
1+0 records in
1+0 records out
262144 bytes (262 kB) copied, 0.00130993 seconds, 200 MB/s

$ dd if=/dev/zero of=./zblah count=1 bs=256000
1+0 records in
1+0 records out
256000 bytes (256 kB) copied, 0.00136915 seconds, 187 MB/s

HTH,
Kevin
Re: [GENERAL] atomic commit;begin for long running transactions , in combination with savepoint.
On 10/13/07, syan tan <[EMAIL PROTECTED]> wrote:

> I was wondering whether there could be an atomic commit;begin command
> for postgresql, in order to open up a transaction at the beginning of
> a unit of work in a client session, so that client applications don't
> have to duplicate work with having optimistic locking and version id
> fields in their table rows. Savepoint doesn't actually commit writes
> in a transaction up to the time savepoint is called, but it's useful
> for detecting conflicts while allowing work to continue (e.g. with a
> timeout set). The atomic commit;begin wouldn't be necessary except
> that a client could crash before the end of the unit of work, and the
> work done up to that point would be lost with the transaction. The
> atomic commit;begin is so that clients can use postgresql's mechanisms
> for detecting concurrency read/write conflicts by issuing savepoints
> before each write, instead of the current need to: begin; select for
> update xxx, client_versionid (or xmin); (client checks the version id
> hasn't changed against the version id stored when last selected for
> read); update; commit.

I'm not following your train of thought. It sounds as though you want
to commit data without actually leaving your current transaction, but
what do you need the transaction for?

I don't understand how an atomic COMMIT;BEGIN would help. Consider a
transaction with serializable isolation: your snapshot view of the
data exists exactly as long as your transaction does. A COMMIT
followed by a BEGIN, whether atomic or not, is going to change your
view of the data.

If you want it to do something else, what is that exactly?

> Also, if the transaction is in read committed mode, then if a write
> failed and a rollback to savepoint was done, you could do the select
> again, get the new value, inform the client, and if the user elected
> to go ahead, overwrite with their new value. It would work the second
> time, because one has read the committed select.

What is preventing you from doing that now?
[GENERAL] fmgr_info_cxt_security() modifies variable?
Hi,

I'm calling an arbitrary user-defined function from the backend. Although I
can do it via FunctionCallInvoke, I have a weird problem when calling
fmgr_info. The call results in an argument variable (eventType) being
cleared. A gdb hardware watch says that the variable is modified by this
line (185) in fmgr.c:

    finfo->fn_extra = NULL;

Any ideas of what is happening? For simplicity, in the following snippet I
have hardcoded foid to some (PL/pgSQL) function id and removed the rest of
the call-related statements.

static void
execEventTypeFunc(char *eventType, Oid funcId)
{
    FmgrInfo   *finfo;
    FunctionCallInfoData fcinfo;
    Datum       result;
    Oid         foid = 17283;

    finfo = (FmgrInfo *) palloc0(sizeof(FmgrInfo));
    fmgr_info(foid, finfo);
    ...
}

Thanks,
Luis
Re: [GENERAL] Database reverse engineering
> I am trying to use postgresql-autodoc. The autodoc finds all the Perl
> modules and compiles but when I go to /usr/local/bin and run
> postgresql_autodoc like this

I had good luck with schema-spy (done in Java)...

http://schemaspy.sourceforge.net/

Vlad

ps: the command I use is (all on one line):

java -jar ~/app/schemaspy/schemaSpy_3.1.1.jar -t pgsql
    -cp /usr/share/java/postgresql-8.2-506.jdbc3.jar
    -host localhost -db database_name -s public -u database_user -o doc/db

-cp - the location of the postgresql jdbc driver
-o  - the output directory

The database_user does not need a password for a localhost connection to
postgresql on my system...
Re: [GENERAL] SLEEP in posgresql
I think pg_sleep is not implemented in 8.1 and earlier versions. Is there
any alternative if someone is using versions before 8.2?

On 10/9/07, Guy Rouillier <[EMAIL PROTECTED]> wrote:
>
> Jasbinder Singh Bali wrote:
> > Hi,
> >
> > I have a while loop and I want to re-iterate after every 't' seconds.
> > I was reading up on the postgresql documentation that says pg_sleep(t)
> > should be handy.
> > However, it doesn't work.
>
> Hmmm, I'm looking at section 9.9.5 Delaying Execution in the PostgreSQL
> 8.2.0 Documentation. Following the example presented there, I fired up
> psql and ran the following:
>
> postgres=# select current_timestamp; select pg_sleep(3); select current_timestamp;
>             now
> ----------------------------
>  2007-10-09 23:50:32.649-04
> (1 row)
>
>  pg_sleep
> ----------
>
> (1 row)
>
>             now
> ----------------------------
>  2007-10-09 23:50:35.774-04
> (1 row)
>
> Seems to be working. What version are you using and on what platform?
>
> --
> Guy Rouillier
Re: [GENERAL] corrupt database?
> > Any ideas what to do next?

Well, I am going to try the same with 8.3 beta1, will see what happens...

Vlad
[GENERAL] PLPGSQL 'SET SESSION ROLE' problems ..
Hi,

Is it possible to change the current role to another arbitrary role using a
PL/pgSQL function? In essence I've a function "authorise" created by the db
superuser with 'SECURITY DEFINER' specified. However, from within plpgsql
the 'superuser' attribute isn't honored when trying to change roles (i.e.
the non-'superuser' rules for role change must be honoured). Is this a bug?

tnx,
-greg
Re: [GENERAL] SLEEP in posgresql
On Wednesday 10 October 2007 01:00, Jasbinder Singh Bali wrote:
> I'm using Postgresql Version 8.1.4 on Fedora Core 6.
> I'm pretty sure that pg_sleep is not implemented in 8.1.
> Am not sure what is the work around

Yeah, you can code your own sleep function in plpgsql, but it tends to be
resource intensive. Better would probably be to use an external language,
like this:

CREATE OR REPLACE FUNCTION otools.sleep(integer) RETURNS integer AS $$
    my ($seconds) = @_;
    return sleep($seconds);
$$ LANGUAGE 'PLPERLU';

--
Robert Treat
Database Architect
http://www.omniti.com
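For reference, the plpgsql busy-wait Robert mentions would look roughly like
the sketch below on 8.1 (the function name is made up; timeofday() is used
because it keeps advancing inside a transaction, unlike now()). It spins on
the CPU for the whole wait, which is exactly why the plperlu version above
is preferable.

CREATE OR REPLACE FUNCTION my_sleep(integer) RETURNS integer AS $$
DECLARE
    secs    ALIAS FOR $1;
    stop_at timestamptz;
BEGIN
    -- target time: now plus the requested number of seconds
    stop_at := timeofday()::timestamptz + secs * interval '1 second';
    WHILE timeofday()::timestamptz < stop_at LOOP
        NULL;   -- busy-wait: burns CPU until the target time is reached
    END LOOP;
    RETURN secs;
END;
$$ LANGUAGE plpgsql;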
[GENERAL] corrupt database?
Hello!

Running postgresql 8.2.5 (built from source on debian testing, amd64) I run
into the following error when running "vacuum full analyze":

ERROR: invalid page header in block 1995925 of relation "data_pkey"

The database was freshly initialized and contains around 1.4 billion
records in the main table (the data).

Any ideas what to do next?

Vlad
Re: [GENERAL] reporting tools
> I guess I should have noted that we will need to run this on Linux
> clients.

Geoffrey,

You can run the FYIReporting engine in Linux using MONO (www.go-mono.com).

Andrus.
[GENERAL] Aggregate View and Conditions taking FOREVER
Background: OK, I've got a view which returns agency_id, fiscal_year,
reporting_month, count_col0, count_col1... grouping by agency_id,
fiscal_year and reporting_month. Now, if I just "select * from myreport;"
it returns all the various counts for the various fiscal years within 4
seconds, but if I add a condition, i.e. where fiscal_year = 2006 and
reporting_month = 11, the query takes... 10 minutes -- which I think is
because the conditional columns (i.e. fiscal_year) are calculated via
extract(year from datecol), causing that function to be evaluated on every
row within the view.

Slow query:

select * from agency_9902_report_summary
where fiscal_year = 2006 and reporting_month = 11

Fast query:

select * from agency_9902_report_summary

Definition of fiscal_year/reporting_month:

...
extract(year from p.completed_timestamp) as fiscal_year,
extract(month from p.completed_timestamp) as reporting_month
...

Any help/tips are greatly appreciated... I've been working on this for a
few days with little success :|.

- Tyler
[GENERAL] Using C API
I was trying this example program out of a well known postgresql book and
keep getting errors whenever I try to compile.

Error messages:

## make client1
cc -g -I /usr/include -I /usr/include/pgsql -I /usr/include/pgsql/server -c -o client1.o client1.c
cc -g client1.o -L /usr/lib -L /usr/lib/pgsql -o client1
client1.o: In function `main':
/root/PROGRAMMING/C_API/client1.c:10: undefined reference to `PQconnectdb'
/root/PROGRAMMING/C_API/client1.c:11: undefined reference to `PQfinish'
collect2: ld returned 1 exit status
make: *** [client1] Error 1

-

/*
** File: client1.c
*/
#include "libpq-fe.h"

int main( void )
{
    PGconn *connection;

    connection = PQconnectdb( "dbname='MyTestDB'" );
    ## I tried the above line with
    ## connection = PQconnectdb( "" );
    ## as per the book
    PQfinish( connection );
    return( 0 );
}

-

## File: Makefile
##
## Rules to create libpq sample applications
##
CPPFLAGS += -I /usr/include -I /usr/include/pgsql -I /usr/include/pgsql/server
CFLAGS   += -g
LDFLAGS  += -g
LDLIBS   += -L /usr/lib -L /usr/lib/pgsql

client1: client1.o

-

Following are the rpms I have installed on a RHEL5 system:

postgresql-plperl-8.2.4-1PGDG
postgresql-contrib-8.2.4-1PGDG
postgresql-python-8.1.9-1.el5
postgresql-docs-8.2.4-1PGDG
postgresql-server-8.2.4-1PGDG
postgresql-devel-8.2.4-1PGDG
postgresql-8.2.4-1PGDG
postgresql-plpython-8.2.4-1PGDG
compat-postgresql-libs-4-2PGDG.rhel4
postgresql-libs-8.2.4-1PGDG
postgresql-pltcl-8.2.4-1PGDG

Yes, I see the compat one and will install the rhel5 version if I find one.
[GENERAL] extracting multiple results from a set of tuples
Hi all,

I am writing a few pl/pgsql functions and have no great experience, so I'm
asking here. I need to obtain results R1, R2, ... from N queries like these:

SELECT A1 FROM tab GROUP BY G1;
SELECT A2 FROM tab GROUP BY G2;
SELECT A3 FROM tab WHERE C3 GROUP BY G3 [eventually ORDER BY Z3];

For example:

SELECT count(*), name FROM tab GROUP BY name;
SELECT sum(seconds), job FROM tab WHERE status = 1 GROUP BY job;
etc.

Done this way, the table tab would be scanned once per SELECT (tab is a
quite large table). Since the idea of repeating many queries on the
identical set of tuples hurts, I would like to write a stored procedure
that produces these results, putting them into a few "temporary" tables.
So I ask you: is there a way to obtain R1, R2, ... with a single table
scan? Could you point me to an example?

I hope I was clear enough explaining the problem.

Thank you,
Stefano
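One sketch of the temp-table approach Stefano describes, using the column
names from his examples (tab_slim is a made-up name): make one pass over
the big table that keeps only the columns the reports need, then run each
GROUP BY against the much smaller temporary table.

-- one scan of the big table, projecting just the needed columns
CREATE TEMP TABLE tab_slim AS
    SELECT name, job, status, seconds
      FROM tab;

-- each report then aggregates the temp table instead of tab
SELECT count(*), name FROM tab_slim GROUP BY name;
SELECT sum(seconds), job FROM tab_slim WHERE status = 1 GROUP BY job;

This only wins if the projected columns are much narrower than the full
rows; otherwise the follow-up scans cost about as much as scanning tab
itself.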
[GENERAL] Invalid error message when user has nologin attibute
When a user's login has been disabled by CREATE USER ... NOLOGIN, Postgres
8.2 returns a "Password authentication failure" error message on login.
This is very confusing to users.

How can I force Postgres to return a message like Windows does: "Login is
disabled"?

Andrus.
[GENERAL] db errors
We had a crash of one of our db systems last night. After doing an fsck of
the file system and getting the db back up, we're getting a lot of these
messages in our logs. The DB will also occasionally crash now.

Oct 12 07:40:16 postgres[30770]: [3-1] 2007-10-12 07:40:16 PDT ERROR: could not access status of transaction 2259991368
Oct 12 07:40:16 postgres[30770]: [3-2] 2007-10-12 07:40:16 PDT DETAIL: Could not read from file "pg_clog/086B" at offset 73728: Success.

Any ideas?

Thanks,
Akash
[GENERAL] replicating to a stopped server
Hello,

I have a 'strange' situation: I need to make a replica copy of my database
on a redundant spare computer. The redundant computer is not running
postgres, but postgres is installed. The redundant computer is running in a
special run-level (I'm talking Linux here) in which Pg is _not_ running.
When the primary computer crashes, the redundant one will be rebooted in
'normal' mode and Postgres must be started with the databases from the
replica.

a) So... how do I replicate a database to a stopped postgres?
b) Is it safe just to copy the /var/lib/pg/* directories to the right place
   and let Pg boot on that?
c) I know the right tool for this should be 'pg_dump', but it needs a live
   postgres daemon running in order to install the replica. Is this correct?
d) Is it viable to start postgres directly from the dump, by specifying the
   dump-file on the cmd line?

thx a lot
joao
Re: [GENERAL] how to ignore invalid byte sequence for encoding without using sql_ascii?
Got it. Thanks very much.

On 10/2/07, Martijn van Oosterhout <[EMAIL PROTECTED]> wrote:
>
> On Thu, Sep 27, 2007 at 02:28:27AM -0700, [EMAIL PROTECTED] wrote:
> > I am now importing the dump file of wikipedia into my postgresql using
> > maintains/importDump.php. It fails on 'ERROR: invalid byte sequence
> > for encoding UTF-8'. Is there any way to let pgsql just ignore the
> > invalid characters (i mean drop the invalid ones), so that the
> > script will keep going without dying on this error?
>
> No, postgres does not destroy data. If you want bits of your data
> removed you need to write your own tool to do it.
>
> That said, are you sure that the data you're importing is UTF-8?
>
> Have a nice day,
> --
> Martijn van Oosterhout <[EMAIL PROTECTED]> http://svana.org/kleptog/
> > From each according to his ability. To each according to his ability to
> > litigate.
[GENERAL] drop table cascade doesn't drop manual sequences
Hello folks,

I'm new to postgres and I'm using version 8.1.

Here's the problem anyway: if I attach a sequence to a table after its
creation with ALTER TABLE, DROP TABLE simply doesn't drop this sequence,
even when I specify CASCADE.

This works:

create table bla(id serial);
drop table bla CASCADE;
select * from pg_class where relkind = 'S';   => nothing

This doesn't:

create table bla(id integer);
create sequence bla_id_seq;
alter table bla alter column id set default nextval('bla_id_seq');
drop table bla CASCADE;
select * from pg_class where relkind = 'S';   => 'bla_id_seq'

Is this supposed to happen or am I missing something?

Thanks in advance,
Guilherme
Re: [GENERAL] Importing MySQL dump into PostgreSQL 8.2
On 5 oct, 01:06, [EMAIL PROTECTED] (Chris) wrote:
> Jeff Lanzarotta wrote:
> > Hello,
> >
> > I have a MySQL dump file that I would like to import into our PostgreSQL
> > 8.2 database. Is there a way to do this?
>
> You'll need to convert the table definitions then the data.
>
> For example, mysql has int(11) columns, postgres only has int columns.
>
> I usually convert the tables, then do a csv dump from mysql:
>
> select * from table into outfile '/path/to/file';
>
> then import into postgres:
>
> \copy table from '/path/to/file'
>
> --
> Postgresql & php tutorials
> http://www.designmagick.com/

Hi! But what happens when somebody sends you a dump file and you can't
convert the tables? All I have is a 116MB sql file, and I won't convert it
by hand :P
Re: [GENERAL] corrupt database?
On 10/10/07, vladimir konrad <[EMAIL PROTECTED]> wrote:
>
> Hello!
>
> Running postgresql 8.2.5 (built from source on debian testing, amd64) I run
> into the following error when running "vacuum full analyze":
>
> ERROR: invalid page header in block 1995925 of relation "data_pkey"

Check your hardware... Most likely you've got problems with RAM or storage.
Re: [GENERAL] drop table cascade doesn't drop manual sequences
On 10/10/07, Guilherme <[EMAIL PROTECTED]> wrote:
> Hello folks,
>
> I'm new to postgres and I'm using version 8.1
>
> Here's the problem anyway:
>
> If I insert a sequence later on table creation with alter table, drop
> table cascade simply doesn't drop this sequence even when I specify
> CASCADE.

This is normal.

> works
>
> create table bla(id serial);
> drop table bla CASCADE;
> select * from pg_class where relkind = 'S' => nothing

Here, the sequence was created as a dependent object of the table.

> doesn't
>
> create table bla(id integer);
> create sequence bla_id_seq;
> alter table bla alter column id set default nextval('bla_id_seq');
>
> drop table bla CASCADE;
> select * from pg_class where relkind = 'S' => 'bla_id_seq'

Here the sequence was created independently. This method is often used
when a sequence needs to be shared by more than one table:

create table bla(id integer);
create table bla2(id integer);
create sequence bla_id_seq;
alter table bla alter column id set default nextval('bla_id_seq');
alter table bla2 alter column id set default nextval('bla_id_seq');

Now, if I drop table bla2, should I lose the sequence that I assigned to
be used by bla?
Re: [GENERAL] atomic commit;begin for long running transactions , in combination with savepoint.
I meant commit and continue the current transaction. The transaction is
opened because the user application caches composite data from many tables
regarding a root object. Because almost all applications cache data, there
is apparently a pattern, "optimistic offline lock", where ORB middleware
basically adds a version field to rows: transactions are usually begun just
when the user has made a modification to a displayed value and now wants to
change an old cached value which he believes is the current value. The
middleware starts a transaction and reads the version number, and if it has
been incremented since the initial transaction that read the value and the
version number, it informs the user that a newer value exists and asks
whether he wants to overwrite it. This is basically a duplication of MVCC,
which has to occur with all applications that can't start long-running
transactions from the very beginning of reading a complex object, because
there are a lot of updates per work unit, and if there is a crash during
the work unit then a lot of updates would be lost, unless the client app
also does its own WAL, which is another duplication.

On Sun Oct 14 1:56 , "Trevor Talbot" sent:

>On 10/13/07, syan tan <[EMAIL PROTECTED]> wrote:
>> I was wondering whether there could be an atomic commit;begin command
>> for postgresql, in order to open up a transaction at the beginning of
>> a unit of work in a client session, so that client applications don't
>> have to duplicate work with having optimistic locking and version id
>> fields in their table rows. savepoint doesn't actually commit writes
>> in a transaction upto the time savepoint is called, but it's useful
>> for detecting conflicts, whilst allowing work to continue ( e.g.
>> with timeout set) . the atomic commit;begin wouldn't be necessary
>> except a client could crash before the end of the unit of work,
>> and work done upto that point would be lost in the transaction.
>> the atomic commit;begin is so that clients can use postgresql's
>> mechanisms for detecting concurrency read/write conflicts by
>> issuing savepoints before each write, instead of the current need
>> to begin;select for update xxx, client_versionid (or xmin) ; ( client
>> checks version id hasn't changed against version id stored when last
>> selected for read); update; commit .
>
>I'm not following your train of thought. It sounds as though you want
>to commit data without actually leaving your current transaction, but
>what do you need the transaction for?
>
>I don't understand how an atomic COMMIT;BEGIN would help. Consider a
>transaction with serializable isolation: your snapshot view of the
>data exists exactly as long as your transaction does. A COMMIT
>followed by a BEGIN, whether atomic or not, is going to change your
>view of the data.
>
>If you want it to do something else, what is that exactly?
>
>> Also, if the transaction is in read committed mode, then if
>> a write failed ,and a rollback to savepoint was done, you could
>> do select again ,get the new value, inform the client, and if
>> the user elected to go ahead, overwrite with their new value,
>> it would work the second time, because one has read the committed
>> select.
>
>What is preventing you from doing that now?
Re: [GENERAL] atomic commit;begin for long running transactions , in combination with savepoint.
On 10/14/07, Syan Tan <[EMAIL PROTECTED]> wrote:
> I meant commit and continue current transaction. The transaction is opened
> on the user application caching composite data from many tables regarding
> a root object. Because almost all applications cache data, there is
> apparently a pattern "optimistic offline lock" where orb middleware
> basically adds a version field to rows, because transactions are usually
> begun just when the user has done a modification to a displayed value, and
> now wants to change an old cached value which he believes is the current
> value.

Well, transactional semantics won't help you here. In order to detect that
a change occurred, what you want is a SERIALIZABLE transaction: you want
the update to fail if the row it matches is no longer the same version as
your snapshot. However, in order to read the new value to decide if you
want to update it anyway, you need to leave your current snapshot. As soon
as you do that, ALL previously read values lose the update checks that
snapshot provided you.

If you use a READ COMMITTED transaction, there is no check, since the
update can see the new value itself.

Nested transactions or the ability to commit some data without leaving the
current transaction won't get you what you want either, since you're really
looking for per-row behavior, not per-transaction.

> The middleware starts a transaction, and reads the version number, and
> if it has been incremented since the initial transaction that read
> the value and the version number, it then informs the user that
> a new old value exists, and whether he wants to overwrite it.

A way to do this using PostgreSQL's own row version data came up recently
on this list. Have a look at this post and the one following it:

http://archives.postgresql.org/pgsql-general/2007-10/msg00503.php
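In rough outline, the row-version idea referenced above works like the
sketch below (the table, column and literal values here are placeholders,
not from the original thread): remember the row's xmin system column when
you read it, and make the later UPDATE conditional on it; zero rows updated
then means someone else committed a change in the meantime.

-- read the current value together with its row version
SELECT xmin, value FROM some_table WHERE id = 1;

-- later: only apply the change if the row is still the version we read
UPDATE some_table
   SET value = 'new value'
 WHERE id = 1
   AND xmin = '12345'::xid;   -- the xmin remembered from the SELECT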
Re: [GENERAL] Using C API
On 10 Oct 2007, at 02:25, [EMAIL PROTECTED] wrote:

> I was trying this example program out of a well known postgresql book
> and keep getting errors whenever I try to compile.
>
> Error messages
>
> ## make client1
> cc -g -I /usr/include -I /usr/include/pgsql -I /usr/include/pgsql/server -c -o client1.o client1.c
> cc -g client1.o -L /usr/lib -L /usr/lib/pgsql -o client1
> client1.o: In function `main':
> /root/PROGRAMMING/C_API/client1.c:10: undefined reference to `PQconnectdb'
> /root/PROGRAMMING/C_API/client1.c:11: undefined reference to `PQfinish'
> collect2: ld returned 1 exit status
> make: *** [client1] Error 1

You must add -lpq to your LDFLAGS in order to link the lib; you only have
the linker paths set in LDFLAGS. It should read like this then:

cc -g client1.o -L /usr/lib -L /usr/lib/pgsql -o client1 -lpq

--
Best regards,
Lars Heidieker

[EMAIL PROTECTED]
http://paradoxon.info

Mystical explanations. Mystical explanations are considered deep; the truth
is that they are not even superficial. -- Friedrich Nietzsche
Re: [GENERAL] Using C API
[EMAIL PROTECTED] writes:
> I was trying this example program out of a well known postgresql book and
> keep getting errors whenever I try to compile.

> cc -g -I /usr/include -I /usr/include/pgsql -I /usr/include/pgsql/server -c -o client1.o client1.c
> cc -g client1.o -L /usr/lib -L /usr/lib/pgsql -o client1
> client1.o: In function `main':
> /root/PROGRAMMING/C_API/client1.c:10: undefined reference to `PQconnectdb'
> /root/PROGRAMMING/C_API/client1.c:11: undefined reference to `PQfinish'
> collect2: ld returned 1 exit status

You're missing "-lpq" in the link step. I think most or all of those -I
and -L switches are useless, also.

            regards, tom lane
Re: [GENERAL] Aggregate View and Conditions taking FOREVER
illusina <[EMAIL PROTECTED]> writes:
> Ok..I've got a view which returns agency_id, fiscal_year, reporting_month,
> count_col0, count_col1...grouping by agency_id, fiscal_year and
> reporting_month. Now, if I just select * from myreport; it returns all the
> various counts for the various fiscal years and all that within 4 seconds,
> but if I add a conditional ie where fiscal_year = 2006 and reporting_month =
> 11, the query takes...10 minutes -- which I think is because the conditional
> cols (ie fiscal_year) is calculated via extract(year from datecol) which I
> think is causing that function to be checked on every row within the view.

Rather than guessing, how about showing EXPLAIN ANALYZE output for both
cases?

            regards, tom lane
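For anyone following along, capturing what Tom asks for is just a matter of
prefixing both forms of the query with EXPLAIN ANALYZE and posting the
resulting plans:

EXPLAIN ANALYZE SELECT * FROM agency_9902_report_summary;

EXPLAIN ANALYZE SELECT * FROM agency_9902_report_summary
 WHERE fiscal_year = 2006 AND reporting_month = 11;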
Re: [GENERAL] drop table cascade doesn't drop manual sequences
"Scott Marlowe" <[EMAIL PROTECTED]> writes: > On 10/10/07, Guilherme <[EMAIL PROTECTED]> wrote: >> If I insert a sequence later on table creation with alter table, drop >> table cascade simply doesn't drop this sequence even when I specify >> CASCADE. > This is normal. In 8.2 and up you can use ALTER SEQUENCE ... OWNED BY ... to establish a link that will make a manually created sequence go away when its "owner" column is dropped. In 8.1 that aspect of SERIAL is hidden magic :-( regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
[GENERAL] What encoding to use for English, French, Spanish
My project is currently SQL_ASCII encoded. I will need to accommodate both
French and Spanish in addition to English. I don't anticipate needing Far
East languages. Reading here on the forums, I come up with Latin9 as
perhaps adequate. But others recommend unicode even for relatively simple
needs like my own.

I'd appreciate any advice on this topic. Is Unicode the most versatile?
What's the downside of unicode? If Far East languages do become a
requirement, is unicode the way to go?
Re: [GENERAL] What encoding to use for English, French, Spanish
novnov wrote:
> My project is currently SQL_ASCII encoded. I will need to accommodate
> both French and Spanish in addition to English. I don't anticipate
> needing Far East languages. Reading here on the forums I come up with
> Latin9 as perhaps adequate. But others recommend unicode for
> relatively simple needs like my own.

LATIN9 or UTF-8 are the appropriate choices for your project. The choice
between these is mostly a matter of taste, unless there are additional
requirements in the project.

Nowadays, many operating systems configure themselves to use Unicode by
default, and so there is probably no reason to use a more restricted
character set.

Note that some versions of PostgreSQL have various degrees of trouble with
UTF-8 support. Be sure to use the latest version.

--
Peter Eisentraut
http://developer.postgresql.org/~petere/
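For completeness, the encoding is chosen per database at creation time,
along these lines (the database names are made up, and this assumes the
cluster's locale is compatible with the chosen encoding):

CREATE DATABASE appdb       WITH ENCODING 'UTF8';
CREATE DATABASE appdb_latin WITH ENCODING 'LATIN9';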
Re: [GENERAL] can I define own variables?
Martin,

Can you explain this a little further?

My client application would somehow set the environment variable PGUSER,
e.g. to 42. Then it creates an odbc connection with user='john' and
password='doe' to the remote pg-server. Later, the triggers that run on the
server would access this client-local environment variable for updates?

How can the server-located function see client-local Env-Vars?

Regards
Andreas

Martin Gainty wrote:
> http://www.postgresql.org/docs/8.3/static/libpq-envars.html
> use environment variables
>
> PGHOST is the host for Postgres
> PGHOSTADDR is the Numeric Host Address for Postgres
> PGPORT is the port for the PostgresServer to listen on
> PGDATABASE is the default Database Name
> PGUSER is the default Username to use to login
> PGPASSWORD is the password to use for default Username to login
>
> M--
>
> ----- Original Message -----
> From: "Scott Ribe" <[EMAIL PROTECTED]>
> To: "Andreas" <[EMAIL PROTECTED]>
> Sent: Saturday, October 13, 2007 11:05 AM
> Subject: Re: [GENERAL] can I define own variables?
>
>>> can I define connection-global variables within a ODBC connection ?
>>
>> Temp table, containing only 1 row, one column per variable. If you so
>> wish, wrap it up in stored procedures for creating, setting, and
>> accessing.
>>
>> --
>> Scott Ribe
>> [EMAIL PROTECTED]
>> http://www.killerbytes.com/
>> (303) 722-0567 voice
Re: [GENERAL] can I define own variables?
Good Evening Andreas-

On Windows: Control Panel / System / Advanced / Environment Variables / Set
System Variables, e.g. PGHOST=localhost as mentioned; all processes for all
users will see these env vars.

On Unix, in bash (to set system-wide variables), go to /etc/profile:

vi .profile
PGHOST=localhost
export PGHOST
:w!
:q

Hope this helps
Martin--

----- Original Message -----
From: "Andreas" <[EMAIL PROTECTED]>
To: "Martin Gainty" <[EMAIL PROTECTED]>
Sent: Sunday, October 14, 2007 1:30 PM
Subject: Re: [GENERAL] can I define own variables?

> Martin,
>
> Can you explain this a little further?
>
> My client-application would somehow set the environment-variable
> PGUSER e.g. to 42. Then create an odbc connection with user='john' and
> password='doe' to the remote pg-server. Later the triggers that run on
> the server would access this client-local environment-variable for
> updates?
>
> How can the server-located function see client-local Env-Vars?
>
> Regards
> Andreas
>
> Martin Gainty wrote:
> > http://www.postgresql.org/docs/8.3/static/libpq-envars.html
> > use environment variables
> >
> > PGHOST is the host for Postgres
> > PGHOSTADDR is the Numeric Host Address for Postgres
> > PGPORT is the port for the PostgresServer to listen on
> > PGDATABASE is the default Database Name
> > PGUSER is the default Username to use to login
> > PGPASSWORD is the password to use for default Username to login
> >
> > M--
> >
> > ----- Original Message -----
> > From: "Scott Ribe" <[EMAIL PROTECTED]>
> > To: "Andreas" <[EMAIL PROTECTED]>
> > Sent: Saturday, October 13, 2007 11:05 AM
> > Subject: Re: [GENERAL] can I define own variables?
> >
> >>> can I define connection-global variables within a ODBC connection ?
> >>
> >> Temp table, containing only 1 row, one column per variable. If you so
> >> wish, wrap it up in stored procedures for creating, setting, and
> >> accessing.
> >>
> >> --
> >> Scott Ribe
> >> [EMAIL PROTECTED]
> >> http://www.killerbytes.com/
> >> (303) 722-0567 voice
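Note that libpq environment variables are read on the client at connect
time and are not visible to server-side triggers. Scott Ribe's temp-table
suggestion, quoted above, is the one that addresses the original question;
a rough sketch (the table, column and function names are made up): because
a temp table is per connection, a trigger running in that session sees the
value that session stored.

-- run once per connection, right after connecting
CREATE TEMP TABLE session_vars (app_user_id integer);
INSERT INTO session_vars VALUES (42);

-- accessor that server-side triggers/functions can call
CREATE OR REPLACE FUNCTION get_app_user_id() RETURNS integer AS $$
    SELECT app_user_id FROM session_vars;
$$ LANGUAGE sql;

SELECT get_app_user_id();   -- returns 42 in this session only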
Re: [GENERAL] drop table cascade doesn't drop manual sequences
Tom Lane <[EMAIL PROTECTED]> wrote:

> "Scott Marlowe" <[EMAIL PROTECTED]> writes:
> > On 10/10/07, Guilherme <[EMAIL PROTECTED]> wrote:
> >> If I insert a sequence later on table creation with alter table, drop
> >> table cascade simply doesn't drop this sequence even when I specify
> >> CASCADE.
>
> > This is normal.
>
> In 8.2 and up you can use ALTER SEQUENCE ... OWNED BY ... to establish a
> link that will make a manually created sequence go away when its "owner"
> column is dropped. In 8.1 that aspect of SERIAL is hidden magic :-(

Really no way to recognize this with pg_* or information_schema.*? I can't
believe this, but I don't know a way...

Andreas
--
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect. (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly." (unknown)
Kaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°
Re: [GENERAL] drop table cascade doesn't drop manual sequences
Andreas Kretschmer <[EMAIL PROTECTED]> writes:
> Tom Lane <[EMAIL PROTECTED]> wrote:
>> In 8.2 and up you can use ALTER SEQUENCE ... OWNED BY ... to establish a
>> link that will make a manually created sequence go away when its "owner"
>> column is dropped. In 8.1 that aspect of SERIAL is hidden magic :-(

> Really no way to recognize with pg_* or information_schema.* ? I can't
> believe this, but i don't know a way...

Well, ALTER OWNED BY works by adding or removing a pg_depend entry, and if
you wanted to get down and dirty you could do that manually in earlier
releases. I wouldn't recommend it though ...

            regards, tom lane
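As for spotting these links from the catalogs: the pg_depend entries Tom
mentions can be listed with a query along these lines (a rough sketch, not
an official view; it shows which sequences are tied to which columns):

SELECT s.relname AS sequence_name,
       t.relname AS table_name,
       a.attname AS column_name,
       d.deptype
  FROM pg_depend d
  JOIN pg_class s ON s.oid = d.objid AND s.relkind = 'S'
  JOIN pg_class t ON t.oid = d.refobjid
  JOIN pg_attribute a ON a.attrelid = d.refobjid
                     AND a.attnum = d.refobjsubid
 WHERE d.classid = 'pg_class'::regclass
   AND d.refclassid = 'pg_class'::regclass;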
Re: [GENERAL] reporting tools
Andrus wrote:
>> I guess I should have noted that we will need to run this on Linux
>> clients.
>
> Geoffrey,
>
> You can run FYIReporting engine in Linux using MONO ( www.go-mono.com )

Thanks, we're looking for something that will run natively on Linux.

--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little temporary
Safety, deserve neither Liberty nor Safety.
- Benjamin Franklin
Re: [GENERAL] reporting tools
Geoffrey <[EMAIL PROTECTED]> wrote:
> Andrus wrote:
>>> I guess I should have noted that we will need to run this on Linux
>>> clients.
>>
>> Geoffrey,
>>
>> You can run FYIReporting engine in Linux using MONO ( www.go-mono.com )
>
> Thanks, we're looking for something that will run natively on Linux.

I read this, almost deleted it, read it again ...

Just in case there's confusion, MONO + FYIReporting _is_ native on Linux.
At least, as much so as Java on Linux is.

--
Bill Moran
http://www.potentialtech.com
Re: [GENERAL] Importing MySQL dump into PostgreSQL 8.2
On 10/13/07, wido <[EMAIL PROTECTED]> wrote:
> hi! but what happens when somebody sent you a dump file and you can't
> convert the tables? all i have is a 116MB sql file, and i won't
> convert it by hand :P

And chances are no one on the list will do it for you, either, especially
not when you stick out your tongue at them ...

Use sed or awk, then, or write a perl script... Others have done similar
things and made their work available. Have a search on freshmeat or gborg.

Cheers,
Andrej
Re: [GENERAL] reporting tools
Bill Moran wrote:
> Geoffrey <[EMAIL PROTECTED]> wrote:
>> Andrus wrote:
>>>> I guess I should have noted that we will need to run this on Linux
>>>> clients.
>>>
>>> Geoffrey,
>>>
>>> You can run FYIReporting engine in Linux using MONO ( www.go-mono.com )
>>
>> Thanks, we're looking for something that will run natively on Linux.
>
> I read this, almost deleted it, read it again ...
>
> Just in case there's confusion, MONO + FYIReporting _is_ native on Linux.
> At least, as much so as Java on Linux is.

We are not interested in using any Microsoft technologies, or technologies
based on Microsoft technologies. It's a philosophical position.

--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little temporary
Safety, deserve neither Liberty nor Safety.
- Benjamin Franklin
[GENERAL] Will UPDATE lock if FROM refers to target table?
If I do an update using the FROM clause, and that clause has a sub-query
that refers to the table I am updating, will I be waiting forever for a
table lock to release? The update below seems to stall, and it occurs to me
that perhaps this is the problem.

BTW - I rarely do anything but the most straightforward updates - am I
mistaken as to the logic of how this will work?

Carlo

UPDATE mdx_core.provider_practice
   SET default_postal_code = def.postal_code,
       default_state_code = def.state_code,
       default_country_code = 'US'
  FROM (SELECT provider_id,
               provider_practice_id,
               substr(coalesce(a.postal_code, f.default_postal_code), 1, 5) as postal_code,
               coalesce(a.state_code, f.default_state_code) as state_code
          FROM mdx_core.provider_practice as pp
          JOIN mdx_core.facility as f
            ON f.facility_id = pp.facility_id
          LEFT JOIN mdx_core.facility_address as fa
            ON fa.facility_address_id = pp.facility_address_id
          LEFT JOIN mdx_core.address as a
            ON a.address_id = fa.address_id
         WHERE coalesce(a.country_code, f.default_country_code) = 'US'
       ) as def
 WHERE provider_practice.provider_practice_id = def.provider_practice_id
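The sub-select only reads the target table, and plain reads don't block a
writer in PostgreSQL, so the self-reference by itself shouldn't cause the
statement to wait on its own lock; the stall is more likely either a slow
plan or a lock held by another session. A query along these lines (column
names as of the 8.x catalogs) shows whether the backend is actually stuck
waiting on an ungranted lock:

SELECT l.pid, l.relation::regclass, l.mode, l.granted, a.current_query
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.procpid = l.pid
 WHERE NOT l.granted;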
Re: [GENERAL] Using C API
On Tue, 9 Oct 2007, [EMAIL PROTECTED] wrote:

> following are the rpms I have installed on a RHEL5 system.
> compat-postgresql-libs-4-2PGDG.rhel4
> Yes I see the compat one and will install rhel5 if I find one.

It has no bearing on what you were running into, and unless you're having a
problem there's little reason to fix this, but the file you want is at
http://www.postgresql.org/ftp/binary/v8.2.4/linux/rpms/redhat/rhel-es-5/

If I recall correctly here, there were a few weeks where the RPMs on the
site for RHEL5 accidentally included the wrong compat library, and I'm
guessing you got your copy during that period.

--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
Re: [GENERAL] reporting tools
On 10/14/2007 6:41 PM Geoffrey wrote:
> Bill Moran wrote:
>> Geoffrey <[EMAIL PROTECTED]> wrote:
>>> Andrus wrote:
>>>>> I guess I should have noted that we will need to run this on Linux
>>>>> clients.
>>>>
>>>> Geoffrey,
>>>>
>>>> You can run FYIReporting engine in Linux using MONO ( www.go-mono.com )
>>>
>>> Thanks, we're looking for something that will run natively on Linux.
>>
>> I read this, almost deleted it, read it again ...
>>
>> Just in case there's confusion, MONO + FYIReporting _is_ native on Linux.
>> At least, as much so as Java on Linux is.
>
> We are not interested in using any Microsoft technologies, or technologies
> based on Microsoft technologies. It's a philosophical position.

Geoffrey, have you looked at OpenRPT?

http://sourceforge.net/projects/openrpt
[GENERAL] Using case expressions in update set clause
I just want to clarify that the following will always behave the way I
think it's supposed to behave:

update tableA
   set date_field = case when date_field is null then some_date
                         else date_field
                    end;

That is: if the current value of date_field for the current record is null,
then set date_field to the some_date value; otherwise keep the current
value of date_field.

Note: in the actual situation there'll be other fields that will always be
updated, in addition to this one field that needs to be conditionally
updated.

Chris
--
Chris Velevitch
Manager - Sydney Flash Platform Developers Group
m: 0415 469 095
www.flashdev.org.au
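For what it's worth, that CASE does behave as described: the right-hand
side is evaluated per row against the row's old values. For the NULL-only
case it can also be spelled more compactly with COALESCE; a sketch using
the same placeholder names as the post above:

-- some_date stands in for the value/expression from the original post;
-- the other always-updated columns would go in the same SET list
UPDATE tableA
   SET date_field = COALESCE(date_field, some_date);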