[GENERAL] How to manually force a transaction wraparound
I want to force my database to wrap around, just to see what happens. How can I do this without consuming a few billion transactions? My google-fu is failing me. :(

Thanks for your time!

eric
Re: [GENERAL] How to manually force a transaction wraparound
On Fri, Apr 29, 2016 at 10:16 PM Thomas Munro wrote:
> On Sat, Apr 30, 2016 at 10:48 AM, Eric Ridge wrote:
> > I want to force my database to wrap around, just to see what happens. How
> > can I do this without consuming a few billion transactions?
>
> Take a look at the script repro-bogus-subtrans-error-wraparound.sh
> from this email:
>
> http://www.postgresql.org/message-id/CAEepm=3z0eolpo5wtuwsem38kbq+gjp8xxiuljkuqpm-sw7...@mail.gmail.com
>
> That used pg_resetxlog -x $XID $PGDATA, but needed several
> stop/pg_resetxlog/start hops to get all the way around the xid clock.

Thanks Thomas. I ended up figuring out something similar after I read the docs on pg_resetxlog. It did something interesting to two of my local databases, but I was able to figure out what I wanted to know.

Thanks again!

eric
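(A minimal sketch of the stop/reset/start loop described above. This is destructive, so only run it against a throwaway cluster; pg_resetxlog -x sets the "next transaction ID", and depending on the value you may also need to create pg_clog segment files by hand, per the pg_resetxlog reference page.)

    # DESTRUCTIVE: hops the xid clock of the cluster in $PGDATA forward
    # in a few big steps instead of burning billions of real transactions.
    # Values are multiples of 1048576 so they line up with pg_clog
    # segment boundaries.
    for XID in 1073741824 2147483648 3221225472; do
        pg_ctl -D "$PGDATA" stop -m fast
        pg_resetxlog -x "$XID" "$PGDATA"
        pg_ctl -D "$PGDATA" start -w
    done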
[GENERAL] Memory leak with CREATE TEMP TABLE ON COMMIT DROP?
# select version();
                                                             version
---------------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.3.4 on x86_64-apple-darwin13.2.0, compiled by Apple LLVM version 5.1 (clang-503.0.40) (based on LLVM 3.4svn), 64-bit
(1 row)

As best I can guess, Postgres has some kind of memory leak around (at least) temporary tables flagged to drop on commit. It's fairly easy to reproduce:

Terminal A:

    $ createdb leak
    $ for i in $(seq 1 100) ; do echo "begin; create temp table foo() on commit drop; commit;"; done | psql leak > /dev/null

Terminal B:

    $ while(true); do ps auwx | grep $PID_OF_POSTGRES_PROCESS_FROM_TERMINAL_A; sleep 1 ; done

And watch the RSS size continue to climb, fairly quickly. This happens on both OS X and Linux (both x86_64).

We ran into this thanks to an update trigger that created a temporary table with on commit drop, where we were literally updating millions of rows as atomic transactions, across about 100 concurrent connections, firing the trigger for each atomic update. The server quickly ran out of memory. It took some time to find what appears to be the actual problem, but I think this is it.

We've since rewritten the trigger to avoid using a temporary table (probably a good thing anyways) and all is well, but I was very shocked to see Postgres behaving badly here.

Any thoughts? And thanks for your time!

eric
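(For context, the shape of the trigger that bit us. This is a hypothetical reconstruction, not the actual code; it assumes a table foo being updated.)

    -- Hypothetical reconstruction of the leaking pattern: a per-row
    -- UPDATE trigger that creates an ON COMMIT DROP temp table on every
    -- firing. With single-row transactions this runs once per UPDATE.
    CREATE OR REPLACE FUNCTION touch_scratch() RETURNS trigger AS $$
    BEGIN
        CREATE TEMP TABLE IF NOT EXISTS scratch(v int) ON COMMIT DROP;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER foo_touch BEFORE UPDATE ON foo
        FOR EACH ROW EXECUTE PROCEDURE touch_scratch();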
[GENERAL] Index bloat with "USING GIN(varchar[])" index?
PostgreSQL 8.4.8 on i386-apple-darwin10.7.0, compiled by GCC i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664), 64-bit

I'm not sure exactly what's going on, but I've got a table defined like this:

    CREATE TABLE foo (
        ...
        tags varchar(1000)[],
        ...
    );
    CREATE INDEX idxfoo_tags ON foo USING GIN (tags);

I query the tags column quite a bit like so:

    SELECT * FROM foo WHERE tags @> ARRAY['value']::varchar[];

Works great. Super fast... usually.

"foo" has roughly 50k records, and each row has anywhere from 2 to 5 elements in "tags". "tags" gets rewritten pretty regularly across large swaths of records (1k - ~10k at a time) with different-ish values.

What I see happening is the above SQL going from a few milliseconds to a few seconds. Until I REINDEX it. Then it performs well again until lots of rewrites happen. Before or after the REINDEX, the query plan is always the same (and it's a good plan):

    explain analyze SELECT * FROM foo WHERE tags @> ARRAY['CATTLE']::varchar[];
                                                           QUERY PLAN
    ------------------------------------------------------------------------------------------------------------------------
     Bitmap Heap Scan on foo  (cost=8.59..147.76 rows=38 width=496) (actual time=8.870..10.073 rows=1654 loops=1)
       Recheck Cond: (tags @> '{CATTLE}'::character varying[])
       ->  Bitmap Index Scan on idxfoo_tags87  (cost=0.00..8.58 rows=38 width=0) (actual time=8.806..8.806 rows=5034 loops=1)
             Index Cond: (tags @> '{CATTLE}'::character varying[])
     Total runtime: 10.258 ms
    (5 rows)

I haven't had a chance to nail down a standalone test case to reproduce this, but it smells like index bloat. Are GIN indexes known for bloating, especially if they're on a varchar[]? Any suggestions for how to prove/disprove that it's index bloat?

eric
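(A cheap way to test the bloat theory, using the stock size functions that exist in 8.4: record the index size, REINDEX, and compare. A large drop points at bloat.)

    -- Compare index size before and after REINDEX; a big drop suggests bloat.
    SELECT pg_size_pretty(pg_relation_size('idxfoo_tags'));
    REINDEX INDEX idxfoo_tags;
    SELECT pg_size_pretty(pg_relation_size('idxfoo_tags'));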
Re: [GENERAL] error when compiling a c function
On Wed, Jul 27, 2011 at 2:12 PM, Sebastian Jaenicke wrote:
> On Wed, Jul 27, 2011 at 09:34:20AM -0700, Ioana Danes wrote:
> >> #ifdef PG_MODULE_MAGIC
>
> #ifndef

Just to avoid confusion... #ifdef *is* correct. See:
http://www.postgresql.org/docs/current/static/xfunc-c.html

(I can't comment on the OP's actual problem)

eric
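(For reference, the pattern from the linked docs is:)

    #include "postgres.h"
    #include "fmgr.h"

    /* Required once per loadable module. The guard is #ifdef because
     * PG_MODULE_MAGIC only exists in 8.2-and-later server headers. */
    #ifdef PG_MODULE_MAGIC
    PG_MODULE_MAGIC;
    #endif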
[GENERAL] OS X 10.7, psql, and tab completion?
Hi! What's the incantation one needs to recite before compiling Postgres 8.4.x on OS X 10.7 such that psql's tab completion will work? I love my Mac, but Apple really dorked up libedit/readline and I just can't figure out what I'm supposed to do.

Any hints will be greatly appreciated!

eric
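(One commonly suggested recipe, as a sketch: build against a real GNU readline instead of Apple's libedit shim. This assumes readline has been installed under /usr/local, e.g. via MacPorts.)

    # Point configure at GNU readline under /usr/local.
    ./configure --with-includes=/usr/local/include \
                --with-libraries=/usr/local/lib
    make && make install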
[GENERAL] Cannot access various Postgres sites
I'm only subscribed to -general and -hackers, so if this message should go to a different list, please feel free to forward it along, but I've been unable to get to a couple of the Postgres websites for quite a while.

Back on March 20 Bruce Momjian posted a link in -hackers to the 9.0 release notes (http://developer.postgresql.org/pgdocs/postgres/release-9-0.html). It blew my mind that so many were able to discuss that page because developer.postgresql.org didn't respond then, and it doesn't respond now. :( I'd still like to read that page.

Today I wanted to peek inside the JDBC driver sources before I join their mailing list and ask a bunch of dumb questions, but jdbc.postgresql.org isn't responding either.

I've also seen intermittent connectivity issues with planet.postgresql.org, but it seems to be working today.

Is it just my network or is something up with these hosts?

eric
Re: [GENERAL] Cannot access various Postgres sites
It's not DNS. I can resolve the hostnames just fine. In fact, those two both resolve to the same IP: 200.46.204.71

They just don't respond.

    $ telnet 200.46.204.71 80
    Trying 200.46.204.71...

just hangs indefinitely. :(

eric

> I could read the release notes for 9.0 by then and still can now. This is
> from two different countries (Brazil first and now US), so this should be
> an indication that you might be having network issues.
>
> Can you please try changing your DNS servers from whatever you have now to
> Google's? Make 8.8.4.4 primary and 8.8.8.8 secondary and try again. You
> can also try using OpenDNS servers.
>
> --
> Jorge Godoy
>
> On Sat, Apr 17, 2010 at 11:57, Eric Ridge wrote:
> > I'm only subscribed to -general and -hackers, so if this message should
> > go to a different list, please feel free to forward it along, but I've
> > been unable to get to a couple of the Postgres websites for quite a while.
> >
> > Back on March 20 Bruce Momjian posted a link in -hackers to the 9.0
> > release notes (http://developer.postgresql.org/pgdocs/postgres/release-9-0.html).
> > It blew my mind that so many were able to discuss that page because
> > developer.postgresql.org didn't respond then, and it doesn't respond now.
> > :( I'd still like to read that page.
> >
> > Today I wanted to peek inside the JDBC driver sources before I join
> > their mailing list and ask a bunch of dumb questions, but
> > jdbc.postgresql.org isn't responding either.
> >
> > I've also seen intermittent connectivity issues with planet.postgresql.org,
> > but it seems to be working today.
> >
> > Is it just my network or is something up with these hosts?
> >
> > eric
Re: [GENERAL] Cannot access various Postgres sites
On Sat, Apr 17, 2010 at 1:42 PM, Adrian Klaver wrote:
> I can get to both sites and telnet also. Must be something on your end :(

Yup, it was. :( I appreciate the confirmation that it was me, thanks!

eric
[GENERAL] Read Committed Transaction Isolation and SELECT ... UNION ... SELECT
I think I've been studying the documentation too long and have thought myself into a circle.

http://www.postgresql.org/docs/8.4/static/transaction-iso.html says:

"Also note that two successive SELECT commands can see different data, even though they are within a single transaction, if other transactions commit changes during execution of the first SELECT."

I get what that means in normal cases, but what about a single query composed of one or more unions:

    SELECT ... FROM foo WHERE ...
       UNION
    SELECT ... FROM foo WHERE ...

Since the above is one query issued by the client, are the two SELECT statements still operating within the same snapshot? Is the above considered to be one command?

I think the answer to those questions is "Yes", but I'd appreciate some clarification.

Thanks in advance!

eric
Re: [GENERAL] Read Committed Transaction Isolation and SELECT ... UNION ... SELECT
On Tue, Aug 10, 2010 at 6:46 PM, Tom Lane wrote:
> That's just one SELECT command. Sub-SELECTs inside a query don't
> count as separate commands for this purpose; the use of SELECT in
> that way is just an artifact of the SQL grammar.

Thanks. That's what I figured, but wanted to make sure.

eric
Re: [GENERAL] Touch row ?
On Jan 24, 2004, at 2:34 PM, Tom Lane wrote:

> The restriction is not that: the restriction is that you can't have an
> infinite recursion in your rules. The above is infinitely recursive
> because it says that for any UPDATE on mytable, you should also do an
> UPDATE on mytable ... but then for that UPDATE you also need to do
> another UPDATE on mytable ... etc. The bodies of rules are not exempt
> from rule expansion.

Understood. Even after 12 hours of sleep (I love Saturdays!), I still can't see how an update rule wouldn't cause infinite recursion if it tried to update its target.

> It might be interesting to change that definition, so that a rule like
> the above could be written that wouldn't recursively trigger itself.
> This would need a lot of careful thought though. In most cases you
> *do* want rule bodies to be rule-expanded.

I sure want rule bodies to be rule-expanded! Rules are super cool and extremely flexible as they are.

> A different tack that might be interesting to think about is to invent
> a notion of an "update default" for a column, analogous to the existing
> "insert default". The normal behavior is that the "update default" is
> the old value, but if you could specify some computable expression to
> use instead, this and related problems could be solved with a much
> simpler mechanism than a rule.

This thought ran through my head last night. Something like:

    CREATE TABLE foo (
        id int4 DEFAULT nextval('foo_seq'),
        d timestamp DEFAULT now() ON UPDATE now()
    );

But it seems that if the user explicitly provided a value for 'd', you'd want to use that over the computed value. Whatever the details, it would be a very useful feature to have.

eric
Re: [GENERAL] Touch row ?
On Jan 24, 2004, at 3:58 PM, Tom Lane wrote:

> True. So if your goal is to force the timestamp column to be the
> correct value even when the user tries to set it to something else,
> you'd still have to use a trigger or rule.

Maybe the rule is that the computed value is always used, unless:

    UPDATE foo OVERRIDE DEFAULTS SET d = yesterday();

*shrug*. At least with something like the above, the user makes his intention explicit. Perhaps if the user doesn't specify OVERRIDE DEFAULTS, postgres outputs a warning:

    WARNING: value for column 'd' ignored.
    HINT: Use UPDATE ... OVERRIDE DEFAULTS to override ON UPDATE DEFAULT values

and of course, this would be handy too:

    UPDATE foo OVERRIDE DEFAULTS SET d = DEFAULT;

eric
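(The trigger route Tom mentions looks roughly like this. A sketch, assuming a table foo with a timestamp column d, and plpgsql installed in the database:)

    -- "Touch row" trigger: forces d to now() on every UPDATE,
    -- regardless of what the user supplied for it.
    CREATE OR REPLACE FUNCTION touch_d() RETURNS trigger AS '
    BEGIN
        NEW.d := now();
        RETURN NEW;
    END;
    ' LANGUAGE plpgsql;

    CREATE TRIGGER foo_touch BEFORE UPDATE ON foo
        FOR EACH ROW EXECUTE PROCEDURE touch_d();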
Re: [GENERAL] ps output and postgres
On Feb 12, 2004, at 11:54 AM, Bruce Momjian wrote:

> Well, let's see if someone else likes the feature because adding it
> might reduce its usability as a guaranteed value for admins.

Maybe making the ability to do this a configuration option (off by default, of course) would make everyone happy?

eric
Re: [GENERAL] How to determine current database?
On Feb 13, 2004, at 6:05 PM, Ron St-Pierre wrote:

> I am using postgres 7.3.4 and need to be able to determine which
> database a query is being run in (from a script). pg_database lists
> databases but doesn't tell me which one is currently active. Is there
> a query I can use along the lines of:
>
>     UPDATE tblUpdates SET xxx=1234 WHERE pg_current = TRUE;
> or
>     UPDATE tblUpdates SET xxx=1234 WHERE pg_current = thisDBname;
>
> We have about 15 databases all set up identically and when the
> structure changes I run scripts to update them to ensure that they are
> all the same. I need to create two new databases which have slight
> changes (different default values - for now) and want to be able to
> have my scripts determine which database they're running from.
>
> Thanks
> Ron

The built-in function "current_database()" returns the current database name.

    =# select current_database();
     current_database
    ------------------
     testing
    (1 row)

Use it in your update statements too.

eric
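(Concretely, plugging it into Ron's example, with 'somedb' standing in for whichever database the change is meant for:)

    -- The UPDATE is a no-op everywhere except the named database.
    UPDATE tblUpdates SET xxx = 1234 WHERE current_database() = 'somedb';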
Re: [GENERAL] ps output and postgres
On Feb 12, 2004, at 12:08 PM, Bruce Momjian wrote:

> Yes, I think that would be the only way to go, but I would like to
> have some other folks interested in extending the ps display before
> adding such a capability.

Too bad nobody responded. I still think it would be a useful feature.

eric
Re: [GENERAL] Cursors and Transactions, why?
On Apr 7, 2004, at 7:51 AM, Jan Wieck wrote:

> Eric Ridge wrote:
>> On Apr 6, 2004, at 11:54 AM, Jan Wieck wrote:
>>> If the underlying query is for example a simple sequential scan,
>>> then the result set is not materialized but every future fetch
>>> operation will read directly from the base table. This would
>>> obviously get screwed up if vacuum would think nobody needs those
>>> rows any more.
>> Is vacuum the only thing that would muck with the rows?
>
> Vacuum is the only thing that cares for the dustmites, yes.

And WITH HOLD is strong enough to defend against a vacuum, I hope... I need to set up a 7.4 test server and play with this some, and figure out if the benefits are really what I want them to be. I do appreciate the insight into how cursors work... it helps a lot!

> Experience and knowledge can only be replaced by more experience and
> more knowledge.

Very wise words.

My real problem is that the JDBC drivers (and I assume this is true for all client interfaces) buffer the results of a SELECT in memory, because the backend pushes out all the tuples as the response. I'm not dealing with a large number of rows (only a few thousand), but they're very wide, and many contain fields with multi-megabyte data. In some situations, when I've got a lot of open ResultSets, the JVM throws OutOfMemory errors.

One half-baked thought was to hack the JDBC drivers to have 'em gzip large resultsets in memory. Wouldn't completely solve the problem, but would probably help quite a bit. But the better solution is to use cursors. We're not in a position to upgrade to 7.4 just yet, so we'll just deal with the OutOfMemory errors until we can.

eric
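(Once on 7.4 and a newer driver, the cursor-based fix looks like this on the JDBC side: with autocommit off and a fetch size set, the driver fetches rows in batches through a backend cursor instead of buffering the whole result. A sketch; the connection details and table name are hypothetical.)

    import java.sql.*;

    public class CursorFetch {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "pass");
            conn.setAutoCommit(false);  // cursor mode requires autocommit off
            Statement st = conn.createStatement();
            st.setFetchSize(50);        // buffer ~50 rows at a time, not all
            ResultSet rs = st.executeQuery("SELECT * FROM wide_table");
            while (rs.next()) {
                // process one row here
            }
            rs.close();
            st.close();
            conn.commit();
            conn.close();
        }
    }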
Re: [GENERAL] Cursors and Transactions, why?
On Apr 7, 2004, at 12:43 AM, Joe Conway wrote:

> Eric Ridge wrote:
>> On Apr 6, 2004, at 11:54 AM, Jan Wieck wrote:
>>> And now you know why they are so good if you don't use all rows.
>> This benefit I think goes away if you use Joe Conway's suggestion of
>> WITH HOLD.

Okay, so WITH HOLD is actually materializing the entire resultset (sequential scan or otherwise)? If that's true, you're right, some of the benefits do go away.

> Keep in mind that the tuplestore stays in memory as long as it fits
> within sort_mem kilobytes. And you can do:

More good information. Thanks!

Is the tuplestore basically just an array of ItemPointer-s? I mean, it's not a copy of each entire row, is it?

eric
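(The WITH HOLD usage under discussion, as a minimal sketch with a hypothetical table:)

    BEGIN;
    DECLARE c CURSOR WITH HOLD FOR SELECT * FROM wide_table;
    COMMIT;            -- the cursor survives the commit...
    FETCH 50 FROM c;   -- ...and can still be fetched from afterwards
    CLOSE c;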
[GENERAL] Changing the size of a varchar field
Using PG 7.3.x, how stupid is this:

    UPDATE pg_attribute SET atttypmod = <new size> WHERE <conditions identifying the column>;

I had to do this on a database, and surprisingly it seems to have worked just fine. The columns accept a larger value, the existing values are still intact, and I've seen no other strange errors...

Am I living dangerously, or is this an "okay" thing to do?

eric
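(Spelled out, the statement meant above looks something like this; the table and column names are hypothetical. For varchar, atttypmod stores the declared length plus 4 bytes of header, i.e. VARHDRSZ.)

    -- Widen mytable.mycol to varchar(200). Superuser only, and unsupported.
    UPDATE pg_attribute
       SET atttypmod = 200 + 4
     WHERE attrelid = (SELECT oid FROM pg_class WHERE relname = 'mytable')
       AND attname = 'mycol';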
Re: [GENERAL] questions on rules
On Apr 26, 2004, at 3:12 PM, Timothy Perrigo wrote:

> I'm trying to set up some basic rules to log inserts, updates, and
> deletes to tables in an inheritance hierarchy (by inserting records
> into a log table), and I've got a couple of questions. (1) Is it
> possible to create a rule on a base table and have it operate for all
> derived tables? I'd like to just create 3 rules
> (insert/update/delete) on the base table and have them apply to all
> inherited tables. Can this be done?

I've never tried this myself, but I feel pretty good about saying the answer is "NO". :( Most other postgres features (esp. triggers) don't inherit either.

> (2) I've got a very simple update rule--
>
>     create rule log_updates as on update to foo do
>         insert into audit_log(table_oid, id, log_what)
>         values (foo.tableoid, NEW.foo_id, 'U');

Ever just tried to do this from psql:

    SELECT foo.tableoid;

You get a resultset with a row for every row in table foo. That's essentially what your INSERT statement is doing. It's as if you wrote:

    INSERT INTO audit_log(table_oid, id, what)
        SELECT tableoid, NEW.foo_id, 'U' FROM foo;

What you want to do in your rule, I think, is something like this:

    INSERT INTO audit_log(table_oid, id, what)
        values ((select tableoid from foo limit 1), NEW.foo_id, 'U');

There might be a different way to look up the tableoid for table "foo", but it would likely require using 'foo' as a quoted string against a query in pg_class, so the above might make things clearer.

eric

ps, never knew about the "tableoid" field until just now. how interesting.

> I had hoped that this would create a single entry in my audit_log
> table for each row updated. However, it seems to fire for each record
> in the "foo" table, even if the update affected only one row! What am
> I doing wrong? Any help would be very much appreciated.
>
> Thanks!
> Tim Perrigo
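(The pg_class lookup alluded to above would be something like this, ignoring schemas for simplicity:)

    -- Look the table's oid up directly instead of scanning foo.
    INSERT INTO audit_log(table_oid, id, what)
        VALUES ((SELECT oid FROM pg_class WHERE relname = 'foo'),
                NEW.foo_id, 'U');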
Re: [GENERAL] ps output and postgres
On Feb 11, 2004, at 10:00 PM, Bruce Momjian wrote:

> No one really has thought of that before. We could do it, though
> there are admin reasons for restricting that ability. If we said only
> superusers could change it, it wouldn't be very useful.

That's a good point. It would be cool if SET could change it, but it seems that would make it pretty useless for administrator usage.

Ran into a situation yesterday where all connections were exhausted on a development database, and thanks to our nat-ing firewall, I couldn't tell where all the connections were coming from. It made me think that intelligently mucking with the ps output might have made things easier for me to find the person to yell at. One could just as easily report info like "real" client IP, client application state, etc. to a table, but having that stuff via 'ps' just seemed like a cool idea.

In addition, some of our applications have a few background threads that maintain persistent connections to the database. Being able to logically label those processes would make it easier to identify which backend processes are still connected.

eric
RE: [GENERAL] Re: Re: Storing images in PG?
> only, a problem can be 30% growth of data... (you can use
> "lztext", which is a compressed datatype :-).

Alternatively, you can store "pointers" to the images in the database, such as a local filepath or URL (file:///usr/local/myimages/foo.gif or ftp://user:passwd@host/pub/myimages/foo.gif). Then you could use wget to fetch the file bytes for you.

The nice thing about doing this is that you don't need to store images and data on the same server. It gives you room to expand your storage. It also improves performance b/c I'm sure FTP or a local file copy is more efficient than the Postgres network protocol. Plus, no need to encode and constantly decode the bytes.

Also, if you're not going to be searching the bytes of the file (which I'm sure you're not), why put it in the database?

eric
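(The pointer approach in table form, as a hypothetical schema:)

    -- Metadata lives in the database; the bytes live wherever the URL points.
    CREATE TABLE images (
        id    serial PRIMARY KEY,
        title text,
        url   text NOT NULL  -- e.g. file:///... or ftp://user:passwd@host/...
    );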
RE: [GENERAL] Re: Re: Storing images in PG?
> It would be convenient to have fast access to binary data in the
> database that is similar to what other databases do. This would allow
> more applications to run on PostgreSQL.

For small sets of files you're probably right. When you start to reach millions of files, it makes much more sense to separate things. I live in an environment where we manage just over 18 million images. We do the pointer thing and have a completely separate "image farm" to store the images.

> Not everybody will have the opportunity to access the local
> filesystem, e.g. ISPs providing servlet hosting will likely turn off
> access to the filesystem.

This is very true. But if you do have access to the local filesystem, why not use it? It's far more efficient than executing a query against the database and forcing the database to chunk in the data.

> It is also more work to write code to manage files in the filesystem.
> Why store pointers to the actual files in the database, if I can store
> the files themselves there.

In my mind, databases are for storing things that you want to search. It makes no sense to search the bytes of an image, but it does make sense to search the database to find where the image is located. Plus, if you do the pointer thing and the pointers are URLs, images can live anywhere: local filesystem, remote ftp server, remote web server, remote samba server, whatever.

> As for speed of access you could always provide a cache for the images
> in your application.

True. You'd probably want to do that regardless of where the images are physically located.

> I've done the pointer to filesystem type of implementation in a CMS
> system before and that turned out to be messy to manage. E.g. if the
> disk is full when adding an image, rollback transaction. What if some
> ignorant user messes up the files? Access control to the images, for
> that we need to go through the application anyway.

If you're managing large systems things are going to be complicated, whether you're tracking images or chickens. You've got to find the solution that will give you the most long-term flexibility, even if that means more complication.

eric
Re: [GENERAL] count of occurences
> I tried
>
> select distinct job_num, (select count(*) from search_records j where
> j.job_num=k.job_num) from search_records k

Can't you just do:

    select job_num, count(job_num)
      from search_records
     group by job_num
     order by job_num;

http://www.postgresql.org/idocs/index.php?queries.html#QUERIES-GROUP

And creating an index on job_num might improve performance even more, but I don't know if index scans are used in aggregate functions or not.

eric
[GENERAL] UNIQUE constraint violations and transactions
Hi! I've got a UNIQUE constraint on a field, and obviously, when I try to insert a duplicate value, I get a WARNING via psql (and an Exception via the JDBC drivers) saying I tried to violate the constraint. No biggie. This is what I expect.

The tricky part comes in when I violate the constraint inside a transaction. It seems once a WARNING happens inside a transaction you can't do anything else in the transaction. You can only COMMIT or ROLLBACK. In my situation, it's not fatal that I tried to insert a duplicate value... I just want to continue on in the transaction.

I realize this is by design, but I'm wondering if there's a configuration option (compile-time or otherwise) that will let me continue to do stuff inside the transaction, even after a WARNING.

Another way to ask this might be: Is it more efficient to blindly INSERT the value and let postgres throw the Exception, or to SELECT for it first, then INSERT only if the SELECT returns zero rows? i.e.:

    try
        INSERT INTO words (word) VALUES ('foo');
    catch (Constraint violation)
        COMMIT   // this bugs me because I don't want
        BEGIN    // to commit the transaction yet
    end

versus

    SELECT word_id FROM words WHERE word='foo';
    if (resultset size == 0)
        INSERT INTO words (word) VALUES ('foo');
    end

eric
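(For what it's worth, PostgreSQL 8.0 and later address exactly this with savepoints: roll back just the failed INSERT and the surrounding transaction stays usable. A sketch:)

    BEGIN;
    -- ... earlier work in the transaction ...
    SAVEPOINT before_insert;
    INSERT INTO words (word) VALUES ('foo');
    -- if that raised a unique violation:
    ROLLBACK TO SAVEPOINT before_insert;
    -- the transaction is still alive; continue and commit as usual
    COMMIT;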
Re: [GENERAL] Can't Build 7.3.4 on OS X
On Sep 21, 2003, at 3:11 PM, Tom Lane wrote:

> BTW, is anyone interested in looking into whether we can be made to
> build without using either flag? I tried it and saw a number of

I did this... before I knew about -no-cpp-precomp. :( I read all about -traditional-cpp in the gcc man page, but could never find the corresponding "not traditional cpp" flag.

It boiled down to two things: use of macros that used token-pasting ("##") syntax, and whitespace around macro arguments.

Take src/include/nodes/nodes.h, around line 265 for example:

    #define makeNode(_type_) ((_type_ *) newNode(sizeof(_type_), T_##_type_))
    ...
    #define IsA(nodeptr, _type_) (nodeTag(nodeptr) == T_##_type_)

gcc 3.3 just didn't like this. So I had to fake it out:

    #define T_UNDER() T_
    #define makeNode(_type_) ((_type_ *) newNode(sizeof(_type_), T_UNDER()_type_))
    ...
    #define IsA(nodeptr, _type_) (nodeTag(nodeptr) == T_UNDER()_type_)

But it gets better. Apparently with gcc 3.3, whitespace around macro arguments is preserved! So, in the case of calls to (at least) the IsA macro:

    before: if (IsA(foo, Short))
    after:  if (IsA(foo,Short))
                        ^- no space!

From what I could tell, the statement would be expanded into (using my re-defined version above):

    if (nodeTag(nodeptr) == T_ Short)

which of course isn't legal syntax b/c of the space. So I went through with some Perl and did a bunch of global substitutions on the files that gcc complained about. There were a few more than the above examples, but not too many.

> too. It would be interesting to understand what the problem is.

There it is.

eric
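(A standalone illustration of the token-pasting construct at issue; not the real nodes.h, just a sketch:)

    #include <stdio.h>

    /* T_##x glues "T_" onto the macro argument, so TAG(Short) expands
     * to the identifier T_Short. */
    enum NodeTag { T_Short, T_Long };

    #define TAG(_type_) T_##_type_

    int main(void)
    {
        enum NodeTag t = TAG(Short);   /* expands to T_Short */
        printf("%d\n", t == T_Short);  /* prints 1 */
        return 0;
    }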
Re: [GENERAL] Can't Build 7.3.4 on OS X
On Sep 27, 2003, at 7:41 PM, Tom Lane wrote:

> I'm not sure whether we are planning another 7.3 release or not.
> I'd like to push forward to a 7.4 release, myself. Do you have any
> idea when OS X 10.3 will be released? If it's further out than next
> month, we could probably plan that 7.4 will win the footrace.

By no means is this official, cuz well, I ain't in the loop, but I recently read something about mid-late November. I suspect they'll want it out by the time all the G5's are shipped... and the 2x2gig started shipping on friday... well, at least mine did.

eric