Re: [JDBC] Re: [BUGS] 'on insert do instead' rule with a where clause responds 'INSERT 0 0'
Julius Stroffek wrote:
> There is only one option that comes to my mind - always return
> Statement.SUCCESS_NO_INFO in executeBatch (or possibly only depending on
> some java property). I cannot see any simple solution for
> Statement.executeUpdate since the number of rows affected may differ
> depending on the rules and might also be difficult to calculate.

The server is reporting to the driver that zero rows were affected (not
"unknown", *zero*) so I don't see any reason why the driver should not
report that as the number of rows affected. Returning SUCCESS_NO_INFO
reduces the usefulness of the driver in the other 98% of cases where there
are no INSTEAD rules.

The protocol docs say:

  CommandComplete (B)
  [...]
  For an INSERT command, the tag is INSERT oid rows, where rows is the
  number of rows inserted. oid is the object ID of the inserted row if
  rows is 1 and the target table has OIDs; otherwise oid is 0.

So if the server is not returning "the number of rows inserted" then
either the server has a bug or the protocol docs are wrong.
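For illustration, here is a minimal (untested) sketch of how those
per-statement counts reach client code; the view and its INSTEAD rule are
hypothetical:

import java.sql.*;

public class BatchCounts {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        Connection c = DriverManager.getConnection(args[0]);
        Statement s = c.createStatement();
        // my_view is assumed to have an ON INSERT DO INSTEAD rule with a
        // WHERE clause that may filter the row out entirely.
        s.addBatch("INSERT INTO my_view VALUES (1)");
        s.addBatch("INSERT INTO my_view VALUES (2)");
        int[] counts = s.executeBatch();
        for (int i = 0; i < counts.length; i++) {
            if (counts[i] == Statement.SUCCESS_NO_INFO)
                System.err.println("statement " + i + ": row count unknown");
            else
                System.err.println("statement " + i + ": " + counts[i] + " rows");
        }
        s.close();
        c.close();
    }
}

-O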
Re: [JDBC] [BUGS] Problem with Serializable transactions
Tom Lane wrote:
> "Robert Green" <[EMAIL PROTECTED]> writes:
>> I have noticed that using postgresql 7.4.2 at serializable level it is
>> possible for two users to update the database at the same time.
>
> I ran your test program here and tracked down what the problem is. What's
> happening is that the JDBC driver is issuing commands in the wrong order.
> Look at this log_statement trace of startup of one of your test processes:
>
> 2004-03-25 19:19:58 31096 LOG: statement: set datestyle to 'ISO'; select version(), case when pg_encoding_to_char(1) = 'SQL_ASCII' then 'UNKNOWN' else getdatabaseencoding() end;
> 2004-03-25 19:19:58 31096 LOG: statement: set client_encoding = 'UNICODE'
> 2004-03-25 19:19:58 31096 LOG: statement: begin;
> 2004-03-25 19:19:58 31096 LOG: statement: SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL SERIALIZABLE
> 2004-03-25 19:19:58 31096 LOG: statement: SELECT value FROM Values WHERE valueId = 0
> 2004-03-25 19:19:58 31096 LOG: statement: UPDATE Values SET value = 31 WHERE valueId = 0
> 2004-03-25 19:19:58 31096 LOG: statement: commit;begin;
>
> The error is that "SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION
> LEVEL SERIALIZABLE" is issued *after* the first BEGIN. This means that
> the transaction level of the first transaction has already been set, and
> it's READ COMMITTED. Your bug happens when a write conflict occurs during
> that first transaction (this is why you never saw it on any valueId
> except zero). One (untested) workaround that might work is to call
> setTransactionIsolation() *before* setAutoCommit().

The problem is that the JDBC driver keeps a transaction open whenever
autocommit is off, even if no statements have been executed since the last
setAutoCommit()/commit()/rollback(). This also causes the "idle JDBC
connections produce idle-in-transaction backends" issue.

> Doing things in this order is broken for another reason, which is that if
> the first transaction later rolls back with an error, the SET will be
> rolled back too, and so all the subsequent transactions will have the
> wrong isolation level as well. In short: if the driver is gonna use SET
> SESSION CHARACTERISTICS for this, it *must* issue it outside any
> transaction block.

What do we do if setTransactionIsolation() is called halfway through a
transaction? Refusing to do anything (and throwing an exception) seems
better than accepting a request that might get rolled back. The current
driver doesn't track transaction state beyond "autocommit is off, I must
be in a transaction!" so there will be some lower-level work needed if we
want to be more selective about this.

The JDBC spec says that the behaviour of setTransactionIsolation() is
implementation-defined if it's called during a transaction, so the client
is digging a hole for themselves anyway if they do this. Of course, the
spec doesn't seem to define exactly *when* a transaction is considered to
start (surprise surprise) so the safest course for portable clients is
probably to always set isolation while autocommit is on.

Perhaps we should just always throw an exception if
setTransactionIsolation() is called with autocommit off, since we know
that doesn't work at all currently?
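For reference, the workaround ordering in client code would look like this
(untested sketch):

import java.sql.*;

public class IsolationOrder {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        Connection c = DriverManager.getConnection(args[0]);
        // Set isolation while autocommit is still on, so the SET is issued
        // outside any transaction block.
        c.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        // The driver's implicit BEGIN now starts a serializable transaction.
        c.setAutoCommit(false);
        // ... do transactional work ...
        c.commit();
        c.close();
    }
}

-O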
Re: [JDBC] [BUGS] BUG #1347: Bulk Import stops after a while (
Kris Jurka wrote:
> I sent him a driver that was recompiled with MAX_BUFFERED_QUERIES = 1 and
> he still appears to be having some problems, but I have no idea where.

I wonder if he is doing one transaction per batch update loop or something
similar. If there's a bottleneck on the server side that's related to the
number of updates in a transaction (deferred constraints or something
similar?) then that could explain some of the behaviour. Don't know why
changing MAX_BUFFERED_QUERIES would change things at all if that's the
case, though.

-O
Re: [JDBC] [BUGS] BUG #1347: Bulk Import stops after a while (
Kris Jurka wrote:
> On Mon, 13 Dec 2004, PostgreSQL Bugs List wrote:
>> The following bug has been logged online:
>>
>> Bug reference: 1347
>> PostgreSQL version: 8.0 Beta
>> Operating system: Windows XP
>> Description: Bulk Import stops after a while (8.0.0 RC1)
>>
>> I have written a java program to transfer data from SQL Server 2000 to
>> PostgreSQL 8.0.0 RC1 release. I am updating the data in batches. If my
>> batch size is 1000/2000 records at a time, this works fine. But if I
>> change this size to say 20,000, it only finishes one loop and then stays
>> idle. The CPU usage drops to 10%, where it was 100% while applying the
>> first batch of 20,000 records. The execution of the program halts just
>> at:
>>
>>   int n[] = stmt.executeBatch();
>
> This may be a problem with the JDBC driver deadlocking as described in
> the below code comment. When originally written I asked Oliver about the
> estimate of MAX_BUFFERED_QUERIES and he felt confident in that number. It
> would be good to know if lowering this number fixes your problem.
> [...]

I'm not convinced that this is the problem, as MAX_BUFFERED_QUERIES is
around 250 by default, and yet batches of 1000/2000 execute OK.

Note that I haven't seen much benefit to batch sizes above perhaps 50 on a
LAN anyway. Batch sizes of 20,000 are almost certainly excessive (but the
driver shouldn't fail!)

Are you executing the batch with autocommit on or off?

Anyway, there are two bits of debug output that would be useful to see:

1. A thread dump of the importing JVM when it gets stuck. On unix-like
   systems you'd do this by sending the JVM a SIGQUIT. I don't know how
   you'd do the same under Windows.

2. Driver debug output:
   a. add "?logLevel=2" to the driver URL
   b. set a log writer on DriverManager at the start of the run:

        // Point this somewhere else, e.g. to a file,
        // if it's more convenient.
        java.sql.DriverManager.setLogWriter(
            new java.io.PrintWriter(System.err));

   This should trace all the protocol messages being sent or received to
   System.err. If the driver is wedging at the protocol level, it should
   show up in that output. Send me this output off-list and compressed --
   it'll be pretty big for 20,000+ queries!
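Incidentally, if you want to try smaller chunks, here's a rough (untested)
sketch of the kind of chunked batching I mean; table and column names are
made up:

import java.sql.*;

public class ChunkedImport {
    static final int CHUNK = 500;

    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        Connection c = DriverManager.getConnection(args[0]);
        c.setAutoCommit(false);
        PreparedStatement ps =
            c.prepareStatement("INSERT INTO target_table (id, payload) VALUES (?, ?)");
        for (int i = 0; i < 20000; i++) {
            ps.setInt(1, i);
            ps.setString(2, "row " + i);
            ps.addBatch();
            if ((i + 1) % CHUNK == 0) {
                ps.executeBatch();  // send a manageable chunk
                c.commit();
            }
        }
        ps.executeBatch();          // flush the remainder
        c.commit();
        ps.close();
        c.close();
    }
}

-O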
Re: [JDBC] [BUGS] BUG #1347: Bulk Import stops after a while ( 8.0.0.
Tom Lane wrote:
> Kris Jurka <[EMAIL PROTECTED]> writes:
>> // To avoid this, we guess at how many queries we can send before the
>> // server -> driver stream's buffer is full (MAX_BUFFERED_QUERIES).
>
> It seems essentially impossible for the driver to do this reliably,
> since it has no clue how much data any one query will return.

Right, but I'm not convinced that this is the problem here, as batch
execution in JDBC is only allowed to do non-result-set-returning queries.
The only case I can think of where this would break is if something is
causing lots of logging output to the client (triggers etc.).

> How about instead thinking in terms of not filling the driver->server
> channel? You have a good handle on how much data you have sent (or at
> least you could keep track of that), and if you bound it to 64K or so
> then you should be safe. Perhaps the limit ought to be easily
> configurable just in case, but at least you'd be measuring something
> measurable.

That's possibly a better idea, but it does mean that we wouldn't be able
to batch inserts that contain lots of data. That's the use case I needed
to support when I wrote this in the first place.

Also, it's never going to be 100% reliable without a separate thread, as
the server can spontaneously generate output (e.g. because of NOTIFY)
regardless of how careful we are with our queries.

There's actually another problem with this code: the subdivision into
smaller batches is not transparent if autocommit is on. We send a Sync at
the end of the batch, which will cause an implicit commit. We should be
sending a Flush, but it's harder for the driver to handle this, as a Flush
does not provoke a response message from the server, so we would have to
track the protocol state more closely. Given that JDBC is silent about the
autocommit semantics of batch execution anyway, I'm not too worried about
fixing this urgently.

I'd like to see that this is really the problem before tweaking this code.
Given that the OP said that batch sizes of 1000-2000 worked OK, I'm not
sure that this code is the problem, since the maximum number of queries
we'll send per batch is around 250 by default.

-O
Re: [pgsql-hackers-win32] [BUGS] More SSL questions..
Tom Lane wrote:
> BTW, as of CVS tip, if the server has a root.crt file and the client does
> not have any certificate files, the default behavior is that connections
> fail:
>
>   $ psql -h localhost regression
>   psql: could not open certificate file "/home/tgl/.postgresql/postgresql.crt": No such file or directory
>   $
>
> I'm not sure if this is desirable. Should libpq try to fall back to a
> non-SSL-encrypted connection, instead?

Only if the server certificate validates; otherwise an active attacker
could intercept the SSL connection to force libpq to fall back to non-SSL,
and then intercept the unencrypted/unauthenticated connection.

Does openssl let you detect a "server cert OK but no suitable client cert
provided" error easily?

-O
Re: [pgsql-hackers-win32] [BUGS] More SSL questions..
Tom Lane wrote:
> Basically my point here is that the default "prefer" SSL mode effectively
> becomes "require" if the server has a root.crt.

Ok, in the scenario where validation is important, clients should be using
"require" anyway, so it's not an issue so long as libpq doesn't try to
fall back to non-SSL when "require" is in effect.

A default SSL mode of "prefer" does seem a bit dodgy, though -- it only
protects against passive attacks. I'd be tempted to make "disable" the
default, so that you have a better chance of visible errors if clients are
not correctly configured, rather than silently forging ahead with a
connection that might be unintentionally insecure. That would mean lots of
pain for existing installs though :(

I had to dig into the libpq docs to find any mention of the environment
variables / config files that set the SSL behaviour. It'd be useful to
have details in the psql manpage too..

-O
Re: [BUGS] BUG #1402: getPrecision don't work
raxfar wrote:
> The following bug has been logged online:
>
> Bug reference: 1402
> Logged by: raxfar
> Email address: [EMAIL PROTECTED]
> PostgreSQL version: 8.0.0-rc2
> Operating system: windows 2000
> Description: getPrecision don't work
> Details:
>
> have java.sql.ResultSetMetaData _oMetaData.getPrecision(i)
> java.sql.ResultSetMetaData _oMetaData.getScale(i)
> methods not to get Precision and Scale

What JDBC driver version are you using? What is the type of the columns
that you are calling getPrecision() and getScale() on? Do you have an
example schema and query I can try out? What are the results you get? What
are the results you expected to get?
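A self-contained test along these lines (untested sketch; the table is
made up), together with your actual and expected output, would be ideal:

import java.sql.*;

public class TestPrecision {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        Connection c = DriverManager.getConnection(args[0]);
        Statement s = c.createStatement();
        s.execute("CREATE TEMP TABLE t (n numeric(10,3))");
        ResultSet rs = s.executeQuery("SELECT n FROM t");
        ResultSetMetaData md = rs.getMetaData();
        System.err.println("precision=" + md.getPrecision(1)
                           + " scale=" + md.getScale(1));
        rs.close();
        s.close();
        c.close();
    }
}

-O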
Re: [BUGS] BUG #1410: Hibernate PerformanceTest is incredibly slow
Simon Nicholls wrote:
> Bug reference: 1410
> PostgreSQL version: PostgreSQL 8.0
> Operating system: WindowsXP
> Description: Hibernate PerformanceTest is incredibly slow (in effect
> unusable)
>
> 00:41:41,453 DEBUG SchemaExport:136 - drop table Simple
> 00:41:41,500 DEBUG SchemaExport:154 - create table Simple (
>     id_ int8 not null,
>     name varchar(255),
>     address varchar(255),
>     count_ int4 not null unique,
>     date_ timestamp,
>     pay float4,
>     other int8,
>     primary key (id_)
> )
> 00:41:41,656 DEBUG SchemaExport:154 - alter table Simple add constraint
> FK939D1DD26527F10 foreign key (other) references Simple

Without an index on Simple.other, DELETEs on Simple can be slow (each
change requires a seqscan over Simple to check the FK constraint). It
looks like Hibernate's postgresql dialect code does not add an index in
this case. If I patch Hibernate to not add a FK constraint at all, it runs
substantially faster.

It does not seem trivial to fix Hibernate to add an index in this case --
Hibernate wants to run a single "ALTER TABLE tablename ..." statement to
add a FK constraint, but we need to execute "ALTER TABLE tablename ADD
CONSTRAINT ...; CREATE INDEX ON tablename ..." and the dialect code is not
given "tablename". So fixing this properly needs some work on the
Hibernate side.
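In the meantime, a workaround is to add the missing index yourself after
Hibernate's schema export runs -- an untested sketch (the index name is
made up):

import java.sql.*;

public class AddFkIndex {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        Connection c = DriverManager.getConnection(args[0]);
        Statement s = c.createStatement();
        // Index the FK column so the constraint check on DELETE can use an
        // index scan instead of a seqscan over Simple.
        s.execute("CREATE INDEX simple_other_idx ON Simple (other)");
        s.close();
        c.close();
    }
}

-O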
Re: [BUGS] BUG #1459: Connection hangs when other connection is not
Peter Eisentraut wrote:
> On Friday, 4 February 2005 11:54, Rainer Frey wrote:
>> Is there any possibility to set a timeout for the lock, after which the
>> ALTER TABLE statement fails, instead of remaining in wait status (when
>> calling with JDBC)?
>
> Yes, there is a statement_timeout parameter or something like that.

JDBC has a couple of mechanisms that may be more portable than fiddling
with statement_timeout directly:

- Statement.setQueryTimeout(). This is *not* implemented by the current
  driver (it is a no-op), but shouldn't be too hard to implement as a
  mapping to statement_timeout if you feel inclined to do some driver
  hacking.

- Statement.cancel(). This is implemented by the current driver, but you
  will need to build your own infrastructure to handle doing the cancel
  from a separate thread if a query takes too long; a rough sketch
  follows.
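Untested sketch of the separate-thread cancel; the DDL statement is just
an example:

import java.sql.*;
import java.util.Timer;
import java.util.TimerTask;

public class CancelDemo {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        Connection c = DriverManager.getConnection(args[0]);
        final Statement s = c.createStatement();

        Timer watchdog = new Timer(true);
        watchdog.schedule(new TimerTask() {
            public void run() {
                try {
                    s.cancel();  // asks the backend to cancel the running query
                } catch (SQLException e) {
                    e.printStackTrace();
                }
            }
        }, 5000);                // 5 second timeout

        try {
            s.execute("ALTER TABLE some_table ADD COLUMN junk int"); // hypothetical DDL
        } catch (SQLException e) {
            System.err.println("query failed or was cancelled: " + e.getMessage());
        } finally {
            watchdog.cancel();
            s.close();
            c.close();
        }
    }
}

-O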
Re: [BUGS] PreparedStatement setObject
Kobus Walters wrote:
> Create a prepared statement with sql "select localtimestamp - time ?".
> Call setObject(1, java.lang.String) with the String value "00:00:00" for
> instance. In version 7 this gets executed as "select localtimestamp -
> time '00:00:00'", while version 8 tries to execute it as "select
> localtimestamp - time 00:00:00", which of course fails since the quotes
> are missing.

I can't reproduce this. With the 8.0-310 driver against an 8.0.0 server, I
get a syntax error with that query:

  ERROR: syntax error at or near "$1"

The query sent by the driver is:

  SELECT LOCALTIMESTAMP - TIME $1

which the backend does not like.

The change in behaviour from 7.4-era drivers is because the 8.0 driver
pushes handling of parameters to the server, so you have to follow the
server's rules for where you may place parameters. Apparently, directly
after TIME isn't one of those places.

To use a string as a time parameter, see the examples in my test code
(attached).

-O

import java.sql.*;

// Run with one argument: a JDBC url to connect to.
public class TestTime {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        Connection c = DriverManager.getConnection(args[0]);

        // Works:
        PreparedStatement s = c.prepareStatement("SELECT LOCALTIMESTAMP - ?");
        s.setObject(1, "00:00:00", Types.TIME);

        // Works:
        //PreparedStatement s = c.prepareStatement("SELECT LOCALTIMESTAMP - {t '00:00:00'}");

        // Breaks:
        //PreparedStatement s = c.prepareStatement("SELECT LOCALTIMESTAMP - TIME ?");
        //s.setObject(1, "00:00:00", Types.TIME);

        ResultSet rs = s.executeQuery();
        rs.next();
        System.err.println("returned " + rs.getObject(1));
        rs.close();
        s.close();
        c.close();
    }
}
Re: [BUGS] PGPASSWORD
Tom Lane wrote:
> postgresbugs <[EMAIL PROTECTED]> writes:
>> I would like to recommend removing the deprecation flag from PGPASSWORD.
>
> Do you understand why it's deprecated? It's because on some operating
> systems, everyone on the machine can see your environment variable
> values (if they know how to look). We cannot prevent that.

Assuming that you can't tweak .pgpass (for example, you're just
propagating a password you interactively got from the user), what's the
recommended way to provide the password?

The solution I've seen elsewhere is to pass it on an additional FD that's
specified in a command-line option ("--password-fd=4"). But AFAIK the
postgres tools don't support this.

-O
Re: [BUGS] PGPASSWORD
Tom Lane wrote:
> Oliver Jowett <[EMAIL PROTECTED]> writes:
>> Assuming that you can't tweak .pgpass (for example, you're just
>> propagating a password you interactively got from the user), what's the
>> recommended way to provide the password?
>
> In the connection string given to PQconnectdb (or equivalent).

If you're a shell script calling psql / pg_dump / etc, how do you do this?

-O
Re: [BUGS] PGPASSWORD
Tom Lane wrote:
> Oliver Jowett <[EMAIL PROTECTED]> writes:
>> If you're a shell script calling psql / pg_dump / etc, how do you do
>> this?
>
> That doesn't strike me as a compelling case for inventing
> "--password-fd". Such a shell script is going to have a hard time
> passing a password securely anyway (you weren't planning to "echo $PW"
> it somewhere, I trust).

No; you can, for example, write to a securely created temporary file via a
pipeline from the original source of the password, and then use the fd
number. Also, for "shell script" read "application that wants to delegate
work to the standard postgresql tools" and some of your objections go
away.

> And why should the shell script have its fingers on the password in the
> first place? It has no "need to know", and more chances to leak the
> password unintentionally than one likes to think about.

The main reason is when you have several operations to do and don't want
to reprompt for the password multiple times.

> If you really don't want to solve the problem with .pgpass, I'd
> recommend letting the invoked program collect the password for itself.
> That's exactly why we do the collection on /dev/tty and not stdin.

That's not useful if the invoking program wants to execute multiple tools
without having each one reprompt, or if the invoking program doesn't
*have* a tty (e.g. it is a GUI or a batch process).

My main objections to using .pgpass are:

- the script/application itself has no business meddling with .pgpass
- the user might not want to store their password in .pgpass
- reprompting for the password every time a tool is invoked is at best a
  major annoyance, and at worst impossible to do.

I guess that allowing the command-line tools to use an alternative .pgpass
location solves most of these objections: the caller can set up a
temporary entry based on user input. It's still pretty ugly, as you end up
with the password on disk (that seems unavoidable with a shell script, but
avoidable for more flexible callers).

-O
Re: [BUGS] PGPASSWORD
Tom Lane wrote:
>> - the user might not want to store their password in .pgpass
>
> Without a concrete argument why they should not, this is a straw man.

I think this is the core of our disagreement. Storing passwords on the
client's disk is surely less secure than not storing them on disk. That's
my personal reason for avoiding .pgpass: I'd rather keep my password in my
head where possible and have it prompt each time I start working with a
database (note: this does not necessarily mean each time whatever tool I
use makes a new database connection). If the only way for higher-level
tools to give password information to the command-line tools is via
.pgpass, then I am stuck with either putting my password on disk, or
getting reprompted for the password excessively (and having to keep a tty
open).

In comparison, applications that wrap tools such as PGP/GPG usually have
options to remember a passphrase for a session (with optional timeouts,
etc.) without storing it on disk. Why can't I do the same with an
application that wraps the postgresql command-line tools?

>> - reprompting for the password every time a tool is invoked is at best
>>   a major annoyance, and at worst impossible to do.
>
> Agreed, which is why we invented .pgpass.

But that argument scales up to beyond one invocation of this hypothetical
script, does it not? I think it's acceptable to prompt for a password
*once* per interaction -- but not multiple times. "interaction" might be
"run of the script" or "session of an admin tool".

Also, the script isn't hypothetical:

We install our software into a standalone directory. As part of the
installation (or via later reconfiguration) you specify a postgresql
configuration to use: host, port, database name, user, password. This
gets stored in the installation directory somewhere as part of the
application's configuration.

To set up or reinitialize the database schema, we provide a script. This
script calls createdb/dropdb/psql to do its work. It extracts the database
connection information from the application's configuration files and
passes that to the command-line tools. Currently we rely on psql etc to
prompt for passwords, because there's no reasonable way to pass the
passwords (which we already have!) down to them.

If the official way to pass password information is to use .pgpass, how do
we keep .pgpass in sync with the application configuration so that our
script continues to work when the application is reconfigured? We're
actually looking at reimplementing a subset of psql/createdb/dropdb just
for this bit of our application; the password issue isn't the main reason,
but it contributes.

===

Hypothetical case: You have a GUI admin tool that talks directly to the
database. It prompts for connection details on startup. It does not
necessarily use libpq. So you start it up, connect and authenticate, do
some interactive SQL work. Then you say "please dump the database", so the
tool shells out to pg_dump to do the work. Should it:

a) set PGPASSWORD
b) write to $HOME/.pgpass, perhaps cleaning up afterwards
c) require the user to set up $HOME/.pgpass before using the tool
d) require the user to have a tty, and let pg_dump prompt there
e) run pg_dump in a pty and fake the password interaction
f) use --password-fd (assuming it is implemented)
g) not shell out, but embed or reimplement pg_dump itself
h) something else?

As I understand it, your answer is (c)/(d). This makes for an unfriendly
admin tool -- "I just want to dump this database that I have already
authenticated to, why do I need to set up an external password file?", or
"why isn't it doing anything?" if it's prompting on the tty that is hidden
behind the admin window..

-O
Re: [BUGS] PGPASSWORD
postgresbugs wrote:
> Tom Lane wrote:
>> The point here is that if PGPASSWORD is passed down to psql as an
>> environmental variable, it is visible as part of psql's environment for
>> the entire run of psql. Whatever the calling script does later doesn't
>> remove that window of vulnerability.
> [...]
> And, yes I do understand that for the brief period the environmental
> variable could possibly be visible on some platforms, but even Windows
> has the local directive which makes the variable far more secure.

The window is much longer than that. As Tom said, for PGPASSWORD to work
it has to be present in the environment of the psql process -- that's how
psql gets the password! That environment may be visible to other users of
the system, depending on the OS. psql could remove the password after use,
I suppose, but that just narrows the window.

IMO *any* window of vulnerability is unacceptable -- it opens up any
periodic or triggerable process to an attacker who tries to get the timing
just right (not impossible to do if you can also slow down the system you
are attacking to widen the window..) PGPASSWORD is just a bad idea as a
general mechanism. We need some other way.

-O
Re: [BUGS] PGPASSWORD
postgresbugs wrote:
> The functionality provided by PGPASSWORD should not be removed unless
> there is a functionality other than .pgpass, which is fine for some uses
> and not for others, that will provide similar functionality. That could
> be psql and pg_dump and the like accepting a password on the command
> line as I stated earlier.

Putting the password on the command line would be even more of a security
problem than PGPASSWORD is now. I agree that an alternative to .pgpass
would be useful, but it needs to be a *secure* alternative.

-O
Re: [BUGS] PGPASSWORD
Andreas Pflug wrote:
> Oliver Jowett wrote:
>> postgresbugs wrote:
>>> The functionality provided by PGPASSWORD should not be removed unless
>>> there is a functionality other than .pgpass, which is fine for some
>>> uses and not for others, that will provide similar functionality. That
>>> could be psql and pg_dump and the like accepting a password on the
>>> command line as I stated earlier.
>>
>> Putting the password on the command line would be even more of a
>> security problem than PGPASSWORD is now. I agree that an alternative to
>> .pgpass would be useful, but it needs to be a *secure* alternative.
>
> The command line could take a file handle of an inherited pipe.

Yeah, that's what I originally suggested. Tom didn't seem to like it
though..

-O
Re: [BUGS] BUG #1523: precision column value returned from getTypeInfo()
(cc'ing -jdbc)

Sergio Lob wrote:
> After calling Connection.DatabaseMetaData.getTypeInfo() method, the
> PRECISION column of the ResultSet seems hardcoded to value 9, which of
> course is incorrect

Well, it's working as intended in the sense that there is no code to
support retrieving a real precision value -- it is indeed a hardcoded
placeholder value. I'm not sure why '9' off the top of my head; perhaps
making that column NULL would be better? What are the units of this value
anyway?

If you're willing to do the implementation work to improve this, then post
a patch to pgsql-jdbc and I can look at applying it.

-O
Re: [BUGS] Precision and scale of numeric column reported as value
Sergio Lob wrote:
> Bug to report - For a numeric in a table defined by:
>
>   CREATE TABLE SERG (F01NUM NUMERIC);
>
> the precision and scale reported by ResultSetMetaData.getScale() and
> ResultSetMetaData.getPrecision() are value -1

What should they return instead in this case?

-O
Re: [BUGS] Precision and scale of numeric column reported as value
Sergio Lob wrote:
> Precision is the number of total digits in the number. Scale is the
> number of fractional digits. For instance, a column defined as
> NUMERIC(10,3) should return precision=10, scale=3.

Yes, I understand that. The error only occurs for a column defined as
NUMERIC (without precision or scale specified).

> Presumably, there is a default precision and scale assigned to the
> column by postgreSQL, which is not -1.

There appears to be no default, which is why we currently return -1. Scale
should default to 0 per the standard, but defaults to whatever the
precision is in PostgreSQL (see the docs for details). The docs claim an
implementation precision limit of 1000, but that doesn't seem to
correspond to the actual implementation -- I can insert and retrieve
2000-digit NUMERICs (for example) just fine. I can't see an obvious limit
on precision in the backend code. There may be a theoretical limit
somewhere around 2^30 digits, I think (limited by the backend's memory
allocation sanity checks), but I have not tested that.

Given that there is effectively no default, do you have suggestions for a
better value to return?

-O
Re: [BUGS] BUG #1561: wrong detection of number of parameters in
Ingolf Knopf wrote:
> "java.sql.PreparedStatement.executeUpdate()" throws
> "java.sql.SQLException", if the prepared sql-string contains '?' within
> a C-like comment.
>
> Example:
>
>   select relname
>   from pg_class
>   where /*relowner = ? and*/
>         relname = ?
>
> In this case, the java.sql.PreparedStatement of PostgreSQL requires two
> parameters, but I have only one.

The JDBC driver doesn't currently parse the statement in much detail. It
really only understands string literals, semicolon-separated statements,
and some JDBC-specified {...} escapes.

A workaround would be to "quote" the comment:

  select relname
  from pg_class
  where /*" relowner = ? and "*/
        relname = ?

which should work (although I have not tested it).

There's no real reason why the driver can't be modified to understand
C-style comments; someone just needs to find the time to do it.. (patches
to pgsql-jdbc please ;-)

-O
Re: [BUGS] BUG #1611: reading a date-field by "ResultSet.getTimestamp()"
Ingolf Knopf wrote:
> The following bug has been logged online:
>
> Bug reference: 1611
> Logged by: Ingolf Knopf
> Email address: [EMAIL PROTECTED]
> PostgreSQL version: 8.0.1
> Operating system: JDBC
> Description: reading a date-field by "ResultSet.getTimestamp()" method
> analyzed daylight flag
> Details:
>
> Retrieving data by "java.sql.ResultSet", I read data from a column which
> has type DATE. I read the content of this column by the method
> "ResultSet.getTimestamp(int)". I get a "java.sql.Timestamp" object where
> Calendar.HOUR_OF_DAY == 1. I suppose this is a daylight-saving flag.
>
> Maybe this behavior of your JDBC driver is compatible with the SQL
> standard, but it is completely different from the behavior of "Oracle"
> or "Ingres".

Can you provide a compilable test case please? What is the behaviour of
Oracle/Ingres with the same test case?
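Something like this would do (untested sketch; the table name is made up):

import java.sql.*;

public class TestDateTimestamp {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        Connection c = DriverManager.getConnection(args[0]);
        Statement s = c.createStatement();
        s.execute("CREATE TEMP TABLE datetest (d date)");
        s.execute("INSERT INTO datetest VALUES ('2005-06-01')");
        ResultSet rs = s.executeQuery("SELECT d FROM datetest");
        rs.next();
        Timestamp ts = rs.getTimestamp(1);
        System.err.println("getTimestamp() returned: " + ts); // what hour-of-day do you see?
        rs.close();
        s.close();
        c.close();
    }
}

-O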
Re: [BUGS] BUG #1628: JDBC bug with regard to text column data type
Nathan Neuenschwander wrote:
> I am using a CachedRowSet, the JDBC driver for PG 8.0, and j2ee. If the
> target table contains a column of data type text, the following causes
> an SQLException with a message stating that a negative value was
> returned for the column display size. If the text column is changed to a
> varchar, the exception is not triggered. The exception is not dependent
> upon the particular column being queried, just the enclosing table. (The
> proper variable init is implied.)

I believe we are still waiting for Sun to get back to us on this (it's
arguable whether the bug lies in the JDBC driver or in what their RowSet
implementation is expecting). Search the pgsql-jdbc archives for details.

-O
Re: [BUGS] BUG #1646: ERROR: column "mycolumn" is of type boolean
Kjh wrote:
> The following bug has been logged online:
>
> Bug reference: 1646
> Logged by: Kjh
> Email address: [EMAIL PROTECTED]
> PostgreSQL version: 8.0
> Operating system: windows XP
> Description: ERROR: column "mycolumn" is of type boolean but expression
> is of type character
> Details:
>
> newest drivers (JDBC3), deployed with JBoss 3.2.6
>
> Indeed, the entity bean has fields of java-type Boolean and the database
> has fields of type boolean, but the application server seems to send it
> to the database as "true" or "false" strings. That is what the error
> message indicates to me anyway: ERROR: column "mycolumn" is of type
> boolean but expression is of type character. (Though I am not aware of
> how to verify this other than changing the column type to varchar(5),
> which I have tried -- unfortunately, this had lots of other undesired
> side-effects even though it worked in some cases.)

Which PreparedStatement.set...() method is being called to set the value
for this column? To set a boolean-typed parameter, you should use
PreparedStatement.setBoolean(...), PreparedStatement.setObject(...,
Types.BIT), or PreparedStatement.setObject(..., Types.BOOLEAN). If the
application server is using setString() to send a boolean value, that's
where the bug lies.
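To illustrate the difference (untested sketch; table and column names are
hypothetical):

import java.sql.*;

public class BooleanParam {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        Connection c = DriverManager.getConnection(args[0]);
        PreparedStatement ps =
            c.prepareStatement("UPDATE mytable SET mycolumn = ? WHERE id = ?");
        ps.setBoolean(1, true);  // sends a properly typed boolean parameter
        // ps.setObject(1, Boolean.TRUE, Types.BOOLEAN); // also fine on a JDBC3 driver
        // ps.setString(1, "true"); // sends character data; triggers the error above
        ps.setInt(2, 42);
        ps.executeUpdate();
        ps.close();
        c.close();
    }
}

-O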
Re: [BUGS] BUG #1712: JDBC column precision for bigint is 0
Gilles Dubochet wrote:
> The following bug has been logged online:
>
> Bug reference: 1712
> Logged by: Gilles Dubochet
> Email address: [EMAIL PROTECTED]
> PostgreSQL version: 8.0.2
> Operating system: Linux
> Description: JDBC column precision for bigint is 0
> Details:
>
> With the JDBC driver at least up to version 8.1dev-400, the result of
> the getPrecision method of ResultSetMetaData on a bigint column is 0
> instead of the expected 19.

This has been reported before but I haven't got to fixing it yet. This is
partly because I haven't seen a good explanation of exactly what we should
be returning here -- what spec says we should return 19?

-O
Re: [BUGS] BUG #1712: JDBC column precision for bigint is 0
Gilles Dubochet wrote:
>>> With the JDBC driver at least up to version 8.1dev-400, the result of
>>> the getPrecision method of ResultSetMetaData on a bigint column is 0
>>> instead of the expected 19.
>>
>> This has been reported before but I haven't got to fixing it yet. This
>> is partly because I haven't seen a good explanation of exactly what we
>> should be returning here -- what spec says we should return 19?
>
> If you count the number of digits in these numbers, you'll notice that
> for the signed number, 19 decimal digits at most are required to
> represent it (if the sign comes for free, which seems assumed for other
> data types such as INT or SMALLINT). [...] This is why I believe 19 is
> the value the getPrecision method should return. I don't think there is
> some kind of standard reference that defines it, but it seems pretty
> clear what it should be really.

The problem I have is how we decide that "precision" means "number of
digits, ignoring sign". Actually, I just dug through the JDBC javadoc, and
noticed that ResultSetMetaData.getPrecision() does talk about the number
of decimal digits; but in all the other places it's used (such as
DatabaseMetaData) it's not described.

For numeric types, using the number of decimal digits seems reasonable,
since it's consistent with the NUMERIC type's precision/scale information.
Previous reports of this also wanted to provide precision for dates and
times according to the number of digits in them, which seems much more
dubious.

I guess it's another case of the JDBC specification being woefully
underspecified :(
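As a sanity check on the 19-digit figure (trivial but concrete): the
extreme values of a signed 64-bit integer have 19 decimal digits once the
sign is ignored.

public class BigintDigits {
    public static void main(String[] args) {
        System.out.println(Long.MAX_VALUE);                              // 9223372036854775807
        System.out.println(String.valueOf(Long.MAX_VALUE).length());     // 19
        System.out.println(String.valueOf(Long.MIN_VALUE).length() - 1); // 19 (minus sign dropped)
    }
}

-O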
Re: [BUGS] BUG #1753: Query Optimizer does not work well with libpq
Ernst Bachmann wrote:
> The following bug has been logged online:
>
> Bug reference: 1753
> Logged by: Ernst Bachmann
> Email address: [EMAIL PROTECTED]
> PostgreSQL version: 8.0.3
> Operating system: Linux
> Description: Query Optimizer does not work well with libpq /
> PQexecParams
> Details:
>
> It looks like the query optimizer isn't taking the value of parameters
> sent with PQexecParams into account, thus generating (in my case, very)
> suboptimal plans.

If PQexecParams uses the unnamed statement (it appears to), this shouldn't
happen -- planning of the unnamed statement is delayed until the first set
of parameter values is bound. This behaviour started in 8.0.

What's the query? Are you definitely using PQexecParams() and not
PQexecPrepared()?

-O
Re: [BUGS] BUG #1753: Query Optimizer does not work well with libpq
Andrew - Supernews wrote:
> The problem is that even with the unnamed statement and deferred
> planning, the planner still has to treat the parameters as variables,
> not constants, since nothing in the protocol stops you from running
> multiple portals from the unnamed statement. This can make a significant
> difference, especially where function calls are involved and major
> optimizations can be made on constant values as a result of inlining.

Sure, expression optimization is less aggressive, but is that on its own
really going to produce a 100-fold difference in query execution?

The main problem pre-8.0 (or with named statements) is that index
selectivity estimates go out the window with a parameterized query, so a
much more general (and slower) plan gets chosen. The 8.0
unnamed-statement behaviour glues the actual parameter values into the
selectivity estimates, so in theory you should get the same plan for the
unparameterized and parameterized-unnamed-statement cases.

This is why I'd like to see the actual query..

-O
Re: [BUGS] BUG #1753: Query Optimizer does not work well with libpq
Tom Lane wrote:
> Oliver Jowett <[EMAIL PROTECTED]> writes:
>> Sure, expression optimization is less aggressive, but is that on its
>> own really going to produce a 100-fold difference in query execution?
>
> It's certainly possible, depending on query details.

Andrew pointed out in some offlist discussion that it's actually more than
just expression optimization via eval_const_expressions; there are some
other cases where the tree is transformed differently if you have a Const
vs. a Param, in ways that estimate_expression_value() doesn't deal with:

- predicate_implied_by_simple_clause can remove parts of an expression
  based on the actual (not estimated) values involved;
- match_special_index_operator manipulates LIKE/regexp/etc expressions
  based on the actual patterns provided;
- LIMIT/OFFSET modifications to query cost only kick in with Consts.

I can't see any simple way to fix the first two since they change the
meaning of the expression, but the LIMIT/OFFSET query cost code could in
theory use estimate_expression_value().

-O
Re: [BUGS] BUG #1756: PQexec eats huge amounts of memory
Alvaro Herrera wrote:
> On Sun, Jul 10, 2005 at 01:05:10PM +0300, Denis Vlasenko wrote:
>> On Thursday 07 July 2005 20:43, Alvaro Herrera wrote:
>>> Really? I thought what really happened is you had to get the results
>>> one at a time using the pg_fetch family of functions. If that is true,
>>> then it's possible to make the driver fake having the whole table by
>>> using a cursor. (Even if PHP doesn't do it, it's possible for OCI to
>>> do it behind the scenes.)
>>
>> Even without a cursor, the result can be read incrementally. I mean,
>> the query result is transferred over the network, right? We can just
>> stop read()'ing before we reach the end of the result set, and continue
>> at pg_fetch as needed.
>
> It's not that simple. [...]

It also requires that you assume there is only one set of query results
outstanding at a time. I know that you can't assume that in JDBC, and by
the sounds of it PHP's interface is similar in that you can have multiple
query result objects active at the same time.
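For the record, JDBC-side incremental retrieval is done with a real cursor
under the hood rather than by pausing reads; an untested sketch (the table
name is made up):

import java.sql.*;

public class CursorFetch {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        Connection c = DriverManager.getConnection(args[0]);
        c.setAutoCommit(false);  // the driver only uses a cursor inside a transaction
        Statement s = c.createStatement();
        s.setFetchSize(100);     // retrieve rows from the portal 100 at a time
        ResultSet rs = s.executeQuery("SELECT * FROM big_table");
        while (rs.next()) {
            // process rs.getObject(1) etc.; only about one fetch batch is
            // held in client memory at once
        }
        rs.close();
        s.close();
        c.commit();
        c.close();
    }
}

-O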
Re: [BUGS] BUG #1800: "unexpected chunk number" during pg_dump
Alvaro Herrera wrote:
> Looks very much like the table was corrupted. Maybe you should try to
> test your RAM and disks. Not sure how to do that on x86-64 though,
> unless the test utility at www.memtest86.com has been ported to it.

x86-64 systems will still boot and run 32-bit code fine (although
obviously memtest86 isn't going to test memory it can't address in 32-bit
mode).

-O
Re: [BUGS] BUG #1830: Non-super-user must be able to copy from a
Bernard wrote:
> I want to follow what is suggested here. How are STDIN and STDOUT
> addressed when using the JDBC driver?

The current JDBC driver doesn't support this mode of COPY. There was some
work done in the past to support this, but it never got to the point of
making it into the official driver; see the pgsql-jdbc archives for
details.

-O
Re: [BUGS] BUG #1830: Non-super-user must be able to copy from a
Bernard wrote:
> The majority of JDBC users trying to bulk load tables would not want to
> send the data through their connection. This connection is designed to
> send commands and to transfer only as much data as necessary and as
> little as possible.

I don't understand why this is true at all -- for example, our application
currently does bulk INSERTs over a JDBC connection, and moving to COPY has
been an option I looked at in the past. Importing lots of data from a
remote machine is hardly an uncommon case.

> The need is only created by the limitations of the Postgres COPY
> command. I can't see why a workaround should be developed instead of or
> before fixing the COPY command. It works in other DB engines.

I guess that other DB engines don't care about unprivileged DB users
reading any file that the backend can access.

-O
Re: [BUGS] BUG #1830: Non-super-user must be able to copy from a
Bernard wrote:
> This difference of performance is the main reason for the COPY command,
> and this is also the reason why bulk loading through the JDBC interface
> will never match the performance of COPY with files.

In some admittedly unscientific tests I see less than 10% difference
between server-side COPY and client-side COPY FROM STDIN (psql's \copy
command) on a 28MB input file. That's down in the per-run noise. Doing it
via JDBC will undoubtedly add some extra overhead, but I'd estimate that
it's about the same sort of overhead as writing your data out to a file
from Java takes in the first place.

If you've already got the data in a file, why not just use psql's \copy
command? This uses COPY FROM STDIN, reads the file as the user running
psql, and does not require superuser permissions.

> The whole architectural setup for such "bulk" loading is a mess.

Do you have a concrete suggestion for improving bulk loading that doesn't
open security holes?

-O
Re: [BUGS] BUG #1830: Non-super-user must be able to copy from a
Bernard wrote:
> Oliver and interested list members: [...]

And please fix your anti-spam system so it doesn't send me a "you must
jump through these hoops to send me email" message every time! (usual cc:
to poster removed for that reason)

-O
Re: [BUGS] BUG #1830: Non-super-user must be able to copy from a
Bernard wrote:
> 2) Split up security risk calculations between the two directions "TO"
> and "FROM" and relax security. Look at MySQL for clues. The application
> developer can manage security on file system permission level.

I looked at MySQL's docs briefly and its behaviour seems almost the same
as PostgreSQL's, with some minor differences:

- the equivalent to COPY is "LOAD DATA INFILE"
- the equivalent to FROM STDIN is "LOCAL"
- for non-LOCAL loads, the DB user must have FILE privilege, which is
  "file access on server host". Given FILE privilege in MySQL, you can
  read existing files and create new files based on the access the server
  user has.

It sounds like what you really want is the ability to grant something like
FILE access without granting all superuser rights? Sounds like a feature
request, not a bug, to me :-)

Also, you better hope that there's no sensitive information readable by
the server user that could be used to gain superuser access.. such as
.pgpass files or info from pg_hba.conf, for example.

-O
Re: [BUGS] BUG #1830: Non-super-user must be able to copy from a
Tom Lane wrote:
> What is the story on JDBC COPY support, anyway? I'm aware that there's
> an unofficial patch for that, but I'm not clear about why it's not made
> it into the accepted version.

I didn't like the whole "here is an undifferentiated stream of data"
approach -- there were some JDBC interfaces we could adapt to read/write
typed data. That never happened, though.

I suppose we could apply a patch similar to the original one, given that
there doesn't seem to be much interest in a typed version, but it's likely
to need rework as there's been at least one overhaul of the driver's
protocol handling layer since then.

-O
Re: [BUGS] BUG #1830: Non-super-user must be able to copy from a
Tom Lane wrote:
> Oliver Jowett <[EMAIL PROTECTED]> writes:
>> It sounds like what you really want is the ability to grant something
>> like FILE access without granting all superuser rights? Sounds like a
>> feature request, not a bug, to me :-)
>
> AFAICT, the complaint really boils down to there not being any support
> for COPY-from-client in the JDBC driver.

Bernard was also objecting to the overhead of pushing the data down a TCP
pipe when it's already available locally, I think.. I didn't find any
real difference there when I compared the two methods, though.

-O
Re: [GENERAL] [BUGS] BUG #1830: Non-super-user must be able to copy
Greg Stark wrote:
> Oliver Jowett <[EMAIL PROTECTED]> writes:
>> Bernard was also objecting to the overhead of pushing the data down a
>> TCP pipe when it's already available locally, I think.. I didn't find
>> any real difference there when I compared the two methods, though.
>
> What makes you think it's necessarily available locally?

Nothing in general -- that was just the case he had.

-O
Re: [BUGS] BUG #1830: Non-super-user must be able to copy from a
Bernard wrote:
> Certainly supporting COPY via STDIN within the java code seems
> preferable.

Why do you say that?

> That option does not exist because the Postgresql JDBC driver does not
> support it.

If you raise this on pgsql-jdbc (so far, I haven't seen anything on that
list from you at all..) we can look at putting support in. In fact Kris
just started a thread to that end -- perhaps you would like to comment on
it?

-O
Re: [BUGS] Serialization errors on single threaded request
Kevin Grittner wrote:
> I'm afraid I'm not familiar with a good way to capture the stream of
> communications with the database server. If you could point me in the
> right direction, I'll give it my best shot.

tcpdump will do the trick (something like 'tcpdump -n -w some.output.file
-s 1514 -i any tcp port 5432').

Or you can pass '&loglevel=2' as part of the JDBC connection URL to have
the JDBC driver generate a log of all the messages it sends/receives (in
less detail than a full network-level capture would give you, though).

> I did just have a thought, though -- is there any chance that the JDBC
> Connection.commit is returning once the command is written to the TCP
> buffer, and I'm getting hurt by some network latency issues

No, the JDBC driver waits for ReadyForQuery from the backend before
returning.

-O
Re: [BUGS] Bug?: Text after right paren in query is ignored
Alvaro Herrera wrote:
> On Fri, Sep 30, 2005 at 10:37:05PM -0500, Steve Peterson wrote:
>> The query is submitted using the JDBC driver, via the SQLExplorer
>> plugin for Eclipse.
>>
>> Is there a way for me to see the query as it was submitted to the
>> backend? If so I'll take a look and see what it looks like.
>
> Sure. Use the log_statement configuration parameter in postgresql.conf.

This won't log anything with recent JDBC drivers and pre-8.1 backends.

-O
Re: [BUGS] Bug?: Text after right paren in query is ignored
Steve Peterson wrote:
> Any known way to log at the JDBC level? When I get a little time I'm
> going to write a test that bypasses Eclipse and SQLExplorer.

Pass 'logLevel=2' as a URL parameter in the JDBC URL; that will log all
protocol-level messages sent and received, including the query text.

-O
Re: [BUGS] BUG #1988: keygen not implemented
Mike Clements wrote:
> Insert a row into the table using:
>
>   Connection.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS);
>
> The driver throws an exception saying this method is not yet
> implemented.

This is an optional part of the JDBC spec, and the driver doesn't claim to
support it in the metadata it provides
(DatabaseMetaData.supportsGetGeneratedKeys() returns false).

What it should do is create the prepared statement so that when you
execute it, the returned ResultSet has the generated primary key.
Unfortunately this requires functionality in the backend that does not yet
exist (support for "INSERT ... RETURNING ...", or similar).
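A common workaround in the meantime is to read the key back from the
sequence yourself; an untested sketch with made-up table/sequence names:

import java.sql.*;

public class GeneratedKey {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        Connection c = DriverManager.getConnection(args[0]);
        Statement s = c.createStatement();
        s.executeUpdate("INSERT INTO mytable (name) VALUES ('example')");
        // currval() is session-local, so this is safe even with
        // concurrent inserts from other sessions.
        ResultSet rs = s.executeQuery("SELECT currval('mytable_id_seq')");
        rs.next();
        long generatedId = rs.getLong(1);
        System.err.println("generated key: " + generatedId);
        rs.close();
        s.close();
        c.close();
    }
}

-O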
Re: [JDBC] [BUGS] JDBC: 2 bugs: Getting a smallint array actually gets an integer array and return type of a boolean array is bit.
Saneesh Apte wrote:
> I think I found two bugs (and I hope I am not wasting everyone's time).
> One is minor: the base type of a boolean[] is java.sql.Types.BIT instead
> of java.sql.Types.BOOLEAN. At the very least shouldn't these be aliases
> for the same type?

Types.BOOLEAN does not exist before JDBC3. Client code can use either, but
we have to pick one or the other when returning a value, so we return BIT.

> And secondly, the returned type from a smallint[] is an Integer[]
> instead of a Short[].

smallint is a Types.SMALLINT, which is mapped to java.lang.Integer by the
JDBC spec. See appendix B of the spec:

  Note – The JDBC 1.0 specification defined the Java object mapping for
  the SMALLINT and TINYINT JDBC types to be Integer. The Java language did
  not include the Byte and Short data types when the JDBC 1.0
  specification was finalized. The mapping of SMALLINT and TINYINT to
  Integer is maintained to preserve backwards compatibility.
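To make the mapping concrete, an untested sketch:

import java.sql.*;

public class SmallintArray {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        Connection c = DriverManager.getConnection(args[0]);
        Statement s = c.createStatement();
        ResultSet rs = s.executeQuery("SELECT ARRAY[1,2,3]::smallint[]");
        rs.next();
        Array a = rs.getArray(1);
        // Per the SMALLINT -> Integer mapping, this is Integer[], not Short[].
        Integer[] values = (Integer[]) a.getArray();
        System.err.println("first element: " + values[0]);
        rs.close();
        s.close();
        c.close();
    }
}

-O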
Re: [BUGS] [JDBC] Postgres Server Jdbc driver error
2011/8/4 Silvio Brandani :
> On the Tomcat side we get the following message:
>
> Caused by: java.io.IOException: Tried to send an out-of-range integer as a 2-byte value: 34027
>     at org.postgresql.core.PGStream.SendInteger2(PGStream.java:201)
>     at org.postgresql.core.v3.QueryExecutorImpl.sendParse(QueryExecutorImpl.java:1236)
>     at org.postgresql.core.v3.QueryExecutorImpl.sendOneQuery(QueryExecutorImpl.java:1508)
>     at org.postgresql.core.v3.QueryExecutorImpl.sendQuery(QueryExecutorImpl.java:1097)
>     at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)

What is the query that triggers this exception? (Not the one logged on the
server side; it is the next query that's the problem.)

Taking the exception at face value implies that your query has 34,027
parameters, which seems somewhat unlikely (!!) unless you have a badly
garbled query string.
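For what it's worth, one (hypothetical) way to end up with that many
parameters is a generated IN list; a sketch that would reproduce the error
above (the table name is made up):

import java.sql.*;

public class TooManyParams {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        Connection c = DriverManager.getConnection(args[0]);
        StringBuilder sql = new StringBuilder("SELECT * FROM t WHERE id IN (");
        for (int i = 0; i < 34027; i++) {
            if (i > 0) sql.append(',');
            sql.append('?');
        }
        sql.append(')');
        // The v3 protocol's Parse message carries the parameter count as a
        // signed 2-byte integer (max 32767), so executing this fails with
        // "Tried to send an out-of-range integer as a 2-byte value: 34027".
        PreparedStatement ps = c.prepareStatement(sql.toString());
        for (int i = 0; i < 34027; i++) ps.setInt(i + 1, i);
        ps.executeQuery();
        ps.close();
        c.close();
    }
}

Oliver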