Re: [GENERAL] default ordering of query result - are they always guarantee

2010-05-19 Thread A. Kretschmer
In response to Guillaume Lelarge :
> > This is the default behavior I want. However, I am not sure whether this is
> > always guaranteed? Or shall I explicitly write the query in the following
> > form?
> 
> And you're right. It's not guaranteed. The only guaranteed way is to use
> ORDER BY your_column.

ACK. One reason is the 'synchronized scan' feature, new in 8.3. It's a
big performance boost, but it can produce unsorted results if there is
no ORDER BY clause.

Regards, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] metadata on a table

2010-05-19 Thread Malm Paul
Ahhh.

boolean isAutoIncrement = rsMetaData.isAutoIncrement(1);

If the type is int4 and the column is auto-incremented, it is a serial.

Sorry!!
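In code, the mapping could be sketched like this (my own sketch, not part of the JDBC driver; the int4 -> serial and int8 -> bigserial pairs are the assumption this thread arrives at):

```java
public class SerialDetect {
    // Map a PostgreSQL JDBC type name plus the isAutoIncrement() flag to
    // the DDL type to use when recreating the column in the other database.
    static String ddlType(String typeName, boolean autoIncrement) {
        if (autoIncrement) {
            if (typeName.equals("int4")) return "serial";
            if (typeName.equals("int8")) return "bigserial";
        }
        return typeName;
    }
}
```

Applied to the example below: columnTypeName "int4" together with rsMetaData.isAutoIncrement(1) == true would map to "serial".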


From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Malm Paul
Sent: den 19 maj 2010 08:44
To: pgsql-general@postgresql.org
Subject: [GENERAL] metadata on a table

Hi,
I'm trying to read the metadata from a table to create the same sort of table
in another database, using Java.

This is how I do it today:

String question = "SELECT * FROM table";
PreparedStatement stmnt = dbConnection.prepareStatement(question);

ResultSet rs = stmnt.executeQuery();

ResultSetMetaData rsMetaData = rs.getMetaData();
int numberOfColumns = rsMetaData.getColumnCount();
System.out.println("resultSet MetaData column Count=" + numberOfColumns);
int precision = rsMetaData.getPrecision(1);
String columnTypeName = rsMetaData.getColumnTypeName(1);
String name = rsMetaData.getColumnName(1);
String classname = rsMetaData.getColumnClassName(1);

The first column is the primary key, with the name "fid";
its columnTypeName is int4.

I can see in the original table that fid has the type serial, which is an int4
(per pgAdmin).
But when I create the table in the other database I would like to create it as
serial, not just int4. How can I know that it is a serial?

Kind regards,
Paul


Re: [GENERAL] default ordering of query result - are they always guarantee

2010-05-19 Thread Scott Marlowe
On Wed, May 19, 2010 at 1:38 AM, A. Kretschmer
 wrote:
> In response to Guillaume Lelarge :
>> > This is the default behavior I want. However, I am not sure whether this
>> > is always guaranteed? Or shall I explicitly write the query in the following
>> > form?
>>
>> And you're right. It's not guaranteed. The only guaranteed way is to use
>> ORDER BY your_column.
>
> ACK. One reason is the 'synchronized scan' feature, new in 8.3. It's a
> big performance boost, but it can produce unsorted results if there is
> no ORDER BY clause.

Just for reference, I once had three separate Oracle DBAs saying my pg
server was broken because GROUP BY came out in random order (due to
hash aggregates).



Re: [GENERAL] TCL client interface

2010-05-19 Thread BRUSSER Michael
From: BRUSSER Michael 
Sent: Wednesday, May 19, 2010 10:06 AM
To: 'pgsql-general@postgresql.org'
Subject: TCL client interface

 

Did anyone successfully integrate the TCL client library from
http://pgfoundry.org/projects/pgtcl/  ?

Even more helpful would be if someone did this on Solaris x86, building
64-bit binaries.

 

Thanks,

Michael. 



This email and any attachments are intended solely for the use of the 
individual or entity to whom it is addressed and may be confidential and/or 
privileged.  If you are not one of the named recipients or have received this 
email in error, (i) you should not read, disclose, or copy it, (ii) please 
notify sender of your receipt by reply email and delete this email and all 
attachments, (iii) Dassault Systemes does not accept or assume any liability or 
responsibility for any use of or reliance on this email. For other languages, go
to http://www.3ds.com/terms/email-disclaimer.


[GENERAL] TCL client interface

2010-05-19 Thread BRUSSER Michael
Did anyone successfully integrate the TCL client library from
http://pgfoundry.org/projects/pgtcl/  ?

Even more helpful would be if someone did this on Solaris x86, building
64-bit binaries.

 

Thanks,

Michael. 





[GENERAL] how to add a postgresql dll in the GAC of windows 7 ?

2010-05-19 Thread lolveley

hi,

I would like to install PostgreSQL on Windows 7, but there is a DLL to
add to the GAC and I can't do it; I get an error. I tried two methods:


1/ gacutil, this one: C:\Program Files (x86)\Microsoft
SDKs\Windows\v7.0A\Bin\x64, but I get this error:

Failure adding assembly to the cache: Cet assembly a été construit par
un runtime plus récent que le runtime actuellement chargé et ne peut pas
être chargé.

(it says: this assembly was built by a more recent runtime than the one
currently loaded, and cannot be loaded)


2/ by copying the DLL into the directory c:\Windows\Assembly, from
Explorer in administrator mode.

I get no error message, but the DLL isn't added to the directory...

Can you help me?

olivier










[GENERAL] Help writing a query to predict auto analyze

2010-05-19 Thread Gordon Shannon

In an effort to fine-tune my table storage parameters so tables are analyzed
at the optimal time, I have written a query to show how soon my tables will
be auto-analyzed. But my results do not match what I see autovacuum doing;
i.e., there are tables that are millions of rows past the threshold that
haven't been analyzed for two weeks, and autovacuum does not want to
auto-analyze them, so I conclude that my query must be wrong.

The docs say that tables will be auto analyzed when (paraphrasing) the
number of tuples inserted or updated since the last analyze exceeds the
analyze base threshold plus the product of the analyze scale factor and the
number of tuples.

After reading some of the code in autovacuum.c, it appears the
formula used is something like this.  (Note that I'm not quite sure how
"number of tuples inserted or updated since the last analyze" translates in
code as "n_live_tup + n_dead_tup - [tuples at last analyze]", but I'll trust
the code is correct, and I need only understand how to get the values of the
variables.)

T = bt + (sf * rt)
X = lt + dt - at
will analyze when X > T

T: Threshold
X: test value
bt: base threshold
sf: scale factor
rt: rel tuples
lt: live tuples
dt: dead tuples
at: last analysis tuples
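Putting the formula into code (my own sketch of this reading of autovacuum.c, not taken from it; the parameter names mirror the legend above, and the defaults in the test mirror the stock autovacuum_analyze_threshold of 50 and autovacuum_analyze_scale_factor of 0.1):

```java
public class AnalyzeCheck {
    // T = bt + (sf * rt); X = lt + dt - at; analyze when X > T.
    static boolean willAnalyze(long lt, long dt, long at, double rt,
                               double bt, double sf) {
        double threshold = bt + sf * rt;    // T: analyze threshold
        double changed = lt + dt - at;      // X: tuples changed since last analyze
        return changed > threshold;
    }
}
```

For example, with reltuples = 1000 and the stock settings, T = 50 + 0.1 * 1000 = 150, so the table is analyzed once more than 150 tuples have changed since the last analyze.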

The next step was to get these values from the system tables. Here's what I
used:

bt: base threshold = current_setting('autovacuum_analyze_threshold') or
table override setting
sf: scale factor = current_setting('autovacuum_analyze_scale_factor') or
table override setting
rt: rel tuples = pg_class.reltuples
lt: live tuples = pg_stat_user_tables.n_live_tup
dt: dead tuples = pg_stat_user_tables.n_dead_tup
at: last analysis tuples = pg_class.reltuples 

I'm the least confident about the last one -- tuples as of last analyze.
Can anyone confirm or correct these?

Version:  PostgreSQL 8.4.3 on x86_64-redhat-linux-gnu, compiled by GCC gcc
(GCC) 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit


Thanks!
--gordon

-- 
View this message in context: 
http://old.nabble.com/Help-writing-a-query-to-predict-auto-analyze-tp28610247p28610247.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Help writing a query to predict auto analyze

2010-05-19 Thread Alvaro Herrera
Excerpts from Gordon Shannon's message of mié may 19 11:49:45 -0400 2010:

> at: last analysis tuples = pg_class.reltuples 
> 
> I'm the least confident about the last one -- tuples as of last analyze.
> Can anyone confirm or correct these?

In 8.4 it's the number of dead + live tuples as of the previous
analyze.  See pgstat_recv_analyze in src/backend/postmaster/pgstat.c.
(In 9.0 it has been reworked a bit.)
-- 



[GENERAL] CIDR data type query help

2010-05-19 Thread Scot Kreienkamp
Hi everyone,

I have a column of type CIDR in a table that I am querying; it contains
the values 10/8, 10.1/16, 10.1.28/24, and 10.1.28.95.  I am trying to
return only the one record that's most specific for the IP address I'm
currently on, as this is being done in a CGI script.  If there's no
exact match for the IP of the originating workstation, then it should
return the /24 if it's there, then the /16 if it's there, etc.
I have never worked with the CIDR type, and I'm a novice when it comes to
the SQL query language, so I have no idea how to approach this.

Thanks for any help.

Scot Kreienkamp
skre...@la-z-boy.com




Re: [GENERAL] CIDR data type query help

2010-05-19 Thread Steve Atkins

On May 19, 2010, at 10:32 AM, Scot Kreienkamp wrote:

> Hi everyone,
> 
> I have a column of type CIDR in a table that I am querying that contains
> the values of 10/8, 10.1/16,10.1.28/24, and 10.1.28.95.  I am trying to
> return only the one record that's most specific compared to the IP
> address I'm currently on as this is being done in a CGI script.  If
> there's no specific match for the IP of the originating workstation then
> it should return the /24 if it's there, then the /16 if it's there, etc.
> I have never worked with the CIDR type, and a novice when it comes to
> SQL query language, so I have no idea how to approach this.

Something like this (untested):

select foo from table where foo >>= '10.1.28.14' order by masklen(foo) desc 
limit 1;

You likely want to look at http://pgfoundry.org/projects/ip4r/ as an
alternative, if the table is likely to grow beyond a few dozen rows.
It's usefully indexable for "contains" queries, unlike the native cidr type.

Cheers,
  Steve






[GENERAL] object tracking

2010-05-19 Thread Little, Douglas
Hi,

I have a fairly large data warehouse in Greenplum, and am having trouble 
tracking object changes.
I need to

1. Know when an object was changed, and by whom.

2. Have some logging about when an object was dropped - especially drop
cascade.


Currently I'm having a problem with a set of disappearing views.  We're still
in a pre-prod environment, but...

Any suggestions?

I'd like

1. PG to timestamp the catalog tables.

2. Permit triggers on catalog objects.


Doug Little

Sr. Data Warehouse Architect | Enterprise Data Management | Orbitz Worldwide
500 W. Madison, Suite 1000  Chicago IL 60661| Office 312.260.2588 | Fax 
312.894.5164 | Cell 847-997-5741
douglas.lit...@orbitz.com
orbitz.com | 
ebookers.com | 
hotelclub.com | 
cheaptickets.com | 
ratestogo.com | 
asiahotels.com


Re: [GENERAL] Connection lost

2010-05-19 Thread Hernan Danielan
Yes, the most interesting thing is that the log says

unexpected EOF on client connection or  could not send data to client:
Broken pipe

But this shouldn't be happening according to the code I sent, because I do
NOT close the socket. Moreover, the transaction only takes a few seconds.

I read about a person who had this problem and fixed it by changing the cache
size, but I wasn't successful.

Any ideas?

On Tue, May 18, 2010 at 11:34 PM, Joshua Tolley  wrote:

> On Tue, May 18, 2010 at 1:18 PM, Hernan Danielan
>  wrote:
> > Hello! I am using postgres 8.4. I am trying to save to my DB a Large
> > Binary Object, in localhost, let's say 1.4MB. I read that LargeObjectAPI
> > should be used.
> > I have a problem that sometimes I can store the file and some others I
> > get an exception of
> >>> org.postgresql.util.PSQLException: An I/O error occured while sending to
> >>> the backend.
> >>> java.net.SocketException: Socket closed
>
> Do the PostgreSQL logs include any useful information?
>
> --
> Joshua Tolley  /  eggyknap
> End Point Corporation
>


Re: [GENERAL] object tracking

2010-05-19 Thread Joshua D. Drake
On Wed, 2010-05-19 at 13:41 -0500, Little, Douglas wrote:
> Hi,
> 
>  
> 
> I have a fairly large data warehouse in Greenplum, and am having
> trouble tracking object changes.  
> 

Depends on Greenplum, I would assume. I don't know how compatible they
are with .Org anymore. You might look at table_log.

Joshua D. Drake


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering





Re: [GENERAL] object tracking

2010-05-19 Thread Steve Atkins

On May 19, 2010, at 11:41 AM, Little, Douglas wrote:

> Hi,
>  
> I have a fairly large data warehouse in Greenplum, and am having trouble 
> tracking object changes. 
> I need to
> 1.Know when an object was changed, by who.
> 2.   Have some logging about when an object was dropped – especially drop 
> cascade.
>  
>  
> Currently I’m having a problem with a set of disappearing views.   We’re 
> still in pre-prod environment, but
>  
> Any suggestions?

Take it up with your vendor. That's what you're paying them for, and they're
the only ones who are likely to understand how their product works.

If you were using PostgreSQL, I'd suggest looking at the log_statement and
log_line_prefix options, to see if tracking DDL that way would be adequate
for what you need.

Cheers,
  Steve




Re: [GENERAL] Help writing a query to predict auto analyze

2010-05-19 Thread Gordon Shannon



alvherre wrote:
> 
> Excerpts from Gordon Shannon's message of mié may 19 11:49:45 -0400 2010:
> 
>> at: last analysis tuples = pg_class.reltuples 
>> 
>> I'm the least confident about the last one -- tuples as of last analyze.
>> Can anyone confirm or correct these?
> 
> In 8.4 it's the number of dead + live tuples as of the previous
> analyze.  See pgstat_recv_analyze in src/backend/postmaster/pgstat.c.
> (In 9.0 it has been reworked a bit.)
> 
> 

I'm sorry, I'm not following you.  Are you saying that "last analysis
tuples" is "number of dead + live tuples from the previous analyze"?  If so,
that would really confuse me, because X would always be 0:

X = lt + dt - at
X = pg_stat_user_tables.n_live_tup + n_dead_tup - (n_live_tup + n_dead_tup)
X = 0

or is there something else wrong with the formula?

--gordon





Re: [GENERAL] Help writing a query to predict auto analyze

2010-05-19 Thread Alvaro Herrera
Excerpts from Gordon Shannon's message of mié may 19 18:02:51 -0400 2010:

> I'm sorry, I'm not following you.  Are you saying that "last analysis
> tuples" is "number of dead + live tuples from the previous anlyze"?  If so,
> that would really confuse me because X would always be 0:
> 
> X = lt + dt - at
> X = pg_stat_user_tables.n_live_tup + n_dead_tup - (n_live_tup + n_dead_tup)
> X = 0
> 
> or is there something else wrong with the formula?

Hmm?  n_live_tup and n_dead_tup correspond to the current numbers,
whereas "last analysis tuples" holds the values from back when the
previous analyze ran.  These counters keep moving with updates, deletes,
and inserts; they are not static.

If there are no updates/inserts/deletes then indeed the difference is 0,
which is why we choose not to analyze.
-- 



[GENERAL] How to insert wchar_t type string to PostgreSQL

2010-05-19 Thread Yan Cheng CHEOK
Hello all,

I am programming in a Windows environment. Previously, we were using MBCS
(http://msdn.microsoft.com/en-us/library/5z097dxa%28VS.71%29.aspx) in order to
support languages other than English.

The following function can still be used, as long as we set the
database to the correct encoding. For example, if we want to support mainland
China, we will use "GBK" as the encoding.

PGresult *PQexec(PGconn *conn, const char *query);

But once we compile using Unicode, we are no longer using char *-typed strings.
Instead, we are using wchar_t *-typed strings.

But I cannot find any function like the following in the libpq library.

PGresult *PQexec(PGconn *conn, const wchar_t *query);

May I know how I can store a wchar_t *-typed string into PostgreSQL, using the
C library?

Thanks and Regards
Yan Cheng CHEOK


  




Re: [GENERAL] Connection lost

2010-05-19 Thread Joshua Tolley
On Wed, May 19, 2010 at 1:46 PM, Hernan Danielan
 wrote:
> Yes, the most interesting thing is that the log says
> unexpected EOF on client connection or  could not send data to client:
> Broken pipe

That log mostly just says something disconnected uncleanly, but
doesn't say PostgreSQL closed the connection on purpose. Any JDBC
driver people reading this, with ideas?

--
Joshua Tolley  /  eggyknap
End Point Corporation



Re: [GENERAL] Connection lost

2010-05-19 Thread Scott Marlowe
On Wed, May 19, 2010 at 8:57 PM, Joshua Tolley  wrote:
> On Wed, May 19, 2010 at 1:46 PM, Hernan Danielan
>  wrote:
>> Yes, the most interesting thing is that the log says
>> unexpected EOF on client connection or  could not send data to client:
>> Broken pipe
>
> That log mostly just says something disconnected uncleanly, but
> doesn't say PostgreSQL closed the connection on purpose. Any JDBC
> driver people reading this, with ideas?

This sounds like a networking problem to me.  Are there any useful
entries in the system logs of either machine?  Can you routinely scp /
rsync large files without network errors between these two boxes?  How
are they connected?



Re: [GENERAL] Help writing a query to predict auto analyze

2010-05-19 Thread Gordon Shannon



alvherre wrote:
> 
> n_live_tup and n_dead_tup correspond to the current numbers,
> whereas "last analysis tuples" holds the values from back when the
> previous analyze ran.  These counters keep moving with updates, deletes,
> and inserts; they are not static.
> 
> 

OK.  Do you know how I can get the values from back when the previous
analyze ran?




Re: [GENERAL] Help writing a query to predict auto analyze

2010-05-19 Thread Alvaro Herrera
Excerpts from Gordon Shannon's message of mié may 19 23:32:07 -0400 2010:
> 
> alvherre wrote:
> > 
> > n_live_tup and n_dead_tup corresponds to the current numbers,
> > whereas "last analysis tuples" are the values from back when the
> > previous analyze ran.  These counters keep moving per updates, deletes,
> > inserts, they are not static.
> 
> OK.  Do you know how can I get the values from back when the previous
> analyze ran?

I don't think we expose those values to SQL.  I guess you could create a
C function to get it, modelled after pg_stat_get_live_tuples and friends
(see src/backend/utils/adt/pgstatfuncs.c).
-- 



[GENERAL] How to lose transaction history (xmin values, WAL, etc.)?

2010-05-19 Thread Richard Walker

I have an application in which I want it
_not_ to be possible to work out which
rows of a table are being/have been updated.

I can think of these different compromise scenarios:

(a) Hacker can connect to the database to
execute queries.
The severity depends on which database user
the hacker has cracked.

(i) A database user who has no select permission on
the table, but only to some stored procedures defined
using the "security definer" clause.

(ii) A database user who has select permission on the
table.

(b) Hacker has root access and can view the
table file as stored in the file system.

(c) Hacker has access to tape backups.

My analysis so far:

(a) (i) Is it still possible to work out which rows
have been touched?

(a) (ii) It seems a breach is possible via the xmin values.
In that case, what about doing updates inside a transaction
that does a trivial update of all rows, e.g.:
  begin transaction;
  update mytable ; -- change one row
  update mytable set id=id; -- change all rows
  commit;
So now all rows have the same xmin values.
Does this work?  Performance is not so good, is it?
Is there a better way?

(b) Now I really need some help.  What to do
about the file that stores the table (which will
contain old versions of the data), and the
WAL file?

One approach would be to make a fresh copy
of the table at regular intervals:
  create table mytablecopy (like mytable ...);
  insert into mytablecopy select * from mytable;
  drop table mytable;
  alter table mytablecopy rename to mytable;
Is there a better way?

In any case, that doesn't solve the "problem" of the WAL.

(c) Use pg_dump only?  Never do a backup of the
raw files?  How to do online backups?

--
Richard Walker
Software Improvements Pty Ltd
Phone:  +61 2 6273 2055
Fax:  +61 2 6273 2082
