You will find more info attached.
What about the invalid UNICODE data? How can I fix it? In the attachment
there is the line that gives the error...
Thanks.
On Tue, 13 Jan 2004, Tom Lane wrote:
> Andrei Ivanov <[EMAIL PROTECTED]> writes:
> > I have a dump created with pg_dump ver 7.4.1 from
Hello Bob!
Everybody knows that PostgreSQL is better than MySQL and supports more
features, etc. But remember - the main use of database systems now is
the web, which is being built mainly by students who do not even know what
database engines are made for. At least here (my second job is
Instructor in
Hi Keith and others,
Personally I am rather humble regarding my ability to write unassailable
programs from a security perspective, so I tend to use individual database
accounts for individual users. I know that under certain environments this
may not scale well and connection pooling may be requ
I was thinking about the nested transaction problem, and I came across an
interesting insight. While it is true you could use dblink to perform db
operations outside the transaction (which could be useful for logging, etc.)
what is lacking is a way to roll back the internal transactions when the
p
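A minimal sketch of the dblink trick being described, assuming the contrib/dblink of that era already provides dblink_connect()/dblink_exec(); the connection name and the audit_log table are hypothetical:
-- open a second connection; statements sent through it commit
-- independently of the outer transaction
SELECT dblink_connect('logcon', 'dbname=mydb');
BEGIN;
SELECT dblink_exec('logcon',
    'INSERT INTO audit_log VALUES (now(), ''attempting step 1'')');
-- ... the real work fails here ...
ROLLBACK;                          -- the audit_log row survives the rollback
SELECT dblink_disconnect('logcon');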
> I do #1- most connections are not persistent though I have done those
> before as well. Security-wise I've been reviewing the pros and cons
> of this and so far I really can't make much of an argument for #2.
> Just the opposite in fact. The one thing I will be doing though,
> for more security,
Andrei Ivanov <[EMAIL PROTECTED]> writes:
> You will find more info attached.
I could not reproduce the "out of memory" failure using this information
(I got "invalid byte sequence" instead, on both of those COPY commands).
What exactly is the declaration of the tables being copied into?
> What a
On Wed, Jan 14, 2004 at 02:45:38PM +0700, Chris Travers wrote:
> I was thinking about the nested transaction problem, and I came across an
> interesting insight. While it is true you could use dblink to perform db
> operations outside the transaction (which could be useful for logging, etc.)
> wha
> >Remember that /sbin/ifconfig output usually includes the MAC address too. Not
> >that MAC addresses are 100% unique, but that should increase the
> >uniqueness.
>
> How do you increase uniqueness? Either a value is unique or it isn't -
It increases the *likelihood* of uniqueness, IOW the expecte
> > How do you increase uniqueness? Either a value is unique or it isn't -
> It increases the *likelihood* of uniqueness, IOW the expected
> collision frequency.
..., IOW it *decreases* the expected collision frequency.
Karsten
--
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346
-
On 14/01/2004 00:17 Nigel J. Andrews wrote:
On Tue, 13 Jan 2004, Thapliyal, Deepak wrote:
> Thx Nigel..
>
> Also can you kindly advice how to turn autocommit to off
Probably:
SET AUTOCOMMIT TO OFF;
although I'm not sure that is still there in 7.4 and there was a lot of
discussion on its 'broken
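If the server-side setting really is gone in 7.4, the usual client-side equivalents are psql's own variable or an explicit transaction block - a sketch, with a hypothetical table:
-- inside psql (client-side, independent of server version):
\set AUTOCOMMIT off
-- or simply wrap the work explicitly:
BEGIN;
UPDATE accounts SET balance = balance - 10 WHERE id = 42;
COMMIT;   -- or ROLLBACK;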
From: "Martijn van Oosterhout" <[EMAIL PROTECTED]>
> Well, actually, the problem appears to be that people want to be able to
> roll back each individual statement without killing the parent transaction,
> and they want to make this the default behaviour. This takes it out of the
> "never used" cat
I've set the encoding to LATIN2 and everything worked fine... no encoding
errors and no memory allocation failures...
The tables where the memory failures occurred are:
CREATE TABLE press_releases (
id serial NOT NULL,
title character varying(255) NOT NULL,
body text NOT NULL,
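For reference, the fix described above can be made explicit when loading such a dump - a sketch, with LATIN2 taken from the poster's own report and a hypothetical dump file name:
-- declare how the incoming bytes are encoded before the COPY commands run;
-- the server then converts (or cleanly rejects) them against the database encoding
SET client_encoding = 'LATIN2';
\i dump.sql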
Hi,
I use PostgreSQL 7.4 for storing a huge amount of data, for example 7
million rows. But when I run the query "select count(*) from table;", it
returns after about 120 seconds. Is this normal for such a huge
table? Are there any methods to speed up the querying time? The huge
table has
Hi All,
I'd like to "compress" the following two filter expressions into one -
assuming that it makes sense regarding query execution performance.
... where (adate LIKE "2004.01.10 __:30" or adate LIKE "2004.01.10 __:15")
...
into something like this:
... where adate LIKE "2004.01.10 __:(30/15)"
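LIKE itself has no alternation, but the filter above can be collapsed into one expression with SIMILAR TO or a POSIX regular expression - a sketch, assuming adate is a text column holding 'YYYY.MM.DD HH:MM' values:
... where adate SIMILAR TO '2004.01.10 __:(30|15)'
-- or with the POSIX regex operator (bracket classes avoid escaping the dots):
... where adate ~ '^2004[.]01[.]10 ..:(30|15)$'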
On Wednesday 14 January 2004 11:11, Sezai YILMAZ wrote:
> Hi,
>
> I use PostgreSQL 7.4 for storing a huge amount of data, for example 7
> million rows. But when I run the query "select count(*) from table;", it
> returns after about 120 seconds. Is this normal for such a huge
> table? Are there
On Wednesday 14 January 2004 09:45, Chris Travers wrote:
> From: "Martijn van Oosterhout" <[EMAIL PROTECTED]>
>
> > Well, actually, the problem appears to be that people want to be able to
> > roll back each individual statement without killing the parent
> > transaction,
> > and they want to mak
On Wed, Jan 14, 2004 at 04:45:46PM +0700, Chris Travers wrote:
> Ok. Now I am confused. I thought that a nested transaction would involve
> two features:
> 1: The ability to incrementally commit/rollback changes, i.e. at certain
> points in the transaction have a sub-commit.
I think they're ref
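For readers wondering what that "sub-commit" behaviour would look like, the sketch below uses the SAVEPOINT syntax that shipped only later (it is not available in the 7.4 being discussed); the orders table is hypothetical:
BEGIN;
INSERT INTO orders VALUES (1, 'widget');
SAVEPOINT s1;                            -- the incremental "sub-commit" point
INSERT INTO orders VALUES (1, 'widget'); -- fails with a duplicate key error
ROLLBACK TO SAVEPOINT s1;                -- undoes only the failed statement
INSERT INTO orders VALUES (2, 'gadget');
COMMIT;                                  -- rows 1 and 2 commit together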
Don't be afraid to read the manual:
http://www.postgresql.org/docs/current/static/functions-matching.html#FUNCTIONS-SQL99-REGEXP
http://www.postgresql.org/docs/current/static/functions-matching.html#FUNCTIONS-POSIX-REGEXP
Terry Fielder
Manager Software Development and Deployment
Great Gulf Hom
Martijn van Oosterhout said:
> In any case, I don't quite understand the intended semantics of "duplicate
> key shouldn't rollback transaction". If I call a function that inserts a
> duplicate key, should the other effects of the function be rolled back or
> not? Anyone know how MSSQL deals with th
>He is talking about the MAC which in itself is supposed to be
>globally unique. Nothing to do with IP numbers.
Supposed is the word...
Don't count on MACs being unique. Under some OS's you can set them to what you want.
Bob
==
When logic and proportion
Have f
Richard Huxton wrote:
On Wednesday 14 January 2004 11:11, Sezai YILMAZ wrote:
Hi,
I use PostgreSQL 7.4 for storing a huge amount of data, for example 7
million rows. But when I run the query "select count(*) from table;", it
returns after about 120 seconds. Is this normal for such a huge
Richard Huxton wrote:
PG uses MVCC to manage concurrency. A downside of this is that to verify the
exact number of rows in a table you have to visit them all.
There's plenty on this in the archives, and probably the FAQ too.
What are you using the count() for?
select logid, agentid, logbody
On Wednesday 14 January 2004 17:57, Sezai YILMAZ wrote:
> Richard Huxton wrote:
> >What are you using the count() for?
>
> I use count() for some statistics, just to show how many records have been
> collected so far.
Rather than doing count(*), you should either cache the count in application
memory
or
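One common shape of the "cache the count" advice is a one-row counter table maintained by triggers, so reading the count is O(1) - a sketch, assuming plpgsql is installed; the counter table and trigger names are made up:
CREATE TABLE log_rowcount (n bigint NOT NULL);
INSERT INTO log_rowcount VALUES ((SELECT count(*) FROM log));  -- seed once

CREATE FUNCTION log_count_trig() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        UPDATE log_rowcount SET n = n + 1;
    ELSIF TG_OP = ''DELETE'' THEN
        UPDATE log_rowcount SET n = n - 1;
    END IF;
    RETURN NULL;
END;' LANGUAGE plpgsql;

CREATE TRIGGER log_count AFTER INSERT OR DELETE ON log
    FOR EACH ROW EXECUTE PROCEDURE log_count_trig();

-- later, instead of the slow count(*):
SELECT n FROM log_rowcount;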
Have you run 'vacuum analyze log;'? Also I believe
that in Oracle count(1) used to be quicker than count(*).
Matthew
----- Original Message -----
From: Sezai YILMAZ
To: Richard Huxton
Cc: [EMAIL PROTECTED]
Sent: Wednesday, January 14, 2004 12:39 PM
Subject: Re: [GENERA
On Wednesday 14 January 2004 18:22, Matthew Lunnon wrote:
> select logid, agentid, logbody from log where logid=300;
>
> this query also returns after about 120 seconds. The table log has about
> 7 million records, and logid is the primary key of the log table. What about
> that? Why is it
Hi Terry,
thanks for your answer. I've already read this page but I couldn't find out
whether I can do such things or not, and if I can, then how. So if you can suggest
some additional manual pages about regular expressions that can be used in
LIKE-style filters, please write me.
I don't know where to fi
On Wednesday 14 January 2004 12:39, Sezai YILMAZ wrote:
>
> select logid, agentid, logbody from log where logid=300;
At a guess, because logid is bigint, whereas 300 is taken to be integer.
Try ... where logid = 300::bigint;
This is in the FAQ too I think, and is certainly in the archi
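The same effect can be had by quoting the literal, which leaves its type undecided until it is compared with the bigint column - a sketch against the log table described above:
SELECT logid, agentid, logbody FROM log WHERE logid = 300::bigint;
-- or, equivalently for the planner:
SELECT logid, agentid, logbody FROM log WHERE logid = '300';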
On Wednesday 14 January 2004 12:27, Sezai YILMAZ wrote:
> Richard Huxton wrote:
> >There's plenty on this in the archives, and probably the FAQ too.
> >
> >What are you using the count() for?
>
> I use count() for some statistics, just to show how many records have been
> collected so far.
If you want an ac
> Jeff Bowden <[EMAIL PROTECTED]> writes:
> > That makes sense to me. I wonder if sqlite suffers for this problem
> > (e.g. app crashing and corrupting the database).
>
> Likely. I can tell you that Ann Harrison once told me she made a decent
> amount of money as a consultant fixing broken Interb
I have to agree. In my experience the average database user out there does not
need the entire functionality of something like Oracle. It is because of this
that I have used MySQL for many years now.
In this light (please accept my confession as to being a more prolific MySQL
user), I am becoming
"Rick Gigger" <[EMAIL PROTECTED]> writes:
> Anyway since postgres uses WAL files to verify the integrity of the database
> couldn't it more or less make the same guarantees in an embedded version?
> As long as the app uses the db libs unmodified and doesn't mess with the
> files it creates how do
Matt Davies wrote:
> 2. Documentation: In delving deeper into the PostgreSQL database I
> have tried to find whatever I can to learn more. I have found an
> O'Reilly book out there, but the TOC reads almost exactly the same as
> the online documentation. I ask myself - have they lifted the
> documentat
Chris Ochs wrote:
I am pretty sure I know this already, but every time you run a PL/Perl
function it is just like running a Perl script in that the code has to be loaded and
compiled each time, right? My application runs under mod_perl, so I'm thinking
that speed is not something I would gain by putting any o
1. Replication: Like it or not, most people regard their data and access to their
data as 'invaluable'. If not, why are they storing it? Having a secondary
server (read slaves) on which you can perform backups, load-balance RO traffic,
and eventually use as a failover has been one of the great sell
On Wed, 14 Jan 2004, Joshua D. Drake wrote:
> Not to mention that PostgreSQL.Org has some of the most complete
> documentation
> of any software out there.
Yes, I don't understand why people seem to keep complaining about
Postgres' documentation - it is by far the best reference documentation
On Jan 14, 2004, at 10:18, Matt Davies wrote:
1. Replication: Like it or not, most people regard their data and
access to their data as 'invaluable'. If not, why are they storing it?
Having a secondary
I'm not the only person who has used this same argument against MySQL
installs. There is a h
Quoting "Joshua D. Drake" <[EMAIL PROTECTED]>:
> >
> Replication exists in multiple manners for PostgreSQL. There is Mammoth
> replicator (our product),
> ErServer (pgsql.com's product), dbmirror, Rserv, and pgCluster.
What I meant was integrated replication. When adding more layers to the data
Mike Nolan <[EMAIL PROTECTED]> writes:
> I'm restoring a full dump from a 7.3.3 system to a new 7.4.1 system and am
> getting a 'permission denied' error when it tries to create the user
> schemas.
Yeah, this is an error in the 7.3 pg_dump logic for schemas. Use the
pg_dump from the 7.4 installat
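A sketch of that advice, with hypothetical paths and host names - the newer pg_dump can dump the old server over the network and the output is then loaded into the new cluster:
# dump the old 7.3 database with the 7.4 pg_dump binary
/usr/local/pgsql-7.4/bin/pg_dump -h oldhost -U postgres mydb > mydb.sql
# restore into the new 7.4 server
psql -h newhost -U postgres -d mydb -f mydb.sql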
Francisco Reyes <[EMAIL PROTECTED]> writes:
> How does one now create a database and make sure that everything inside is
> owned by a particular user?
System objects are not, and should not be, owned by any nonprivileged
user. I fail to see what your complaint is.
> The line from the sql file th
Quoting Ben <[EMAIL PROTECTED]>:
> Yes, I don't understand why people seem to keep complaining about
> Postgres' documentation - it is by far the best reference documentation
> I've ever come across.
>
> Maybe it's that there isn't much tutorial content in the documentation -
> for somebody try
On Wed, 14 Jan 2004, Tom Lane wrote:
> > The line from the sql file that is failing, a dump, is
> > SELECT pg_catalog.setval('invoicesdetails_invoicesdetailsid_seq', 18,
> > true);
>
> You have not given us any context to interpret this report in, but
> I do not think it has anything to do with th
A good clean replication system is not available for BSD platforms as far as
I can tell, even though that is the OS of choice for many PG installations.
I am playing around with Erserver, but the download has to be updated from
cvs or it won't even compile (corrupted file in the distribution). It
Francisco Reyes <[EMAIL PROTECTED]> writes:
> Should restores be always done by the super user?
If every object in the dump is owned by the same nonprivileged user,
it will work to restore as that user. You can also use --no-owner
to get the same effect (everything winds up owned by the user runn
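A sketch of the two options mentioned, with hypothetical database, user, and file names:
# restore as the nonprivileged user that owns every object in the dump
psql -U appuser -d newdb -f dump.sql
# or strip ownership, so everything ends up owned by whoever runs the restore
pg_restore --no-owner -d newdb dump.custom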
Message quoted by Ben <[EMAIL PROTECTED]>:
> On Wed, 14 Jan 2004, Joshua D. Drake wrote:
>
> > Not to mention that PostgreSQL.Org has some of the most complete
> > documentation
> > of any software out there.
>
> Yes, I don't understand why people seem to keep complaining about
> Postgres' do
We are in the process of porting a logistics application to Postgres from a
Progress Software database. We have about 75 employees and will be putting
Postgres to work in a real time business situation where there are lots of
transactions. Once we are successful in this endeavor, how can we get
What I meant was integrated replication. When adding more layers to the database
there is yet one more possible mechanism for failure at some point. I don't
know about you, but Murphy always bites me in the butt. In addition, I found
your product VERY interesting, but it kinda puts me off that it
Hello, I'm having trouble finding certain information; I've searched over the
web and through the documentation but I haven't had any luck.
I need to know if there is some limit on the number of records that the
database can manage, and if so, what that maximum is.
Thanks in advance, Ruby.
> On Wed, 14 Jan 2004, Joshua D. Drake wrote:
>
> > Not to mention that PostgreSQL.Org has some of the most complete
> > documentation
> > of any software out there.
>
> Yes, I don't understand why people seem to keep complaining about
> Postgres' documentation - it is by far the best reference doc
"Ruby Deepdelver" <[EMAIL PROTECTED]> writes:
> Hello, I'm having trouble finding certain information; I've searched
> over the web and through the documentation but I haven't had any luck.
> I need to know if there is some limit on the number of records that
> the database can manage, and if so, how
Terry Lee Tucker wrote:
We are in the process of porting a logistics application to Postgres from a
Progress Software database. We have about 75 employees and will be putting
Postgres to work in a real time business situation where there are lots of
transactions. Once we are successful in this
I used to have that complaint until I got more acquainted with the docs.
When I used to use MySQL, I found that if I used the search feature on their docs
I could find exactly what I was looking for almost immediately. When I use
the Postgres doc search feature I don't get the same experience. It is
Message quoted by Matt Davies <[EMAIL PROTECTED]>:
> Quoting "Joshua D. Drake" <[EMAIL PROTECTED]>:
>
> > >
> > MySQL has what 19 million in the bank?
>
> I only point out what the userbase is feeling. I have never been attacked as
> an idiot when using MySQL - I have always had helpful resp
> >I used to have that complaint until I got more acquainted with the docs.
> >When I used to use MySQL, I found that if I used the search feature on their
> >docs I could find exactly what I was looking for almost immediately. When I
> >use the Postgres doc search feature I don't get the same experienc
Any comments on the multi-versioning problem:
As far as I understand from the PG documentation, *CURRENTLY* VACUUM must be run
regularly, otherwise:
-Q. the database will keep growing in proportion to the amount of DML run against it, won't it?
-Q. the transaction ID may wrap - no doubt here.
-Q. Good news that VACUUM nowday
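For the "must be run regularly" part, the usual arrangement is a cron job driving the standard vacuumdb client - a sketch, assuming a nightly quiet period:
# crontab entry: plain (non-FULL) VACUUM ANALYZE of every database at 03:00;
# it takes only short locks, so it can run while the database is in use
0 3 * * *  vacuumdb --all --analyze --quiet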
Mensaje citado por "Joshua D. Drake" <[EMAIL PROTECTED]>:
>
> >I used to have that complaint until I got more acquainted with the docs.
> >When I used to use MySQL, I found that if I used the search feature on their
> >docs I could find exactly what I was looking for almost immediately. When I use
>
Does anyone have any experience with Postgres full text search?
It works well but it is my understanding that our docs search doesn't
use PostgreSQL
and TSearch. It uses PostgreSQL monogo search or something like that.
J
Perhaps someone could get two birds with one stone and make a good f
Message quoted by Doug McNaught <[EMAIL PROTECTED]>:
> "Ruby Deepdelver" <[EMAIL PROTECTED]> writes:
>
> > Hello, I'm having trouble in find certain information, i've search
> > over the web and through the documentation but i haven't had lucky.
> > I need to know if there is some limit in the a
- Q. Bad news that VACUUM must eventually scan every row (in fact, every row
and index page?) in the database(?):
- if this is true(?) then can anyone give an idea of how long it runs
for a particular size of database and how much it slows down the database?
On heavily used databases (over
Quoting Martin Marques <[EMAIL PROTECTED]>:
> Oh, please! I have never seen a better community than this one (well, some
> exceptions come to mind right now, but it's still among the best).
>
> I have personally found GREAT replies from the developers (Tom Lane comes to
> mind, saving me lo
> On heavily used databases (over 100,000 transactions an
> hour), vacuum is
> a killer
That's about 27 tx/second - not so many, for some tasks at least.
If VACUUM is rather a killer
- are there any plans in PostgreSQL to deal with that?
Thank you in advance,
Laimis
P.S. it's notable that eve
> >Does anyone have any experience with postgers full text search?
> >
> It works well but it is my understanding that our docs search doesn't
> use PostgreSQL
> and TSearch. It uses PostgreSQL monogo search or something like that.
That's good to hear. What is monogo and is it the problem here?
Nigel J. Andrews wrote:
I can't comment on the real content of this discussion though since a) I
haven't been reading it and b) I probably wouldn't know what it was on about if
I had been.
Um, any insight on the original question (see subject)? :-)
--
dave
LitelWang wrote:
>After I installed the driver I found three DLL
>files from windows(2000) ODBC manager :
>psqlodbc.dll
>psqlodbc25.dll
>psqlodbc30w.dll
>What's the difference ?
>
>
They support different versions of the ODBC protocol.
>Thanks .
>
>
Thapliyal, Deepak wrote:
Hi,
Assume I have a bank app. When a customer withdraws $10 from his account I
have to do the following:
--> update the account_summary table [subtract $10 from his account]
--> update the account_detail table [with other transaction details]
Requirement:
either both transactions
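That requirement is exactly what a single transaction block provides - a sketch, with hypothetical column names:
BEGIN;
UPDATE account_summary SET balance = balance - 10 WHERE account_id = 123;
INSERT INTO account_detail (account_id, amount, txn_time)
    VALUES (123, -10, now());
COMMIT;   -- both changes become visible together; ROLLBACK would undo both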
Jeff Bowden wrote:
For ease of configuration and other reasons, I would like for my
single-user GUI app to be able to use postgresql in-process as a library
accessing a database created in the user's home directory. I think I
could possibly get what I want by launching a captive copy of postmast
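A sketch of the "captive postmaster" idea, using the standard initdb/pg_ctl tools and a private Unix-domain socket directory so nothing listens on TCP; all paths and names are hypothetical:
# one-time setup: a private cluster under the user's home directory
initdb -D ~/.myapp/pgdata
# start a postmaster the app controls, with its socket in a private directory
mkdir -p ~/.myapp/socket
pg_ctl -D ~/.myapp/pgdata -o "-k $HOME/.myapp/socket" start
# the app connects through that socket, e.g.:
psql -h ~/.myapp/socket -d mydb
# and shuts the postmaster down on exit
pg_ctl -D ~/.myapp/pgdata stop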
Rick Gigger wrote:
I have just about the same sort of needs now and concluded that postgres
just is not suited for embedding into apps like that.
Why not? It's not that the PostgreSQL backend is a mammoth like Oracle.
The Firebird embedded version is pretty much the same as their server,
but wi
Tom Lane wrote:
Jeff Bowden <[EMAIL PROTECTED]> writes:
That makes sense to me. I wonder if sqlite suffers for this problem
(e.g. app crashing and corrupting the database).
Likely. I can tell you that Ann Harrison once told me she made a decent
amount of money as a consultant fixing broken Inte
Can you give the details?
Or some Web links? Thanks.
===
>LitelWang wrote:
>
>>After I installed the driver I found three DLL
>>files from windows(2000) ODBC manager :
>>psqlodbc.dll
>>psqlodbc25.dll
>>psqlodbc30w.dll
>>What's the difference ?
>>
>>
>The diff
This caught me today:
I switched on "log_statement=true" whilst examining a possible foreign
key concurrency problem. I noticed that the generated foreign key check
"SELECT 1 FROM ONLY ... WHERE id = ... FOR UPDATE ..."
on the parent table seemed to be only appearing every now and again.
This c