On Jun 21, 2006, at 7:42 AM, H.J. Sanders wrote:
For the last 15 years we also used Informix, and we never, never had to
unload/load the database because of an upgrade.
Perhaps somebody knows how they do the trick?
Do they provide a migration/upgrade utility?
--
Jim C. Nasby, Sr. Engineering Cons
On Jun 22, 2006, at 7:12 PM, Joseph Shraibman wrote:
I'm running an 8.0 database. I have a very large log table that is
rarely updated or deleted from. The nightly vacuum does not know
this, and spends a lot of time on it, and all its indexes.
My RFE: When vacuuming a table, pg should try t
On Jun 22, 2006, at 4:02 PM, Relyea, Mike wrote:
Thanks Jim and Tom. At least now I've got a direction to head in. I
think for now I'll probably reduce work_mem as a stop-gap measure to get
the query running again. This will buy me some time to redesign it.
I'll probably separate out each s
On Thu, 22 Jun 2006, Ron Johnson wrote:
Assuming you're on a Unix box, you could easily use awk or perl to parse
your csv and create one that contains only the columns you want.
And probably pipe the results directly into the copy command, bypassing any
intermediary steps.
Well, why not us
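The awk/perl approach suggested above can be sketched in Python as well; a minimal illustration (the three-column layout and the kept indices are made up for the example):

```python
import csv
import io

def project_columns(src, dst, keep):
    """Copy rows from a CSV source to a destination,
    keeping only the column indices listed in `keep`."""
    reader = csv.reader(src)
    writer = csv.writer(dst)
    for row in reader:
        writer.writerow([row[i] for i in keep])

# Example: keep only columns 0 and 2 of a 3-column CSV.
src = io.StringIO("a,b,c\n1,2,3\n")
dst = io.StringIO()
project_columns(src, dst, keep=[0, 2])
print(dst.getvalue())
```

The filtered output could then be piped straight into COPY (for instance via psql's \copy), with no intermediate file.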
Geoffrey wrote:
> Chris wrote:
>> Ricardo Vaz wrote:
>>> Hi,
>>>
>>>
>>> Is it possible to copy data from a text file (CSV), ignoring
>>> some columns of this text file?
>
> Assuming you're on a Unix box, you could easily use awk or perl
> to parse you
Chris wrote:
Ricardo Vaz wrote:
Hi,
Is it possible to copy data from a text file (CSV), ignoring some columns
of this text file?
Assuming you're on a Unix box, you could easily use awk or perl to parse
your csv and create one that contains only the columns you want.
--
Until later, Geoffrey
Ricardo Vaz wrote:
Hi,
Is it possible to copy data from a text file (CSV), ignoring some columns
of this text file?
(How come this was posted 3 times?)
I don't think so, but you could load it into a temporary table and then:
insert into new_table select col1, col2 from temp_table;
--
Postgre
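The temp-table workaround above can be demonstrated end to end; the sketch below substitutes SQLite (via Python) so it runs standalone, with the COPY step simulated by plain INSERTs and all table and column names being placeholders:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Staging table mirrors the full CSV layout; target keeps two columns.
cur.execute("CREATE TEMP TABLE temp_table (col1 TEXT, col2 TEXT, col3 TEXT)")
cur.execute("CREATE TABLE new_table (col1 TEXT, col2 TEXT)")

# In PostgreSQL this load would be: COPY temp_table FROM 'file.csv' CSV;
rows = [("a", "b", "ignored"), ("c", "d", "ignored")]
cur.executemany("INSERT INTO temp_table VALUES (?, ?, ?)", rows)

# Then keep only the columns of interest:
cur.execute("INSERT INTO new_table SELECT col1, col2 FROM temp_table")
conn.commit()
print(cur.execute("SELECT * FROM new_table ORDER BY col1").fetchall())
```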
On 20 Jun 2006 18:20:55 +0200, Harald Fuchs <[EMAIL PROTECTED]> wrote:
In article <[EMAIL PROTECTED]>,
Sim Zacks <[EMAIL PROTECTED]> writes:
> To get this result set it would have to be an inner join on employee
> and date where the second event time is greater than the first. But I
> don't want
Hi,
Is it possible to copy data from a text file (CSV), ignoring some columns
of this text file?
Thanks,
Ricardo Vaz
Tribunal de Contas do Estado de São Paulo
Diretoria de Tecnologia
Rua Venceslau Brás, 183 - 2º andar
(11) 3292-3266 ramal 3640
[EMAIL PROTECTED]
--
This message has been checked by the antivirus system
Hi,
Is it possible to copy data from a text file (CSV), ignoring some columns of
this text file?
Thanks,
Ricardo Vaz
Tribunal de Contas do Estado de São Paulo
Diretoria de Tecnologia
[EMAIL PROTECTED]
Hi,
Is it possible to copy data from a text file (CSV), ignoring some columns
of this text file?
Thanks,
Ricardo Vaz
Tribunal de Contas do Estado de São Paulo
Diretoria de Tecnologia
Rua Venceslau Brás, 183 - 2º andar
(11) 3292-3266 ramal 3640
[EMAIL PROTECTED]
TJ O'Donnell wrote:
AHA! I hadn't encountered any null values in my bitstrings.
and having to include 'where xxx is not null' is rather
inconvenient and easy to forget.
indeed bitor(B'1000',null) returns null
but as a test, i nulled one row's data (the bitstring column portion only)
and my ors
I'm running an 8.0 database. I have a very large log table that is
rarely updated or deleted from. The nightly vacuum does not know this,
and spends a lot of time on it, and all its indexes.
My RFE: When vacuuming a table, pg should try to vacuum the primary key
first. If that results in 0 r
Jacob Coby <[EMAIL PROTECTED]> writes:
> I then tried:
> foo=# select * from pg_catalog.pg_statistic ;
> ERROR: could not access status of transaction 3242180415
> DETAIL: could not open file "pg_clog/0C13": No such file or directory
> so it seems that pg_catalog.pg_statistic has somehow become
Tom Lane wrote:
Jacob Coby <[EMAIL PROTECTED]> writes:
I've got a pg 8.1.1 server running autovacuum, and it started attempting
to vacuum template0.
2006-06-22 15:00:50 EDT LOG: autovacuum: processing database "template0"
2006-06-22 15:00:50 EDT ERROR: could not access status of transaction
Jacob Coby <[EMAIL PROTECTED]> writes:
> I've got a pg 8.1.1 server running autovacuum, and it started attempting
> to vacuum template0.
> 2006-06-22 15:00:50 EDT LOG: autovacuum: processing database "template0"
> 2006-06-22 15:00:50 EDT ERROR: could not access status of transaction
> 32421804
Jim Nasby <[EMAIL PROTECTED]> writes:
> On Jun 18, 2006, at 10:47 PM, Qingqing Zhou wrote:
>> After dump/restore the database size is 1685 MB and after
>> vacuum-full/reindex is 1990 MB. Where we saved 305 MB?
> My guess would be due to toasted text columns...
No, it's probably got more to do wit
Thanks Jim and Tom. At least now I've got a direction to head in. I
think for now I'll probably reduce work_mem as a stop-gap measure to get
the query running again. This will buy me some time to redesign it.
I'll probably separate out each sub query and store the results in a
table (would a tem
Tom Lane wrote:
Hmm. One of the things that's on my TODO list is to make the planner
smarter about drilling down into sub-selects to extract statistics.
I think that's what's called for here, but your example has eliminated
all the interesting details. Can you show us the actual query, its
EXP
I've got a pg 8.1.1 server running autovacuum, and it started attempting
to vacuum template0.
2006-06-22 15:00:50 EDT LOG: autovacuum: processing database "template0"
2006-06-22 15:00:50 EDT ERROR: could not access status of transaction
3242180415
2006-06-22 15:00:50 EDT DETAIL: could not op
Hi Tom, Alvaro. Thanks so much for your replies.
On 22-Jun-06, at 12:56 PM, Alvaro Herrera wrote:
Hmm ... I'd think that the number of dead tuples plus live tuples should
never be smaller than the number of tuples seen at last analyze. Unless
some stats messages are lost (say, stop the server, start
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Apparently somehow last_anl_tuples has managed to get to be bigger than
>> n_live_tuples, which maybe could happen after a delete. Should we be
>> clamping last_anl_tuples to not exceed n_live_tuples somewhere?
>> Alvaro and Matthew,
TJ O'Donnell wrote:
maybe the aggregator (whoever,whatever that is) handles null args
differently,
not calling the func when the arg is null?
see:
http://www.postgresql.org/docs/8.1/interactive/sql-createaggregate.html
specifically:
"If the state transition function is declared "strict",
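The behavior the docs describe (a strict transition function skips null inputs instead of nulling the whole result) can be emulated outside SQL. A small Python sketch, modeling bit strings as integers; the function name is invented:

```python
from functools import reduce

def strict_bit_or(values):
    """Emulate an aggregate whose transition function is STRICT:
    null (None) inputs are skipped rather than nulling the result."""
    non_null = [v for v in values if v is not None]
    if not non_null:
        return None  # an aggregate over all-null input is null
    return reduce(lambda acc, v: acc | v, non_null)

# 0b1000 OR 0b0100, with the None skipped -> 0b1100
print(bin(strict_bit_or([0b1000, None, 0b0100])))
```

A non-strict transition function without an INITCOND would instead carry the null through, which matches the surprise reported in this thread.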
nuno wrote:
> hi, there. i'm trying to write a SQL statement which does the
> following things.
>
> 1. checks if data already exists in the database
> 2. if not, insert data into database otherwise skip.
>
> for example, i'd like to insert a student called 'Michael Jordan'
> whose ID is 'JORDANM
Tom Lane wrote:
> Dylan Hansen <[EMAIL PROTECTED]> writes:
> > I have been spending some time looking into how auto-vacuum is
> > performing on one of our servers. After putting the PostgreSQL logs
> > in debug I noticed that the threshold for ANALYZE was never being hit
> > for a particular
Andrew Gould wrote:
I just finished migrating US county level census data
into a PostgreSQL database; and thought I'd save
others the trouble of doing the same. (I've been
laid-off and am trying to stay busy.) The gzipped,
dump file is approximately 9.5MB.
Is there a place online where people
AHA! I hadn't encountered any null values in my bitstrings.
and having to include 'where xxx is not null' is rather
inconvenient and easy to forget.
indeed bitor(B'1000',null) returns null
but as a test, i nulled one row's data (the bitstring column portion only)
and my orsum without INITCOND r
On Jun 19, 2006, at 4:10 PM, Clodoaldo Pinto wrote:
2006/6/19, Clodoaldo Pinto <[EMAIL PROTECTED]>:
Since I have a database with a frequent and huge update batch, the WAL
segment file number is about 130. I suspect the management of these files
during the update is hurting performance. Is it likely?
On Jun 18, 2006, at 10:47 PM, Qingqing Zhou wrote:
""Linux Portal"" <[EMAIL PROTECTED]> wrote
The article on the subject can be read here:
http://linux.inet.hr/optimize_postgresql_database_size.html
After dump/restore the database size is 1685 MB and after
vacuum-full/reindex is 1990 MB. W
[EMAIL PROTECTED] (Andrew Gould) writes:
> I just finished migrating US county level census data
> into a PostgreSQL database; and thought I'd save
> others the trouble of doing the same. (I've been
> laid-off and am trying to stay busy.) The gzipped,
> dump file is approximately 9.5MB.
>
> Is th
On Jun 21, 2006, at 8:38 AM, Wes wrote:
Maybe you could ask some postgresql support companies how much effort it
would be to add a "without check" flag to "alter table add constraint
foreign key", and how much they'd charge for it...
Or if I get ambitious, dig into the code myself if I
On Jun 22, 2006, at 2:23 PM, Tom Lane wrote:
"Relyea, Mike" <[EMAIL PROTECTED]> writes:
I've zipped the results of EXPLAIN INSERT INTO "tblSummary" SELECT *
FROM "qrySummary"; for my case. It's a zip file that I've renamed to
.txt in order to get around the attachment being blocked by
certain
"Relyea, Mike" <[EMAIL PROTECTED]> writes:
> I've zipped the results of EXPLAIN INSERT INTO "tblSummary" SELECT *
> FROM "qrySummary"; for my case. It's a zip file that I've renamed to
> .txt in order to get around the attachment being blocked by certain mail
> servers.
Egad, what a mess :-(. By
On Jun 22, 2006, at 9:10 AM, SunWuKung wrote:
Hi,
I put the following into a function. Is this always going to give me a
sequence with an increment of 1 independently from other transactions?
If not what would be the standard way to return a sequence from a
query?
CREATE TEMPORARY SEQUENCE rank_
"Todd A. Cook" <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Well, that's the problem right there :-(. Have you ANALYZEd this table?
> My production table and query are more complex. In the original, the
> query above was in a sub-select; the work-around was to create a temp
> table with the
I've zipped the results of EXPLAIN INSERT INTO "tblSummary" SELECT *
FROM "qrySummary"; for my case. It's a zip file that I've renamed to
.txt in order to get around the attachment being blocked by certain mail
servers.
[binary zip attachment: explain.txt]
Tom Lane wrote:
"Todd A. Cook" <[EMAIL PROTECTED]> writes:
oom_test=> explain select val,count(*) from oom_tab group by val;
QUERY PLAN
-
HashAggregate (cost=1163446.13..1163448.63 rows=200
On Thu, Jun 22, 2006 at 11:52:03AM -0700, Andrew Gould wrote:
> Is there a place online where people share data? I
> thought about offering it to techdocs or pgfoundry;
> but it's neither documentation, nor an application, so
> I didn't think it would be appropriate.
PgFoundry has a Sample Databa
"Todd A. Cook" <[EMAIL PROTECTED]> writes:
"Todd A. Cook" <[EMAIL PROTECTED]> writes:
> QUERY PLAN
> -
> HashAggregate (cost=1163446.13..1163448.63 rows=200 width=4)
> -> Seq Scan on oo
"Todd A. Cook" <[EMAIL PROTECTED]> writes:
> oom_test=> explain select val,count(*) from oom_tab group by val;
> QUERY PLAN
> -
> HashAggregate (cost=1163446.13..1163448.63 rows=200 width=4)
>
I just finished migrating US county level census data
into a PostgreSQL database; and thought I'd save
others the trouble of doing the same. (I've been
laid-off and am trying to stay busy.) The gzipped,
dump file is approximately 9.5MB.
Is there a place online where people share data? I
thought
Dylan Hansen <[EMAIL PROTECTED]> writes:
> I have been spending some time looking into how auto-vacuum is
> performing on one of our servers. After putting the PostgreSQL logs
> in debug I noticed that the threshold for ANALYZE was never being hit
> for a particular table because the calcula
Tom Lane wrote:
Misestimated hash aggregation, perhaps? What is the query and what does
EXPLAIN show for it? What have you got work_mem set to?
oom_test=> \d oom_tab
   Table "public.oom_tab"
 Column |  Type   | Modifiers
--------+---------+-----------
 val    | integer |
oom_test=> explai
"Todd A. Cook" <[EMAIL PROTECTED]> writes:
> I am consistently running into out-of-memory issues in 8.1.4 running on
> RHEL3 and 8.0.5 on RHEL4. The logs show entries like this:
> AggContext: -2130714624 total in 271 blocks; 9688 free (269 chunks);
> -2130724312 used
> TupleHashTable: 893902872
[EMAIL PROTECTED] writes:
> SQL2003 fixes these issues with information_schema but it seems that
> postgres has not caught up yet:
> http://groups.google.com/group/pgsql.interfaces/browse_thread/thread/9f19995849b3cdf4/c20b81bf8cf183af?lnk=st&q=information+schema&rnum=9&hl=en#c20b81bf8cf183af
I be
Greetings all,
I have been spending some time looking into how auto-vacuum is performing
on one of our servers. After putting the PostgreSQL logs in debug I noticed
that the threshold for ANALYZE was never being hit for a particular table
because the calculated value becomes increasingly negative. W
Hi,
I am consistently running into out-of-memory issues in 8.1.4 running on
RHEL3 and 8.0.5 on RHEL4. The logs show entries like this:
AggContext: -2130714624 total in 271 blocks; 9688 free (269 chunks);
-2130724312 used
TupleHashTable: 893902872 total in 119 blocks; 1088688 free (449 chunks);
Michael Fuhr <[EMAIL PROTECTED]> writes:
> ... the length needs to be in host byte order.
Right. The only place where a libpq client would be dealing with
network-byte-order data is within binary-format values for data fields
(in SELECT results) or binary-format parameters (when transmitting to
t
On Thu, Jun 22, 2006 at 08:52:04AM -0700, Kevin Jenkins wrote:
> Previously I was getting a bad result when calling PQexecParams with
> binary results because PostgreSQL stores its data big endian. So I
> had to do ntohl on the result to get it to little endian.
Clarification: PostgreSQL stores
sandro -
SQL2003 fixes these issues with information_schema but it seems that
postgres has not caught up yet:
http://groups.google.com/group/pgsql.interfaces/browse_thread/thread/9f19995849b3cdf4/c20b81bf8cf183af?lnk=st&q=information+schema&rnum=9&hl=en#c20b81bf8cf183af
although i am running only
"Clarence" <[EMAIL PROTECTED]> writes:
> I have a completely idle postgresql system (all backends "idle", none
> "in transaction"); every time I attempt to vacuum a particular table,
> it hangs after a while.
> ...
> In the system error log there is this line:
>could not write block 32756 of re
Previously I was getting a bad result when calling PQexecParams with
binary results because PostgreSQL stores its data big endian. So I
had to do ntohl on the result to get it to little endian.
My question is, do I also need to do htonl then, as in this scenario?
outStr[0]="blah";
outLengths[
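As the replies in this thread note, binary-format int4 values cross the wire in network (big-endian) byte order, so a little-endian client needs htonl when sending and ntohl when receiving. The same round trip can be sketched with Python's struct module rather than libpq (the value 1512 is arbitrary):

```python
import struct

host_value = 1512

# What a binary int4 parameter looks like on the wire: big-endian
# (network) byte order, i.e. the moral equivalent of htonl().
wire = struct.pack("!i", host_value)
print(wire.hex())  # big-endian bytes of 0x5e8

# Decoding a binary result reverses it (the ntohl step):
decoded = struct.unpack("!i", wire)[0]
assert decoded == host_value

# On a little-endian machine the native layout is byte-reversed:
native_le = struct.pack("<i", host_value)
print(native_le.hex())
```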
"Vladimir Leban" <[EMAIL PROTECTED]> writes:
> I'm curently in version 7.3
> 2006-06-19 10:31:57 PANIC: XLogWrite: write request 0/17B2000 is past
> end of log 0/17B2000
Update to 7.3.latest --- IIRC we fixed that in 7.3.4. You should be able
to update in-place to a newer 7.3.* postmaster witho
[EMAIL PROTECTED] wrote:
I don't want to revisit or be redundant... but is there a quick and
dirty and cross-platform system for developing user input forms for
Postgres?
Don't know if this has already been mentioned, but how about Rekall?
http://www.rekallrevealed.org/kbExec.py#
You could al
So what's my next step? How do I track down what is causing this
problem?
-Original Message-
From: Qingqing Zhou [mailto:[EMAIL PROTECTED]
Sent: Wednesday, June 21, 2006 11:01 PM
To: Relyea, Mike
Cc: pgsql-general@postgresql.org; Tom Lane
Subject: RE: [GENERAL] Out of memory error in 8.
> "A" == A M <[EMAIL PROTECTED]> writes:
A> [But even Safe.pm has had
A> dozens of bugs revealed over the years- caveat emptor.]
Eeeh? Proof please? That's just FUD-raking.
From what I recall, there have been a few clever leakages that have been fixed
rather rapidly.
"few" ne "dozens".
> I desperately needs OLEDB Provider for Postgres. Can any one help me?
Notice the following link:
http://www.postgresql.org/download/
It provides an assortment of useful tools as well as a link to the above
mentioned OLE-DB
provider.
http://pgfoundry.org/projects/oledb/
Regards,
Richard Broe
Thanks for pointing it out. You are right; I forgot to add that...
On 6/20/06, Aaron Evans <[EMAIL PROTECTED]> wrote:
sorry to nitpick, but I think that to get this query to do exactly
what you want you'll need to add ordering over EventTime on your sub-
selects to assure that you get the ne
Hi:
I want to use plpython. Where can I find and download pl/python for the
Windows platform?
Help is appreciated.
Thanks in advance.
--
No virus found in this outgoing message.
Checked by AVG Free Edition.
Version: 7.1.394 / Virus Database: 268.9.2/372 - Release Date
Hi,
I put the following into a function. Is this always going to give me a
sequence with an increment of 1 independently from other transactions?
If not what would be the standard way to return a sequence from a
query?
CREATE TEMPORARY SEQUENCE rank_seq;
SELECT nextval('rank_seq') FROM whatever OR
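When the goal is just an increment-of-1 rank over one query's result, the numbering can also be attached client-side instead of through a sequence, which sidesteps any question of other transactions. A hedged sketch using SQLite through Python (the table and ordering are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE whatever (score INTEGER)")
cur.executemany("INSERT INTO whatever VALUES (?)", [(30,), (10,), (20,)])

# Attach a counter in the client, mimicking
# SELECT nextval('rank_seq') FROM whatever with a defined ordering.
ranked = [
    (rank, score)
    for rank, (score,) in enumerate(
        cur.execute("SELECT score FROM whatever ORDER BY score DESC"), start=1
    )
]
print(ranked)  # [(1, 30), (2, 20), (3, 10)]
```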
Thanks for your tips!
Since it is repeatable in your machine, you can compile a new postgres
version with "--enable-cassert" (enable assertions in code) and
"--enable-debug" (enable gcc debug support) configuration. Then run it on
your data and "bt" the core dump.
I try to find out the reas
On Thu, June 22, 2006 7:41 am, Philippe Lang wrote:
> Hi,
>
>
> I would like to access a cpan perl module (FSA::Rules) in a perl
> procedure, inside Postgresql 8.1.4. FSA::Rules is a library that allows you
> to define a finite state machine in perl. No disk access.
>
> In order to do that, I need
On Thu, Jun 22, 2006 at 08:16:05AM -0400, Douglas McNaught wrote:
> Bill Moseley <[EMAIL PROTECTED]> writes:
>
> > Hum. lsof knows about the file.
> >
> > $ lsof -p 1512 | grep /var/run
> > postmaste 1512 postgres4u unix 0xf78b5980 1631
> > /var/run/postgresql/.s.PGSQL.543
On Sun, Jun 18, 2006 at 03:02:05PM -0700, [EMAIL PROTECTED] wrote:
> Given the name of the table and constraint, how would you retrieve the
> comment? "Chapter 9. Functions and Operators" says that
> obj_description(object_oid, catalog_name) returns a comment given the
> object's OID and class nam
John DeSoi wrote:
On Jun 22, 2006, at 7:38 AM, Kenneth Downs wrote:
We wrote a system that does exactly that, it's called "Andromeda",
and it is GPL.
http://docs.andromeda.com
OOPS:
http://docs.secdat.com
Sounds interesting but this link does not work (apparently no
server at th
On Jun 22, 2006, at 7:38 AM, Kenneth Downs wrote:
We wrote a system that does exactly that, it's called "Andromeda",
and it is GPL.
http://docs.andromeda.com
Sounds interesting but this link does not work (apparently no server
at that address).
John DeSoi, Ph.D.
http://pgedit.com/
Po
On Sunday, 18 June 2006 at 11:33, Erin Sheldon wrote:
> Hello everyone -
>
> Array columns are, by default, 1-offset in their
> subscripting. Since I usually am calling postgres
> from a language with zero-offset, I would prefer
> that postgres conform to that. The online
> documentation hints t
"Chris Golden" <[EMAIL PROTECTED]> writes:
> This is my first time posting to this forum and I am very new to
> PostgreSQL. I am very excited about using it. I have set up a
> database and just need a point in the right direction on
> interfacing. I have an orders and messages database. One of
Bill Moseley <[EMAIL PROTECTED]> writes:
> Hum. lsof knows about the file.
>
> $ lsof -p 1512 | grep /var/run
> postmaste 1512 postgres4u unix 0xf78b5980 1631
> /var/run/postgresql/.s.PGSQL.5432
>
>
> Any ideas what happened to the socket?
Maybe something in your bootup p
On Tue, Jun 20, 2006 at 02:17:28PM +0100,
Essam Mansour <[EMAIL PROTECTED]> wrote
a message of 35 lines which said:
> In my research project, I need a DBMS that supports XML storage and
> retrieval, and provides ECA rule support.
(No idea about ECA.)
> - whether Postgresql DB could be used as
Hi,
I would like to access a cpan perl module (FSA::Rules) in a perl
procedure, inside Postgresql 8.1.4. FSA::Rules is a library that allows
you to define a finite state machine in perl. No disk access.
In order to do that, I need to use plperlu instead of plperl. And it
works just fine.
What ex
[EMAIL PROTECTED] wrote:
We wrote a system that does exactly that, it's called "Andromeda", and it
is GPL.
http://docs.andromeda.com
It uses a data dictionary to do two things: 1) build the database and 2)
generate HTML maintenance forms. But it can also have multiple virtual
sites going in
Tim Allen wrote:
using syntax along the lines of INSERT ... RETURNING ...
SQL Server had a nifty feature here. You could simply toss a SELECT
statement at the end of a trigger of sproc and the results would be
returned.
This in effect made a table the potential return type of all commands
John Tregea wrote:
Greeting again,
I am writing records to PostgreSQL from an IDE called Revolution. At
the time I perform the INSERT command I need to retrieve the value of
the serial_id column from the newly created row.
We have an after-insert trigger that raises it as a notice. NOTICE
Greg Gazanian wrote:
I was wondering if anyone happens to know whether there is an Aggregate
function available in Postgres that can do an OR on a column of the bit varying
type. For example I would like to do something as follows:
bitstring
*
1110
0100
SELECT bitwise_or(bitstrin
Giraldo Leon wrote:
I want to know how to print a database schema with the information of tables,
fields, keys, etc.
Have a look at PostgreSQL Autodoc
http://www.rbt.ca/autodoc/
b
---(end of broadcast)---
TIP 5: don't forget to increase your
On Wed, Jun 21, 2006 at 10:59:42PM -0700, nuno wrote:
> insert into student (studentid, fname, lname)
> select 'JORDANMICHAEL', 'Michale', 'Jordan' from student where
> studentid not in
> (select studentid from student);
>
> however, this does not seem to work. it does not insert data even if it
>
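The likely reason the quoted query never inserts is that it selects FROM student itself: every existing row's studentid is in the subselect, so the predicate is always false, and an empty table yields no rows to begin with. One common fix is to select the literals directly, guarded by NOT EXISTS. A runnable sketch using SQLite through Python (the schema and names follow the thread's example and are otherwise made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE student (studentid TEXT PRIMARY KEY, fname TEXT, lname TEXT)"
)

def insert_if_absent(sid, fname, lname):
    # SELECT the constants themselves, guarded by NOT EXISTS, instead
    # of selecting FROM student (which yields no rows when the table
    # is empty and never satisfies the NOT IN otherwise).
    cur.execute(
        """INSERT INTO student (studentid, fname, lname)
           SELECT ?, ?, ?
           WHERE NOT EXISTS (SELECT 1 FROM student WHERE studentid = ?)""",
        (sid, fname, lname, sid),
    )

insert_if_absent("JORDANMICHAEL", "Michael", "Jordan")
insert_if_absent("JORDANMICHAEL", "Michael", "Jordan")  # second call is a no-op
print(cur.execute("SELECT count(*) FROM student").fetchone()[0])  # 1
```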
On Wed, Jun 21, 2006 at 12:03:26PM -0400, Jasbinder Bali wrote:
> I've tried everything so that my .so file is recognized but in vain.
> Don't know whats going wrong.
The easiest may be to attach strace to the backend (strace -p) and then
do a "LOAD blah" on the frontend and see what happens.
Hav
On Tue, Jun 20, 2006 at 02:06:19AM -0700, [EMAIL PROTECTED] wrote:
> Such construction is very slow but when I modify SQL to:
> OPEN cursor1 FOR SELECT * FROM alias WHERE mask>=alias_out
> ORDER BY mask LIMIT 100;
>
> it works very fast. It is strange for me becuase I've understood so far