On Wednesday October 20 2004 10:43, Ed L. wrote:
> On Wednesday October 20 2004 10:12, Ed L. wrote:
> > On Wednesday October 20 2004 10:00, Tom Lane wrote:
> > > "Ed L." <[EMAIL PROTECTED]> writes:
> > > > In other words, how do I calculate which bytes to zero to simulate
> > > > zero_damaged_pages
Hi,
I want to make use of some contrib/dblink functions inside my user-defined
functions, e.g. I would like to be able to call dblink_record()
from my user-defined code in this way:
dblink_record("param1","param2");
Is this possible?
I would like to avoid:
1) inserting all the dblink code in
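For reference, a minimal sketch of how such a call looks from plain SQL (connection string, query, and column list here are illustrative; a set-returning function yielding record needs an AS column definition list):

```sql
-- illustrative parameters; adjust to your setup
SELECT *
FROM dblink_record('host=remotehost dbname=test',
                   'SELECT id, name FROM items')
     AS t(id integer, name text);
```

The same call should work inside a user-defined PL/pgSQL function, since the dblink functions are ordinary functions once contrib/dblink is installed in the database.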
Actually, now that I think about it, they use a special table type where the INDEX is
also the DATUM. It is possible to recover the data out of the index listing. So go
down the index, then decode the indexing value - voila, a whole step saved. I have no
idea which engine these table types are in
Dennis Gearon wrote:
Google probably is much bigger, and on mainframes, and probably Oracle
or DB2.
Google uses a Linux cluster and their database is HUGE. I do not know
which database they use. I bet they built their own specifically for
what they do.
Sincerely,
Joshua D. Drake
Google probably is much bigger, and on mainframes, and probably Oracle or DB2.
But the table I am worried about is the one sized >= 3.6 GIGA records.
Tino Wildenhain wrote:
Hi,
On Thu, 21 Oct 2004 at 1:30, Dennis Gearon wrote:
I am designing something that may be the size of yahoo, google, ebay,
Hi,
On Thu, 21 Oct 2004 at 1:30, Dennis Gearon wrote:
> I am designing something that may be the size of yahoo, google, ebay, etc.
>
> Just ONE many to many table could possibly have the following
> characteristics:
>
> 3,600,000,000 records
> each record is 9 fields of INT4/DATE
>
[EMAIL PROTECTED] (Otto Blomqvist) writes:
> I am obviously doing something wrong or using something the wrong way.
What PG version are you using? 7.3 and later can push the WHERE
condition down into the view, but older versions won't.
regards, tom lane
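A sketch of the situation being described, with made-up names:

```sql
-- a view over two per-period tables
CREATE VIEW all_articles AS
  SELECT * FROM articles_2003
  UNION ALL
  SELECT * FROM articles_2004;

-- On 7.3 and later the planner can push the condition into each
-- UNION ALL branch, so a per-table index on (id) can be used:
SELECT * FROM all_articles WHERE id = 42;
```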
I do not think writing the query in the second form differs from the first
one. In both cases, only the relevant articles (in range and of the desired
type) will come out of the scan operator that scans the articles.
--h
"Dennis Gearon" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> My
On Wednesday October 20 2004 10:00, Tom Lane wrote:
> "Ed L." <[EMAIL PROTECTED]> writes:
> > In other words, how do I calculate which bytes to zero to simulate
> > zero_damaged_pages??
>
> Why simulate it, when you can just turn it on? But anyway, the answer
> is "the whole page".
Old 7.3.4 inst
"Ed L." <[EMAIL PROTECTED]> writes:
> In other words, how do I calculate which bytes to zero to simulate
> zero_damaged_pages??
Why simulate it, when you can just turn it on? But anyway, the answer
is "the whole page".
regards, tom lane
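For completeness, "just turning it on" looks like this (table name is illustrative; this is destructive, so take a file-level backup first):

```sql
-- superuser only; any page with an invalid header is zeroed when read,
-- which discards the rows on that page
SET zero_damaged_pages = on;
VACUUM damaged_table;   -- forces every page of the table to be read
SET zero_damaged_pages = off;
```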
On Wednesday October 20 2004 10:12, Ed L. wrote:
> On Wednesday October 20 2004 10:00, Tom Lane wrote:
> > "Ed L." <[EMAIL PROTECTED]> writes:
> > > In other words, how do I calculate which bytes to zero to simulate
> > > zero_damaged_pages??
> >
> > Why simulate it, when you can just turn it on?
Carlo Florendo wrote:
Hello,
I appreciate very much the readline functionality on the psql client.
However, I'd like to ask if it is possible for the readline
functionality to gobble up even table names and field names:
For example, if have tables
`table1' with 3 fields `field1', 'field2', and `f
My question is: is it possible to speed up a query by doing preselects? What I'm working on
could end up being a very large dataset. I hope to have 100-1000 queries per second
(or more?), and if very large tables are joined with very large tables, I imagine that
the memory would get very full, overf
Hello,
I appreciate very much the readline functionality on the psql client.
However, I'd like to ask if it is possible for the readline
functionality to gobble up even table names and field names:
For example, if have tables
`table1' with 3 fields `field1', 'field2', and `field3'
and
`table2' wi
On Wednesday October 20 2004 5:34, Ed L. wrote:
> I have 5 corrupted page headers as evidenced by these errors:
>
> ERROR: Invalid page header in block 13947 of ...
>
> The corruption is causing numerous queries to abort. First option is to
> try to salvage data before attempting a restore from
I have 5 corrupted page headers as evidenced by these errors:
ERROR: Invalid page header in block 13947 of ...
The corruption is causing numerous queries to abort. First option is to try
to salvage data before attempting a restore from backup. I want to try to edit
the file to zero out t
On Thu, 2004-10-21 at 06:40, Thomas F.O'Connell wrote:
> Is the ON COMMIT syntax available to temporary tables created using the
> CREATE TABLE AS syntax?
No, but it should be. There's a good chance this will be in 8.1
> If not, is there a way to drop such a table at
> the end of a transaction?
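In the meantime, one workaround sketch is to split the statement in two, since plain CREATE TEMPORARY TABLE does accept ON COMMIT (names below are made up):

```sql
BEGIN;
CREATE TEMPORARY TABLE scratch (LIKE source_table) ON COMMIT DROP;
INSERT INTO scratch SELECT * FROM source_table WHERE active;
-- ... work with scratch ...
COMMIT;  -- scratch disappears here
```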
I am designing something that may be the size of yahoo, google, ebay, etc.
Just ONE many to many table could possibly have the following
characteristics:
3,600,000,000 records
each record is 9 fields of INT4/DATE
Other tables will have about 5 million records of about the same size.
There a
On Wed, 20 Oct 2004 23:43:54 +0100, Gary Doades <[EMAIL PROTECTED]> wrote:
> You will need to tell us the number of updates/deletes you are having. This will
> determine the vacuum needs. If the bulk of the data is inserted you may only need to
> analyze frequently, not vacuum.
>
> In order to get
On 20 Oct 2004 at 15:36, Josh Close wrote:
> On Wed, 20 Oct 2004 20:49:54 +0100, Gary Doades <[EMAIL PROTECTED]> wrote:
> > Is this the select(1) query? Please post an explain analyze for this and any other
> > "slow"
> > queries.
>
> I think it took so long 'cause it wasn't cached. The second t
A better solution is to use the serial data type. OID is deprecated
and may go away.
http://www.postgresql.org/docs/7.4/static/datatype.html#DATATYPE-SERIAL
On 19 Oct 2004 07:54:36 -0700, Raffaele Spizzuoco
<[EMAIL PROTECTED]> wrote:
> Hi!
>
> I'm from Italy, and sorry about my english...
> I
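A serial column is essentially an int4 with a sequence-backed default, e.g.:

```sql
CREATE TABLE items (
    id   serial PRIMARY KEY,  -- int4 + implicit sequence default
    name text
);
-- roughly shorthand for:
--   CREATE SEQUENCE items_id_seq;
--   CREATE TABLE items (
--       id integer DEFAULT nextval('items_id_seq') PRIMARY KEY,
--       name text);
```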
Is the ON COMMIT syntax available to temporary tables created using the
CREATE TABLE AS syntax? If not, is there a way to drop such a table at
the end of a transaction?
-tfo
--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
http://www.sitening.com/
110 30th Avenue North, Sui
On Wed, 20 Oct 2004 20:49:54 +0100, Gary Doades <[EMAIL PROTECTED]> wrote:
> Is this the select(1) query? Please post an explain analyze for this and any other
> "slow"
> queries.
I think it took so long 'cause it wasn't cached. The second time I ran
it, it took less than a second. How you can te
On 20 Oct 2004 at 14:09, Josh Close wrote:
> On Wed, 20 Oct 2004 19:59:38 +0100, Gary Doades <[EMAIL PROTECTED]> wrote:
> > Hmm, that seems a bit slow. How big are the rows you are inserting? Have you
> > checked
> > the cpu and IO usage during the inserts? You will need to do some kind of cpu/IO
On Wed, Oct 20, 2004 at 02:59:27PM -0400, Eric E wrote:
> - have the sequence preallocation table hold only numbers with status
> being available or pending, i.e., delete numbers once they have been
> allocated. This leaves on two possible statuses: available and pending.
I would argue that yo
Hmm, that's a really interesting idea, Tino. Since we're probably
talking about 100 numbers max, a query on this table would work
fairly fast, and be operationally simple. I'll think about that.
Thanks,
Eric
Tino Wildenhain wrote:
Hi,
On Wed, 20 Oct 2004 at 19:52, Eric E wrote:
Hi Tin
Hi,
On Wed, 20 Oct 2004 at 19:52, Eric E wrote:
> Hi Tino,
> Many thanks for helping me.
>
> I know that the sequence issue is a troubling one for many on the list.
> Perhaps if I explain the need for a continuous sequence I can circumvent
> some of that:
>
> This database is for a
On Wed, 20 Oct 2004 19:59:38 +0100, Gary Doades <[EMAIL PROTECTED]> wrote:
> Hmm, that seems a bit slow. How big are the rows you are inserting? Have you checked
> the cpu and IO usage during the inserts? You will need to do some kind of cpu/IO
> monitoring to determine where the bottleneck is.
Th
On 20 Oct 2004 at 13:34, Josh Close wrote:
> > How long does 100,000 rows take to insert exactly?
>
> I believe with the bulk inserts, 100k only takes a couple mins.
>
Hmm, that seems a bit slow. How big are the rows you are inserting? Have you checked
the cpu and IO usage during the inserts?
Hi Andrew,
I had basically started working on an idea like the second approach,
but had not been able to put the status element so clearly. I really
like the statuses of available, pending, and granted.
There's one more twist I think I can use to optimize this: once a number
is assigned, it
On Wed, 20 Oct 2004 13:35:43 -0500, Bruno Wolff III <[EMAIL PROTECTED]> wrote:
> You might not need to do the vacuum fulls that often. If the your hourly
> vacuums have a high enough fsm setting, they should be keeping the database
> from continually growing in size. At that point daily vacuum full
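The fsm settings referred to live in postgresql.conf; the values below are only illustrative, and need to be large enough to cover the pages freed between vacuum runs:

```
# postgresql.conf (a server restart is required on these versions)
max_fsm_pages = 200000      # free pages tracked cluster-wide
max_fsm_relations = 1000    # relations tracked
```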
On Wed, 20 Oct 2004 18:47:25 +0100, Gary Doades <[EMAIL PROTECTED]> wrote:
> What about triggers? Also constraints (check constraints, integrity
> constraints)? All these will slow the inserts/updates down.
No triggers or constraints. There are some foreign keys, but the
tables that have the inserts
cc me please:
I can't find in the HTML documentation the max length of a bit string.
Anyone know where it is?
On Wed, Oct 20, 2004 at 01:52:59PM -0400, Eric E wrote:
> One thought I had, and I'd love to hear what people think of this, is to
> build a table of storage location numbers that are available for use.
> That way the search for new numbers could be pushed off until some
> convenient moment wel
On Wed, Oct 20, 2004 at 08:25:22 -0500,
Josh Close <[EMAIL PROTECTED]> wrote:
>
> It's slow due to several things happening all at once. There are a lot
> of inserts and updates happening. There is periodically a bulk insert
> of 500k - 1 mill rows happening. I'm doing a vacuum analyze every
>
[EMAIL PROTECTED] (Leonardo Francalanci) writes:
>> When a data file for a specific table (or index?) is larger than
>> 1GB, it's split up into several parts. This is probably a leftover
>> from the time when OSs used to have problems with large files.
>
> Thank you.
> Is there any documentation I can rea
Hi Tino,
Many thanks for helping me.
I know that the sequence issue is a troubling one for many on the list.
Perhaps if I explain the need for a continuous sequence I can circumvent
some of that:
This database is for a laboratory, and the numbers in sequence
determine storage locations f
is there a way to create a table with a certain type?
CREATE TYPE typename AS (id integer, name varchar);
and something like
CREATE TABLE names OF TYPE typename.
Is there a syntax to support this?
thanks,
--h
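No, that syntax isn't available in this release (CREATE TABLE ... OF only appears in much later versions); the closest approximation, as a sketch, is to repeat the column list:

```sql
CREATE TYPE typename AS (id integer, name varchar);

-- Not supported here:
--   CREATE TABLE names OF typename;

-- Closest equivalent: repeat the type's column list
CREATE TABLE names (id integer, name varchar);
```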
On 20 Oct 2004 at 11:37, Josh Close wrote:
> On Wed, 20 Oct 2004 09:52:25 -0600, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> > 1: Is the bulk insert being done inside of a single transaction, or as
> > individual inserts?
>
> The bulk insert is being done by COPY FROM STDIN. It copies in 100,000
>
The difficulty is that your view-based statement does not make use of any
index, so the query must look at each tuple. It seems that UNION ALL
requires a full scan of the participating relations. I don't know if it is
possible, but try to create an index on the view ;-)
Hagen
Otto Blomqvist wrote:
On Wed, Oct 20, 2004 at 11:57:42AM -0400, Andrew Sullivan wrote:
> Now, how do you handle the cases where either the transaction fails
> so you can't set it to 3? Simple: your client captures errors and
> then sets the value back to 1 later.
Has anyone read "the Sagas paper" by Garcia-Molina? T
On Wed, 20 Oct 2004 09:52:25 -0600, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> 1: Is the bulk insert being done inside of a single transaction, or as
> individual inserts?
The bulk insert is being done by COPY FROM STDIN. It copies in 100,000
rows at a time, then disconnects, reconnects, and copie
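For reference, that kind of bulk load sketches out as follows (table and columns are made up; keeping COPY inside one transaction avoids per-row commit overhead):

```sql
BEGIN;
COPY measurements (ts, sensor_id, reading) FROM STDIN;
-- tab-separated rows stream in here, terminated by a line containing \.
COMMIT;
```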
I was attempting to set up my psql client on the Win32 version of
postgres 8.0 beta 2 to be able to use an external editor. I set the
environment variable in windows like so:
PSQL_EDITOR="c:\progra~1\Textpa~1\Textpad.exe"
which does appear to work correctly. However, I get the following
when at
At 09:25 AM 10/19/2004 -0400, Ed Stoner wrote:
I want to use bare numbers because that is how the users (students in this
case) are identified on the network and in the student information
system. They've been identified this way for over 20 years, so it would
be near impossible to change at th
Scott Marlowe writes:
> On Wed, 2004-10-20 at 09:45, Dan Pelleg wrote:
> > Scott Marlowe writes:
> > > On Wed, 2004-10-20 at 08:06, Dan Pelleg wrote:
> > > > I'm trying to access a table with about 120M rows. It's a vertical version
> > > > of a table with 360 or so columns. The new columns
I'm trying to build the PostgreSQL ODBC driver for Mac OS X. I thought I'd
try to build it as a bundle from Xcode. Everything compiles without a
problem, but then at the end of the compile I get an undefined symbols
error; here it is:
ld: Undefined symbols:
_CurrentMemoryContext
_MemoryContextAlloc
_pfree
Any ide
On Tue, Oct 19, 2004 at 11:19:05AM -0400, Eric E wrote:
> My users will draw a number or numbers from the sequence and write to
> the field. Sometimes, however, these sequence numbers will be discarded
> (after a transaction is complete), and thus available for use. During
> the transaction, h
On Wed, 2004-10-20 at 09:45, Dan Pelleg wrote:
> Scott Marlowe writes:
> > On Wed, 2004-10-20 at 08:06, Dan Pelleg wrote:
> > > I'm trying to access a table with about 120M rows. It's a vertical version
> > > of a table with 360 or so columns. The new columns are: original item col,
> > > origi
On Wed, 2004-10-20 at 07:25, Josh Close wrote:
> It's slow due to several things happening all at once. There are a lot
> of inserts and updates happening. There is periodically a bulk insert
> of 500k - 1 mill rows happening. I'm doing a vacuum analyze every
> hour due to the amount of transacti
Scott Marlowe writes:
> On Wed, 2004-10-20 at 08:06, Dan Pelleg wrote:
> > I'm trying to access a table with about 120M rows. It's a vertical version
> > of a table with 360 or so columns. The new columns are: original item col,
> > original item row, and the value.
> >
> > I created an inde
On Wed, 2004-10-20 at 08:06, Dan Pelleg wrote:
> I'm trying to access a table with about 120M rows. It's a vertical version
> of a table with 360 or so columns. The new columns are: original item col,
> original item row, and the value.
>
> I created an index:
>
> CREATE INDEX idx on table (col,
I've got an app written with Access XP using a PostgreSQL backend internet
connection (using the latest ODBC driver) that was deployed 2 years ago.
Recently a client upgraded to XP Service Pack 2. After the upgrade she was
unable to connect to the remote database, getting the error:
Unable to conn
Hi,
On Tue, 2004-10-19 at 01:16, Eric E wrote:
> Hi,
> I have a question about sequences. I need a field to have values with
> no holes in the sequence. However, the values do not need to be in order.
>
> My users will draw a number or numbers from the sequence and write to
> the field.
All,
My company (Chariot Solutions) is sponsoring a day of free
PostgreSQL training by Bruce Momjian (one of the core PostgreSQL
developers). The day is split into 2 sessions (plus a Q&A session):
* Mastering PostgreSQL Administration
* PostgreSQL Performance Tuning
Registratio
Far from a perfect idea, but a faster solution than stepping through
all the holes:
1) Create a second table containing only one field, of the type of your key.
2) When you delete an entry, place the deleted key value in your second table.
3) If you insert a new entry into your old table and your new tab
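The steps above might look roughly like this (names are illustrative; the FOR UPDATE keeps two sessions from grabbing the same freed key):

```sql
-- step 1: the bank of freed keys
CREATE TABLE free_keys (id integer PRIMARY KEY);

-- step 2: when deleting a row from the main table, bank its key
INSERT INTO free_keys VALUES (42);

-- step 3: on insert, reuse the lowest banked key if one exists
SELECT id FROM free_keys ORDER BY id LIMIT 1 FOR UPDATE;
-- if a row comes back, DELETE it from free_keys and use that id;
-- otherwise fall back to nextval() on the normal sequence
```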
Hello,
We have been working on migrating an Oracle database to Postgres for one of
our clients.
We have a stored procedure in Oracle which
uses a varray, and I have to convert this stored procedure to
Postgres.
Any help with respect to this will be greatly
appreciated.
Thanks in advance.
Best
I'm trying to access a table with about 120M rows. It's a vertical version
of a table with 360 or so columns. The new columns are: original item col,
original item row, and the value.
I created an index:
CREATE INDEX idx on table (col, row)
however, selects are still very slow. It seems it stil
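A first debugging step here is EXPLAIN ANALYZE on the exact query (names below are made up); note that an index on (col, row) is only usable when the leading column is constrained:

```sql
EXPLAIN ANALYZE
SELECT value FROM vertical_table WHERE col = 17 AND row = 123456;

-- a condition on row alone can't use (col, row); it would need e.g.:
--   CREATE INDEX idx_row ON vertical_table (row);
```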
It's slow due to several things happening all at once. There are a lot
of inserts and updates happening. There is periodically a bulk insert
of 500k - 1 mill rows happening. I'm doing a vacuum analyze every
hour due to the amount of transactions happening, and a vacuum full
every night. All this
Hi,
I have a question about sequences. I need a field to have values
with no holes in the sequence. However, the values do not need to be in
order.
My users will draw a number or numbers from the sequence and write to
the field. Sometimes, however, these sequence numbers will be discarded
Hi!
I'm from Italy, and sorry about my English...
I have a question that I know has already been discussed in the groups,
but I still have some doubts.
I have seen that it is technically possible to use OID as PRIMARY KEY and
as FOREIGN KEY, but is it correct to do so for the database's logical
integrity?
Is it
Thanks. This worked. This is exactly what I was looking for.
Stephan Szabo wrote:
On Tue, 12 Oct 2004, Ed Stoner wrote:
I am unable to use the "CREATE USER" command with numeric user names
(i.e. CREATE USER 35236 WITH PASSWORD '1234';). Is this a limitation or
a problem somewhere with how I hav
Hi,
I have a question about sequences. I need a field to have values with
no holes in the sequence. However, the values do not need to be in order.
My users will draw a number or numbers from the sequence and write to
the field. Sometimes, however, these sequence numbers will be discarded
(
I want to use bare numbers because that is how the users (students in
this case) are identified on the network and in the student information
system. They've been identified this way for over 20 years, so it would
be near impossible to change at this point (although it is not always
very conve
On Tue, 19 Oct 2004 13:33:07 -0400, Joseph.Dunleavy wrote:
> I am trying to download postgresql from one of the mirror sites. I get
> prompted for a username and password. I try anonymous login and my
> password and I get an error stating either the server doesn't support
> anonymous logins or
Hello !
I have two tables (which contains individual months' data). One of
them contains 500 thousand records and the other one about 40k, 8
columns. When I do a simple query on them individually it takes
milliseconds to complete (see gory details below). For some queries I
want to include data fr
Leonardo Francalanci <[EMAIL PROTECTED]> writes:
> Is there any documentation I can read about this?
The best concise documentation I know about is in the CVS-tip docs for
contrib/oid2name (reproduced below; the bit about tablespaces is
irrelevant to pre-8.0 versions, but the rest is accurate). I
"Roberts, Adam" <[EMAIL PROTECTED]> writes:
> So, my main question is, is it reasonable to say that a trans id
> wraparound failure could create a situation in which you could
> use/manipulate user data tables if you refer to the data tables directly
> but if you tried to use a util (such as pgdump
On Wed, 20 Oct 2004 08:00:55 +0100, Gary Doades <[EMAIL PROTECTED]> wrote:
> Unlike many other database engines the shared buffers of Postgres is
> not a private cache of the database data. It is a working area shared
> between all the backend processes. This needs to be tuned for number
> of conne
On Wed, Oct 20, 2004 at 01:54:04PM +0200, Sim Zacks wrote:
> It is very weird, I just tried both a group by and distinct and both
> of them still return the duplicates.
>
> I also tried a very simple union which didn't return any duplicates,
> both of these said, it is obviously not a problem with
double precision is inexact and therefore any query returning a field
of that type cannot be in a group by/distinct...
I switched it to type ::numeric(10,4) and it worked fine.
It was the system that automatically did the conversion for me, so I
will have to figure out why and keep that in mind f
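The workaround described, as a sketch with illustrative names:

```sql
-- casting the double precision column to a fixed precision
-- before DISTINCT gave stable, duplicate-free results:
SELECT DISTINCT amount::numeric(10,4) FROM ledger;
```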
It is very weird. I just tried both a group by and a distinct, and both
of them still return the duplicates.
I also tried a very simple union which didn't return any duplicates;
given both of these, it is obviously not a problem with union itself.
I just tried the query without the case statement that does
I am using 8.0 beta 1 on an RH 8 Linux server.
I have a union query that I am converting from access (where it
worked) and it is returning duplicates. The only difference between
the two rows is the Row field, which is returned automatically.
and an example of a row that it has returned duplicate
Speaking off-list with Zoltan, it appears this problem was *also*
related to nod32 antivirus. Just a different error message than we've
seen before. Seems nod32 is significantly worse than any other AV
products for postgresql...
//Magnus
I don't know. I just deduced that from an earlier situation where I knew
the size of the data, and noticed that the largest table was split up into
enough 1GB parts to fit that size ;)
Best regards,
Arjen
On 20-10-2004 10:14, Leonardo Francalanci wrote:
When a data file for a specific table (or ind
When a data file for a specific table (or index?) is larger than 1GB,
it's split up into several parts. This is probably a leftover from the
time when OSs used to have problems with large files.
Thank you.
Is there any documentation I can read about this?
When a data file for a specific table (or index?) is larger than 1GB,
it's split up into several parts. This is probably a leftover from the
time when OSs used to have problems with large files.
The file name, that number, is the OID of the table afaik. And the
postfix is of course the number in the o
On Wed, 2004-10-20 at 01:03, Kathiravan Velusamy wrote:
> Hello All,
> I am a newbie to PostgreSQL. I am using PostgreSQL 7.4.5 on
> HP-UX 11.11 PA and 11.23 PA.
> I have a problem with PostgreSQL Webmin (Webmin Version
> 1.070) testing in the update function.
> This
I got a table with oid 25459.
The file is 1073741824 bytes big.
I did some more inserts, and now I have this two new files:
size/name:
1073741824 25459.1
21053440 25459.2
What are they?
The 25459.1 looks exactly like the 25459.
I tried looking at the docs, but searching for ".1" or ".2" wasn't that
Kathiravan Velusamy wrote:
I created a database called "test" and created a table called "one"
in that DB, which contains a field named "Name" with varchar(10) as its
type and allows NULL values.
The issue here is that you have created a column "Name" with quotes,
which means it is case-sensitive.
"SQ
> "Henry Combrinck" <[EMAIL PROTECTED]> writes:
>
>> The above works fine - the index is used. However, extend the where
>> clause with an extra line (say, col1 = 9) and the index is no longer used.
>
> Do
>
> explain analyze select ...
>
> with both versions and send the results (preferably wit
Hello All,
I am a newbie to PostgreSQL. I am using PostgreSQL 7.4.5 on HP-UX 11.11 PA
and 11.23 PA.
I have a problem with PostgreSQL Webmin (Webmin Version 1.070) testing in
the update function.
This problem exists only when I create a new database through we
On 19 Oct 2004 at 17:35, Josh Close wrote:
> Well, I didn't find a whole lot in the list-archives, so I emailed
> that list whith a few more questions. My postgres server is just
> crawling right now :(
>
Unlike many other database engines the shared buffers of Postgres is
not a private cache o