Mike Charnoky wrote:
Scott Marlowe wrote:
On Mon, Mar 9, 2009 at 8:21 PM, Mike Charnoky wrote:
The random sampling query is normally pretty snappy. It usually takes on
the order of 1 second to sample a few thousand rows of data out of a few
million. The sampling is consistently quick, too. However, on some days
Gregory Stark wrote:
Tom Lane writes:
Mike Charnoky writes:
The sampling query which runs really slow on some days looks something
like this:
INSERT INTO sampled_data
(item_name, timestmp, ... )
SELECT item_name, timestmp, ...
FROM raw_data
WHERE timestmp >= ? and times
vacuum running during this time.
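The archived query above is cut off; a hedged reconstruction of its likely shape follows (the column list, date literals, and the ORDER BY random() sampling technique are assumptions, not confirmed by the thread):

    -- Sketch only: pull a bounded random sample out of one day's window.
    INSERT INTO sampled_data (item_name, timestmp)
    SELECT item_name, timestmp
    FROM raw_data
    WHERE timestmp >= '2009-03-08 00:00:00-07'
      AND timestmp <  '2009-03-09 00:00:00-07'
    ORDER BY random()
    LIMIT 5000;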
Interesting idea to issue the EXPLAIN first... I will see if I can
instrument the sampling program to do this.
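One hedged way to instrument that: run EXPLAIN with the same predicate immediately before the real query and log the output, so plans from fast and slow days can be compared (column names assumed, as above):

    -- Sketch: capture the plan the sampling query is about to use.
    EXPLAIN
    SELECT item_name, timestmp
    FROM raw_data
    WHERE timestmp >= '2009-03-08 00:00:00-07'
      AND timestmp <  '2009-03-09 00:00:00-07';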
Thanks for your help Tom.
Mike
Tom Lane wrote:
Mike Charnoky writes:
The sampling query which runs really slow on some days looks something
like
ta of the data (in this case, where "item_name=something"),
not just between timestamp ranges. Guess I'll just have to try kicking
up the work_mem for that query.
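work_mem can be raised for just that session rather than server-wide, along these lines (the value is illustrative only):

    -- Sketch: per-session override; reverts to the configured default on RESET.
    SET work_mem = '128MB';
    -- ... run the sampling query ...
    RESET work_mem;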
Thanks so much for your input.
Mike
Scott Marlowe wrote:
On Mon, Mar 9, 2009 at 1:55 PM, Mike Charnoky wrote:
Hello,
I'm looking for some insight on an intermittent PostgreSQL performance
problem that has been very troublesome. Using PG 8.3.5 on a server
running CentOS 5 2.6.18-8.el5 (Dual Xeon 2.00 GHz, 4 GB RAM, RAID-10
SCSI 600GB array).
The problem in a nutshell is this: on some days, a nightly sam
Thanks! That did the trick. For posterity, I was able to do the final
conversion using:
alter table mytable alter column mycolumn type float4[]
  using string_to_array(trim(both '[]' from textin(nbf4a_out(mycolumn))), ',')::float4[];
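A quick sanity check after such a conversion might look like this (table and column names as above; array_upper is available on these older releases):

    -- Sketch: spot-check a few converted rows and their element counts.
    SELECT mycolumn, array_upper(mycolumn, 1) AS n_elems
    FROM mytable
    LIMIT 5;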
Mike
Alvaro Herrera wrote:
> Mik
missing link to eventually get the data to float4[].
Mike
Alvaro Herrera wrote:
> Mike Charnoky wrote:
>> Our database schema was designed before postgresql supported arrays and
>> contains a custom type which is basically a float4 array. I would like
>> to clean things up a
Our database schema was designed before postgresql supported arrays and
contains a custom type which is basically a float4 array. I would like
to clean things up and convert the custom datatype to float4[], as this
would obviate the need for us to compile a custom shared object. We are
hitting pr
In this usage scenario, doesn't the new HOT (heap only tuples) feature
of PG8.3 help, in terms of the DB requiring less VACUUM maintenance?
I am similarly performing a huge number of inserts, followed by a huge
number of updates to fill in a few null fields. The data is indexed by
insert time. M
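For reference, the 8.3 statistics views can show whether HOT is actually kicking in for such updates (the view and columns exist from 8.3 on; the table name is assumed):

    -- Sketch: compare HOT updates to total updates (PostgreSQL 8.3+).
    SELECT relname, n_tup_upd, n_tup_hot_upd
    FROM pg_stat_user_tables
    WHERE relname = 'mytable';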
is
inserting data into this table. I have checkpoint_segments set to 64 so
that pg is not constantly thrashing the disk with writes. The
transaction log is on a separate disk.
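For context, that setting can be verified from psql, and corresponds to a single postgresql.conf line (checkpoint_segments is the pre-9.5 knob; each segment is 16MB of WAL):

    -- Sketch: confirm the running value.
    SHOW checkpoint_segments;
    -- postgresql.conf (8.x): checkpoint_segments = 64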
Mike
Bill Moran wrote:
> In response to Mike Charnoky <[EMAIL PROTECTED]>:
>
>> This is strange...
evtime <= '2007-09-30 00:00:00-07'::timestamp with time zone))
Total runtime: 111201.000 ms
(4 rows)
Time: 111298.695 ms
Mike
Gregory Stark wrote:
> "Mike Charnoky" <[EMAIL PROTECTED]> writes:
>
>> I altered the table in question, with "set statistic
0:00:00-07'::timestamp with time zone))
(3 rows)
Time: 131.559 ms
mydb# execute stmt;
count
------
14150928
(1 row)
Time: 101721.346 ms
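The PREPARE that pairs with "execute stmt" above is not in the archived fragment; its likely shape, reconstructed as a hedged guess from the surrounding timestamps (table name assumed):

    -- Sketch only: the real statement text is truncated in the archive.
    PREPARE stmt AS
    SELECT count(*)
    FROM mytable
    WHERE evtime >= '2007-09-26 00:00:00-07'
      AND evtime <  '2007-09-27 00:00:00-07';
    EXECUTE stmt;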
Mike
Gregory Stark wrote:
> "Alban Hertroys" <[EMAIL PROTECTED]> writes:
>
>> Mike Charnoky wrote:
>>> With respect
2007-09-27 00:00:00-07'::timestamp with time zone))
(3 rows)
Mike
Gregory Stark wrote:
> "Alvaro Herrera" <[EMAIL PROTECTED]> writes:
>
>> A. Kretschmer wrote:
>>> On Fri, 28.09.2007 at 11:56:46 -0400, Mike Charnoky wrote the following:
>>>>
The autovacuum is turned on. Since this is pg8.1, I don't know when the
table was actually last vacuumed. I *did* run analyze on the table,
though. Also, nothing has been deleted in this table... so vacuum
should have no effect, right?
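On 8.2 and later this can be checked directly; 8.1, as used here, does not record it (table name assumed):

    -- Sketch: vacuum/analyze timestamps per table (PostgreSQL 8.2+).
    SELECT relname, last_vacuum, last_autovacuum, last_analyze
    FROM pg_stat_user_tables
    WHERE relname = 'mytable';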
Mike
Sean Davis wrote:
> Mike Charnoky wrote:
Hi,
I am still having problems performing a count(*) on a large table. This
is a followup from a recent thread:
http://archives.postgresql.org/pgsql-general/2007-09/msg00561.php
Since the last time these problems happened, we have tweaked some
postgresql config parameters (fsm, etc). I also re
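When an exact count(*) is too slow, the planner's row estimate is a cheap stand-in (approximate, refreshed by VACUUM/ANALYZE; table name assumed):

    -- Sketch: estimated row count from the catalog instead of a full scan.
    SELECT reltuples::bigint AS estimated_rows
    FROM pg_class
    WHERE relname = 'mytable';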
Thanks, recreating the table solved my problems. Our team is working on
implementing some performance tuning based on other recommendations from
the list (FSM, etc).
Mike
Joshua D. Drake wrote:
> At this point, you are in a world of hurt :). If you stop a vacuum you
> have created a huge mess o
I have never heard that stopping a vacuum is problematic... I have had
to do this many times in the past without any adverse effects. Is there
some sort of documentation which elaborates on this issue?
For the record, I did a VACUUM ANALYZE, not FULL. Now that I think
about it, I probably should
Hi,
I'm using PostgreSQL 8.1.8 and am having trouble with a table which
contains a large amount of data. Data is constantly being inserted into
the table, roughly a million inserts per hour at peak. The table
currently has about 100 million entries which take up 14G of space (24G
with indices).
pg_locks.relation=pg_class.oid and pg_locks.database=
(SELECT datid from pg_stat_database where datname = 'my_db_name');
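The full form of that lock query, reconstructed as a hedged sketch from the fragment above (the SELECT list is an assumption):

    -- Sketch: current locks on relations in one database.
    SELECT pg_class.relname, pg_locks.mode, pg_locks.granted
    FROM pg_locks, pg_class
    WHERE pg_locks.relation = pg_class.oid
      AND pg_locks.database =
          (SELECT datid FROM pg_stat_database WHERE datname = 'my_db_name');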
Mike
Tom Lane wrote:
> Mike Charnoky <[EMAIL PROTECTED]> writes:
>> First, a question: For a PG8.1 database, is it preferable to use the new
Hi, I'm using PostgreSQL 8.1.4 and am having some problems with the new
disable/enable trigger command.
First, a question: For a PG8.1 database, is it preferable to use the new
"alter table disable|enable trigger" command as opposed to the old
method of setting pg_class.reltriggers = 0?
I'm assum
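For reference, the two approaches being compared would look like this (a sketch; the table name is assumed, and the old catalog method needs superuser rights and is generally discouraged):

    -- New 8.1 syntax:
    ALTER TABLE mytable DISABLE TRIGGER ALL;
    -- ... bulk operation ...
    ALTER TABLE mytable ENABLE TRIGGER ALL;
    -- Old method, via direct catalog update:
    -- UPDATE pg_class SET reltriggers = 0 WHERE relname = 'mytable';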
that dataPos is
being treated as an integer somewhere.
Mike Charnoky
Tom Lane wrote:
Mike Charnoky <[EMAIL PROTECTED]> writes:
I am currently using PostgreSQL v7.3.4 on a RedHat 8.0 system (2.4.23 kernel)
using the ext3 filesystem. I am experiencing problems when performing a
pg_restore using
ataPos
2: tctx->dataPos = 1785996817
BTW, the file size is: 2361910772 bytes, which exceeds the signed 32-bit limit of 2147483647 bytes, consistent with dataPos overflowing.
Mike Charnoky
Tom Lane wrote:
Mike Charnoky <[EMAIL PROTECTED]> writes:
I am currently using PostgreSQL v7.3.4 on a RedHat 8.0 system (2.4.23 kernel)
using the ext3 filesystem. I am experiencing problems when p
this is the way to support large files). It
is my understanding that ext3 supports file sizes up to 1T. The restore worked
fine when the database was smaller. Any ideas?
Thanks,
Mike Charnoky