On 19 Aug 2007, at 06:34, Felix Ji wrote:
Hi all,
I am using PQexecParams() to SELECT about 3 million records in C++,
and it takes several minutes to complete, with memory usage
dramatically increased (about 200 MB).
It seems that when I use PQexecParams(), I can't use the query result
before all of it has been received.
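A common workaround for this (a minimal sketch, not from the original thread; table and column names are made up) is to fetch the result through a cursor in batches, so the client never has to buffer all 3 million rows at once:

BEGIN;
-- Run the big query through a cursor instead of directly.
DECLARE big_cur CURSOR FOR
    SELECT id, payload FROM tbl_big;
-- Each FETCH (issued via PQexec/PQexecParams) returns at most 10000
-- rows; repeat until it returns zero rows, then clean up.
FETCH 10000 FROM big_cur;
CLOSE big_cur;
COMMIT;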
Hello list,
System is running linux kernel 2.6.18 with postgres 8.2.4 and 1GB ram.
I have a 50 GB database, with the biggest table taking about 30 GB
and holding about 200 million rows.
I've already started to redesign the database to avoid the huge number
of rows in this big table but I'm st
Hello List,
I wonder if it is possible to make data selected in one query NOT
accessible for a SELECT in another query?
The thing is that I have a cleaning script that selects some rows
from a table and processes them.
During that processing I don't want these rows to end up in another
re
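One standard approach (a hedged sketch with made-up table and column names, not the thread's confirmed answer) is to lock the rows under processing with SELECT ... FOR UPDATE; a second session running the same statement then blocks on those rows instead of picking them up, though plain MVCC reads still see them:

BEGIN;
-- Lock the candidate rows; a concurrent FOR UPDATE waits here.
-- (9.5+ also offers FOR UPDATE SKIP LOCKED for queue patterns.)
SELECT id FROM tbl_queue WHERE processed = false FOR UPDATE;
-- ... process the rows, then mark them done ...
UPDATE tbl_queue SET processed = true WHERE processed = false;
COMMIT;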
Hello list,
I wonder if it is possible to disable triggers for a single transaction.
I know I can disable triggers per table, but then I need to disable
all triggers on all recursively referenced tables before running my query.
Can I do:
BEGIN TRANSACTION;
DISABLE TRIGGERS;
DELETE FROM tbl_foo WHERE ID > 5;
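There is no DISABLE TRIGGERS statement, but from 8.3 on a (superuser) session can get a close equivalent for one transaction with session_replication_role; a hedged sketch (note it also skips foreign-key triggers, so integrity checking is on you):

BEGIN;
-- SET LOCAL reverts automatically at COMMIT or ROLLBACK.
SET LOCAL session_replication_role = replica;  -- ordinary ENABLE triggers no longer fire
DELETE FROM tbl_foo WHERE ID > 5;
COMMIT;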
Hello list,
I have a table with 135 rows and it still takes up about 360MB with
only small columns. Details below.
db=# vacuum full tbl_archive;
VACUUM
db=# select * from
pg_size_pretty(pg_total_relation_size('tbl_archive'));
pg_size_pretty
360 MB
(1 row)
db=# select *
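Since pg_total_relation_size() includes indexes and TOAST, a useful next step (a sketch reusing the table name from the post) is to see where the space lives; note that on 8.2, VACUUM FULL does not shrink bloated indexes, which need a separate REINDEX:

select pg_size_pretty(pg_relation_size('tbl_archive'))       AS heap_only,
       pg_size_pretty(pg_total_relation_size('tbl_archive')) AS incl_indexes_and_toast;
-- If most of the space turns out to be index bloat:
REINDEX TABLE tbl_archive;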
On 6 Dec 2007, at 15:25, Bill Moran wrote:
Henrik <[EMAIL PROTECTED]> wrote:
Hello list,
I have a table with 135 rows and it still takes up about 360MB with
only small columns. Details below.
db=# vacuum full tbl_archive;
VACUUM
db=# select * from
pg_size_pretty(pg_total_relatio
On 6 Dec 2007, at 18:12, Tom Lane wrote:
Henrik Zagerholm <[EMAIL PROTECTED]> writes:
On 5 Dec 2007, at 16:25, Tom Lane wrote:
Henrik Zagerholm <[EMAIL PROTECTED]> writes:
-> Bitmap Index Scan on tbl_archive_idx1
(cost=0.00..1150.47 rows=8 width=0) (actual time=1505.456..1505.456
r
On 6 Dec 2007, at 22:18, Alvaro Herrera wrote:
Gauthier, Dave wrote:
Future Enhancement?
If the column's new value can fit in the space already being used
by the
existing value, just change the column value in place and leave the
record alone. Would reduce the need for vacuum in many cases.
On 6 Dec 2007, at 20:26, Alvaro Herrera wrote:
Henrik wrote:
I think I have a clue why it's so off. We update a value in that
table about
2-3 million times per night, and as each update creates a new row it
becomes
bloated pretty fast. The table had a size of 765 MB including
indexes and
after
Hello list,
I know backup of pg is a well-discussed topic, with solutions ranging
from simple pg_dump to more advanced PITR with Slony.
Even though I've studied most of them I can't really decide the best
solution for a new situation and would be grateful for any inputs on
this.
The situ
On 21 Jan 2008, at 16:18, Magnus Hagander wrote:
On Mon, Jan 21, 2008 at 04:05:13PM +0100, Henrik wrote:
Hello list,
I know backup of pg is a well-discussed topic, with solutions ranging
from simple pg_dump to more advanced PITR with Slony.
Even though I've studied most of them I can
Hi list,
I'm using 8.3 and I've started looking at the new checkpoint features.
As a starter, does anyone have any clues on how to analyse this:
db=# select * from pg_stat_bgwriter;
checkpoints_timed | checkpoints_req | buffers_checkpoint |
buffers_clean | maxwritten_clean | buffers_backend | b
On 4 Mar 2008, at 13:45, Greg Smith wrote:
On Tue, 4 Mar 2008, Henrik wrote:
As a starter, does anyone have any clues on how to analyse this:
db=# select * from pg_stat_bgwriter;
checkpoints_timed | checkpoints_req | buffers_checkpoint |
buffers_clean | maxwritten_clean | buffers_backend
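One way to read these counters (a hedged sketch against the 8.3 view, assuming the default 8 kB block size; the view was reorganized in later major versions) is to compute how many checkpoints were forced by WAL volume rather than by the timeout, and how much each component wrote:

SELECT checkpoints_timed,
       checkpoints_req,   -- forced, typically by running out of checkpoint_segments
       round(100.0 * checkpoints_req
             / nullif(checkpoints_timed + checkpoints_req, 0), 1) AS pct_forced,
       pg_size_pretty(buffers_checkpoint * 8192::bigint) AS by_checkpoints,
       pg_size_pretty(buffers_clean      * 8192::bigint) AS by_bgwriter,
       pg_size_pretty(buffers_backend    * 8192::bigint) AS by_backends
FROM pg_stat_bgwriter;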
Hi List,
We are running a couple of 8.3.1 servers and they are growing a lot.
I have the standard autovacuum settings from the 8.3.1 installation
and we are inserting about 2-3 million rows every night and cleaning
out just as many every day.
The database size rose to 80GB but after a dump/
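When autovacuum cannot keep up with millions of inserts and deletes a day, a common adjustment (a hedged postgresql.conf sketch with illustrative values for the 8.3-era parameters) is to make it trigger earlier and throttle it less:

autovacuum_vacuum_scale_factor = 0.05   # vacuum at 5% dead rows instead of the 20% default
autovacuum_vacuum_cost_delay = 10ms     # default 20ms; lower lets vacuum work faster
autovacuum_max_workers = 3              # 8.3 default; raise if many large tables stay busy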
On 3 Jun 2008, at 15:23, Bill Moran wrote:
In response to Henrik <[EMAIL PROTECTED]>:
We are running a couple of 8.3.1 servers and they are growing a lot.
I have the standard autovacuum settings from the 8.3.1 installation
and we are inserting about 2-3 million rows every night and cleani
Hi list,
I have a table with a lot of file names in it (approx. 3 million)
in an 8.3.1 db.
This simple query shows that the statistics are way off, and I can't
get them right even when I raise the statistics target to 1000.
db=# alter table tbl_file alter file_name set statistics 1000;
ALTER
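SET STATISTICS only changes the per-column target; the statistics themselves are rebuilt by the next ANALYZE, so a quick check (a sketch using the names from the post) is:

ALTER TABLE tbl_file ALTER COLUMN file_name SET STATISTICS 1000;
ANALYZE tbl_file;
-- See what the planner now believes about the column:
SELECT n_distinct, null_frac, avg_width
FROM pg_stats
WHERE tablename = 'tbl_file' AND attname = 'file_name';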
On 3 Jun 2008, at 23:31, Joris Dobbelsteen wrote:
Henrik wrote:
Hi list,
I have a table with a lot of file names in it (approx. 3
million) in an 8.3.1 db.
This simple query shows that the statistics are way off, and I
can't get them right even when I raise the statistics target to 1000
On 3 Jun 2008, at 16:06, Scott Marlowe wrote:
On Tue, Jun 3, 2008 at 7:41 AM, Henrik <[EMAIL PROTECTED]> wrote:
To be able to handle versions we always insert new folders even
though
nothing has changed, but it seemed like the best way to do it.
E.g.
First run:
tbl_file 500k new
hello,
I am trying to recover data from an incomplete PostgreSQL 8.3 backup. I have
the contents of data/base/ (an fs snapshot), but some tablespaces
(indexes, some tables) were in other locations.
On the new server (clean install) I created a database, replaced the base
folder, and renamed the database folder to match
y?
If so how would I do this?
Regards,
Henrik
Hello list,
I'm using pgsql 8.2.4 and I have this query which is sooo slow, but I
can't seem to figure out why.
It is using the appropriate indexes but there is something wrong with
the nested loops.
I'm running autovacuum so the statistics should be up to date. I've
increased the statistics
On 24 Jul 2007, at 16:10, Tom Lane wrote:
Henrik Zagerholm <[EMAIL PROTECTED]> writes:
I'm using pgsql 8.2.4 and I have this query which is sooo slow, but I
can't seem to figure out why.
Seems the core of the problem is the misestimation
05 04:00:58.815 CEST HINT: Consider increasing the
configuration parameter "checkpoint_segments".
Right now I use the default setting.
Is there any point in raising these at all if I can't move them to a
separate disk?
Regards,
Henrik
--
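Raising checkpoint_segments is reasonable even without a separate WAL disk: it spaces checkpoints further apart at the cost of more disk for WAL and a longer crash recovery. A hedged postgresql.conf sketch with illustrative 8.x-era values (the parameter was replaced by max_wal_size in 9.5):

checkpoint_segments = 16     # default 3; each segment is 16 MB of WAL
checkpoint_timeout = 15min   # default 5min
checkpoint_warning = 30s     # still warns if checkpoints come too close together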
weird thing is the seq scan on
tbl_file_structure and also the insane calculated cost of 100 000 000
on some tables.
Explain analyze below with both seq scan on and off.
Regards,
Henrik
EXPLAIN ANALYZE SELECT pk_file_id, file_name_in_tar, tar_name,
file_suffix, fk_tar_id, tar_compressed
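The calculated cost of 100 000 000 is not a real estimate: when a plan type is disabled, the 8.x planner adds a fixed disable_cost of 1.0e8 to it so that it is chosen only as a last resort. A quick way to see the effect (a sketch, reusing the table name from the post):

SET enable_seqscan = off;  -- any plan containing a seq scan now carries ~1e8 extra cost
EXPLAIN SELECT count(*) FROM tbl_file_structure;
SET enable_seqscan = on;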
On 6 Aug 2007, at 15:07, Gregory Stark wrote:
"Henrik Zagerholm" <[EMAIL PROTECTED]> writes:
Hi list,
I have a weirdly acting query which simply retrieves some files
stored in a db
which are related to a specific archive and also have a size lower
than 1024
bytes.
On 6 Aug 2007, at 17:31, Tom Lane wrote:
Henrik Zagerholm <[EMAIL PROTECTED]> writes:
WHERE file_indexed IS FALSE
AND file_copied IS TRUE
Hello list,
Usually I can see what is wrong with queries but I can't figure out
why this query is slow.
Below is query and explain analyze output.
Any help would be appreciated.
EXPLAIN ANALYZE SELECT computer_name
FROM tbl_computer
INNER JOIN tbl_share ON pk_computer_id = tbl_share.fk_comput
On 5 Dec 2007, at 16:25, Tom Lane wrote:
Henrik Zagerholm <[EMAIL PROTECTED]> writes:
Usually I can see what is wrong with queries but I can't figure out
why this query is slow.
Seems the main problem is here:
-> Bitmap Index Scan on tbl_archive_idx1
(cost
g_index.indkey)
AND indisprimary;
The purpose of this rule is to retrieve the primary key name for the
given table name (relname) stored in the table 'history_relation', on
SELECT only.
Kind regards,
Henrik
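For reference, the catalog lookup the fragment above appears to be built on is the standard pg_index query for a table's primary-key column(s); a self-contained sketch (the table name is a placeholder):

SELECT a.attname
FROM pg_index i
JOIN pg_attribute a ON a.attrelid = i.indrelid
                   AND a.attnum = ANY(i.indkey)
WHERE i.indrelid = 'some_table'::regclass
  AND i.indisprimary;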
On 12/05/2012 04:07 PM, Tom Lane wrote:
Henrik Kuhn writes:
can somebody give me some insights why the creation of this rule fails
with 'ERROR: SELECT rule's target list has too many entries'?
Probably that history_relation has fewer than three columns? But if
you want some
the internals of some
pg_tables? If yes, it should be possible to update this entry and call
ALTER EXTENSION UPDATE again.
What I do not want to do is a DELETE/CREATE EXTENSION process.
Any suggestions?
Regards,
Henrik
On 12/05/2012 04:58 PM, Tom Lane wrote:
Henrik Kuhn writes:
On 12/05/2012 04:07 PM, Tom Lane wrote:
Henrik Kuhn writes:
can somebody give me some insights why the creation of this rule fails
with 'ERROR: SELECT rule's target list has too many entries'?
Probably that histo
tsquery object in the SELECT clause
and therefore also in the GROUP BY clause.
BUT when executing the query I get the following error: "could not
identify an ordering operator for type tsquery"
How do I solve this?
Regards,
Henrik
ect?
How do I solve this?
Use a subselect:
SELECT *, headline(..., to_tsquery()) FROM
(
SELECT ... GROUP BY ...
) AS sub;
Thanks I'll try it.
BTW, for performance reasons, it might be useful to use LIMIT in the
inner select, since the headline function is slow enough.
You think?! ;)
Thanks agai
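Putting the two suggestions together, a hedged sketch with made-up table and column names (ts_headline/ts_rank are the 8.3+ built-in names; contrib tsearch2 called them headline() and rank()):

SELECT sub.title,
       ts_headline('english', sub.body,
                   to_tsquery('english', 'postgres')) AS snippet
FROM (
    SELECT title, body
    FROM docs
    WHERE vectors @@ to_tsquery('english', 'postgres')
    ORDER BY ts_rank(vectors, to_tsquery('english', 'postgres')) DESC
    LIMIT 10   -- run the slow headline function on only these rows
) AS sub;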
Simple question:
How do I uninstall pg_trgm?
I can't seem to find an uninstall sql script for it like
uninstall_tsearch2.sql.
Regards,
Henrik
I found an uninstall script in the svn repos...
cheers,
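On later releases the script ships with contrib, and from 9.1 on pg_trgm is a regular extension; a hedged sketch of both routes (the script path varies by installation):

-- 9.1 and later:
DROP EXTENSION IF EXISTS pg_trgm;
-- contrib-script era, from psql (path is an example):
\i /usr/share/postgresql/contrib/uninstall_pg_trgm.sql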
On 7 Dec 2006, at 10:21, Henrik Zagerholm wrote:
Simple question:
How do I uninstall pg_trgm?
I can't seem to find an uninstall sql script for it like
uninstall_tsearch2.sql.
Regards,
Henrik
public.snb_ru_init(internal) does not exist
Command was: ALTER FUNCTION public.snb_ru_init(internal) OWNER
TO postgres;
WARNING: errors ignored on restore: 2
Regards,
Henrik
y old database. Made a new
dump and voilà it worked.
I don't have that much Russian text so I hope the removal of the SP
won't impact the functionality in the future.. =P
Cheers,
henrik
On 7 Dec 2006, at 11:14, Gregory S. Williamson wrote:
Henrik --
I have only dabbled in tsearch2
1. Convert the PDF to a text file with e.g. xpdf.
2. Insert the parsed text into a table of your choice.
3. Make vectors from the text.
Cheers,
On 11 Dec 2006, at 18:23, Philip Johnson wrote:
Do you know what kind of table I should use?
Is there a shell script or a php script that does the work?
regards
-M
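A minimal sketch of steps 2-3 above (hypothetical table and column names; to_tsvector is built in from 8.3, and lives in contrib/tsearch2 before that):

CREATE TABLE docs (
    id      serial PRIMARY KEY,
    body    text,      -- the parsed PDF text
    vectors tsvector
);
UPDATE docs SET vectors = to_tsvector('english', body);
CREATE INDEX docs_vectors_idx ON docs USING gin(vectors);  -- GIN available from 8.2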
already fixed, so the question is
more related to files like postgres.bki and such.
Regards,
Henrik
the datafiles on raid devices and only keep the
binaries on a DiskOnModule device.
Cheers,
Henrik
On 15 Dec 2006, at 13:49, Ragnar wrote:
On Fri, 2006-12-15 at 11:41 +0100, Henrik Zagerholm wrote:
Hello list,
I'm working on a project which tries to run PostgreSQL as an embedded
database.
I
I think I need to specify what I mean by embedded.
It's not that we are trying to embed it into an application.
It is just run from a flash disk, and the datafiles are put on
standard raid-attached disks.
It's an embedded device, not an embedded application. :)
Cheers,
Henrik
On 15 Dec 2006, at 16:30
On 15 Dec 2006, at 17:15, Ragnar wrote:
On Fri, 2006-12-15 at 16:59 +0100, Henrik Zagerholm wrote:
I think I need to specify what I mean by embedded.
It's not that we are trying to embed it into an application.
It is just run from a flash disk, and the datafiles are put on
standard raid-attached disks
On 15 Dec 2006, at 17:40, Chris Browne wrote:
[EMAIL PROTECTED] (Tom Lane) writes:
Henrik Zagerholm <[EMAIL PROTECTED]> writes:
Postgres has been designed as a server, and lots of
implementation details might not make sense in an embedded
context. You might be better served by SQLite, o
definition.
NOTICE: argument type gtsq is only a shell
psql (PostgreSQL) 8.2.0
Linux santa 2.6.8-2-386 #1 Fri Jul 29 18:01:33 CEST 2005 i686 GNU/Linux
Regards,
Henrik
nteractive/examples.html
Searching after "tsearch2"
An error occured while searching.
Searching after "tsearch2full"
An error occured while searching.
Why is it so? =)
Cheers,
Henrik
On 19 Dec 2006, at 00:47, Joshua D. Drake wrote:
Hello,
search.postgresql.org is now served directly from
t set vectors = to_tsvector(fulltext_text);
Which takes about 80 seconds to complete.
The hardware is a Pentium 4 2.8GHz with 1GB HyperX memory.
Is this normal? What can I tweak in postgresql.conf to speed up big
to_tsvector()?
Regards,
Henrik
---(end of broa
On 21 Feb 2007, at 15:44, Tom Lane wrote:
Markus Schiltknecht <[EMAIL PROTECTED]> writes:
Henrik Zagerholm wrote:
Is this normal? What can I tweak in postgresql.conf to speed up big
to_tsvector()?
Hm.. seems not too unreasonable to me.
Take a look at the stemmers or dictionaries in
Hello list,
I'm having a hard time understanding why my query is not using my
indices when I change a field in the WHERE clause.
Please could someone shed some light on this.
Below is both queries and explain analyze for the different queries.
Regards,
Henrik
QUERY 1. Using
On 23 Mar 2007, at 12:33, Michael Fuhr wrote:
On Fri, Mar 23, 2007 at 11:43:54AM +0100, Henrik Zagerholm wrote:
I'm having a hard time understanding why my query is not using my
indices when I change a field in the WHERE clause.
The row count estimate for fk_filetype_id = 83 is high by an order of magnitude:
On 23 Mar 2007, at 13:34, Michael Fuhr wrote:
On Fri, Mar 23, 2007 at 12:41:58PM +0100, Henrik Zagerholm wrote:
On 23 Mar 2007, at 12:33, Michael Fuhr wrote:
The row count estimate for fk_filetype_id = 83 is high by an order
of magnitude:
Bitmap Index Scan on tbl_file_idx6 (cost=0.00..25.65 rows
e the statistics on?
Thanks again!!!
On 23 Mar 2007, at 14:04, Henrik Zagerholm wrote:
On 23 Mar 2007, at 13:34, Michael Fuhr wrote:
On Fri, Mar 23, 2007 at 12:41:58PM +0100, Henrik Zagerholm wrote:
On 23 Mar 2007, at 12:33, Michael Fuhr wrote:
The row count estimate for fk_filetype_id = 83 is high
rows=172229 loops=1). Total
query time 24116 ms.
After that I turned off seqscans and now the query time is 1257 ms.
I've tried raising the statistics on pk_structure_id and
fk_structure_id but to no avail.
Regards,
henrik
=== SETTING ENABLE_SEQSCAN =
On 4 May 2007, at 18:09, Tom Lane wrote:
Henrik Zagerholm <[EMAIL PROTECTED]> writes:
I have a SELECT query that uses seq scans instead of an index scan
even though the index scan is faster.
Try 8.2, it's a bit smarter about the costs of repeated indexscans
on the inside of a nest
try something like this:
select attname from pg_class c, pg_attribute a
where c.relname = 'your_tablename' and a.attrelid = c.oid and a.attnum > 0;
(Joining on pg_class.oid rather than relfilenode is safer; the two can
diverge after a table rewrite.)
--
With kind regards
Henrik Steffen
Managing Director
top concepts Internetmarketing GmbH
Am Steinkamp 7 - D-21684 Stade
e or while processing the request.
'.
pg_dump failed on obeladmin, exiting
"counter" is a sequence and it's owner is postgres
Any help is appreciated
--
Mvh. / Kind regards
Henrik Farre
http://www.rockhopper.dk
Please reconnect to the database system and repeat your query.
The notice is repeated 8 times.
I have set "ulimit -c coredumpsize", but I don't get a core dump.
--
Mvh. / Kind regards
Henrik Farre
http://www.rockhopper.dk
Seq Scan on basiseintrag (cost=1.00..100024064.96 rows=1
width=264) (actual time=1028.468..1092.537 rows=1 loops=1)
Filter: ((kundennummer ~~ '0101%'::text) AND (firma ~~
ing issue.
--
With kind regards
Henrik Steff
exes now.
Would changing the lc_collate setting to 'C' solve this issue as well?
Thanks again,
--
With kind regards
Henrik Steffen
Managing Director
top concepts Internetmarketing GmbH
Am Steinkamp 7 - D-21684 Stade - Germany
and am
going to terminate your database system connection and exit.
Please reconnect to the database system and repeat your query.
Haven't had this for quite a while now (using 7.3.3)
Any idea?
In my opinion this should not be possible...
--
With kind regards
Henrik St
Yes, of course, I already did this.
What I was aiming at was that Postgres
should normally not insert a duplicate value
into a unique index, should it?
Isn't this a bug?
--
With kind regards
Henrik Steffen
Managing Director
top concepts Internetmarketing GmbH
Am Steinkamp 7 - D-
yes, there were duplicates
--
With kind regards
Henrik Steffen
Managing Director
top concepts Internetmarketing GmbH
Am Steinkamp 7 - D-21684 Stade - Germany
http://www.topconcepts.com Tel. +49 4141 991230
mail: [EMAIL
since
7.3.3 we have never had it again, on any of our servers.
--
With kind regards
Henrik Steffen
Managing Director
top concepts Internetmarketing GmbH
Am Steinkamp 7 - D-21684 Stade - Germany
http://www.topconcepts.com Tel. +49
what that could mean?
--
With kind regards
Henrik Steffen
Managing Director
top concepts Internetmarketing GmbH
Am Steinkamp 7 - D-21684 Stade - Germany
http://www.topconcepts.com Tel. +49 4141 991230
mail: [EMAIL PROTECTED]
shouldn't it be necessary to prevent postgres from inserting
a 536870936-byte value into an 8192-byte column?
I am not sure if this (again) is a hardware issue...
--
With kind regards
Henrik Steffen
Managing Director
top concepts Internetmarketing GmbH
Am St
this class. And I don't want to specify it in the
program every time.
(Hope somebody understands this.)
Now I am trying to do the same in Oracle with OCI. Does anybody know if
and how it could be done?
Henrik
1| 5
1| 6
hope someone out there can help me :)
Henrik Pedersen
Denmark
Hi all
Can I make something like:
IF EXISTS(company_id_seq)
DROP SEQUENCE company_id_seq;
in a DDL script where I create all my tables?
/Henrik
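Since 8.2 this is built in as DROP ... IF EXISTS, so no procedural check is needed; a sketch using the sequence name from the post (the company table is made up):

DROP SEQUENCE IF EXISTS company_id_seq;
DROP TABLE IF EXISTS company CASCADE;  -- CASCADE also drops dependent objects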