"Takayuki Tsunakawa" <[EMAIL PROTECTED]> writes:
> BTW, why does the bgwriter try to open and write the pages of already
> dropped relations?
It does not; the problem is with stale fsync requests.
> If the relation being dropped has
> already been registered in the list of files to be fsynced, is
"carter ck" <[EMAIL PROTECTED]> writes:
> And you are definitely right when updating the table. The time it takes is
> getting longer and longer. When I do a select statement, the speed has also
> degraded.
Seems like you need a VACUUM in there somewhere...
regards, tom
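Tom's suggestion in practice: if autovacuum isn't running, a manual pass both reclaims the dead row versions left behind by the updates and refreshes the planner's statistics. A minimal sketch; the table name is a placeholder for the slow one:

```sql
-- reclaim dead tuples and update statistics in one pass
VACUUM ANALYZE my_messages;
```

On 8.1/8.2 you can also enable autovacuum in postgresql.conf so this happens without manual intervention.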
On Mon, 15.01.2007 at 10:57:17 -0800, Albert wrote the following:
> Hi all. I have this issue: I need to load a value for an integer field
> (with auto increment) which I implemented with a serial type and to
> avoid inserts with that value in this field. The task is for a
> distributed program I
carter ck wrote:
Hi, the rpt_generated is a boolean value.
And you are definitely right when updating the table. The time it takes
is getting longer and longer. When I do a select statement, the speed
has also degraded.
If you send us the query that is slow, then I'm sure you'll get some
s
On Jan 16, 2007, at 11:52 , Jerry LeVan wrote:
The locale function on the mac has everything set to "C"
and on the linux box it is all "en_US.UTF-8".
Is there a simple way to get the collating sequences to be
the same on both boxen?
My understanding is that to have the same ordering you need
Hi,
I am running 8.2.1 on my Mac OS X and my X86 Fedora Laptop...
I just noticed that on my mac:
levan=# select 'A' < 'a' ;
?column?
--
t
(1 row)
and on the linux box:
levan=# select 'A' < 'a' ;
?column?
--
f
(1 row)
The locale function on the mac has everything set to "C"
a
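The difference comes down to the cluster's lc_collate, which in 8.2 is fixed at initdb time. A quick way to confirm which collation each box is using:

```sql
-- check the cluster-wide collation setting
SHOW lc_collate;   -- 'C' on the Mac cluster, 'en_US.UTF-8' on the Linux one
-- Under C, string comparison is bytewise: 'A' (0x41) sorts before 'a' (0x61).
-- Under en_US.UTF-8 (glibc), lowercase sorts before uppercase of the same
-- letter, so 'a' < 'A' and the query above returns f.
```

To get identical ordering you would have to re-initdb one cluster with the other's locale (e.g. initdb --locale=C, data directory path as appropriate) and reload the data.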
"carter ck" <[EMAIL PROTECTED]> writes:
> Hi, the rpt_generated is a boolean value.
Is there any special reason, then, for not using a boolean type?
> And you are definitely right when updating the table. The time it takes is
> getting longer and longer. When I do a select statement, the speed
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> If you're in 8.1 or better, I'd suggest defining only two indexes, one
I'm on 8.1. Waiting for SuSE to update to 8.2... ;-)
> on (ci_id) and other on (document_client_id), and let the system deal
> with mixing them using the bitmap scan technique w
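Alvaro's two-index suggestion, sketched against the table quoted later in this thread (the index names here are made up for illustration):

```sql
CREATE INDEX dcc_ci_idx  ON ged.documents_clients_cis (ci_id);
CREATE INDEX dcc_doc_idx ON ged.documents_clients_cis (document_client_id);

-- On 8.1+ the planner can combine both indexes with a BitmapAnd
-- when a query constrains both columns; verify with EXPLAIN:
EXPLAIN SELECT * FROM ged.documents_clients_cis
 WHERE ci_id = 1 AND document_client_id = 2;
```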
Hi, the rpt_generated is a boolean value.
And you are definitely right when updating the table. The time it takes is
getting longer and longer. When I do a select statement, the speed has also
degraded.
Thanks.
From: Jorge Godoy <[EMAIL PROTECTED]>
To: "carter ck" <[EMAIL PROTECTED]>
CC: [
Jorge Godoy wrote:
> Thinking about how PostgreSQL is able to use composed indices should I create
> the reverse index ("CREATE INDEX something ON ged.documents_clients_cis
> (document_client_id, ci_id)") or I'd only be wasting disk and processing?
>
> The query can be done from either side (i.e.
From: "Magnus Hagander" <[EMAIL PROTECTED]>
> But yeah, that's probably a good idea. A quick look at the code says we
> should at least ask people who have this problem to give it a run with
> logging at DEBUG5 which should then log exactly what the errorcode was.
> Or are you seeing more places th
"carter ck" <[EMAIL PROTECTED]> writes:
> Hi,
>
> Thanks for reminding me. And the actual number of records is 100,000.
>
> The table is as following:
You forgot the EXPLAIN ANALYZE output...
> Table my_messages
>
> mid
carter ck wrote:
>
> Hi,
>
> Thanks for reminding me. And the actual number of records is 100,000.
>
> The table is as following:
And the query?
--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Kevin Field <[EMAIL PROTECTED]> writes:
> http://archives.postgresql.org/pgsql-hackers/2004-01/msg00861.php
>
> (Just to dredge up another old thread...)
>
>> While it's true that a post-commit operation cannot modify a database, it
>> might still for instance propagate the outcome of the transacti
Hi,
Thanks for reminding me. And the actual number of records is 100,000.
The table is as following:
Table my_messages
midx | integer| not null
default nextval('public.my_
http://archives.postgresql.org/pgsql-hackers/2004-01/msg00861.php
(Just to dredge up another old thread...)
While it's true that a post-commit operation cannot modify a database, it
might still for instance propagate the outcome of the transaction to another
resource outside of the database, it
Hi!
I'm not sure about the English terminology for that, so I'm sorry if I made a
mistake in the subject and in this message. I hope the code explains it better
if I missed it :-)
I have some tables that will have N:M relationships between themselves and for
that I created some linking tables su
On 1/15/07, carter ck <[EMAIL PROTECTED]> wrote:
I am having slow performance issue when querying a table that contains more
than 1 records.
Everything just slow down when executing a query though I have created Index
on it.
You didn't really provide much information for anyone to help y
Hi all,
I am having slow performance issue when querying a table that contains more
than 1 records.
Everything just slow down when executing a query though I have created Index
on it.
Can anyone suggest ways to improve the speed?
Thanks.
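As the replies note, the first diagnostic step is to post the actual query and its plan. A sketch of how to capture it; the predicate here is hypothetical:

```sql
-- EXPLAIN ANALYZE runs the query and reports actual row counts and
-- timings for every plan node, showing whether the index is used
EXPLAIN ANALYZE
SELECT * FROM my_messages WHERE rpt_generated = 'N';
```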
Hi All,
Found the problem. This was caused by a memory leak in our application.
Regards
Steve Martin
Steve Martin wrote:
Hi All,
We have an intermittent problem where PQexec does not seem to return
even though the server seems to have sent the results.
From the gdb output , the sql sta
Tom Lane wrote:
and unless I'm mistaken, our behavior conforms to the spec and Oracle's
doesn't.
Strictly speaking, the spec doesn't define the behavior of "SUBSTR" at
all, only "SUBSTRING" with this weird FROM/FOR argument syntax. But
PG treats SUBSTR(x,y,z), SUBSTRING(x,y,z) and SUBSTRING(x F
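A quick illustration that the three spellings agree in PG:

```sql
SELECT substring('PostgreSQL' from 2 for 4);  -- ostg
SELECT substring('PostgreSQL', 2, 4);         -- ostg
SELECT substr('PostgreSQL', 2, 4);            -- ostg
```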
Thanks Shane.
I solved taking trace of the sequence value directly in the
program...it is not a good solution but it works until I find a better
one!
Albert wrote:
Hi all. I have this issue: I need to load a value for an integer field
(with auto increment) which I implemented with a serial type and to
avoid inserts with that value in this field. The task is for a
distributed program I'm creating and I wonder if there is an efficient
solution f
Doug,
numnode exists for sure! It's the plainto_tsquery function which
Teodor used in the example that appeared in 8.2
Oleg
On Mon, 15 Jan 2007, Doug Cole wrote:
That sounds perfect, but it doesn't seem to exist on either of the
postgresql installations I have access to (8.1 on ubuntu and fedora
core).
Doug Cole wrote:
That sounds perfect, but it doesn't seem to exist on either of the
postgresql installations I have access to (8.1 on ubuntu and fedora
core). Is it new to 8.2? Is there a similar function under 8.1, or at
Yes, it's new in 8.2
least a decent work-around? Thanks for the h
That sounds perfect, but it doesn't seem to exist on either of the
postgresql installations I have access to (8.1 on ubuntu and fedora
core). Is it new to 8.2? Is there a similar function under 8.1, or
at least a decent work-around? Thanks for the help,
Doug
On 1/15/07, Teodor Sigaev <[EMAIL
Hi all. I have this issue: I need to load a value for an integer field
(with auto increment) which I implemented with a serial type and to
avoid inserts with that value in this field. The task is for a
distributed program I'm creating and I wonder if there is an efficient
solution for it.
I though
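One common approach, assuming the serial column's sequence follows the default <table>_<column>_seq naming: reserve the id up front with nextval(), so concurrent default inserts can never be handed the same value. A sketch with hypothetical names:

```sql
-- hypothetical table with a serial (auto-increment) key
CREATE TABLE items (id serial PRIMARY KEY, payload text);

-- reserve an id explicitly; a sequence never hands out the same value twice,
-- even across concurrent sessions
SELECT nextval('items_id_seq');

-- using the reserved value later cannot collide with default inserts,
-- because the sequence has already moved past it
INSERT INTO items (id, payload) VALUES (42, 'preloaded');
```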
Demitri Muna <[EMAIL PROTECTED]> writes:
> On 15 Jan 2007, at 16:21, Tom Lane wrote:
>> It looks like spview is a view with an embedded ORDER BY? IIRC that
>> prevents any meaningful optimization of joins to it --- and
>> WHERE-IN-sub-SELECT is a kind of join.
> Thanks for the pointer Tom; removi
Magnus Hagander <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> DEBUG5 is going to be a bit voluminous, but let's try that if we can.
> Perhaps we should switch down the DEBUG level of it, at least until we
> know what happens?
That would have to wait on another update release, or at least someo
On 15 Jan 2007, at 16:21, Tom Lane wrote:
[EMAIL PROTECTED] writes:
I have a query that is extraordinarily slow but I don't know why.
It looks like spview is a view with an embedded ORDER BY? IIRC that
prevents any meaningful optimization of joins to it --- and
WHERE-IN-sub-SELECT is a kind
Tom Lane wrote:
> Magnus Hagander <[EMAIL PROTECTED]> writes:
>> But yeah, that's probably a good idea. A quick look at the code says we
>> should at least ask people who have this problem to give it a run with
>> logging at DEBUG5 which should then log exactly what the errorcode was.
>> Or are you
Magnus Hagander <[EMAIL PROTECTED]> writes:
> But yeah, that's probably a good idea. A quick look at the code says we
> should at least ask people who have this problem to give it a run with
> logging at DEBUG5 which should then log exactly what the errorcode was.
> Or are you seeing more places th
Tom Lane wrote:
> Magnus Hagander <[EMAIL PROTECTED]> writes:
>> Tom Lane wrote:
>>> pg_control is certainly not ever deleted or renamed, and in fact I
>>> believe there's an LWLock enforcing that only one PG process at a time
>>> is even touching it. So we need another theory to explain this one
Tom Lane wrote:
Aleksander Kmetec <[EMAIL PROTECTED]> writes:
Some quick testing shows that util.row_number() only gets re-evaluated at every call if the subquery contains an ORDER
BY clause. Now we can predict whether we need to compensate for that just by looking at the original query.
If y
From the docs (see below) it implies that the WHERE clause of the update
is ignored.
( http://www.postgresql.org/docs/8.1/static/sql-createrule.html )
If I create a Rule as
CREATE RULE foo_update_false AS ON UPDATE TO foo
WHERE NEW.some_flag = TRUE
DO NOTIFY foo_update;
Does the WHERE clause
Yes, the problem was the memory (testing with memtest).
It took us a long time to have the machine checked.
Thanks,
- Original Message -
From: "Richard Huxton"
To: "Ardian Xharra" <[EMAIL PROTECTED]>
Cc: "postgreSQL postgreSQL"
Sent: Friday, November 17, 2006 1:12 PM
Subject: Re: [GENERAL] Why t
contrib_regression=# select numnode( plainto_tsquery('the any') );
NOTICE: query contains only stopword(s) or doesn't contain lexeme(s), ignored
numnode
-
0
(1 row)
contrib_regression=# select numnode( plainto_tsquery('the table') );
numnode
-
1
(1 row)
contrib_r
Aleksander Kmetec <[EMAIL PROTECTED]> writes:
> Some quick testing shows that util.row_number() only gets re-evaluated at
> every call if the subquery contains an ORDER
> BY clause. Now we can predict whether we need to compensate for that just by
> looking at the original query.
If you're goin
[EMAIL PROTECTED] writes:
> I have a query that is extraordinarily slow but I don't know why.
It looks like spview is a view with an embedded ORDER BY? IIRC that
prevents any meaningful optimization of joins to it --- and
WHERE-IN-sub-SELECT is a kind of join.
regards, to
Pavan Deolasee wrote:
> Simon Riggs wrote:
>> On Fri, 2006-12-29 at 20:25 -0300, Alvaro Herrera wrote:
>>> Christopher Browne wrote:
>>>
Seems to me that you could get ~80% of the way by having the simplest
"2 queue" implementation, where tables with size < some threshold get
thrown
Tom Lane wrote:
This isn't gonna work very well if your query involves sorting, because
the SELECT-list is evaluated before the sort step ...
regards, tom lane
Thanks, this seems to solve my problem.
Some quick testing shows that util.row_number() only gets re-evalua
Shoaib Mir wrote:
> Something like this will help you..
Or shorter:
> ===
>
> CREATE OR REPLACE FUNCTION public.check_table(varchar, varchar)
> RETURNS boolean AS $$
> DECLARE
> v_cnt integer;
> v_tbl boolean;
> BEGIN
PERFORM 1 FROM pg_tables WHERE tablename = $1 AND schemaname = $2;
RETURN FOUND;
Hello all,
I have a query that is extraordinarily slow but I don't know why. It
crosses a many-to-many join table and looks like this:
SELECT * FROM spview WHERE id IN (SELECT signal_profile_id FROM
track_to_signal_profile WHERE track_id = 19510985);
The three tables are:
track <-> track
Aleksander Kmetec <[EMAIL PROTECTED]> writes:
> We're using the following technique for counting the number of rows in a
> cursor:
> DECLARE instance_cur_1 SCROLL CURSOR FOR
> SELECT util.row_number(), *
> FROM (
> $LONG_RUNNING_QUERY
> ) ss
> FETCH LAST IN instance_cur_1;
> util.row_number
dparent <[EMAIL PROTECTED]> writes:
> I have a function which returns TYPE RECORD and receives a string. The passed
> string is used to build a dynamic SQL statement and the passed string is the
> SELECT LIST of the built dynamic SQL statement.
> The call works fine from Postgres SQL (PL/PG SQL) b
Hi,
I'm running into some inconsistent behavior when using volatile functions with
cursors under PG 8.1.
We're using the following technique for counting the number of rows in a cursor:
---
DECLARE instance_cur_1 SCROLL CURSOR FOR
SELECT util.row_number(), *
FROM (
$LONG_RUNNIN
Jan van der Weijde wrote:
That is exactly the problem I think. However I do not deliberately
retrieve the entire table. I use the default settings of the PostgreSQL
You will want to increase the default settings and let PostgreSQL use as
much RAM as you have - especially when retrieving a larg
Sim Zacks wrote:
> To retrieve the above example, I needed to query:
> select * from filetable where filename like 'serverdir%'
>
> Is this a bug or is there a reason for this?
>
> Sim
There's a reason. With like queries, if you want to search for the
literal characters % or ? they
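The reason: % and _ are wildcards in LIKE patterns, so to match them literally they must be escaped, for instance with an explicit ESCAPE character:

```sql
-- '#%' in the pattern matches a literal '%' instead of "any string"
SELECT 'serverdir%file' LIKE 'serverdir#%file' ESCAPE '#';  -- t
SELECT 'serverdirXfile' LIKE 'serverdir#%file' ESCAPE '#';  -- f
```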
I am having trouble with to_tsquery when the query is all stop words.
Rather than return everything as a match, it returns nothing with the
notice:
NOTICE: Query contains only stopword(s) or doesn't contain lexem(s), ignored
What is the best way to check for this, I was hoping to be able to
che
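One way, following Oleg's hint downthread: numnode() returns 0 when the parsed tsquery is empty, so the application can test it and fall back to "match everything". A sketch; the table and column names are hypothetical:

```sql
-- numnode() is 0 when every word was a stop word
SELECT numnode(plainto_tsquery('the any'));   -- 0

-- apply the full-text filter only when the tsquery is non-empty;
-- otherwise return all rows instead of none
SELECT * FROM docs
 WHERE numnode(plainto_tsquery('the any')) = 0
    OR fti @@ plainto_tsquery('the any');
```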
Simon Riggs wrote:
> On Fri, 2006-12-29 at 20:25 -0300, Alvaro Herrera wrote:
>> Christopher Browne wrote:
>>
>>> Seems to me that you could get ~80% of the way by having the simplest
>>> "2 queue" implementation, where tables with size < some threshold get
>>> thrown at the "little table" queue,
Thank you both for your suggestions.
I think I'll try the GiST approach first since using an existing contrib extension as a starting point seems like a
simpler task for someone like me. :)
Regards,
Aleksander
Richard Troy wrote:
Aleksander Kmetec <[EMAIL PROTECTED]> writes:
I'm looking f
I have a function which returns TYPE RECORD and receives a string. The passed
string is used to build a dynamic SQL statement and the passed string is the
SELECT LIST of the built dynamic SQL statement.
The call works fine from Postgres SQL (PL/PG SQL) but when I make the call
from .NET (Core lab
Joshua D. Drake wrote:
roopa perumalraja wrote:
Thanks a lot for your immediate reply. Can you please tell me how to use the command pg_dump to back up specific tables. Thanks a lot in advance.
pg_dump --help
Better yet, try this first:
"http://www.catb.org/~esr/faqs/smart-questions.html";
Thanks, that's exactly what I was looking for :-)
kind regards,
Moritz
2007/1/15, A. Kretschmer <[EMAIL PROTECTED]>:
On Mon, 15.01.2007 at 13:18:11 +0100, Moritz Bayer wrote the following:
> Dear list,
>
> I would like to create a function which gets a tablename and checks if the
> specifi
An old post on the archives might help you -->
http://archives.postgresql.org/pgsql-interfaces/2005-11/msg00010.php that
was an attempt to simulate functionality similar to setFetchSize in JDBC.
---
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
On 1/15/07, Jan van der Weijde <[EMAIL
Limit is somewhat magical ... at least to a degree. Not sure about cursors
since I am not currently using them.
select count(*) from bill_rpt_work;
count
-
2317451
(1 row)
Time: 1709.829 ms
billing=# \d bill_rpt_work
Table "reporting.bill_rpt_work"
Column |
Jan van der Weijde wrote:
That is exactly the problem I think. However I do not deliberately
retrieve the entire table. I use the default settings of the PostgreSQL
installation and just execute a simple SELECT * FROM table.
I am using a separate client and server (both XP in the test
environme
Unfortunately a large C program has already been written. But if a
function like PQsetFetchSize() was available in libpq, that would also
solve the problem.
From: Shoaib Mir [mailto:[EMAIL PROTECTED]
Sent: Monday, January 15, 2007 13:49
To: Jan van der Weijde
Cc
If you go with Java, you can make it faster by using setFetchSize (JDBC
functionality) from client and that will help you with the performance in
case of fetching large amounts of data.
---
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
On 1/15/07, Jan van der Weijde <[EMAIL PROTECT
Something like this will help you.
===
CREATE OR REPLACE FUNCTION public.check_table(varchar, varchar)
RETURNS boolean AS $$
DECLARE
v_cnt integer;
v_tbl boolean;
BEGIN
SELECT count(1) INTO v_cnt FROM pg_tables where tablename = $1 and
schemaname = $2;
IF v_cnt > 0 THEN
    v_tbl := true;
ELSE
    v_tbl := false;
END IF;
RETURN v_tbl;
END;
$$ LANGUAGE plpgsql;
That is exactly the problem I think. However I do not deliberately
retrieve the entire table. I use the default settings of the PostgreSQL
installation and just execute a simple SELECT * FROM table.
I am using a separate client and server (both XP in the test
environment), but that should not make
A. Kretschmer wrote:
> On Mon, 15.01.2007 at 13:18:11 +0100, Moritz Bayer wrote the following:
>> Dear list,
>>
>> I would like to create a function which gets a tablename and checks if the
>> specific table exists. The return value should be a bool.
>> Now I'm wondering how to do this the best
On Mon, 15.01.2007 at 13:18:11 +0100, Moritz Bayer wrote the following:
> Dear list,
>
> I would like to create a function which gets a tablename and checks if the
> specific table exists. The return value should be a bool.
> Now I'm wondering how to do this the best way.
>
> Any suggestions?
Dear list,
I would like to create a function which gets a tablename and checks if the
specific table exists. The return value should be a bool.
Now I'm wondering how to do this the best way.
Any suggestions?
kind regards and thanks in advance,
Moritz
Has anyone played around with the new PHP PDO drivers and been able to
successfully set up an object using persistent connections? I tried to
follow the documentation in the PHP manual to send an array in the PDO
constructor but receive a warning message that the underlying driver
does not supp
Jan van der Weijde wrote:
> Thank you.
> It is true he wants to have the first few records quickly and then
> continue with the next records. However without LIMIT it already takes a
> very long time before the first record is returned.
> I reproduced this with a table with 1.1 million records on a
Oh yes, you first need a condition on the column the tables are partitioned
by. Only in that case will it work with partitions.
---
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
On 1/15/07, Richard Huxton wrote:
Shoaib Mir wrote:
> You can also opt for partitioning the tables and t
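Partition pruning only happens when the query constrains the partition key and constraint_exclusion is enabled. A sketch with hypothetical table and column names:

```sql
SET constraint_exclusion = on;

-- with child tables of 'measurements' carrying CHECK constraints on
-- logdate, only the matching child is scanned:
EXPLAIN SELECT * FROM measurements
 WHERE logdate >= DATE '2007-01-01';
```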
Jan van der Weijde wrote:
Thank you.
It is true he wants to have the first few records quickly and then
continue with the next records. However without LIMIT it already takes a
very long time before the first record is returned.
I reproduced this with a table with 1.1 million records on an XP mac
Shoaib Mir wrote:
You can also opt for partitioning the tables and this way select will only
get the data from the required partition.
Not in the case of SELECT * FROM though. Unless you access the
specific partitioned table.
On 1/15/07, Richard Huxton wrote:
Jan van der Weijde wrote:
>
Thank you.
It is true he wants to have the first few records quickly and then
continue with the next records. However without LIMIT it already takes a
very long time before the first record is returned.
I reproduced this with a table with 1.1 million records on an XP machine
and in my case it took
You can also opt for partitioning the tables and this way select will only
get the data from the required partition.
--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
On 1/15/07, Richard Huxton wrote:
Jan van der Weijde wrote:
> Hello all,
>
> one of our customers is using Postgre
Jan van der Weijde wrote:
Hello all,
one of our customers is using PostgreSQL with tables containing millions
of records. A simple 'SELECT * FROM <table>' takes way too much time in
that case, so we have advised him to use the LIMIT and OFFSET clauses.
That won't reduce the time to fetch millions
Hello all,
one of our customers is using PostgreSQL with tables containing millions
of records. A simple 'SELECT * FROM <table>' takes way too much time in
that case, so we have advised him to use the LIMIT and OFFSET clauses.
However now he has a concurrency problem. Records deleted, added or
updated
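An alternative to LIMIT/OFFSET that sidesteps the concurrency problem: a cursor runs the query once against a single snapshot, and the client fetches in batches, so rows deleted or added by other sessions never shift under it. A sketch with a placeholder table name:

```sql
BEGIN;
-- the cursor's result set is frozen at the snapshot of this transaction
DECLARE big_cur CURSOR FOR SELECT * FROM bigtable;
FETCH 100 FROM big_cur;   -- first rows arrive quickly
FETCH 100 FROM big_cur;   -- next batch, same snapshot: no shifted rows
CLOSE big_cur;
COMMIT;
```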
COPY command might also help
COPY (SELECT * FROM country WHERE country_name LIKE 'A%') TO
'/usr1/proj/bray/sql/a_list_countries.copy';
Details can be found at -->
http://www.postgresql.org/docs/current/static/sql-copy.html
--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
O