If you are using a Linux system, you can make a partition of x GB and place PGDATA there. This may work for other OSes as well.
Aaron
On 12/5/05, surabhi.ahuja <[EMAIL PROTECTED]> wrote:
here is a question
say I have a database, and all the files (data files), indexes, etc. must be going to the PGDATA directory
The question is this:
is there any way by which I can specify
to reserve x GB of space for this database (this x includes all the space
which the database wi
Ok,
I found only one postmaster and one plpgsql.
I did what you said: I removed everything and installed PostgreSQL 8.0.4.
Same problem.
Greetings
Martijn van Oosterhout wrote:
On Mon, Dec 05, 2005 at 11:04:11AM +0100, Maik Troemel wrote:
Ok,
there is only one "plpgsql.so" but how can I fin
"Florian G. Pflug" <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>>> Based on this email, should we be showing ROLE from SHOW ALL?
>> Only if you think we should be showing session_authorization too.
>> That was marked "no_show_all" quite a long time ago, and we have
>> not got complaints about it.
On Mon, 5 Dec 2005, Marcus Engene wrote:
I realized from the documentation that I'm not looking for
compound words after all, I meant "exact phrase".
I can't see how to make rank tell me which results have an
exact phrase? Like "there must be an occurrence of 'new' before
'york'" (stemmed not re
On Dec 5, 2005, at 6:05 PM, Mark Harrison wrote:
Right now I'm going over the output of "psql --echo-hidden" to see how
to pull the appropriate schema information from postgresql. The final
output will be sqlite statements to create tables which mirror the
structure in the master postgresql dat
Steve Crawford wrote:
Joshua D. Drake wrote:
Steve Crawford wrote:
Changing the owner of a database is easy in 8.x (alter database...)
but seems less so in 7.4.
Is there a method of safely changing the owner of the database (via
command or via updating pg* tables) in 7.4?
Use a transactio
Joshua D. Drake wrote:
Steve Crawford wrote:
Changing the owner of a database is easy in 8.x (alter database...)
but seems less so in 7.4.
Is there a method of safely changing the owner of the database (via
command or via updating pg* tables) in 7.4?
Use a transaction and use update to pg_
Steve Crawford wrote:
Changing the owner of a database is easy in 8.x (alter database...)
but seems less so in 7.4.
Is there a method of safely changing the owner of the database (via
command or via updating pg* tables) in 7.4?
Use a transaction and use update to pg_database
Joshua D. Drak
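For the archives, the "update pg_database" approach being suggested can be sketched like this on 7.4 ('mydb' and 'newowner' are placeholder names; direct catalog updates are unsupported and require superuser rights, so treat this as a sketch, not a recommendation):

```sql
-- Sketch only: placeholder names; wrap in a transaction as advised above.
BEGIN;
UPDATE pg_database
   SET datdba = (SELECT usesysid FROM pg_user WHERE usename = 'newowner')
 WHERE datname = 'mydb';
COMMIT;
```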
Tom Lane wrote:
Bruce Momjian writes:
Based on this email, should we be showing ROLE from SHOW ALL?
Only if you think we should be showing session_authorization too.
That was marked "no_show_all" quite a long time ago, and we have
not got complaints about it...
Hm, but before 8.1 there was n
So I need to mirror some data to sqlite. This will be a new application,
so we'll be able to keep the schema pretty simple so as not to overwhelm
sqlite... most of it is just strings and numbers anyways.
Right now I'm going over the output of "psql --echo-hidden" to see how
to pull the appropriat
Changing the owner of a database is easy in 8.x (alter database...) but
seems less so in 7.4.
Is there a method of safely changing the owner of the database (via
command or via updating pg* tables) in 7.4?
Cheers,
Steve
---(end of broadcast)---
On Mon, 2005-12-05 at 16:30, Ed L. wrote:
> On Monday December 5 2005 3:17 pm, Tom Lane wrote:
> > There isn't any particularly good reason to be resizing
> > shared_buffers on the fly anyway; much easier to let the
> > kernel adapt the size of its disk cache instead. Best
> > practice for shared_
On Monday December 5 2005 3:17 pm, Tom Lane wrote:
> There isn't any particularly good reason to be resizing
> shared_buffers on the fly anyway; much easier to let the
> kernel adapt the size of its disk cache instead. Best
> practice for shared_buffers is to set it somewhere in the
> range of 10K
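For reference, the setting under discussion lives in postgresql.conf; in this era shared_buffers is measured in 8kB buffer pages, so the 10K figure quoted above corresponds to roughly 80MB (value illustrative only):

```
# postgresql.conf (illustrative; 10000 x 8kB pages ~= 80MB)
shared_buffers = 10000
```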
Hi!
CSN wrote:
> I'm thinking of enabling searches that use queries like "select *
> from items where title ilike '%term%'". The items table has tens
> of thousands of rows. Is it worth worrying about the performance
> of such a query (since, if I'm not mistaken, it will never use
> indices). If i
David Link <[EMAIL PROTECTED]> writes:
> more importantly, do you think the problem I am having is due to these
> configuration shortcomings?
Yeah, the planning thresholds ...
regards, tom lane
Bruce Momjian writes:
> Ed L. wrote:
>> We have ~75 pgsql clusters running in environments where downtime
>> comes at a premium cost. We often run multiple clusters on a
>> single box, and find it necessary to adjust the size of the
>> static DB cache as we add or move clusters. Unfortunately
Thanks for your reply, Tom. Different folks have made different
suggestions. Can you suggest more reasonable values for these? But
more importantly, do you think the problem I am having is due to these
configuration shortcomings?
Thanks much.
David, take a look at:
http://www.varlena.
"Florian G. Pflug" <[EMAIL PROTECTED]> writes:
> INFO: index "pg_toast_2144146_index" now contains 1971674 row versions
> in 10018 pages
> DETAIL: 4 index row versions were removed.
> 2489 index pages have been deleted, 0 are currently reusable.
> Just for the archives - I finally solved the pro
Tom Lane wrote:
David Link <[EMAIL PROTECTED]> writes:
Certain SQL Queries, I believe those with many table joins, when run as
EXPLAIN plans, never return.
I'd guess that one or all of these settings are excessive:
geqo_threshold = 14
from_collapse_limit = 13
join_collapse_limit
Jaime Casanova wrote:
when you have those cases, you can take a look in pg_stat_activity
to find the offending query...
or simply log all queries
Thanks for the advice. I also turned on stat_command_string
Ed L. wrote:
> We have ~75 pgsql clusters running in environments where downtime
> comes at a premium cost. We often run multiple clusters on a
> single box, and find it necessary to adjust the size of the
> static DB cache as we add or move clusters. Unfortunately, that
> means some downtime
On 12/5/05, Marcus Engene <[EMAIL PROTECTED]> wrote:
> Oleg Bartunov wrote:
> > On Mon, 5 Dec 2005, Marcus Engene wrote:
> >
> >> Hi,
> >>
> >> I use the tsearch full text search with pg 8.0.3. It works great, but
> >> I wonder if it's possible to search for compound words?
> >> Ie if I search for
Jim C. Nasby wrote:
>>We started a VACUUM (not a VACUUM FULL) on one of our postgres 7.4.9
>>databases a few days ago. It's still running, and says the
>>following about once per second:
>>
>>INFO: index "pg_toast_2144146_index" now contains 1971674 row versions
>>in 10018 pages
>>DETAIL: 4
We have ~75 pgsql clusters running in environments where downtime
comes at a premium cost. We often run multiple clusters on a
single box, and find it necessary to adjust the size of the
static DB cache as we add or move clusters. Unfortunately, that
means some downtime. It would be extremel
Tom Lane wrote:
Eric E <[EMAIL PROTECTED]> writes:
Here's the test function:
...
my $data_handle = spi_exec_query('SELECT * FROM
schema1."table_of_approximately 3 rows";');
Well, the plperl documentation does point out that spi_exec_query should
only be used when you know that t
vishal saberwal <[EMAIL PROTECTED]> writes:
> So are you suggesting, I need to send the Large object ID to the client?
> Since the application is time critical, is there a way to skip one of the
> two steps (querying once for LOID and then again for its data) to a one step
> by sending the Object
Eric E <[EMAIL PROTECTED]> writes:
> Here's the test function:
> ...
> my $data_handle = spi_exec_query('SELECT * FROM
> schema1."table_of_approximately 3 rows";');
Well, the plperl documentation does point out that spi_exec_query should
only be used when you know that the result set will be
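The safeguard the plperl documentation mentions is an optional row-limit second argument to spi_exec_query; a minimal sketch using 8.0-style dollar quoting (the table name is a placeholder):

```sql
CREATE OR REPLACE FUNCTION peek_rows() RETURNS integer AS $$
    -- Fetch at most 5 rows so plperl does not buffer a huge result set.
    my $rv = spi_exec_query('SELECT * FROM some_table', 5);
    return $rv->{processed};  -- number of rows actually returned
$$ LANGUAGE plperl;
```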
Андрей <[EMAIL PROTECTED]> writes:
> Can someone answer - why have functions such as 'makeaclitem(...)' and
> 'information_schema.pg_keypositions()' been removed in PostgreSQL 8.1?
Because code using them would be broken anyway by other changes in 8.1
--- for example,
Oleg Bartunov wrote:
On Mon, 5 Dec 2005, Marcus Engene wrote:
Hi,
I use the tsearch full text search with pg 8.0.3. It works great, but
I wonder if it's possible to search for compound words?
Ie if I search for "New York" i want to get a match on
New York has traffic problems.
but not on
N
Hi,
I hope the following link will help you.
http://www.postgresql.org/docs/8.0/static/plpgsql-expressions.html
On 12/5/05,
Byrne Kevin-kbyrne01 <[EMAIL PROTECTED]> wrote:
Has
anyone seen any strange behaviour as regards the now() function when
accessing a db via odbc layer. The behaviour I have
So are you suggesting, I need to send the Large object ID to the client?
Since the application is time critical, is there a way to skip one of
the two steps (querying once for LOID and then again for its data) to a
one step by sending the Object data in the first call?
Are there any examples or ps
Hi Tom,
It was a backend crash on my production 7.4 instance, running on Suse
Linux 9.1 Pro, installed from Suse's precompiled binaries, on what turns
out to be questionable hardware.
I wrote a PL/PERL function specifically to crash the database in this
way, and ran it against a test instance
Hello!
Can someone answer - why have functions such as 'makeaclitem(...)' and
'information_schema.pg_keypositions()' been removed in PostgreSQL 8.1? Are
there any similar functions now, and where can I find the list of all
deprecated functions?
David Link <[EMAIL PROTECTED]> writes:
> Certain SQL Queries, I believe those with many table joins, when run as
> EXPLAIN plans, never return.
I'd guess that one or all of these settings are excessive:
> geqo_threshold = 14
> from_collapse_limit = 13
> join_collapse_limit = 13
Keep in mind t
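One low-risk way to test that guess is to drop the settings back toward their defaults for a single session before touching postgresql.conf (defaults in this era: geqo_threshold 12, both collapse limits 8):

```sql
-- Try conservative planner limits for this session only:
SET geqo_threshold = 12;      -- default in 7.4/8.x
SET from_collapse_limit = 8;  -- default
SET join_collapse_limit = 8;  -- default
-- ...then re-run the problematic EXPLAIN to see if it returns promptly.
```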
I'm thinking of enabling searches that use queries like "select * from items
where title ilike '%term%'". The items table has tens of thousands of rows.
Is it worth worrying about the performance of such a query (since, if I'm
not mistaken, it will never use indices). If it is, what's the best o
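For what it's worth: a plain btree index can never help a leading-wildcard pattern like '%term%', though an expression index does cover anchored, case-folded prefix searches. A sketch against the items table described above:

```sql
-- Helps anchored prefix searches, not '%term%':
CREATE INDEX items_title_lower_idx ON items (lower(title) text_pattern_ops);

SELECT * FROM items WHERE lower(title) LIKE 'term%';   -- can use the index
SELECT * FROM items WHERE title ILIKE '%term%';        -- still a seq scan
```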
On 12/5/05, David Link <[EMAIL PROTECTED]> wrote:
> Hi, This has become a major problem for us. Thank you in advance for
> your help.
>
> OS: SUSE Linux 2.6.5-7.191-bigsmp
> PostgreSQL: 7.4.8
> Application: ModPerl Web application using DBI.pm
> Database size: 100 Gb, 1025 Tables.
>
> Problem: EXP
Eric E <[EMAIL PROTECTED]> writes:
> I've recently had a couple of worrisome experiences with runaway
> queries in postgres 7.4 and 8.0, and I'm wondering what I can do to
> control or prevent these. When I ran a query against a large table, I
> accidentally mistyped a table name, and
Hi, This has become a major problem for us. Thank you in advance for
your help.
OS: SUSE Linux 2.6.5-7.191-bigsmp
PostgreSQL: 7.4.8
Application: ModPerl Web application using DBI.pm
Database size: 100 Gb, 1025 Tables.
Problem: EXPLAIN SELECT ... does not return.
Description:
The Application
[Didn't get any replies on the Perform list -- hoping someone can help
me here]
Hi. We have a server provided for a test of a web application with the
following specifications:
1 Dual core 1.8GHz Opteron chip
6 GB RAM
approx 250GB of RAID10 storage (LSI card + BBU, 4 x 15000 RPM, 16MB
Hi all,
I've recently had a couple of worrisome experiences with runaway
queries in postgres 7.4 and 8.0, and I'm wondering what I can do to
control or prevent these. When I ran a query against a large table, I
accidentally mistyped a table name, and a table was
automatically added, r
"Ed L." <[EMAIL PROTECTED]> writes:
> I need some troubleshooting help with two hung postmasters
> this morning, both PostgreSQL 7.4.6 on ia64-hp-hpux11.23,
> compiled by gcc 3.4.2, each on a separate rx4640.
> (gdb) bt
> #0 0xc0304230:0 in _accept_sys+0x30 () from /usr/lib/hpux64/libc.so
I need some troubleshooting help with two hung postmasters
this morning, both PostgreSQL 7.4.6 on ia64-hp-hpux11.23,
compiled by gcc 3.4.2, each on a separate rx4640. Existing
connections were processing queries just fine. Any new
connection attempts hung with no server log output. The
postma
On 12/5/05, Tom Lane <[EMAIL PROTECTED]> wrote:
> In plpgsql, you're supposed to use exception blocks instead.
AFAIC, SAVEPOINT usage isn't permitted in any procedural language. But
it also isn't documented. (I couldn't find any phrase mentioning
this in the docs.)
One more question, if we'
[EMAIL PROTECTED] writes:
> I would like to use savepoints in my stored functions but I always get the
> error
> ERROR: SPI_execute_plan failed executing query "SAVEPOINT my_savepoint":
In plpgsql, you're supposed to use exception blocks instead. See
http://www.postgresql.org/docs/8.1/static/pl
Hi community,
I would like to use savepoints in my stored functions but I always get the
error
ERROR: SPI_execute_plan failed executing query "SAVEPOINT my_savepoint":
SPI_ERROR_TRANSACTION
CONTEXT: PL/pgSQL function "savepoint_test" line 3 at SQL statement
My test function can be found below
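The exception-block form that replaces explicit savepoints in plpgsql looks roughly like this (a sketch; the inner BEGIN ... EXCEPTION block acts as an implicit savepoint that is rolled back on error; the INSERT target is a placeholder):

```sql
CREATE OR REPLACE FUNCTION savepoint_test() RETURNS void AS $$
BEGIN
    BEGIN
        INSERT INTO some_table VALUES (1);  -- work that may fail
    EXCEPTION WHEN others THEN
        RAISE NOTICE 'statement failed; subtransaction rolled back';
    END;
END;
$$ LANGUAGE plpgsql;
```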
Byrne Kevin-kbyrne01 <[EMAIL PROTECTED]> writes:
> Moserver receives the event - timestamps it as 't1'
> -- time lapse before moserver computes the transaction and gives it to odbc.
> Txn_begin- now() gets frozen to 't2'
> Insert - now() should put it as 't2'
> Txn_end()- done.
> So firstly t2 sh
Hi community,
I would like to use savepoints in my stored functions but I always get the error:
ERROR: SPI_execute_plan failed executing query "SAVEPOINT my_savepoint": SPI_ERROR_TRANSACTION
CONTEXT: PL/pgSQL function "savepoint_test" line 3 at SQL statement
My test function can be found below.
can anyone explain the reason for the difference in the
following 2 query plans, or offer any advice? the two queries
are identical apart from the limit clause.
the performance here is fine and is the same for LIMIT >= 2
EXPLAIN SELECT _t.* FROM
tokens.ta_tokens _t INNER JOIN
Hi,
the following link might help you.
http://www.postgresql.org/docs/8.0/static/plpgsql-expressions.html
On 12/5/05, Byrne Kevin-kbyrne01 <[EMAIL PROTECTED]> wrote:
Has anyone seen any strange behaviour as regards the now() function when accessing a db via odbc layer. The behavio
There are practical applications, eg, 1024-bit keys are fairly common
objects in cryptography these days, and that equates to about 10^308.
I don't really foresee anyone trying to run crypto algorithms with SQL
NUMERIC arithmetic, though ...
2048-bit keys are becoming more common. However, math
Has anyone seen any strange behaviour as regards the now() function when
accessing a db via odbc layer. The behaviour I have seen (which I have
mentioned in previous post) is as follows:
I have a trigger set up on a db - when a row is added to a certain table (say
Table A) in my db the trigger
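A quick way to observe the behaviour under discussion: now() is defined as the transaction start time, so it stays frozen for the life of a transaction, while timeofday() (the wall-clock function available in this era) keeps advancing:

```sql
BEGIN;
SELECT now();        -- frozen at transaction start
SELECT now();        -- same value as above
SELECT timeofday();  -- wall-clock time (returned as text); advances
COMMIT;
```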
Hello
I am an absolute beginner in postgres and apologize in advance for a possible silly question.
I am running postgreSQL 8.0 on a Suse Linux box.
I need to write a function in a postgres database that performs an extraction on three archives and writes the extracted records in a text file.
I a
On Mon, 5 Dec 2005, Marcus Engene wrote:
Hi,
I use the tsearch full text search with pg 8.0.3. It works great, but I
wonder if it's possible to search for compound words?
Ie if I search for "New York" i want to get a match on
New York has traffic problems.
but not on
New axe murderer incide
Hi,
I use the tsearch full text search with pg 8.0.3. It works great, but I
wonder if it's possible to search for compound words?
Ie if I search for "New York" i want to get a match on
New York has traffic problems.
but not on
New axe murderer incident in British York.
Is this possible?
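In the tsearch/tsearch2 of this era there is no phrase operator, so the usual workaround is an AND query (index-assisted) narrowed by a literal substring recheck; a sketch with placeholder table and column names (docs, idxfti, body):

```sql
SELECT *
  FROM docs
 WHERE idxfti @@ to_tsquery('new & york')  -- both stemmed words present
   AND body ILIKE '%new york%';            -- literal adjacency recheck
```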
On Mon, Dec 05, 2005 at 11:04:11AM +0100, Maik Troemel wrote:
> Ok,
> there is only one "plpgsql.so" but how can I find out if there is more
> than postmaster?
> (i'm a newbie in linux concerning administration and things like this)
Well, it looks like you've compiled the server yourself, right?
This is a little vague... There is a way to recover the data: make postmaster come back up - unless you're talking about postmaster not coming up due to corrupted data files, or a hardware failure.
You do need to use pg_dump at regular intervals; it is common practice to back up data, after all.
No
David Fetter wrote:
== PostgreSQL Weekly News - December 05 2005 ==
Chris Campbell of Big Nerd Ranch shows how to lower a query's priority.
http://weblog.bignerdranch.com/?p=11
I wouldn't normally bother nitpicking news items, but since this is
presumably being held up as expert advice and
Ok,
there is only one "plpgsql.so" but how can I find out if there is more
than one postmaster?
(I'm a newbie in Linux administration and things like this)
Greetings
Martijn van Oosterhout wrote:
On Mon, Dec 05, 2005 at 10:40:46AM +0100, Maik Troemel wrote:
Hello,
When I run:
cre
hi all..
sometime back i had asked a question:
is there no way to recover the database in case postmaster fails to come up??
i don't remember what was there in the log file ... i had seen it sometime back ... for which i don't have the logs
i didn't check if by restarting the system the problem woul
On Mon, Dec 05, 2005 at 10:40:46AM +0100, Maik Troemel wrote:
> Hello,
>
> When I run:
> createlang plpgsql wetter
>
> i get the error:
> ERROR: could not load library "/usr/local/pgsql/lib/plpgsql.so":
> /usr/local/pgsql/lib/plpgsql.so: undefined symbol: PG_exception_stack
Looks like your com
Hello,
When I run:
createlang plpgsql wetter
i get the error:
ERROR: could not load library "/usr/local/pgsql/lib/plpgsql.so":
/usr/local/pgsql/lib/plpgsql.so: undefined symbol: PG_exception_stack
I found something in the net:
"I think you've got a version mismatch problem --- specifically,
Hi Nikolay,
Were you able to get any closer to your objective on this email thread?
I spent quite a bit of time researching the web on this question back in June - Is there an Open Source option for doing "round trip" data modeling for PostgreSQL?
As far as the E-R or UML diagramming, Dia and Umbrello
thanks for all your responses,
really appreciate it,
I am sorry, but I am not as familiar with this as you are.
So are you suggesting, I need to send the Large object ID to the client?
Since the application is time critical, is there a way to skip one of
the two steps (querying once for L
65 matches