Jeremy Palmer writes:
> Ok, I have attached the map, or at least what I think the map is.
Yup, that's what I was after. It looks like the main problem is here:
> PortalHeapMemory: 16384 total in 4 blocks; 5944 free (0 chunks); 10440
> used
> ExecutorState: 122880 total in 4 blocks; 63984
Merlin Moncure writes:
> I think you may have uncovered a leak (I stand corrected).
> The number of schemas in your test is irrelevant -- the leak is
> happening in proportion to the number of views (set via \setrandom
> tidx 1 10). At 1 I don't think it exists at all -- at 100 memory use
> grow
Jeremy Palmer writes:
> I'm running PostgreSQL 9.0.3 and getting an out of memory error while running a
> big transaction. This error does not crash the backend.
If it's a standard "out of memory" message, there should be a memory
context map dumped to postmaster's stderr. (Which is inconvenient
Thanks Tom,
>> I don't think this really works if multiple processes try to update the
table concurrently --- does that ever happen in your apps?
<<
Technically possible, but the production reality makes it unlikely.
Operationally, it makes no sense for it to be run more than once, or by more
tha
Hi All,
I'm running PostgreSQL 9.0.3 and getting an out of memory error while running a
big transaction. This error does not crash the backend.
The nature of this transaction is that it sequentially applies data updates to a
large number (104) of tables, then after applying those updates, a serie
Hi,
I'm trying to get my database to use LDAP for authentication, but
whenever I add anything LDAP-related to pg_hba.conf, postgres won't
start properly; it shuts down silently without any error messages. Even
using the -d flag doesn't produce any useful error output.
Since d
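For comparison, a pg_hba.conf line for simple-bind LDAP auth generally looks like the following; the address range, server name, and DN prefix/suffix here are placeholders, not values from this setup:

```
# placeholders only; adjust the address range and DN parts to your directory
host  all  all  192.168.0.0/24  ldap  ldapserver=ldap.example.net ldapprefix="cn=" ldapsuffix=", dc=example, dc=net"
```

If the server binary was built without LDAP support, or an option keyword is misspelled, the startup failure should at least be reported in the server log rather than on the console.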
On Wed, Apr 13, 2011 at 4:45 AM, Henry C. wrote:
> Greets,
>
> Pg 9.0.3.
>
> I'm trying out Pg's built-in replication for the first time and noticed
> something odd.
>
> On the slave I see the following in the logs (after rsyncing all from master
> to slave and firing up Pg on the slave):
>
> ...
Carlo Stonebanks writes:
> A few years ago I asked about creating a single UPDATE statement to assign
> id's from a sequence, with the sequences applied in a particular order. In
> other words, order the table, then apply nextval-generated id's to the id
> field in question.
> Here is the original
Merlin Moncure-2 wrote:
>
>
> I think you may have uncovered a leak (I stand corrected).
>
> The number of schemas in your test is irrelevant -- the leak is
> happening in proportion to the number of views (set via \setrandom
> tidx 1 10). At 1 I don't think it exists at all -- at 100 memory u
A few years ago I asked about creating a single UPDATE statement to assign
id's from a sequence, with the sequences applied in a particular order. In
other words, order the table, then apply nextval-generated id's to the id
field in question.
Here is the original post:
http://archives.postgresq
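As a sketch of the technique being described (table and column names here are invented for illustration), one way to apply ordered ids in a single UPDATE pairs each physical row with an ordered number via ctid and a window function:

```sql
-- hypothetical table "items(id int, created_at timestamptz)"
WITH ordered AS (
    SELECT ctid, row_number() OVER (ORDER BY created_at) AS new_id
    FROM items
)
UPDATE items
SET id = ordered.new_id
FROM ordered
WHERE items.ctid = ordered.ctid;
```

row_number() is used here instead of nextval() because the order in which nextval() gets evaluated inside a sorted subquery is not guaranteed; if the ids must come from the sequence itself, the sequence can be advanced afterwards with setval().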
On Apr 12, 2011, at 10:33 AM, Bill Moran wrote:
> In response to Joel Stevenson:
>
>> select pg_total_relation_size('obj1') as o1, pg_total_relation_size( (select
>> reltoastrelid from pg_class where relname = 'obj1' ) ) as otoast1,
>> pg_total_relation_size('obj2') as o2, pg_total_relation_s
On Tue, Apr 12, 2011 at 12:48 PM, Shianmiin wrote:
>
> Merlin Moncure-2 wrote:
>>
>>
>> I am not seeing your results. I was able to run your test on a stock
>> config (cut down to 50 schemas though) on a vm with 512mb of memory.
>> What is your shared buffers set to?
>>
>>
>
> The shared buffers
On Fri, Apr 1, 2011 at 2:39 PM, Merlin Moncure wrote:
> On Wed, Mar 30, 2011 at 3:56 PM, Mike Orr wrote:
>> I'm converting a MySQL webapp to PostgreSQL. I have a backup server
>> which is refreshed twice daily with mysqldump/mysql and has a
>> continuously-running copy of the webapp. I want to re
Hi Pavel,
>> If so, are there some examples how to use "COPY FROM STDIN" with the
>> native C API?
>
> look at the source code for the \copy implementation in psql
>
> http://doxygen.postgresql.org/bin_2psql_2copy_8c.html
>
> http://www.postgresql.org/docs/8.1/static/libpq-copy.html
Thanks for the pointer
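At the SQL level, the operation being discussed is just the following (table and column names invented):

```sql
COPY staging (id, label) FROM STDIN WITH CSV;
```

On the libpq side the same thing is driven by issuing that statement with PQexec, streaming rows with PQputCopyData, and finishing with PQputCopyEnd; the psql \copy code linked above wraps exactly this sequence.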
Greets,
Pg 9.0.3.
I'm trying out Pg's built-in replication for the first time and noticed
something odd.
On the slave I see the following in the logs (after rsyncing all from master
to slave and firing up Pg on the slave):
...
restored log file "0001018E000E" from archive
restored l
Thanks a lot, will try this one.
Regards
On Tue, Apr 12, 2011 at 1:59 PM, Andreas Kretschmer
<akretsch...@spamfence.net> wrote:
> akp geek wrote:
>
> > Hi all -
> >
> > Is it possible to split the data of a column into multiple
> > lines? We have a column which is tex
akp geek wrote:
> Hi all -
>
> Is it possible to split the data of a column into multiple
> lines? We have a column which is text. When the query is executed, I
> want to display the text of the column in separate lines. Is it possible?
Sure, for instance with a func
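A minimal sketch of one such function-based approach uses regexp_split_to_table (built in since 8.3); the table and column names below are hypothetical:

```sql
-- one output row per line of the text column
SELECT id, regexp_split_to_table(body, E'\n') AS line
FROM notes;
```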
Merlin Moncure-2 wrote:
>
>
> I am not seeing your results. I was able to run your test on a stock
> config (cut down to 50 schemas though) on a vm with 512mb of memory.
> What is your shared buffers set to?
>
>
shared_buffers was set to 32MB, as in the default postgresql.conf.
To save you so
In response to Joel Stevenson:
> Hi all,
>
> I'm trying to do some comparisons between the EXTERNAL and the EXTENDED
> storage methods on a bytea column and from the outside the setting doesn't
> appear to affect the value stored on initial insert, but perhaps I'm looking
> at the wrong numbe
Hi all -
Is it possible to split the data of a column into
multiple lines? We have a column which is text. When the query is
executed, I want to display the text of the column in separate lines. Is
it possible?
Thanks for the help
> t...@fuzzy.cz writes:
>>> Query1
>>> -- the first select returns 10 rows
>>> SELECT a, b
>>> FROM table1 LEFT JOIN table2 ON (table1_id = table2_id)
>>> WHERE table1_id NOT IN (SELECT DISTINCT table1_id FROM table3)
>>> EXCEPT
>>> -- this select returns 5 rows
>>> SELECT a, b
>>> FROM table1 LEFT JO
t...@fuzzy.cz writes:
>> Query1
>> -- the first select returns 10 rows
>> SELECT a, b
>> FROM table1 LEFT JOIN table2 ON (table1_id = table2_id)
>> WHERE table1_id NOT IN (SELECT DISTINCT table1_id FROM table3)
>> EXCEPT
>> -- this select returns 5 rows
>> SELECT a, b
>> FROM table1 LEFT JOIN table2 o
On Tue, Apr 12, 2011 at 10:29:44AM -0400, Tom Lane wrote:
> to...@tuxteam.de writes:
> > When PREPARing statements, the type guessing machinery seems to behave
> > differently for VARCHAR and TEXT. Is this intentional?
>
> Your example works for me, i
to...@tuxteam.de writes:
> When PREPARing statements, the type guessing machinery seems to behave
> differently for VARCHAR and TEXT. Is this intentional?
Your example works for me, in all branches back to 8.2:
regression=# create table foo(a text, b varchar);
CREATE TABLE
regression=# PREPARE s1
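For anyone reproducing this, parameter types in PREPARE can either be inferred from the statement or declared explicitly; the statement bodies below are guesses for illustration, not the original transcript:

```sql
-- using the foo table from the transcript above
PREPARE s1 AS SELECT * FROM foo WHERE a = $1;          -- parameter type inferred
PREPARE s2(varchar) AS SELECT * FROM foo WHERE b = $1; -- type declared explicitly
EXECUTE s2('x');
```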
On Fri, Apr 8, 2011 at 5:07 PM, Shianmiin wrote:
>
> Merlin Moncure-2 wrote:
>>
>> On Fri, Apr 8, 2011 at 2:00 PM, Shianmiin
>> wrote:
>>> Further clarification,
>>>
>>> if I run two concurrent threads
>>>
>>> pgbench memoryusagetest -c 2 -j 2 -T180 -f test.sql
>>>
>>> both b
On Tue, Apr 12, 2011 at 12:58 AM, Uwe Schroeder wrote:
>
>
>> Uwe Schroeder writes:
>> > I have a 8.3 database and decided for various reasons to upgrade to 8.4.
>> > I also tried 9.0 - same results. On the exactly same hardware with the
>> > exactly same configuration, some queries perform a fac
Hi,
I have a problem, and here it is:
when I execute this:
select to_tsvector('simple', 'a.')
I get just one result: 'a', because "." is a "Space symbol".
So, my question is: what is the best way to remove a few chars from the
space symbol list, for this (simple) dictionary or another (newly
created)
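As a diagnostic, ts_debug shows how the parser classifies each piece of the input. Note that which characters count as separators is hardcoded in the default text-search parser, not in the dictionary, so changing it means plugging in a custom parser rather than tweaking the 'simple' dictionary:

```sql
SELECT token, alias, description
FROM ts_debug('simple', 'a.b');
```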
Hi,
Check the postgresql.conf file and the maximum connections setting, or
check the postgresql log for other errors, but it seems to me the maximum
connections setting will be the problem...
Regards,
Cvc
On 2011.04.12. 11:07, "AI Rumman" wrote:
I am connecting to Postgresql 9 from my php application
Greetings,
I've got a postgresql-8.4.7 instance running on 64bit Linux that
recently failed a SQL UPDATE with the error:
ERROR: index row requires 8968 bytes, maximum size is 8191
The index in question that failed is defined as:
"results_failinfo_index" btree (failinfo)
It's extremely rare, but n
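A btree index entry has to fit within a page, so indexing a large text/bytea value directly will eventually fail like this. If the index only needs to support equality lookups, one common workaround is an expression index over a hash (a sketch; the index name is invented and it assumes equality-only use of failinfo):

```sql
CREATE INDEX results_failinfo_md5_idx ON results (md5(failinfo));
-- queries then have to compare the same expression:
-- SELECT ... FROM results WHERE md5(failinfo) = md5($1);
```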
>
> Query1
> -- the first select returns 10 rows
> SELECT a, b
> FROM table1 LEFT JOIN table2 ON (table1_id = table2_id)
> WHERE table1_id NOT IN (SELECT DISTINCT table1_id FROM table3)
> EXCEPT
> -- this select returns 5 rows
> SELECT a, b
> FROM table1 LEFT JOIN table2 ON (table1_id = table2_id)
> Wh
Well, after a few days of further investigation I still can't track the issue
down. The main problem is that I can only reproduce the error by running the whole
transaction, so I can't isolate the problem down to a simple use case or even a
smaller subset of the transaction, which would have been nice for pos
I am connecting to PostgreSQL 9 from my PHP application using pg_connect.
After 30 concurrent connections from a single host, I am getting a database
connection error in my app.
Does anyone have any idea why this problem is occurring?
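The usual first checks are the server's connection cap and what is actually connected at the moment of the error:

```sql
SHOW max_connections;                   -- server-wide limit
SELECT count(*) FROM pg_stat_activity;  -- sessions open right now
```

If the app uses pg_pconnect, persistent connections can accumulate per PHP/web-server worker and hit the cap well before 30 scripted connections would suggest.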
Query1
-- the first select returns 10 rows
SELECT a, b
FROM table1 LEFT JOIN table2 ON (table1_id = table2_id)
WHERE table1_id NOT IN (SELECT DISTINCT table1_id FROM table3)
EXCEPT
-- this select returns 5 rows
SELECT a, b
FROM table1 LEFT JOIN table2 ON (table1_id = table2_id)
WHERE table1_id NOT
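One property of EXCEPT worth keeping in mind with row counts like these: it is a set operation, so it removes duplicates from the left-hand result in addition to the rows matched on the right, while EXCEPT ALL preserves duplicates. A self-contained illustration:

```sql
VALUES (1), (1), (2)
EXCEPT
VALUES (2);      -- one row: 1 (the duplicate 1s are collapsed)

VALUES (1), (1), (2)
EXCEPT ALL
VALUES (2);      -- two rows: 1 and 1
```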
Hi postgres fans,
I'm already using the crosstab function, but it runs into an issue with multiple
results in the same category. Example table:
 row_name | cat  | value
----------+------+-------
 row1     | cat1 | val1
 row1     | cat1 | val2
 row1     | cat3 | val3
 row1     | cat4 | val4
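One common way to handle several values per (row_name, cat) pair is to aggregate them before pivoting, e.g. with string_agg (available from 9.0; crosstab comes from the contrib module tablefunc, and the source table name below is hypothetical):

```sql
SELECT *
FROM crosstab(
    $$ SELECT row_name, cat, string_agg(value, ', ')
       FROM example_table
       GROUP BY row_name, cat
       ORDER BY row_name, cat $$,
    $$ VALUES ('cat1'), ('cat2'), ('cat3'), ('cat4') $$
) AS ct(row_name text, cat1 text, cat2 text, cat3 text, cat4 text);
```

This yields one output row per row_name, with duplicate category values joined into a single cell instead of being silently dropped.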
I probably do not give size and performance characteristics as much
precedence as I should, but using a varchar makes the model more flexible if
you decide to change the identifier format. If you plan on simply using a
serial that starts at 1 million then OK, but if you are picking these numbers
on