On 13/08/10 08:38, Tatsuo Ishii wrote:
>> It's slower than smaller numbers, and if you actually dirty a
>> significant portion of it you can have a checkpoint that takes hours to
>> sync, completely trashing system responsiveness for a good portion of it.
>
> So how much is the reasonable upper li
VTD-XML 2.9, the next generation XML Processing API for SOA and Cloud
computing, has been released. Please visit
https://sourceforge.net/projects/vtd-xml/files/ to download the latest version.
* Strict Conformance
  * VTD-XML now fully conforms to XML namespace 1.0 spec
* Performan
On Aug 12, 2010, at 2:43 PM, Peter Geoghegan wrote:
> Hello,
>
> I would like to set up pgAgent on windows. I'm using the EDB 8.4.4
> package. I know that it is bad practice to put the password directly
> in the conninfo string, so I've avoided that. I want to use pgpass
> authentication, per th
> It's slower than smaller numbers, and if you actually dirty a
> significant portion of it you can have a checkpoint that takes hours to
> sync, completely trashing system responsiveness for a good portion of it.
So how much is the reasonable upper limit of shared_buffers at this
point? If it's o
On Thu, 2010-08-12 at 20:01 -0400, Greg Smith wrote:
> Joshua D. Drake wrote:
> > "Can we just say in the docs say 25% of memory to shared_buffers"
> > Yes, in fact we can. With the caveat of Windows, the reality is this
> > isn't going to hurt nearly as much as an untuned version of PostgreSQL
> >
Ma Sivakumar wrote:
There was also a point about making the installer create a recommended
postgresql.conf file.
After installation when the server is started for the first time, try
to start with postgresql.conf.suggested
If it starts successfully, let the user know that the server is
started
Tatsuo Ishii wrote:
What's the problem with 18GB shared_buffers, exactly?
It's slower than smaller numbers, and if you actually dirty a
significant portion of it you can have a checkpoint that takes hours to
sync, completely trashing system responsiveness for a good portion of it.
--
Greg Smi
> With modern servers often shipping with 72GB of RAM now, that would make
> shared_buffers set to 18GB. This is an absolutely disastrous setting
> for PostgreSQL in its current state; I'm seeing servers with that much
> RAM that suffer enormous problems with a far lower shared_buffers than
>
Joshua D. Drake wrote:
"Can we just say in the docs say 25% of memory to shared_buffers"
Yes, in fact we can. With the caveat of Windows, the reality is this
isn't going to hurt nearly as much as an untuned version of PostgreSQL
will.
With modern servers often shipping with 72GB of RAM now, t
"Carlo Stonebanks" writes:
> Ref these two queries against a view:
> -- QUERY 1, executes < 0.5 secs
> SELECT *
> FROM mdx_core.vw_provider AS p
> WHERE provider_id IN (13083101)
> -- QUERY 2, executes > 13.5 secs
> SELECT *
> FROM mdx_core.vw_provider AS p
> WHERE provider_id IN (SELECT 13083101
Oops, my bad - this should be in PG PERFORM! Posting it there...
akp geek writes:
> In the postgres db we have, I see that many pg_toast_temp and
> pg_temp schemas are getting created.
This is expected if you're using temp tables. That's where they live.
> Is it ok to drop those schemas ? Is
> there a way to prevent them being created?
No, an
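This is easy to see for yourself; a minimal sketch (the _N suffix on the
schema names varies per backend):

CREATE TEMP TABLE scratch (x integer);
SELECT nspname FROM pg_namespace
WHERE nspname LIKE 'pg_temp%' OR nspname LIKE 'pg_toast_temp%';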
Dear all -
In the postgres db we have, I see that many pg_toast_temp and
pg_temp schemas are getting created. Is it OK to drop those schemas? Is
there a way to prevent them from being created? Can you please share your
thoughts?
Regards
Hi,
On 13 August 2010 00:09, Odd Man wrote:
> replaced with version two that uses partitions, truncate, copy and cleanup
> of old data once daily.
It makes me think that you have developed some rotation mechanism to
process incoming data. If so, there is a batch-processing queue-based
solution ca
On 12 August 2010 21:09, Odd Man wrote:
> Hi,
>
> In my current project we have unusual (at least for me) conditions for
> a relational db, namely:
> * high write/read ratio (writes come from bulk data updates/inserts (every
> couple of minutes or so))
> * losing some recent part of data (last hour f
Hi,
In my current project we have unusual (at least for me) conditions for
a relational db, namely:
* high write/read ratio (writes come from bulk data updates/inserts (every
couple of minutes or so))
* losing some recent part of data (last hour for example) is OK, it can be
easily restored
First vers
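The daily rotation described here might look roughly like the following;
the table and file names are hypothetical, not the poster's actual schema:

BEGIN;
TRUNCATE staging_events;                            -- discard yesterday's batch cheaply
COPY staging_events FROM '/data/incoming.csv' CSV;  -- bulk-load the new batch
INSERT INTO events SELECT * FROM staging_events;    -- fold into the live table
COMMIT;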
Ref these two queries against a view:
-- QUERY 1, executes < 0.5 secs
SELECT *
FROM mdx_core.vw_provider AS p
WHERE provider_id IN (13083101)
-- QUERY 2, executes > 13.5 secs
SELECT *
FROM mdx_core.vw_provider AS p
WHERE provider_id IN (SELECT 13083101)
I am using the simple IN (SELECT n) to si
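Comparing the two plans is the natural first step; nothing below goes
beyond stock EXPLAIN against the queries already shown:

EXPLAIN ANALYZE
SELECT * FROM mdx_core.vw_provider AS p
WHERE provider_id IN (13083101);

EXPLAIN ANALYZE
SELECT * FROM mdx_core.vw_provider AS p
WHERE provider_id IN (SELECT 13083101);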
On 12/08/10 20.44, Tom Lane wrote:
Edoardo Panfili writes:
On 12/08/10 18.59, Edoardo Panfili wrote:
I am in some trouble with my tables defined using inheritance,
No, foreign keys do not play very nicely with inheritance. There is
some explanation in the manual, in the "Caveats" subsection
About the shared buffers size configuration discussion:
Like a few others here, I've spent a sizable proportion of my career
dealing with this issue (not with PG, with other products I've developed
that had a similar in-memory page pool).
There are roughly six stages in understanding this pro
Edoardo Panfili writes:
> On 12/08/10 18.59, Edoardo Panfili wrote:
>> I am in some trouble with my tables defined using inheritance,
No, foreign keys do not play very nicely with inheritance. There is
some explanation in the manual, in the "Caveats" subsection under
"Inheritance" --- see bottom
Hello,
I would like to set up pgAgent on windows. I'm using the EDB 8.4.4
package. I know that it is bad practice to put the password directly
in the conninfo string, so I've avoided that. I want to use pgpass
authentication, per the instructions for pgAgent.
I get an fe_sendauth error message in
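For reference, a pgpass entry is one colon-separated line per server, in
the form hostname:port:database:username:password, with * as a wildcard;
on Windows the file is %APPDATA%\postgresql\pgpass.conf. The values below
are placeholders:

localhost:5432:*:pgagent_user:placeholder_password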
On 12/08/10 18.59, Edoardo Panfili wrote:
hi,
I am in some trouble with my tables defined using inheritance. This is a
simplified test case:
---
create table sub1( name1 text) inherits(father);
create table sub2( name2 text) inherits(father);
create table other (description t
On Thu, Aug 12, 2010 at 8:42 PM, Joshua D. Drake wrote:
>
> GOOD PERFORMANCE IS ALWAYS RELIANT ON PROPER HARDWARE, DATABASE DESIGN
> AND APPLICATION ARCHITECTURE. THIS TABLE IS A HINT ONLY. YOU WILL LIKELY
> HAVE TO TUNE BEYOND THIS.
>
> shared_buffers = 25% of available memory
> work_mem = 2-4MB
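Read concretely for a hypothetical dedicated 16GB Linux box, those hints
translate to something like this (a starting point only, per the warning
above):

shared_buffers = 4GB   # 25% of available memory; use far less on Windows
work_mem = 4MB         # per sort/hash operation, per backend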
On 8/12/10 11:23 AM, Tom Lane wrote:
> Torsten Zühlsdorff writes:
>>> How annoying :-(. I think what you need to do is use truss or strace
>>> or local equivalent with the follow-forks flag, so that you can see what
>>> the stand-
hi,
I am in some trouble with my tables defined using inheritance. This is a
simplified test case:
---
create table sub1( name1 text) inherits(father);
create table sub2( name2 text) inherits(father);
create table other (description text, id integer);
-- I know, the contrai
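A self-contained sketch of the caveat being hit; the father table
definition is assumed (the test case is cut off before it), carrying the
referenced key:

CREATE TABLE father (id integer PRIMARY KEY);
CREATE TABLE sub1 (name1 text) INHERITS (father);
CREATE TABLE other (description text,
                    id integer REFERENCES father (id));

INSERT INTO sub1 (id, name1) VALUES (1, 'stored in the child table');
-- fails: the FK only checks rows physically stored in "father",
-- never rows stored in its children
INSERT INTO other VALUES ('points at the child row', 1);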
On 12 Aug 2010, at 16:04, Torsten Zühlsdorff wrote:
> Ok, next round. I just have truss as an option, because strace didn't work on
> my AMD64. Hope it's helpful:
I haven't used it yet, but I've heard good things about DTrace, which is
apparently in base these days.
Alban Hertroys
--
Screwin
> I have a simple Perl program (see below) with a DBI call to connect to the
> PostgreSQL db on a CentOS system. I don't know why the top command
> shows it takes more than 110m (under VIRT column). I tried with newer
> DBI and DBD versions s
Hi,
I have a simple Perl program (see below) with a DBI call to connect to the
PostgreSQL db on a CentOS system. I don't know why the top command
shows it takes more than 110m (under the VIRT column). I tried with newer
DBI and DBD versions and still have the same result. However, I tried it on
host
Torsten Zühlsdorff writes:
>> How annoying :-(. I think what you need to do is use truss or strace
>> or local equivalent with the follow-forks flag, so that you can see what
>> the stand-alone backend process does, not just initdb itself.
> Ok, next round. I just have truss
On 8/12/2010 10:56 AM, Pavel Stehule wrote:
So this means an XOR operation.
2010/8/12 Stephen Cook:
What does the hash mark (#) mean in plpgsql?
I saw it used in the pseudo_encrypt function @
http://wiki.postgresql.org/wiki/Pseudo_encrypt, on the line:
r2 := l1 # ((((1366.0 * r1 + 150889) % 714
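For integers, # is indeed bitwise XOR, which is quick to verify:

SELECT 5 # 3;  -- returns 6 (101 XOR 011 = 110)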
On Thu, 2010-08-12 at 11:59 +0530, Ma Sivakumar wrote:
> What does a migrating PHP/MySQL user do? If MySQL performs fast just
> out of box (I have not used MySQL), what is different there? Do MySQL
> defaults give better performance? How do they arrive at those
> defaults?
I have been watching th
Pavel Stehule writes:
> try:
> postgres=# select oprcode, oprleft::regtype, oprright::regtype from
> pg_operator where oprname = '#';
Or even easier:
regression=# \do+ #
List of operators
Schema | Name | Left arg type | Right arg type | Result type
Stephen Cook wrote:
> What does the hash mark (#) mean in plpgsql?
>
> I saw it used in the pseudo_encrypt function @
> http://wiki.postgresql.org/wiki/Pseudo_encrypt, on the line:
>
> r2 := l1 # ((((1366.0 * r1 + 150889) % 714025) / 714025.0) * 32767)::int;
>
> My google-fu has failed
On Thursday 12 August 2010 7:51:39 am Stephen Cook wrote:
> What does the hash mark (#) mean in plpgsql?
>
> I saw it used in the pseudo_encrypt function @
> http://wiki.postgresql.org/wiki/Pseudo_encrypt, on the line:
>
> r2 := l1 # ((((1366.0 * r1 + 150889) % 714025) / 714025.0) * 32767)::int;
>
hello
try:
postgres=# select oprcode, oprleft::regtype, oprright::regtype from
pg_operator where oprname = '#';
    oprcode    | oprleft | oprright
---------------+---------+----------
 path_npoints  | -       | path
 box_intersect | box     | box
 poly_npoints  | -       | polygon
 lseg_int
What does the hash mark (#) mean in plpgsql?
I saw it used in the pseudo_encrypt function @
http://wiki.postgresql.org/wiki/Pseudo_encrypt, on the line:
r2 := l1 # ((((1366.0 * r1 + 150889) % 714025) / 714025.0) * 32767)::int;
My google-fu has failed me on this one, and I need to know.
Than
Hey all,
I have three questions about libpq.
Is there a way to get the TOO MANY CONNECTIONS error code from libpq
immediately after calling PQconnect*()? Or how can I detect such an error
without parsing PQerrorMessage()?
It seems that all of the error codes may be obtained by calling
PQresultErrorField(
Hi Tom,
Please note that after changing the system's IPC settings, no
core file is dumped anymore. Quite interesting.
How annoying :-(. I think what you need to do is use truss or strace
or local equivalent with the follow-forks flag, so that you can see what
the stand-alone backend
Torsten Zühlsdorff writes:
> Please note that after changing the system's IPC settings, no
> core file is dumped anymore. Quite interesting.
How annoying :-(. I think what you need to do is use truss or strace
or local equivalent with the follow-forks flag, so that
On Wed, 2010-08-11 at 09:53 -0400, Merlin Moncure wrote:
> On Wed, Aug 11, 2010 at 8:42 AM, Rafal Pietrak
> wrote:
[...]
> >
> > SELECT min(A[1]) as a1, min(A[2]) as a2, ...
> >
> > This is because aggregate functions are not defined on ARRAY types. Or
> > maybe there is an easier and more read
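One way to avoid hand-writing min(A[1]), min(A[2]), ... is to unroll the
subscripts with generate_series; a sketch, assuming a table t with an
integer[] column a:

SELECT i, min(a[i]) AS min_at_i
FROM t
CROSS JOIN generate_series(1, (SELECT max(array_upper(a, 1)) FROM t)) AS i
GROUP BY i
ORDER BY i;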
On Thu, 12 Aug 2010 12:50:49 +0100
Thom Brown wrote:
> On 12 August 2010 12:14, Ivan Sergio Borgonovo
> wrote:
> > I've
> > delete from catalog_items where ItemID in (select id from
> > import.Articoli_delete);
> >
> > id and ItemID have an index.
> >
> > catalog_items is ~1M rows
> > Articoli_d
On Thu, 12 Aug 2010 17:14:17 +0530
Jayadevan M wrote:
> > I've
> > delete from catalog_items where ItemID in (select id from
> > import.Articoli_delete);
> Does catalog_items have child tables where the FK columns are not
> indexed? Regards,
Possibly, but very small ones.
What I missed to say i
> I've
> delete from catalog_items where ItemID in (select id from
> import.Articoli_delete);
>
> id and ItemID have an index.
>
> catalog_items is ~1M rows
> Articoli_delete is less than 2K rows.
>
> This query has been running for roughly 50min.
> Right now it is the only query running.
>
> Postg
On 12 August 2010 12:14, Ivan Sergio Borgonovo wrote:
> I've
> delete from catalog_items where ItemID in (select id from
> import.Articoli_delete);
>
> id and ItemID have an index.
>
> catalog_items is ~1M rows
> Articoli_delete is less than 2K rows.
>
> This query has been running for roughly 50m
> From: Ivan Sergio Borgonovo
> To: pgsql-general@postgresql.org
> Date: 12/08/2010 16:43
> Subject: [GENERAL] delete query taking way too long
> Sent by: pgsql-general-ow...@postgresql.org
>
> I've
> delete from catalog_items where ItemID in (select id from
> import.Articoli_delete);
Does catalo
What's the output of explain?
--- On Thu, 12/8/10, Ivan Sergio Borgonovo wrote:
> From: Ivan Sergio Borgonovo
> Subject: [GENERAL] delete query taking way too long
> To: pgsql-general@postgresql.org
> Date: Thursday, 12 August, 2010, 12:14
> I've
> delete from catalog_items where ItemID in (sel
I've
delete from catalog_items where ItemID in (select id from
import.Articoli_delete);
id and ItemID have an index.
catalog_items is ~1M rows
Articoli_delete is less than 2K rows.
This query has been running for roughly 50min.
Right now it is the only query running.
PostgreSQL 8.3.4 on x86_64-
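Two quick checks before anything else: the plan itself, and whether every
table that references catalog_items has an index on its FK column (the
referencing table below is hypothetical):

EXPLAIN DELETE FROM catalog_items
WHERE ItemID IN (SELECT id FROM import.Articoli_delete);

-- every deleted row fires an FK lookup in each referencing table;
-- without an index there, that is one sequential scan per row
CREATE INDEX some_child_itemid_idx ON some_child_table (itemid);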
On Thursday 12. August 2010 08.29.13 Ma Sivakumar wrote:
> What does a migrating PHP/MySQL user do? If MySQL performs fast just
> out of box (I have not used MySQL), what is different there? Do MySQL
> defaults give better performance? How do they arrive at those
> defaults?
>
> Or is it a complet
Thanks! Yup, the LEFT JOIN is probably cleaner syntax than the nested query.
On Thu, Aug 12, 2010 at 1:37 AM, Thom Brown wrote:
> On 12 August 2010 09:16, Mike Christensen wrote:
>> Damn I'm the master at posting stuff then figuring it out like 5
>> seconds later.. Is there an approach bette
Hello,
Excerpts from Torsten Zühlsdorff's message of Wed Aug 11 02:52:34 -0400 2010:
Bad system call (core dumped)
I think you should try harder to generate the core file. Maybe you have
too low an "ulimit -c" setting?
The kernel message indicates that core *is* being dumped. Possibly
Hello,
The first suspicious thing I can see is a lot of "ERR#32 'Broken pipe'" entries.
This is the result of postgres crashing and thus initdb being unable to
write any more data to it.
I think you should try harder to generate the core file. Maybe you have
too low an "ulimit -c" setting?
The
On 12 August 2010 09:16, Mike Christensen wrote:
> Damn I'm the master at posting stuff then figuring it out like 5
> seconds later.. Is there an approach better than this?
>
> select RecipeId, Rating, array_to_string(ARRAY(select Tag from
> RecipeTags where RecipeId = R.RecipeId), ' ') from Reci
I have the table recipes (ID/Name):
1 - Pancakes
2 - Chicken dish
3 - Tacos
Then I have the table RecipeTags (RecipeID/Tag)
1 - A
1 - C
2 - A
2 - D
3 - E
3 - F
3 - G
I want to query for all recipes, but join in the recipe tags.
However, rather than having a row for each recipe tag, I want to
co
Damn I'm the master at posting stuff then figuring it out like 5
seconds later.. Is there an approach better than this?
select RecipeId, Rating, array_to_string(ARRAY(select Tag from
RecipeTags where RecipeId = R.RecipeId), ' ') from Recipes R;
On Thu, Aug 12, 2010 at 1:12 AM, Mike Christensen
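For later readers, the join-and-aggregate form suggested above might look
like this (array_agg needs 8.4 or later; a sketch, not Thom's exact query):

SELECT r.RecipeId, r.Rating,
       array_to_string(array_agg(t.Tag), ' ') AS Tags
FROM Recipes r
LEFT JOIN RecipeTags t ON t.RecipeId = r.RecipeId
GROUP BY r.RecipeId, r.Rating;
-- recipes with no tags come back with an empty string
-- (array_to_string skips the NULL produced by the outer join)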
On Thu, Aug 12, 2010 at 12:37 PM, Scott Marlowe wrote:
> On Wed, Aug 11, 2010 at 11:41 PM, Greg Smith wrote:
> > Sandeep Srinivasa wrote:
> >>
> >> Maybe a tabular form would be nice - "work_mem" under...
> >
> > The problem with work_mem in particular is that the useful range depends
> > quite
Hello
2010/8/12 Georgi Ivanov :
> Hi,
> I have a query like this:
> SELECT * FROM tablename LIMIT 10 OFFSET 10;
>
> If I increase the OFFSET to 1000, for example, the query runs slower. The
> bigger the OFFSET, the slower the query.
>
sure - the database has to process all rows and skip the first n of r
On 12/08/2010 09:43, Georgi Ivanov wrote:
> [...]
> I have a query like this:
> SELECT * FROM tablename LIMIT 10 OFFSET 10;
>
> If I increase the OFFSET to 1000, for example, the query runs slower. The
> bigger the OFFSET, the slower the query.
>
> This is a standard pagination feature I use for m
Hi,
I have a query like this:
SELECT * FROM tablename LIMIT 10 OFFSET 10;
If I increase the OFFSET to 1000, for example, the query runs slower. The
bigger the OFFSET, the slower the query.
This is a standard pagination feature I use for my website.
Actually the query is a little bit more complex than t
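The usual escape from ever-growing OFFSET cost is keyset pagination:
remember the last key of the previous page and seek past it instead of
counting rows. A sketch, assuming a unique, indexed ordering column id:

SELECT *
FROM tablename
WHERE id > 12345   -- the last id seen on the previous page
ORDER BY id
LIMIT 10;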
On Wed, Aug 11, 2010 at 11:41 PM, Greg Smith wrote:
> Sandeep Srinivasa wrote:
>>
>> Maybe a tabular form would be nice - "work_mem" under...
>
> The problem with work_mem in particular is that the useful range depends
> quite a bit on how complicated you expect the average running query to be.