hartrc wrote:
> Basically my question is:
> Is there currently any way to avoid wal generation during data load
for
> given tables and then have point in time recovery after that?
>
> Background and blurb
> The situation i'm referring to here is for a development environment.
I
> require point in
Alan Nilsson wrote:
> Is it the case that extensions can be added to only one schema? If
so, what is the recommended
> practice for accessing a function from an extension in multiple
schemas?
Yes, a (relocatable) extension can belong to only one schema.
There are two ways to access the functions:
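For illustration, a minimal sketch of the two usual options, with a hypothetical extension schema "ext" and a made-up function name:

-- Option 1: schema-qualify the call wherever it is used.
SELECT ext.my_ext_function(42);

-- Option 2: put the extension's schema on search_path so unqualified calls resolve.
SET search_path = my_app_schema, ext, public;
SELECT my_ext_function(42);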
Scot Kreienkamp wrote:
> I have a problem that I've been struggling with for quite some time.
Every once in a while I will get
> a connection that goes to idle in transaction on an in-house
programmed application that connects with
> JDBC. That happens fairly regularly and the programmers are tryi
Hi all!
We have at present the following parameters related to shared memory:
*postgres*
shared_buffers = 7GB
max_connections = 1500
max_locks_per_transaction = 1024
max_prepared_transactions = 0 (not set)
*system*
SHMALL = 2097152
SHMMAX = 17 670 512 640
SHMMNI = 4096
The amount of RAM is 24
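For context, a sketch of how those kernel limits are typically inspected and raised on Linux; the numbers below are purely illustrative, not a recommendation:

# shmmax is in bytes, shmall is in pages (usually 4 kB each)
sysctl kernel.shmmax kernel.shmall kernel.shmmni

# Raise them for the running kernel, e.g. to allow a 16 GB segment:
sysctl -w kernel.shmmax=17179869184
sysctl -w kernel.shmall=4194304        # 4194304 pages * 4 kB = 16 GB

# Persist the same values in /etc/sysctl.conf so they survive a reboot.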
Hi,
On Wed, 2012-09-26 at 13:39 +0400, Alexander Shutyaev wrote:
> We need to increase max_connections to 3000. When we tried to do this
> we got an error
Not an answer to your question but: Are you talking about 3k
*concurrent* connections? What are the server specs? I am not sure you
can h
Hi:
The size of our database is growing rather rapidly. We're concerned
about how well Postgres will scale for OLAP-style queries over terabytes
of data. Googling around doesn't yield great results for vanilla
Postgres in this application, but generally links to other software like
Greenplum, Nete
On 09/25/2012 08:25 AM, Daniele Varrazzo wrote:
On Tue, Sep 25, 2012 at 3:47 PM, Adrian Klaver wrote:
To elaborate:
test=> SELECT current_setting('server_version_num');
current_setting
-----------------
90009
Yes, but knowing that, how does he run a statement only if version
e.g. >= 8040
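One possible approach, not necessarily what the thread settled on: wrap the statement in a DO block and compare server_version_num directly. This assumes a server with DO support (9.0 or later) and PL/pgSQL; the ALTER TABLE inside is just a placeholder, and 80400 stands for 8.4.0 (major*10000 + minor*100 + patch).

DO $$
BEGIN
  IF current_setting('server_version_num')::int >= 80400 THEN
    -- the version-dependent statement goes here, for example:
    EXECUTE 'ALTER TABLE my_table ALTER COLUMN my_col SET STATISTICS 500';
  END IF;
END
$$;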
Hello,
I know that the optimizer uses the number of rows to calculate the total
query execution time. In complex queries, it is very difficult to know the
number of rows in the expected result. This certainly affects the optimizer to
a very great extent. I have a view and it should return aro
Thanks Steve and Andres,
I read these articles
http://www.linuxinsight.com/optimize_postgresql_database_size.html
http://yuval.bar-or.org/blog/2010/08/sluggish-postgresql-databases-and-reindexing-indexes/
http://www.if-not-true-then-false.com/2009/partitioning-large-postgresql-tables-and-h
Ah. A pair of constraints. I see.
Thanks!
__
*Mike Blackwell | Technical Analyst, Distribution Services/Rollout
Management | RR Donnelley*
1750 Wallace Ave | St Charles, IL 60174-3401
Office: 630.313.7818
mike.black
Hi,
I'm a new user and I have this problem:
My database connected through pgAdmin III until yesterday; today it displays this
message:
Could not connect to server: connection refused (0x274D/10061) Is the
server running on host "localhost" (127.0.0.1) and accepting TCP/IP
connections on port 5432?
I'm scanning
>> I am wondering why the historical data (the real result of the query) does
not affect the execution plan. For example, If I ran the query 10 times I
always get around 6500 instead of 5.5 million in the top most execution plan
node.
The likely accurate, if cheeky, answer is simply that no one has
Hello David,
Thanks for the quick response, I will follow up with the performance group
after preparing some case scenarios.
Regards
From: David Johnston
To: 'salah jubeh' ; 'pgsql'
Sent: Wednesday, September 26, 2012 4:41 PM
Subject: RE: [GENERAL] este
On 09/26/2012 07:39 AM, fra2012 wrote:
Hi,
I'm a new user and I have this problem:
My database connected through pgAdmin III until yesterday; today it displays this
message:
Could not connect to server: connection refused (0x274D/10061) Is the
server running on host "localhost" (127.0.0.1) and accepting TCP/IP
On 09/26/2012 08:57 AM, Empire Software srl wrote:
Dear Mr. Adrian,
I reinstalled Postgres 9.1 but the problem does not change.
I opened the config file pg_hba.conf; the address is 127.0.0.1/32
I am CCing the list so more eyes can see.
pg_hba.conf controls who gets in once a connection is made. Y
On Tue, Sep 25, 2012 at 7:00 PM, Shiran Kleiderman wrote:
>
> Hi
> Thanks for your answer.
> I understood that the server is ok memory wise.
> What can I check on the client side or the DB queries?
Well, you're connecting to localhost, so I'd expect any memory issue to show
up in free there. I'm not seeing
Hi
Thanks again.
Right now, this is *free -m and ps aux* and none of the crons can run -
can't allocate memory.
cif@domU-12-31-39-08-06-20:~$ free -m
             total       used       free     shared    buffers     cached
Mem:         17079      12051       5028          0        270       9578
-
On Tue, Sep 25, 2012 at 9:09 AM, hartrc wrote:
> My version: PostgreSQL v9.1.5
> Version string: "PostgreSQL 9.1.5 on x86_64-unknown-linux-gnu, compiled by
> gcc (SUSE Linux) 4.3.4 [gcc-4_3-branch revision 152973], 64-bit"
>
> Basically my question is:
> Is there currently any way to avoid wal gen
Dear all,
I have an issue with the red portion of the function below.
The IF last_id IS NULL THEN is supposed to test whether last_id is NULL, i.e. the
SELECT query did not find it,
and then execute the rest of the red SQL, but it always fails to insert
NEW.amount into amount; everything else works fine.
Kin
Why not use IF FOUND?
RobR
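For what it's worth, a minimal sketch of that suggestion (the original function isn't shown in full, so the "ledger" table and "ref" column here are made up): test FOUND right after the SELECT instead of checking the variable for NULL.

CREATE OR REPLACE FUNCTION set_amount_trg() RETURNS trigger
LANGUAGE plpgsql AS $$
DECLARE
    last_id integer;
BEGIN
    SELECT id INTO last_id
    FROM   ledger
    WHERE  ref = NEW.ref;

    IF NOT FOUND THEN
        -- the SELECT returned no row, so do the insert
        INSERT INTO ledger (ref, amount) VALUES (NEW.ref, NEW.amount);
    END IF;

    RETURN NEW;
END;
$$;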
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of SUNDAY A. OLUTAYO
Sent: Wednesday, September 26, 2012 12:43 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] trigger and plpgsq help needed
Dear all,
I have issue
On 09/26/2012 10:42 AM, SUNDAY A. OLUTAYO wrote:
> The IF last_id IS NULL THEN is supposed to test whether last_id is NULL, i.e.
> the SELECT query did not find it,
> and then execute the rest of the red SQL, but it always fails to insert
> *NEW.amount* into *amount*; everything else works fine.
I believe wh
Hey Everyone,
I seem to be getting an inaccurate cost from explain. Here are two examples
for one query with two different query plans:
exchange_prod=# set enable_nestloop = on;
SET
exchange_prod=#
exchange_prod=# explain analyze SELECT COUNT(DISTINCT "exchange_uploads"."id")
FROM "exchange_u
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Thomas Kellerer
> Sent: Tuesday, September 25, 2012 5:25 PM
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] idle in transaction query makes server unresp
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Albe Laurenz
> Sent: Wednesday, September 26, 2012 5:15 AM
> To: Scot Kreienkamp; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] idle in transaction query make
On 09/26/2012 09:42 AM, SUNDAY A. OLUTAYO wrote:
Dear all,
I have an issue with the red portion of the function below.
The IF last_id IS NULL THEN is supposed to test whether last_id is NULL, i.e.
the SELECT query did not find it,
and then execute the rest of the red SQL, but it always fails to insert
*NEW.amou
Hi,
After upgrading from 8.3 I found an unusual error related to a plpgsql function.
The database includes a table named "detail".
The procedure/function in question includes a declaration of detail%rowtype.
Loading the server from a dump-all at the time of the upgrade went
fine and the functio
On 09/26/2012 01:38 PM, Robert Sosinski wrote:
I seem to be getting an inaccurate cost from explain. Here are two
examples for one query with two different query plans:
Well, there's this:
Nested Loop (cost=0.00..151986.53 rows=2817 width=4) (actual
time=163.275..186869.844 rows=43904 loop
On 26/09/2012 17:03, Shaun Thomas wrote:
On 09/26/2012 01:38 PM, Robert Sosinski wrote:
I seem to be getting an inaccurate cost from explain. Here are two
examples for one query with two different query plans:
Well, there's this:
Nested Loop (cost=0.00..151986.53 rows=2817 width=4) (act
On Wed, Sep 26, 2012 at 02:38:09PM -0400, Robert Sosinski wrote:
> The first query shows a cost of 190,169.55 and runs in 199,806.951 ms.
> When I disable nested loop, I get a cost of 2,535,992.34 which runs in
> only 133,447.790 ms. We have run queries on our database with a cost
> of 200K cost b
Hi,
I am reading the docs and am looking for an example or explanation of the
difference between SETOF and arrays.
But maybe this is not even relevant. The reason I am looking for this is
because I need to build a function that looks for all records whose ids are in
a pool of other records.
Ma
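To make the comparison concrete, a small sketch (the records and record_pool tables are invented): a SETOF function acts like a table source, while an array-returning function gives back a single value you test with = ANY.

-- SETOF: the function behaves like a table you can select from.
CREATE FUNCTION matching_record_ids(pool_name text)
RETURNS SETOF integer LANGUAGE sql AS $$
    SELECT record_id FROM record_pool WHERE pool = $1;
$$;

SELECT r.*
FROM   records r
WHERE  r.id IN (SELECT * FROM matching_record_ids('archive'));

-- Array: the function returns one value holding all the ids.
CREATE FUNCTION matching_record_id_array(pool_name text)
RETURNS integer[] LANGUAGE sql AS $$
    SELECT array_agg(record_id) FROM record_pool WHERE pool = $1;
$$;

SELECT r.*
FROM   records r
WHERE  r.id = ANY (matching_record_id_array('archive'));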
On Sep 26, 2012, at 16:28, Alexander Reichstadt wrote:
> Hi,
>
> I am reading the docs and am looking for an example or explanation of the
> difference between SETOF and arrays.
>
> But maybe this is not even relevant. The reason I am looking for this is
> because I need to build a function th
Eureka, it works.
Thanks!
On 26.09.2012 at 22:37, David Johnston wrote:
> On Sep 26, 2012, at 16:28, Alexander Reichstadt wrote:
>
>> Hi,
>>
>> I am reading the docs and am looking for an example or explanation of
>> the difference between SETOF and arrays.
>>
>> But maybe this is not even
On Wed, Sep 26, 2012 at 5:50 AM, Ryan Kelly wrote:
> Hi:
>
> The size of our database is growing rather rapidly. We're concerned
> about how well Postgres will scale for OLAP-style queries over terabytes
> of data. Googling around doesn't yield great results for vanilla
> Postgres in this applicat
On 26/09/12 23:50, Ryan Kelly wrote:
Hi:
The size of our database is growing rather rapidly. We're concerned
about how well Postgres will scale for OLAP-style queries over terabytes
of data. Googling around doesn't yield great results for vanilla
Postgres in this application, but generally links
Hi,
On 26 September 2012 21:50, Ryan Kelly wrote:
> The size of our database is growing rather rapidly. We're concerned
> about how well Postgres will scale for OLAP-style queries over terabytes
> of data. Googling around doesn't yield great results for vanilla
> Postgres in this application, but
On Wed, Sep 26, 2012 at 1:21 PM, hubert depesz lubaczewski <
dep...@depesz.com> wrote:
> On Wed, Sep 26, 2012 at 02:38:09PM -0400, Robert Sosinski wrote:
> > The first query shows a cost of 190,169.55 and runs in 199,806.951 ms.
> > When I disable nested loop, I get a cost of 2,535,992.34 which ru
On 09/26/2012 12:55 PM, Jim Wilson wrote:
Hi,
After upgrading from 8.3 I found an unusual error related to a plpgsql function.
The database includes a table named "detail".
The procedure/function in question includes a declaration of detail%rowtype.
Loading the server from a dump-all at the t
Edson Richter writes:
>> That said, looking at your actual query:
>>
>> SELECT COUNT(DISTINCT eu.id)
>> FROM exchange_uploads eu
>> JOIN upload_destinations ud ON ud.id = eu.upload_destination_id
>> LEFT JOIN uploads u ON u.id = eu.upload_id
>> LEFT JOIN import_errors ie ON ie.exchange_upload_id
Hi, I compiled PostgreSQL 9.1 from sources on an OpenSUSE 10.1 PowerPC
machine. While trying to test one application, I got errors just
before connecting to the database, and found my app is loading
linux-vdso64.so.1 while libpq.so uses linux-vdso32.so.1
This means the PostgreSQL libraries were
On Wed, Sep 26, 2012 at 1:21 PM, hubert depesz lubaczewski
wrote:
> On Wed, Sep 26, 2012 at 02:38:09PM -0400, Robert Sosinski wrote:
>> The first query shows a cost of 190,169.55 and runs in 199,806.951 ms.
>> When I disable nested loop, I get a cost of 2,535,992.34 which runs in
>> only 133,447.7
I just don't get how we are supposed to use LIKE with backslashes in
strings in 8.4. This is particularly vexing, because I have a field
containing UNC paths that I need to search on (and eventually update). I
have been looking at this page for guidance:
http://www.postgresql.org/docs/8.4/static/
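For what it's worth, a sketch of the usual patterns (the files table and path column are made up). LIKE treats backslash as its escape character, so every literal backslash in the pattern has to be doubled; how many backslashes the string literal then needs depends on standard_conforming_strings:

-- With standard_conforming_strings = on, backslashes in the literal are taken
-- as-is, so doubling them once (for LIKE's escaping) is enough:
SET standard_conforming_strings = on;
SELECT * FROM files WHERE path LIKE '\\\\fileserver\\share\\%';

-- Or sidestep the escaping entirely by disabling LIKE's escape character:
SELECT * FROM files WHERE path LIKE '\\fileserver\share\%' ESCAPE '';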
Jim Wilson writes:
> After upgrading from 8.3 I found an unusual error related to a plpgsql
> function.
> The database includes a table named "detail".
> The procedure/function in question includes a declaration of detail%rowtype.
Hmm. The reason that doesn't work is that DETAIL is now a keyw
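One workaround commonly suggested for this kind of keyword clash (sketched here with made-up parameter and column names, and assuming the table lives in the public schema) is to schema-qualify the %ROWTYPE reference so the word is read as a table name:

CREATE OR REPLACE FUNCTION show_detail(p_id integer) RETURNS text
LANGUAGE plpgsql AS $$
DECLARE
    -- schema-qualified, so "detail" is parsed as a table name, not the keyword
    d public.detail%ROWTYPE;
BEGIN
    SELECT * INTO d FROM public.detail WHERE id = p_id;
    RETURN d::text;
END;
$$;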
Hello, good evening all,
does anybody know how I can have something like:
select function_that_returns_int(NEW.field1) into NEW.field2;
i.e., getting a function's output and assigning it to a variable inside the trigger,
in a trigger function?
I'm always getting:
ERROR: SELECT query has no destinatio
On 09/26/12 17:56, joao viegas wrote:
>
> does anybody know how I can have something like:
> select function_that_returns_int(NEW.field1) into NEW.field2;
I think you want:
select into NEW.field2 function_that_returns_int(NEW.field1);
or even:
NEW.field2 := function_that_returns_int(NEW.f
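Putting the two forms together, a self-contained sketch (the table name my_table and the trigger name are made up; function_that_returns_int is taken from the question):

CREATE FUNCTION derive_field2() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- assignment form
    NEW.field2 := function_that_returns_int(NEW.field1);
    -- equivalent SELECT INTO form:
    -- SELECT function_that_returns_int(NEW.field1) INTO NEW.field2;
    RETURN NEW;
END;
$$;

CREATE TRIGGER set_field2
    BEFORE INSERT OR UPDATE ON my_table
    FOR EACH ROW EXECUTE PROCEDURE derive_field2();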
All,
* Scott Marlowe (scott.marl...@gmail.com) wrote:
> If you want fastish OLAP on postgres you need to do several things.
[...]
All good suggestions.
I'd recommend looking at ROLAP approaches and doing aggregations and
materialized views first. Will depend on exactly what you need/are
lookin
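To illustrate the aggregation idea (all names below are invented, and no native materialized-view support is assumed): keep a pre-aggregated summary table and refresh it on a schedule, e.g. from cron.

-- Hand-rolled "materialized view": a pre-aggregated summary table.
CREATE TABLE daily_sales_summary AS
SELECT sale_date, product_id, sum(amount) AS total_amount, count(*) AS n_sales
FROM   sales
GROUP  BY sale_date, product_id;

CREATE INDEX daily_sales_summary_idx
    ON daily_sales_summary (sale_date, product_id);

-- Periodic refresh; the simplest form is a transactional rebuild-in-place.
BEGIN;
TRUNCATE daily_sales_summary;
INSERT INTO daily_sales_summary
SELECT sale_date, product_id, sum(amount), count(*)
FROM   sales
GROUP  BY sale_date, product_id;
COMMIT;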
joao viegas writes:
> does anybody know how I can have something like:
> select function_that_returns_int(NEW.field1) into NEW.field2;
Works for me ...
create table fooey (q1 int, q2 int);
create function myt() returns trigger language plpgsql as $$
begin
select abs(new.q1) into new.q2;
re
On Sep 26, 2012, at 20:42, Chris Curvey wrote:
> I just don't get how we are supposed to use LIKE with backslashes in strings
> in 8.4. This is particularly vexing, because I have a field containing UNC
> paths that I need to search on (and eventually update). I have been looking
> at this p
I have a database export done via COPY. It's all the tables, constraints, blah
blah.
I know that (a couple of years ago) I just used the defaults. REALLY fast, btw,
I remember.
All the table create commands work fine.
I cannot get the whole file to import using (as postgres) psql -d database_n
On 09/26/2012 07:41 PM, Dennis Gearon wrote:
I have a database export done via COPY. It's all the tables, constraints,
blah blah.
More information may be in order.
What versions of Postgres are you using?
Are you going from one version to another?
Did you really use COPY for everything (schema i
It turns out that I had made the export from psql, a text-based export.
So I read that it actually needs to be imported using '\i
filename_in_local_directory'
Dennis Gearon
Never, ever approach a computer saying or even thinking "I will just do this
quickly."
Hi,
Could you provide the steps and commands you used to compile 9.1?
I want to reproduce your case on my machine
Regards
Arthur
On 27 Sep 2012, at 7:46 AM, Leonardo M. Ramé wrote:
> Hi, I compiled PostgreSql 9.1 from sources in a OpenSuse 10.1 PowerPC
> machine. While trying to test one appli
Obviously OLAP performance depends to some extent on what you want to do
with it. My recommendation is to follow Scott's advice first (keep things
simple as long as you can), and then look at supplemental approaches when
you reach the limit there.
The big limit IMO is the lack of intraquery para