On Nov 14, 2011, at 0:35, Amit Dor-Shifer wrote:
> On Mon, Nov 14, 2011 at 4:29 PM, Amit Dor-Shifer
> wrote:
> Hi,
> I've got this table:
> create table phone_calls
> (
> start_time timestamp,
> device_id integer,
> term_status integer
> );
>
> It describes phone call events. A 'te
>
> Question: what can I do to rsync only the new additions in every table
> starting 00:00:01 until 23:59:59 for each day?
>
Table-level replication (like Slony) should help here.
Or
A trigger-based approach with dblink would be another (but somewhat more
complex) option.
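As a rough sketch of the trigger-based idea (not a tested recipe — the connection string, remote table, and function names are illustrative assumptions), each insert could be pushed to a second server with dblink:

```sql
-- Hedged sketch: forward each new phone_calls row to a remote server.
-- Connection details and names are illustrative, not from the thread.
CREATE EXTENSION IF NOT EXISTS dblink;

CREATE OR REPLACE FUNCTION replicate_phone_call() RETURNS trigger AS $$
BEGIN
    PERFORM dblink_exec(
        'host=backup-host dbname=backup user=repl',
        format('INSERT INTO phone_calls VALUES (%L, %s, %s)',
               NEW.start_time, NEW.device_id, NEW.term_status));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER phone_calls_replicate
    AFTER INSERT ON phone_calls
    FOR EACH ROW EXECUTE PROCEDURE replicate_phone_call();
```

Note this makes every insert synchronous with the remote server, which is part of why it is the "a bit complex" option.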
Thanks
VB
On Mon, Nov 14, 2011 at 4:29 PM, Amit Dor-Shifer
wrote:
> Hi,
> I've got this table:
> create table phone_calls
> (
> start_time timestamp,
> device_id integer,
> term_status integer
> );
>
> It describes phone call events. A 'term_status' is a sort of exit
> status for the call, wh
Hi,
I've got this table:
create table phone_calls
(
start_time timestamp,
device_id integer,
term_status integer
);
It describes phone call events. A 'term_status' is a sort of exit status
for the call, whereby a value != 0 indicates some sort of error.
Given that, I wish to retriev
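The request above is cut off, but given the stated semantics (a non-zero term_status marks an error), one purely illustrative query over this table would be counting failed calls per device per day:

```sql
-- Illustrative only: failed calls per device per day,
-- assuming term_status <> 0 marks an error (as stated above).
SELECT device_id,
       date_trunc('day', start_time) AS day,
       count(*) AS failed_calls
FROM phone_calls
WHERE term_status <> 0
GROUP BY device_id, date_trunc('day', start_time)
ORDER BY day, device_id;
```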
I'm desperately trying to get hold of the latest RPMs for PostgreSQL 9.0.5
for SLES 11 SP1 x86_64. I simply cannot find them anywhere!
It seems that the good folk over at software.opensuse.org are only compiling
9.1.x now. Rather annoying, to say the least, for those of us who don't wa
On 12/11/11 20:51, alextc wrote:
> Hi Ray,
>
> Have you had any luck getting around this issue?
>
> I am having the same issue. I just installed PostgreSQL 9.1 with Stack
> Builder 3.0.0.
>
> Every time I try to install additional software, an error message pops
> up saying .
On 11/13/2011 06:09 PM, Alexander Burbello wrote:
Hi folks,
My server has a daily routine to import a dump file; however, it's taking a
long time to finish.
The original db has around 200 MB and takes 3~4 minutes to export (there are
many blob fields); however, it takes 4 hours to import using p
Hi,
On 14 November 2011 11:09, Alexander Burbello wrote:
> What can I do to tune this database to speed up this restore?
> My current db parameters are:
> shared_buffers = 256MB
> maintenance_work_mem = 32MB
You should increase maintenance_work_mem as much as you can.
full_page_writes, archive_
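The reply is truncated, but the usual one-off restore-time settings it is pointing at can be sketched as a postgresql.conf fragment (values are illustrative, not universal recommendations, and the unsafe ones should be reverted after the restore):

```
# postgresql.conf fragment for a one-off bulk restore (illustrative values)
maintenance_work_mem = 512MB   # speeds up index builds during restore
fsync = off                    # UNSAFE except when you can simply re-run the restore
full_page_writes = off         # likewise only for a throwaway restore window
archive_mode = off             # avoid archiving WAL generated by the restore
```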
Hi folks,
My server has a daily routine to import a dump file; however, it's taking a
long time to finish.
The original db has around 200 MB and takes 3~4 minutes to export (there
are many blob fields); however, it takes 4 hours to import using pg_restore.
What can I do to tune this database to spe
On 11/13/11 3:45 AM, alextc wrote:
I am working on Windows, but is there any official (not 3rd-party, like
the EnterpriseDB one) PostgreSQL installer for Windows?
Not any more... but you don't have to use the 'stackbuilder' to run
postgres.
--
john r pierce
On Sun, Nov 13, 2011 at 12:28 PM, Ludo Smissaert wrote:
> The algorithm is that I am returning a SETOF cursors pointing
> to two different tables and data of these two tables will be
> printed by the client like this:
Have you actually measured the cost of adding the ORDER BY to the
select from th
On 11/13/11 17:58, David Johnston wrote:
On Nov 13, 2011, at 11:13, Ludo Smissaert wrote:
Within a PL/PgSQL function I do a
CREATE TEMPORARY TABLE v_temp ON COMMIT DROP AS SELECT ctime FROM
source ORDER BY ctime WITH DATA;
Then I use the v_temp in the same transaction block:
FOR v_ctime
On Nov 13, 2011, at 11:13, Ludo Smissaert wrote:
> Greetings,
>
> Within a PL/PgSQL function I do a
>
> CREATE TEMPORARY TABLE v_temp ON COMMIT DROP
> AS
> SELECT ctime FROM source ORDER BY ctime
> WITH DATA;
>
> Then I use the v_temp in the same transaction block:
>
> FOR v_ctime IN
>SEL
"Clark C. Evans" writes:
> Even so, the CREATE DATABASE... WITH TEMPLATE still has a set of
> additional issues with it. It ties up the hard drive with activity
> and then extra space while it duplicates data. Further, it causes
> the shared memory cache to be split between the original and the
Greetings,
Within a PL/PgSQL function I do a
CREATE TEMPORARY TABLE v_temp ON COMMIT DROP
AS
SELECT ctime FROM source ORDER BY ctime
WITH DATA;
Then I use the v_temp in the same transaction block:
FOR v_ctime IN
SELECT ctime FROM v_temp
LOOP
END LOOP;
Now I am curious, will t
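The question above is cut off; for context, this snippet only runs inside a function body, so a minimal self-contained version (the function name, source table, and loop body are illustrative, not from the thread) would be:

```sql
-- Minimal sketch: the ON COMMIT DROP temp table lives only for the
-- transaction around this call. Names other than v_temp/ctime/source
-- are illustrative.
CREATE OR REPLACE FUNCTION scan_source() RETURNS void AS $$
DECLARE
    v_ctime timestamp;
BEGIN
    CREATE TEMPORARY TABLE v_temp ON COMMIT DROP AS
        SELECT ctime FROM source ORDER BY ctime
        WITH DATA;

    FOR v_ctime IN SELECT ctime FROM v_temp LOOP
        -- process each ctime here
        RAISE NOTICE 'ctime = %', v_ctime;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```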
On Sun, Nov 13, 2011 at 3:07 PM, Clark C. Evans wrote:
> Could their be a way to put the database in "read only" mode,
> where it rejects all attempts to change database state with an
> appropriate application level error message? We could then
> update our application to behave appropriately wh
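A rough approximation of such a mode already exists as a per-database setting, with the caveat that it only changes the default (a session can still turn it back off), so it is advisory rather than enforced:

```sql
-- Make new sessions on this database reject writes by default;
-- 'mydb' is an illustrative database name.
ALTER DATABASE mydb SET default_transaction_read_only = on;

-- Revert when done:
ALTER DATABASE mydb SET default_transaction_read_only = off;
```

Writes attempted under this setting fail with "cannot execute ... in a read-only transaction", which is close to the application-level error message asked for above.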
On Sun, Nov 13, 2011 at 10:45 PM, Andy Colson wrote:
> On 11/13/2011 07:51 AM, Gregg Jaskiewicz wrote:
>>
>> pg_dump -Fc already compresses, no need to pipe through gzip
>>
>
> I don't think that'll use two cores if you have 'em. The pipe method will
> use two cores, so it should be faster. (ass
On Sunday, November 13, 2011 7:33 AM, "Simon Riggs"
wrote:
> On Sat, Nov 12, 2011 at 9:40 PM, Clark C. Evans
> > [We] should be using "CREATE DATABASE ... WITH TEMPLATE".
> > However, this has two big disadvantages. First, it only works
> > if you can kick the users off the clone. Secondly, i
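For reference, the cloning command under discussion in its minimal form (the database names are illustrative):

```sql
-- Clone a database from a template. This fails if anyone is still
-- connected to the template database, which is the first disadvantage
-- raised in the thread.
CREATE DATABASE staging_clone WITH TEMPLATE production_db;
```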
On 11/13/2011 07:51 AM, Gregg Jaskiewicz wrote:
pg_dump -Fc already compresses, no need to pipe through gzip
I don't think that'll use two cores if you have 'em. The pipe method will use
two cores, so it should be faster. (assuming you are not IO bound).
-Andy
On Nov 13, 2011 7:39 PM, "Phoenix Kiula"
>
> Question: what can I do to rsync only the new additions in every table
> starting 00:00:01 until 23:59:59 for each day?
You can't really. You can rsync the whole thing and it can be faster, but
you can't really just copy the last changes as a diff.
Tha
pg_dump -Fc already compresses, no need to pipe through gzip
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Thanks for your help, John.
I am working with Windows OS but is there any official (not 3rd party like
the EnterpriseDB one) PostgreSQL installer for Windows?
Thanks.
You could also do a
pg_dump -Fc | gzip -1 -c > dumpfile.gz
at the cost of a slightly larger (but faster) backup.
Actually if you're going this route, you could skip even the pg_dump
compression as well...
pg_dump db | gzip -1 -c > dumpfile.gz
--
Robins Tharakan
What "other methods" do you recommend? That was in fact my question.
Do I need to install some modules?
Well, depending on your PG version, you could read up on the various
backup methods. I believe you'll be interested in section 24.3 there when
you ask about WAL archiving. The good thing is, it's usef
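As a hedged sketch of what enabling continuous (WAL) archiving looks like in postgresql.conf for the 9.x era discussed here (the archive path is an illustrative assumption):

```
# postgresql.conf: continuous archiving (illustrative paths)
wal_level = archive
archive_mode = on
# Only archive each segment once; fail if the file already exists.
archive_command = 'test ! -f /mnt/archive/%f && cp %p /mnt/archive/%f'
```

With this in place, daily backups can be taken as a base backup plus archived WAL, instead of a full 60GB pg_dump every day.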
I know it's a no-no to respond to my own posts, but here's what I'm
going to do.
I'll test newer revisions of 8.3 and also 9.1 in the out-of-disk-space
scenario and report back :P
Never mind the implementation, but the ability to clone the database without
disconnects would be very good for backups and testing.
We also create loads of templates, so that would make it more practical.
On Sun, Nov 13, 2011 at 8:42 PM, Robins Tharakan
wrote:
> Hi,
>
> Well, the 'complex' stuff is only there for larger or high-traffic DBs.
> Besides, at 60GB that is a largish DB in itself, and you should begin to try
> out a few other backup methods nonetheless. That is more so if you are
> takin
Hi,
Well, the 'complex' stuff is only there for larger or high-traffic
DBs. Besides, at 60GB that is a largish DB in itself, and you should begin
to try out a few other backup methods nonetheless. That is more so: if
you are taking entire DB backups every day, you would save a considerable
lot
Hi.
I currently have a cronjob to do a full pgdump of the database every
day. And then gzip it for saving to my backup drive.
However, my db is now 60GB in size, so this daily operation is making
less and less sense. (Some of you may think this is foolish to begin
with).
Question: what can I do
On 11/12/11 5:00 AM, alextc wrote:
Hi all,
I am new to PostgreSQL. I have recently installed PostgreSQL 9.1 with
Application Stack Builder 3.0.0. However, I have never had the Stack Builder
work while trying to install new software.
The error message is as below.
http://postgresql.1045698.n5.nab
Hi Ray,
Have you had any luck getting around this issue?
I am having the same issue. I just installed PostgreSQL 9.1 with Stack
Builder 3.0.0.
Every time I try to install additional software, an error message pops up
saying ...http://www.postgresql.org/application-v2.xml
cann
Hi all,
I am new to PostgreSQL. I have recently installed PostgreSQL 9.1 with
Application Stack Builder 3.0.0. However, I have never had the Stack Builder
work while trying to install new software.
The error message is as below.
http://postgresql.1045698.n5.nabble.com/file/n4986863/postgreSQL.jpg