I also guessed the same at the initial stage of debugging, so I tried to export
the tbl_voucher data to a file, and that works fine. Then I googled and found a
link explaining that the reason is the large size of the database, but I didn't
find any proper solution on the internet.
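
For reference, a single-table export like the one described can be done with
pg_dump's -t option; the connection details below simply mirror the command
quoted elsewhere in the thread, and the output file name is only an example:

    pg_dump -U admin -h 192.168.2.5 -t tbl_voucher dbname > tbl_voucher.sql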
Hello
2011/8/26 Niyas :
> Hi All,
>
> I have another issue, related to taking a backup of a large database.
> I have been getting the following errors:
>
> anil@ubuntu107:~/Desktop$ pg_dump -Uadmin -h192.168.2.5 dbname >
> filename.sql
>
> pg_dump: Dumping the contents of table "tbl_voucher" failed:
On 26 August 2011, 12:46, Niyas wrote:
> Actually, the database has not crashed. I can run my application perfectly.
That does not mean one of the backends did not crash. Check the log.
Tomas
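
For example, a crashed backend normally leaves an obvious trace in the server
log; assuming a stock Ubuntu layout (the log path below is only a guess and
should be adjusted to your installation), something like this would surface it:

    grep -i "terminated by signal" /var/log/postgresql/postgresql-9.0-main.log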
Actually, the database has not crashed. I can run my application perfectly.
On 26 August 2011, 11:48, Niyas wrote:
> Hi All,
>
> I have another issue, related to taking a backup of a large database.
> I have been getting the following errors:
>
> anil@ubuntu107:~/Desktop$ pg_dump -Uadmin -h192.168.2.5 dbname >
> filename.sql
>
> pg_dump: Dumping the contents of table "tbl_voucher" failed:
Hi All,

I have another issue, related to taking a backup of a large database.
I have been getting the following errors:
anil@ubuntu107:~/Desktop$ pg_dump -Uadmin -h192.168.2.5 dbname >
filename.sql
pg_dump: Dumping the contents of table "tbl_voucher" failed:
PQgetCopyData() failed.
On 08/13/2011 05:44 PM, MirrorX wrote:
> At the moment, the copy of the PGDATA folder (excluding the pg_xlog folder),
> its compression and storing it on a local storage disk take about
> 60 hours, while the file size is about 550 GB. The archives are kept in a
> different location so that not a
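
For context, a filesystem-level copy of that kind is normally wrapped in
pg_start_backup()/pg_stop_backup(), so the fuzzy file copy can be made
consistent on restore by replaying the archived WAL. A minimal sketch (the
backup label, target file and data directory path are all assumptions) looks
roughly like:

    psql -c "SELECT pg_start_backup('nightly_base');"
    tar -czf /backup/base_$(date +%Y%m%d).tar.gz --exclude=pg_xlog /var/lib/postgresql/9.0/main
    psql -c "SELECT pg_stop_backup();"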
On 08/15/11 4:12 PM, Scott Marlowe wrote:
Exactly. Sometimes PITR is the right answer, sometimes partitioning is.
Those answer two completely different questions.
--
john r pierce                            N 37, W 122
santa cruz ca mid-left coast
On Mon, Aug 15, 2011 at 5:06 PM, MirrorX wrote:
> I looked into data partitioning and it is definitely something we will use
> soon. But as far as the backups are concerned, how can I take a backup
> incrementally? If I get it correctly, the idea is to partition a big table
> (using a date field
I looked into data partitioning and it is definitely something we will use
soon. But as far as the backups are concerned, how can I take a backup
incrementally? If I get it correctly, the idea is to partition a big table
(using a date field, for example) and then each night take a dump of just the
latest partition?
On Sun, Aug 14, 2011 at 12:44 AM, MirrorX wrote:
> The issue here is that the server is heavily loaded. The daily traffic is
> heavy, which means the db size is increasing every day (by 30 GB on
> average) and the size is already pretty large (~2TB).
>
> At the moment, the copy of the PGDATA folder
Thanks a lot, I will definitely look into that option.
In the meantime, if there are any other suggestions I'd love to hear them.
One possible answer to your issues is data partitioning. By
partitioning your data by date, primary key or some other field, you
can back up individual partitions for incremental backups. I run a
stats database that is partitioned by day, and we can just back up
yesterday's partition each night.
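
A minimal sketch of that routine, assuming inheritance-based partitioning (the
usual approach on the PostgreSQL releases of that time) and purely hypothetical
names (a parent table "stats" with a "created_at" date column), might be:

    # create the next day's partition ahead of time (names and dates are only examples)
    psql dbname -c "CREATE TABLE stats_2011_08_27 (
        CHECK (created_at >= DATE '2011-08-27' AND created_at < DATE '2011-08-28')
    ) INHERITS (stats);"

    # each night, dump only yesterday's partition
    pg_dump -t stats_2011_08_26 dbname | gzip > /backup/stats_2011_08_26.sql.gz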
Ivan Voras wrote:
> Leigh Dyer wrote:
>> Hi,
>>
>> For years now I've simply backed up my databases by doing a nightly
>> pg_dump, but since we added the ability for users to import binary files
>> into our application, which are stored in bytea fields, the dump
>> sizes have gone through the roof —
Leigh Dyer wrote:
> Hi,
>
> For years now I've simply backed up my databases by doing a nightly
> pg_dump, but since we added the ability for users to import binary files
> into our application, which are stored in bytea fields, the dump
> sizes have gone through the roof — even with gzip compression, th
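
The nightly routine being described amounts to something like the following
(the database name and output path are hypothetical):

    pg_dump mydb | gzip > /backup/mydb_$(date +%Y%m%d).sql.gz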
Richard P. Welty wrote:
> But what are the consequences of backing up a WAL file
> if the archive process (probably scp in this case) is running
> when the backup copy is made? The whole file won't make it onto
> tape; are there any downsides to running a recovery with
> an incomplete WAL file?
The WAL f
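
One common way to keep a half-copied segment from ever being seen by the backup
job is to have archive_command copy to a temporary name and only rename it once
the transfer has completed; a rough sketch (script path, host and directories
are all hypothetical) might be:

    #!/bin/sh
    # Hypothetical WAL archive script, referenced from postgresql.conf as:
    #   archive_command = '/usr/local/bin/archive_wal.sh %p %f'
    WAL_PATH="$1"   # full path of the WAL segment (%p)
    WAL_NAME="$2"   # file name of the segment (%f)
    scp "$WAL_PATH" backup.example.com:/wal_archive/"$WAL_NAME.tmp" &&
      ssh backup.example.com mv "/wal_archive/$WAL_NAME.tmp" "/wal_archive/$WAL_NAME"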
Richard P. Welty writes:
> A couple of gig, not really all that much. The problem is that there is
> an expectation of one or more persons/organizations going through
> due diligence on the operation, and I'm not sure that a fuzzy
> "somewhere online" file storage service will pass the smell test for
> ma
Francisco Reyes wrote:
> Richard P. Welty writes:
>> Actually, what it will come down to is the cost of an upgraded
>> connection vs. $60/month rent for 3Us of rack space to place a DLT
>> autoloader in the colocation facility.
> How much data are you looking to back up?
> There are companies that do rsync services.
Richard P. Welty writes:
> Actually, what it will come down to is the cost of an upgraded
> connection vs. $60/month rent for 3Us of rack space to place a DLT
> autoloader in the colocation facility.
How much data are you looking to back up?
There are companies that do rsync services.
Just saw one la
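
For what it's worth, pushing a local backup directory to such a service (or to
any remote machine you control) is usually a one-liner; the host and paths
below are purely illustrative:

    rsync -az --partial /backup/pgsql/ backupuser@offsite.example.com:/backups/pgsql/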
Bill Moran wrote:
> As an aside, you can only fit so many gallons into a 10 gallon
> container. You might simply have to accept that your requirements
> now exceed the capacity of the RR connection and upgrade.
Actually, what it will come down to is the cost of an upgraded
connection vs. $60/month rent for 3Us of rack space to place a DLT
autoloader in the colocation facility.
"Richard P. Welty" <[EMAIL PROTECTED]> wrote:
>
> So the outfit I'm currently working for on a quasi-full-time
> basis has what amounts to an OLTP database server in colocation.
> The footprint in the rack is very small, that is, there's no
> DLT autoloader or anything of that sort in the rack.
>