hello,
not possible in your case.
To my understanding, no tool can directly transform a binary dump (Oracle &
MSSQL) into CSV file format.
Steven
2017-06-02 5:37 GMT+08:00 Nicolas Paris :
> > If they aren't too big, you might get away by installing the express
> edition of the respective DBMS
>
> > Thanks for the suggestion. Problem is the data is highly sensitive and
> > cannot go to the cloud or to an untrusted place
Sounds like the real question now is not how to import the data, but
how to convert the backups you have to CSV or similar?
Another idea for SQL Server is to use the bcp utility.
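A minimal sketch of the bcp route, assuming a reachable SQL Server instance and hypothetical database/table/server names (-c requests character mode, -t, a comma field terminator):

  bcp MyDatabase.dbo.my_table out my_table.csv -S sqlserver_host -U sa -c -t,

The resulting file can then be loaded into Postgres with COPY ... FROM or psql's \copy.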
> Or spin up an AWS SQL Server instance:
>
> https://aws.amazon.com/windows/resources/amis/
>
Thanks for the suggestion. Problem is the data is highly sensitive and
cannot go to the cloud or to an untrusted place
On 06/02/2017 09:31 AM, Neil Anderson wrote:
On 1 June 2017 at 17:37, Nicolas Paris wrote:
If they aren't too big, you might get away by installing the express edition of
the respective DBMS, then import them using the native tools, then export the
data as CSV files.
Good idea. I think SQL
On 1 June 2017 at 17:37, Nicolas Paris wrote:
>> If they aren't too big, you might get away by installing the express edition
>> of the respective DBMS, then import them using the native tools, then export
>> the data as CSV files.
Good idea. I think SQL Server Express is limited to 10GB on the
> If they aren't too big, you might get away by installing the express edition
> of the respective DBMS, then import them using the native tools, then export
> the data as CSV files.
Thanks Thomas. Both are binaries. The Oracle one is a 30TB database...
Nicolas Paris wrote on 31.05.2017 at 16:43:
Hi,
I have dumps from Oracle and Microsoft SQL Server (no more details). Is it possible to
load them "directly" into postgres (without an Oracle/MSSQL license)?
dump -> csv -> postgres
or something ?
If those are binary dumps (e.g. a DataPump dump
Thanks all,
The point is I only have access to the dump files, not to an Oracle/MSSQL server
instance. I have noticed the warning around legality on that question. The
best solution for me is to ask each vendor. Once I get an answer, I will come back
here to share it.
2017-06-01 4:14 GMT-04:00 vinny :
On 2017-05-31 16:43, Nicolas Paris wrote:
Hi,
I have dumps from Oracle and Microsoft SQL Server (no more details).
Is it possible to load them "directly" into postgres (without an
Oracle/MSSQL license)?
dump -> csv -> postgres
or something ?
Thanks a lot
A very, *very* short trip to google s
> On May 31, 2017, at 9:27 AM, David G. Johnston
> wrote:
>
> On Wed, May 31, 2017 at 7:43 AM, Nicolas Paris wrote:
> Hi,
>
> I have dumps from Oracle and Microsoft SQL Server (no more details). Is it
> possible to load them "directly" into postgres (without an Oracle/MSSQL
> license)?
> dump
On Wed, May 31, 2017 at 7:43 AM, Nicolas Paris wrote:
> Hi,
>
> I have dumps from Oracle and Microsoft SQL Server (no more details). Is it
> possible to load them "directly" into postgres (without an Oracle/MSSQL
> license)?
> dump -> csv -> postgres
> or something ?
>
>
Asking the public mailing
Hi,
I have dumps from Oracle and Microsoft SQL Server (no more details). Is it
possible to load them "directly" into postgres (without an Oracle/MSSQL
license)?
dump -> csv -> postgres
or something ?
Thanks a lot
On Tue, Oct 18, 2016 at 3:11 PM, Patrick B wrote:
> Thank you guys... good to know that pg_dump does all the job for me :)
>
> So.. If I only dump using the --schema-only option, it will dump all the
> schemas, constraints, indexes and tables?
>
> Because probably, I'll have to import the data ma
Thank you guys... good to know that pg_dump does all the job for me :)
So.. If I only dump using the --schema-only option, it will dump all the
schemas, constraints, indexes and tables?
Because probably, I'll have to import the data manually. Not in a single
pg_restore I mean. (AWS issue)
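A minimal sketch of that split, with placeholder database names; --schema-only carries all the DDL (schemas, tables, constraints, indexes), and the data can be restored separately afterwards:

  pg_dump --schema-only mydb > schema.sql      # DDL only: tables, constraints, indexes
  pg_dump --data-only -Fc mydb -f data.dump    # data only, custom format
  psql -d newdb -f schema.sql                  # recreate the structure
  pg_restore -d newdb data.dump                # then load the data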
On Mon, Oct 17, 2016 at 8:32 PM, Patrick B wrote:
> Hi guys,
>
> I need to export an entire database to another server, for testing purpose.
>
> Is there any way to export all indexes and constraints ?
> Postgres 9.2
> Patrick
>
By default pg_dump will export the constraints. It will export the
On 18-10-2016 06:32, Patrick B wrote:
Hi guys,
I need to export an entire database to another server, for testing
purpose.
Is there any way to export all indexes and constraints ?
Postgres 9.2
Patrick
Hello,
pg_dump of the database is what you need, but if you are asking whether you can
export data + inde
On 10/17/2016 08:32 PM, Patrick B wrote:
Hi guys,
I need to export an entire database to another server, for testing purpose.
Is there any way to export all indexes and constraints ?
pg_dump the_database
Postgres 9.2
Patrick
--
Adrian Klaver
adrian.kla...@aklaver.com
On 10/17/2016 8:32 PM, Patrick B wrote:
I need to export an entire database to another server, for testing
purpose.
Is there any way to export all indexes and constraints ?
Postgres 9.2
on the existing machine, as the postgres user
pg_dump -Fc -d databasename -f filename.pgdump
then, on
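A minimal sketch of the restore side on the target machine, assuming the custom-format dump above (names are placeholders):

  createdb databasename
  pg_restore -d databasename filename.pgdump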
Hi guys,
I need to export an entire database to another server, for testing purpose.
Is there any way to export all indexes and constraints ?
Postgres 9.2
Patrick
Without -E there was the same error.
The DB was created with:
create database "davical" WITH ENCODING = 'UTF8' LC_CTYPE='en_US.utf8'
LC_COLLATE='en_US.utf8' TEMPLATE template0;
I have fixed the error with
localedef -f UTF-8 -i en_US en_US.UTF-8
After a restart of postgres all is fine.
Thanks a
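A quick way to confirm that the locale generated by localedef is now visible (a sketch, assuming a glibc-based system):

  locale -a | grep -i en_US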
On 03/05/2015 02:48 AM, basti wrote:
Hello I have a postgres cluster with the following databases
davical | pgadmin | UTF8   | en_US.utf8      | en_US.utf8      |
test    | test    | LATIN9 | en_US.iso885915 | en_US.iso885915 |
foo
Hello I have a postgres cluster with the following databases
davical | pgadmin  | UTF8      | en_US.utf8      | en_US.utf8      |
test    | test     | LATIN9    | en_US.iso885915 | en_US.iso885915 |
foo     | postgres | SQL_ASCII | en_US.
Chris Curvey wrote:
> Perhaps a pair of triggers? An insert-or-update trigger on foo, and a
> delete-or-update trigger on bar?
Using a foreign key constraint is best if that can do the right
thing. If that doesn't work, triggers like you describe are
probably the best option, but you need to c
Chris Curvey-3 wrote
>> -----Original Message-----
>> From: Tom Lane [mailto:tgl@.pa]
>> Sent: Thursday, August 07, 2014 2:50 PM
>> To: Chris Curvey
>> Cc: pgsql-general@
>> Subject: Re: [GENERAL] dump/restore with a hidden dependency?
> -----Original Message-----
> From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> Sent: Thursday, August 07, 2014 2:50 PM
> To: Chris Curvey
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] dump/restore with a hidden dependency?
>
> Chris Curvey writes:
>
Chris Curvey writes:
> I have a database with the following structure:
> Create table bar...
> Create function subset_of_bar ... (which does a select on a subset of "bar")
> Create table foo...
> Alter table foo add constraint mycheck check subset_of_bar(id);
Basically, that's broken in any numbe
> -----Original Message-----
> From: Shaun Thomas [mailto:stho...@optionshouse.com]
> Sent: Thursday, August 07, 2014 12:43 PM
> To: Chris Curvey; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] dump/restore with a hidden dependency?
>
> On 08/07/2014 10:00 A
On 08/07/2014 01:09 PM, Chris Curvey wrote:
The "disable trigger" statement runs without error, but does not seem
to have any effect.
:(
Apparently this trick only works for disabling foreign keys. I'm not
sure how to temporarily disable check constraints. You might have to
drop the constra
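A minimal sketch of that drop/re-add approach, using the names from this thread (the step in between is whatever loads the data):

  ALTER TABLE foo DROP CONSTRAINT mycheck;
  -- ... restore / load the data for foo here ...
  ALTER TABLE foo ADD CONSTRAINT mycheck CHECK (subset_of_bar(id));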
On 08/07/2014 10:00 AM, Chris Curvey wrote:
I’ve done some searching and am coming up empty. Is there a way to get
pg_restore to apply constraints AFTER loading all the tables
Kinda. PostgreSQL applies constraints with hidden system-level triggers.
An easy way to turn them off is to use this
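The message is cut off here; one common form of that trick (a sketch, not necessarily the exact statement that followed, and it requires superuser rights) is to disable a table's triggers, including the internal constraint triggers, around the load:

  ALTER TABLE foo DISABLE TRIGGER ALL;
  -- ... load the data ...
  ALTER TABLE foo ENABLE TRIGGER ALL;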
I have a database with the following structure:
Create table bar...
Create function subset_of_bar ... (which does a select on a subset of "bar")
Create table foo...
Alter table foo add constraint mycheck check subset_of_bar(id);
I pg_dumped my database, and tried to pg_restore it on another serve
Hi Dennis,
I already managed it. The problem was that I wasn't in the right folder... rookie
mistake.
Thank you for your email.
Regards,
José Santos
From: den...@kabonkulator.com
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Dump Database
Date: Mon, 17 Mar 2014 08:14:03
José Pedro Santos
Sent: Monday, March 17, 2014 7:50 AM
To: Postgres Ajuda
Subject: [GENERAL] Dump Database
Dear all,
I'm trying to dump my database using the following command after I enter as
su postgres:
pg_dump -U postgres mydb -f mydb.sql
... but I'm always having the followi
pgsql-general-ow...@postgresql.org wrote on 17/03/2014 12:50:20:
> From: José Pedro Santos
> To: Postgres Ajuda ,
> Date: 17/03/2014 12:56
> Subject: [GENERAL] Dump Database
> Sent by: pgsql-general-ow...@postgresql.org
>
> Dear all,
>
> I'm trying to dump
Dear all,
I'm trying to dump my database using the following command after I enter as su
postgres:
pg_dump -U postgres mydb -f mydb.sql
... but I always get the following message:
pg_dump [archiver] Could not open output file "mydb.sql" : Permission denied
I try to use the same state
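The "Permission denied" refers to the output file, not the database: the postgres user cannot write into the current directory. A minimal sketch of a workaround (paths are placeholders) is to point -f at a directory that user can write to:

  su - postgres
  cd /tmp                                   # or any directory writable by postgres
  pg_dump -U postgres -f /tmp/mydb.sql mydb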
On Sun, Sep 1, 2013 at 10:35:57AM -0700, Jeff Davis wrote:
> [ late response, but might still be useful to someone ]
>
> You can work around the problem with a little effort if you call
> array_in directly. It takes the type output (cstring), element type
> (oid), and element typmod (integer).
>
On Mon, 2013-07-08 at 14:20 -0500, Jerry Sievers wrote:
> Planning to pg_upgrade some large (3TB) clusters using hard link
> method. Run time for the upgrade itself takes around 5 minutes.
> Nice!! Origin version 8.4 and destination version 9.1.
>
> Unfortunately the post-upgrade analyze of the
For that matter, for the first time we tried enforcing some of the "rules"
of CFs this time, and I'd like to hear if
people think that helped. I think the merit of "fast promote" is allowing
quick connections by skipping the checkpoint, and its demerit is taking a little
bit longer when crash-recove
Oh, sorry. I mixed up dumps...
I am migrating from mysql and by mistake I tried to apply a dump from "mysqldump
--compat=postgresql" that was named very similarly to the dump of the finally converted
database produced by pg_dump (for the purpose of copying from the test to the main
server). Bash completions and then
On 08/02/2013 05:03 PM, tot-to wrote:
I have two installations of postgresql-server-9.2.4 on Gentoo.
I am trying to just copy a database from one to the other.
According to the documentation
http://www.postgresql.org/docs/9.2/static/backup-dump.html I created dump file:
psql -U > dumpfile.sql
I would
I have two installations of postgresql-server-9.2.4 on Gentoo.
I am trying to just copy a database from one to the other.
According to the documentation
http://www.postgresql.org/docs/9.2/static/backup-dump.html I created dump file:
psql -U > dumpfile.sql
copied it to another machine, manually created t
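For reference, the basic recipe on that documentation page uses pg_dump on the source and psql on the target; a minimal sketch with placeholder names:

  pg_dump -U postgres dbname > dumpfile.sql     # on the source machine
  createdb -U postgres dbname                   # on the target machine
  psql -U postgres -d dbname -f dumpfile.sql    # restore the plain SQL dump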
Originally posted on pg_admin but got no bites there...
Admins;
Planning to pg_upgrade some large (3TB) clusters using hard link
method. Run time for the upgrade itself takes around 5 minutes.
Nice!! Origin version 8.4 and destination version 9.1.
Unfortunately the post-upgrade analyze of the
On 03/19/2013 01:02 PM, Luke Luke wrote:
Hi guys,
Is it possible to do a Postgres dump where I dump the complete schema [A]
(tables+data, procedures) from a database and, from another schema [B],
only the stored procedures, with one dump command?
AFAIK there is no way to dump just stored procedures(functions)
Hi guys,
Is it possible to do a Postgres dump where I dump the complete schema [A]
(tables+data, procedures) from a database and, from another schema [B], only the
stored procedures, with one dump command?
Thanks
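A minimal sketch of approximating this in two steps (schema and database names are placeholders): dump schema A in full, then extract only the function definitions of schema B via pg_get_functiondef:

  pg_dump -n schema_a dbname > schema_a.sql
  psql -At -d dbname -c "SELECT pg_get_functiondef(p.oid) || ';'
    FROM pg_proc p JOIN pg_namespace n ON n.oid = p.pronamespace
    WHERE n.nspname = 'schema_b'" > schema_b_functions.sql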
I'm sorry, my first example was incomplete.
I need to migrate data from PostgreSQL to Oracle,
thus I have to use
pg_dump --column-inserts instead of COPY
to have an output like this, but ordered by pk:
INSERT INTO test (id, note, id_father) VALUES (6, 'Homer Simpson ', 5);
INSERT INTO test (id, note, id_f
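pg_dump itself has no ORDER BY option; one workaround (a sketch using the table from this thread, though it produces CSV rather than INSERT statements) is to export the rows from an ordered query:

  psql -d dbname -c "\copy (SELECT * FROM test ORDER BY id) TO 'test_ordered.csv' CSV HEADER"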
On 12/22/2012 09:29 AM, jo wrote:
Hi all,
I would like to know if it is possible to dump a table ordered by its
primary key.
Take a look at this test table...
\d test
  Table "public.test"
 Column | Type | Modifiers
--------+------+-----------
: pgsql-general@postgresql.org
Sent: Saturday, December 22, 2012 6:29 PM
Subject: [GENERAL] dump order by
Hi all,
I would like to know if it is possible to dump a table ordered by its primary
key.
Take a look at this test table...
\d test
  Table "public.test"
Hi all,
I would like to know if it is possible to dump a table ordered by its
primary key.
Take a look at this test table...
\d test
  Table "public.test"
 Column | Type | Modifiers
--------+------+-----------
 id
marian krucina writes:
> Example:
> in PG91:
> CREATE FUNCTION function_y(x INT) RETURNS INT AS $$ SELECT $1*$1 $$
> LANGUAGE SQL;
> CREATE FUNCTION function_x(x INT) RETURNS INT AS $$ SELECT
> function_y($1) $$ LANGUAGE SQL;
> CREATE SCHEMA schema_a;
> CREATE TABLE schema_a.table_a(i INT);
> CREA
Example:
in PG91:
CREATE FUNCTION function_y(x INT) RETURNS INT AS $$ SELECT $1*$1 $$
LANGUAGE SQL;
CREATE FUNCTION function_x(x INT) RETURNS INT AS $$ SELECT
function_y($1) $$ LANGUAGE SQL;
CREATE SCHEMA schema_a;
CREATE TABLE schema_a.table_a(i INT);
CREATE INDEX ON schema_a.table_a(function_x(i
marian krucina writes:
> pg_upgrade failed on our server, because we used functions from the public
> schema in an index. We install common functions (e.g. postgresql contrib)
> into the public schema. Tables and indexes are in another schema, and the
> functions are referenced without a schema name.
Are you sure that
> Any help in getting function argument names is appreciated. Thank you
take a look at pg_catalog.pg_get_function_arguments(oid)
regards, jan
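A minimal sketch of using it to list argument names for all functions in a given schema (the schema name is only an example):

  SELECT p.proname, pg_get_function_arguments(p.oid) AS arguments
  FROM pg_proc p
  JOIN pg_namespace n ON n.oid = p.pronamespace
  WHERE n.nspname = 'oms';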
audit(v_IncidentId,'Modified',v_Message);
RETURN;
END; $BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION oms.om_change_i_division(integer, text, text) OWNER TO
oru;
From: Raghavendra [mailto:raghavendra@enterprisedb.com]
Sent: Wednesday, February 15, 2012 1
sql-general@postgresql.org
Subject: Re: [GENERAL] Dump functions alone
One more thing: you can also get it from the pg_get_functiondef() system
function.
---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/
On Wed, Feb 15, 2012 at 9:32 PM, Raghavendra
wrote
One more thing: you can also get it from the pg_get_functiondef() system
function.
---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/
On Wed, Feb 15, 2012 at 9:32 PM, Raghavendra <
raghavendra@enterprisedb.com> wrote:
> You have two options.
>
>- Use contri
You have two options.
- Use the contrib module pg_extractor: https://github.com/omniti-labs/pg_extractor
- Use the pg_proc catalog to get the function definitions
---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/
On Wed, Feb 15, 2012 at 6:59 PM, Rajan, Pavithra
Hello all - Is there a way to just dump the functions in a schema into a
txt file / sql file? Thank you.
"Bob Pawley" writes:
> I am attempting to dump a database using PostgreDAC.
> I am getting the following error message which I don't understand.
> Can someone shed some light on this?
> Error message from server: ERROR: column "tgisconstraint" does not exist
The pg_trigger.tgisconstraint co
On 11/04/11 10:22 AM, Bob Pawley wrote:
I am attempting to dump a database using PostgreDAC.
this postgresDAC?
http://www.microolap.com/products/connectivity/postgresdac/
thats a commercial product, you probably should contact them for support.
--
john r pierceN 3
Hi
I am attempting to dump a database using PostgreDAC.
I am getting the following error message which I don’t understand.
Can someone shed some light on this?
“Error message from server: ERROR: column "tgisconstraint" does not exist
LINE 1: ...c AS tgfname, tgtype, tgnargs, tgargs, tgenabled
Dmitry Koterov writes:
> Is there any way (or hack) to dump the whole database, but to exclude the
> DATA from a table within this dump? (DDL of the table should not be
> excluded: after restoring the data the excluded table should look "empty".)
The pg_staging tool allows you to do that quite ea
Short description:
* Need: migration of database, with roles relevant to it.
* "From" server: pg 8.1; no superuser access.
* "To" server: pg 8.4; full access.
* Database in question: depends on few roles, mainly group roles used
to set permission; there are other roles (login roles) which are
membe
On Thursday, August 18, 2011 3:25:59 pm Dmitry Koterov wrote:
> Mmm, --disable-triggers is surely not enough - we also have RULEs and (much
> worse) INDEXes.
>
> If we create all indices and then restore all data, it is MUCH SLOWER than
> restore the data first and then - create all indices.
> So
Mmm, --disable-triggers is surely not enough - we also have RULEs and (much
worse) INDEXes.
If we create all indices and then restore all data, it is MUCH SLOWER than
restore the data first and then - create all indices.
So I think that there is no work-around really...
I propose to include an op
Thanks, "pg_dump --data-only --disable-triggers" is the king.
(Unfortunately it is not supported by pg_dumpall, but it is entirely another
story. :-)
On Fri, Aug 19, 2011 at 12:36 AM, Adrian Klaver wrote:
> On Thursday, August 18, 2011 1:23:25 pm Dmitry Koterov wrote:
> > 1. I need to shorten p
On Thursday, August 18, 2011 1:23:25 pm Dmitry Koterov wrote:
> 1. I need to shorten pg_dump results (for backup purposes), so pg_restore
> is too late for that..
>
> 2. If I use "pg_dump -s" separately, the data may not load (or load too
> slowly) after that, because all indices/foreign keys are al
1. I need to shorten pg_dump results (for backup purposes), so pg_restore is
too late for that...
2. If I use "pg_dump -s" separately, the data may not load (or load too slowly)
after that, because all indices/foreign keys are already there. Is there a
way to split "pg_dump -s" into 2 parts: the firs
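In later PostgreSQL releases (9.2 and onward) the dump can be split exactly that way with --section; a minimal sketch, with a placeholder database name:

  pg_dump --section=pre-data dbname > pre.sql     # table definitions, no indexes/constraints
  pg_dump --section=post-data dbname > post.sql   # indexes, constraints, triggers
  pg_dump --section=data -Fc dbname -f data.dump  # the data itself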
On Thursday, August 18, 2011 12:49:45 pm Dmitry Koterov wrote:
> Hello.
>
> Is there any way (or hack) to dump the whole database, but to exclude the
> DATA from a table within this dump? (DDL of the table should not be
> excluded: after restoring the data the excluded table should look "empty".)
Hello.
Is there any way (or hack) to dump the whole database, but to exclude the
DATA from a table within this dump? (DDL of the table should not be
excluded: after restoring the data the excluded table should look "empty".)
I see -T switch of pg_dump, but seems -T excludes the data AND the DDL o
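In later PostgreSQL releases (9.2 and onward) there is a dedicated switch for exactly this; a minimal sketch with placeholder names:

  pg_dump --exclude-table-data=big_log_table dbname > dump.sql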
On 07/05/11 4:31 AM, Condor wrote:
Are you using some kind of old file system and operating system that
cannot handle files bigger than 2GB? If so, I'd be pretty worried
about running a database server on it.
Well, I ran pg_dump on an ext3 fs with postgres 8.x and 9 and the sql file was
truncated.
On 5.7.2011 13:31, Condor wrote:
> On Tue, 05 Jul 2011 18:08:21 +0800, Craig Ringer wrote:
>> On 5/07/2011 5:00 PM, Condor wrote:
>>> Hello ppl,
>>> can I ask how to dump large DB ?
>>
>> Same as a smaller database: using pg_dump . Why are you trying to
>> split your dumps into 1GB files? What
On Tue, 05 Jul 2011 18:08:21 +0800, Craig Ringer wrote:
On 5/07/2011 5:00 PM, Condor wrote:
Hello ppl,
can I ask how to dump large DB ?
Same as a smaller database: using pg_dump . Why are you trying to
split your dumps into 1GB files? What does that gain you?
Are you using some kind of old fi
On 5/07/2011 5:00 PM, Condor wrote:
Hello ppl,
can I ask how to dump large DB ?
Same as a smaller database: using pg_dump . Why are you trying to split
your dumps into 1GB files? What does that gain you?
Are you using some kind of old file system and operating system that
cannot handle file
* Condor wrote:
Problem was when I started: pg_dump dbname | split -b 1G - filename. I was
unable to restore it correctly. When I started restoring the DB I got an error from
sql; it did not like one line. I investigated and the problem was
in the last line of the first file: the value field was something like '"This is a '
Hello ppl,
can I ask how to dump a large DB? I read the documentation, but I had a
problem with split a year ago and did not use it after that.
The problem was when I started: pg_dump dbname | split -b 1G - filename. I was
unable to restore it correctly. When I started restoring the DB I got an error from
sql; it did
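For reference, the documented recipe restores a split dump by concatenating the pieces back together rather than feeding them to psql one by one (a sketch, names are placeholders):

  pg_dump dbname | split -b 1G - filename    # dump into 1GB pieces
  cat filename* | psql newdb                 # restore by concatenating them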
On 05/19/2011 10:10 AM, Craig de Stigter wrote:
1. I'm assuming that the dump format can contain arbitrary sql commands,
so a pg_restore of this nature should be run under an untrusted account
in its own restricted schema. Can someone confirm that this is the case?
Correct. You very definitely
Hi list
We're writing a plugin for our website that loads single-table database
dumps created by untrusted users. My question is two-fold:
1. I'm assuming that the dump format can contain arbitrary sql commands, so
a pg_restore of this nature should be run under an untrusted account in its
own re
Doug Kyle writes:
> I have this error from pg_dumpall:
> pg_dump: failed sanity check, parent table OID 27974 of pg_rewrite entry
> OID 28689 not found
> ...
> So I'm thinking I'll delete from pg_rewrite where oid=28689, but I'm not
> sure if I should do anything with pg_class or pg_depends - or
I have this error from pg_dumpall:
pg_dump: failed sanity check, parent table OID 27974 of pg_rewrite entry
OID 28689 not found
I found a rule was dropped when this server was set up as a Slony replica
because it conflicted with the Slony deny access trigger
select oid,rulename from pg_rewrit
> Note that cluster on a randomly ordered large table can be
> prohibitively slow, and it might be better to schedule a
> short downtime to do the following (pseudo code)
> alter table tablename rename to old_tablename; create table
> tablename like old_tablename; insert into tablename select *
On Wed, Feb 17, 2010 at 3:44 PM, karsten vennemann wrote:
>
> >>> vacuum should clean out the dead tuples, then cluster on any large tables
> >>> that are bloated will sort them out without needing too much temporary
> >>> space.
>
> Yes, ok, I am running a vacuum full on a large table (150GB) and
February 09, 2010 23:30
To: karsten vennemann
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] dump of 700 GB database
Hello
2010/2/10 karsten vennemann
I have to write a 700 GB large database to a dump to clean out a lot of
dead records on an Ubuntu server with postgres 8.3.8. Wh
karsten vennemann wrote:
I have to write a 700 GB large database to a dump to clean out a lot
of dead records on an Ubuntu server with postgres 8.3.8. What is the
proper procedure to succeed with this - last time the dump stopped at
3.8 GB size I guess. Should I combine the -Fc option of pg_dum
Hello
2010/2/10 karsten vennemann
> I have to write a 700 GB large database to a dump to clean out a lot of
> dead records on an Ubuntu server with postgres 8.3.8. What is the proper
> procedure to succeed with this - last time the dump stopped at 3.8 GB size I
> guess. Should I combine the -Fc
I have to write a 700 GB large database to a dump to clean out a lot of
dead records on an Ubuntu server with postgres 8.3.8. What is the proper
procedure to succeed with this - last time the dump stopped at 3.8 GB size I
guess. Should I combine the -Fc option of pg_dump and the spli
I have to write a 700 GB large database to a dump to clean out a lot of dead
records on an Ubuntu server with postgres 8.3.8. What is the proper procedure
to succeed with this - last time the dump stopped at 3.8 GB size I guess.
Should I combine the -Fc option of pg_dump and the split comman
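A minimal sketch of combining the two, with placeholder names; custom-format (-Fc) output is already compressed, and pg_restore can read the reassembled stream from standard input:

  pg_dump -Fc dbname | split -b 10G - db.dump.   # compressed dump in 10GB pieces
  cat db.dump.* | pg_restore -d newdb            # reassemble and restore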
When we upgraded from linux-2.6.24 to ./linux-2.6.27, our pg_dump
duration increased by 20%. My first attempt at resolution was to boot
with elevator=deadline. However that's actually the default IO
scheduler in both kernels.
The two dmesg's are at:
https://www.norchemlab.com/tmp/linux-2.6.24-22
Sorry, I missed that. Thanks again.
Now to put this all into effect.
John
On Mon, 25 Aug 2008 14:25:12 -0700, Joshua Drake wrote:
>On Mon, 25 Aug 2008 17:05:53 -0400
>"John T. Dow" <[EMAIL PROTECTED]> wrote:
>
>> Joshua
>>
>> Thank you very much for answering these various questions.
>>
>> I
On Mon, 25 Aug 2008 17:05:53 -0400
"John T. Dow" <[EMAIL PROTECTED]> wrote:
> Joshua
>
> Thank you very much for answering these various questions.
>
> I guess the compressed format is the best overall solution, except
> for roles. I find myself having a table with other information about
> user
Joshua
Thank you very much for answering these various questions.
I guess the compressed format is the best overall solution, except for roles. I
find myself having a table with other information about users (application
specific user type, etc) so perhaps the thing to do is record enough
info
On Mon, 25 Aug 2008 13:37:13 -0400
"John T. Dow" <[EMAIL PROTECTED]> wrote:
> Joshua
>
> The TOC feature sounds good, as does converting a single table to
> plain text.
>
> But I can't find documentation for the TOC feature under pg_dump or
> pg_restore. I'm looking in postgresql-8.2.1-US.pdf.
Joshua
The TOC feature sounds good, as does converting a single table to plain text.
But I can't find documentation for the TOC feature under pg_dump or pg_restore.
I'm looking in postgresql-8.2.1-US.pdf.
Neither could I see anything about converting a single table to a plain text
dump.
Also,
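For reference, both live under pg_restore rather than pg_dump; a minimal sketch against a custom-format archive (file and table names are placeholders):

  pg_restore -l mydb.dump > toc.list              # write the archive's TOC to a text file
  pg_restore -L toc.list -d mydb mydb.dump        # restore only the entries kept in that list
  pg_restore -t mytable -f mytable.sql mydb.dump  # emit a single table as plain SQL text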
On Mon, 25 Aug 2008 10:21:54 -0400
"John T. Dow" <[EMAIL PROTECTED]> wrote:
> By "bad data", I mean a character that's not UTF8, such as hex 98.
>
> As far as I can tell, pg_dump is the tool to use. But it has
> serious drawbacks.
>
> If you dump in the custom format, the data is compressed (nic
Tom
My mistake in not realizing that 8.1 and later can dump large objects in the
plain text format. I guess when searching for answers to a problem, the posted
information doesn't always specify the version. So, sorry about that.
But the plain text format still has serious problems in that the
"John T. Dow" <[EMAIL PROTECTED]> writes:
> If you dump in plain text format, you can at least inspect the dumped
> data and fix it manually or with iconv. But the plain text
> format doesn't support large objects (again, not nice).
It does in 8.1 and later ...
> Also, neither of these methods ge
By "bad data", I mean a character that's not UTF8, such as hex 98.
As far as I can tell, pg_dump is the tool to use. But it has
serious drawbacks.
If you dump in the custom format, the data is compressed (nice) and
includes large objects (very nice). But, from my tests and the postings of
others,
On Sun, Jun 22, 2008 at 3:11 AM, Stuart Luppescu <[EMAIL PROTECTED]> wrote:
> On 木, 2008-06-19 at 11:57 +0200, David wrote:
>> > pg_restore: [tar archiver] could not open TOC file for input: No
>> such
>> > file or directory
>>
>> It sounds like the tar file is no longer being created.
>>
>> Try ma
: Saturday, June 21, 2008 9:11 PM
Subject: Re: [GENERAL] Dump and restore problem
On 木, 2008-06-19 at 11:57 +0200, David wrote:
> pg_restore: [tar archiver] could not open TOC file for input: No
such
> file or directory
It sounds like the tar file is no longer being created.
Try manuall
On 木, 2008-06-19 at 11:57 +0200, David wrote:
> > pg_restore: [tar archiver] could not open TOC file for input: No
> such
> > file or directory
>
> It sounds like the tar file is no longer being created.
>
> Try manually running the commands, and verify that the dump, restore,
> and rsync still w
On Wed, Jun 18, 2008 at 11:41 PM, Stuart Luppescu <[EMAIL PROTECTED]> wrote:
[...]
> pg_restore: [tar archiver] could not open TOC file for input: No such
> file or directory
It sounds like the tar file is no longer being created.
Try manually running the commands, and verify that the dump, res
Hello, I had a very nice system where I mirrored everything to another
machine each night, so in case of disaster, I could easily switch over
to the mirror.
The backup script uses a line like this:
pg_dump -b -F t -h $postgresql_hostname $i > "$location_backup_dir/`date
+%B-%Y`/$date_info/postgre