hello,
not possible in your case.
To my understanding, no tool can directly transform a binary dump (Oracle &
MSSQL) to CSV file format.
Steven
2017-06-02 5:37 GMT+08:00 Nicolas Paris :
> > If they aren't too big, you might get away by installing the express
> edition of the respective DBMS
>
> Thanks for the suggestion. Problem is the data is highly sensitive and
> cannot go to the cloud or a non-trusted place
Sounds like the real question now is not how to import the data, but
how to convert the backups you have to CSV or similar?
Another idea for SQL Server is to use the bcp utility.
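For reference, a minimal bcp sketch that exports a table to a CSV-ish file
(server, database, and table names are hypothetical; -c writes character data,
-t sets the field terminator):

  bcp mydb.dbo.mytable out mytable.csv -c -t"," -S myserver -U myuser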
> Or spin up an AWS SQL Server instance:
>
> https://aws.amazon.com/windows/resources/amis/
>
Thanks for the suggestion. Problem is the data is highly sensitive and
cannot go to the cloud or a non-trusted place
On 06/02/2017 09:31 AM, Neil Anderson wrote:
On 1 June 2017 at 17:37, Nicolas Paris wrote:
>> If they aren't too big, you might get away by installing the express edition
>> of the respective DBMS, then import them using the native tools, then export
>> the data as CSV files.
Good idea. I think SQL Server Express is limited to 10GB on the
> If they aren't too big, you might get away by installing the express edition
> of the respective DBMS, then import them using the native tools, then export
> the data as CSV files.
Thanks Thomas. Both are binaries. The Oracle one is a 30TB database...
Nicolas Paris wrote on 31.05.2017 at 16:43:
Hi,
I have dumps from oracle and microsoft sql server (no more details). Is it possible to
load them "directly" into postgres (without oracle/mssql license)?
dump -> csv -> postgres
or something?
If those are binary dumps (e.g. a DataPump dump
Thanks all,
The point is I only have access to dump files, no ora/mssql server instance
access. I have noticed the warning around legality on that question. The
best solution for me is to ask each vendor. Once I get an answer, I will come
back here to provide it.
2017-06-01 4:14 GMT-04:00 vinny :
On 2017-05-31 16:43, Nicolas Paris wrote:
Hi,
I have dumps from oracle and microsoft sql server (no more details).
Is it possible to load them "directly" into postgres (without
oracle/mssql license)?
dump -> csv -> postgres
or something?
Thanks a lot
A very, *very* short trip to google s
> On May 31, 2017, at 9:27 AM, David G. Johnston
> wrote:
>
> On Wed, May 31, 2017 at 7:43 AM, Nicolas Paris wrote:
> Hi,
>
> I have dumps from oracle and microsoft sql server (no more details). Is it
> possible to load them "directly" into postgres (without oracle/mssql
> license)?
> dump
On Wed, May 31, 2017 at 7:43 AM, Nicolas Paris wrote:
> Hi,
>
> I have dumps from oracle and microsoft sql server (no more details). Is it
> possible to load them "directly" into postgres (without oracle/mssql
> license)?
> dump -> csv -> postgres
> or something?
>
>
Asking the public mailing
On Tue, Oct 18, 2016 at 3:11 PM, Patrick B wrote:
> Thank you guys... good to know that pg_dump does all the job for me :)
>
> So.. If I only dump using the --schema-only option, it will dump all the
> schemas, constraints, indexes and tables?
>
> Because probably, I'll have to import the data ma
Thank you guys... good to know that pg_dump does all the job for me :)
So.. If I only dump using the --schema-only option, it will dump all the
schemas, constraints, indexes and tables?
Because probably, I'll have to import the data manually. Not in a single
pg_restore I mean. (AWS issue)
On Mon, Oct 17, 2016 at 8:32 PM, Patrick B wrote:
> Hi guys,
>
> I need to export an entire database to another server, for testing purpose.
>
> Is there any way to export all indexes and constraints ?
> Postgres 9.2
> Patrick
>
By default pg_dump will export the constraints. It will export the
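A minimal sketch of that split, assuming a hypothetical database name:

  pg_dump --schema-only -f schema.sql mydb   # tables, constraints, indexes
  pg_dump --data-only -f data.sql mydb       # just the rows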
On 18-10-2016 06:32, Patrick B wrote:
Hi guys,
I need to export an entire database to another server, for testing
purpose.
Is there any way to export all indexes and constraints ?
Postgres 9.2
Patrick
Hello,
pg_dump of the database is what you need, but if you are asking whether you can
export data + inde
On 10/17/2016 08:32 PM, Patrick B wrote:
Hi guys,
I need to export an entire database to another server, for testing purpose.
Is there any way to export all indexes and constraints ?
pg_dump the_database
Postgres 9.2
Patrick
--
Adrian Klaver
adrian.kla...@aklaver.com
On 10/17/2016 8:32 PM, Patrick B wrote:
I need to export an entire database to another server, for testing
purpose.
Is there any way to export all indexes and constraints ?
Postgres 9.2
on the existing machine, as the postgres user
pg_dump -Fc -d databasename -f filename.pgdump
then, on
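Presumably the restore side, on the new machine, would look something like
this (the database must exist first):

  createdb databasename
  pg_restore -d databasename filename.pgdump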
Without -E there was the same error.
The DB was created with:
create database "davical" WITH ENCODING = 'UTF8' LC_CTYPE='en_US.utf8'
LC_COLLATE='en_US.utf8' TEMPLATE template0;
I have fixed the error with
localedef -f UTF-8 -i en_US en_US.UTF-8
After a restart of postgres all is fine.
Thanks a
On 03/05/2015 02:48 AM, basti wrote:
Hello, I have a postgres cluster with the following databases:
davical | pgadmin | UTF8   | en_US.utf8      | en_US.utf8      |
test    | test    | LATIN9 | en_US.iso885915 | en_US.iso885915 |
foo
Chris Curvey wrote:
> Perhaps a pair of triggers? An insert-or-update trigger on foo, and a
> delete-or-update trigger on bar?
Using a foreign key constraint is best if that can do the right
thing. If that doesn't work, triggers like you describe are
probably the best option, but you need to c
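A minimal sketch of the preferred foreign-key option (column names
hypothetical):

  ALTER TABLE foo
    ADD CONSTRAINT foo_bar_fk FOREIGN KEY (bar_id) REFERENCES bar (id);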
Chris Curvey-3 wrote
>> -----Original Message-----
>> From: Tom Lane [mailto:tgl@.pa]
>> Sent: Thursday, August 07, 2014 2:50 PM
>> To: Chris Curvey
>> Cc: pgsql-general@
>> Subject: Re: [GENERAL] dump/restore with a hidden dependency?
> -----Original Message-----
> From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> Sent: Thursday, August 07, 2014 2:50 PM
> To: Chris Curvey
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] dump/restore with a hidden dependency?
>
> Chris Curvey writes:
Chris Curvey writes:
> I have a database with the following structure:
> Create table bar...
> Create function subset_of_bar ... (which does a select on a subset of "bar")
> Create table foo...
> Alter table foo add constraint mycheck check (subset_of_bar(id));
Basically, that's broken in any numbe
> -----Original Message-----
> From: Shaun Thomas [mailto:stho...@optionshouse.com]
> Sent: Thursday, August 07, 2014 12:43 PM
> To: Chris Curvey; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] dump/restore with a hidden dependency?
>
> On 08/07/2014 10:00 A
On 08/07/2014 01:09 PM, Chris Curvey wrote:
The "disable trigger" statement runs without error, but does not seem
to have any effect.
:(
Apparently this trick only works for disabling foreign keys. I'm not
sure how to temporarily disable check constraints. You might have to
drop the constra
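A sketch of the drop-and-re-add route for the check constraint, using the names
from this thread (ADD CONSTRAINT ... NOT VALID skips re-checking existing rows
and assumes a newer release):

  ALTER TABLE foo DROP CONSTRAINT mycheck;
  -- load the data, then:
  ALTER TABLE foo ADD CONSTRAINT mycheck CHECK (subset_of_bar(id)) NOT VALID;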
On 08/07/2014 10:00 AM, Chris Curvey wrote:
I’ve done some searching and am coming up empty. Is there a way to get
pg_restore to apply constraints AFTER loading all the tables
Kinda. PostgreSQL applies constraints with hidden system-level triggers.
An easy way to turn them off is to use this
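Presumably along the lines of the following (table name hypothetical;
disabling the hidden system triggers requires superuser):

  ALTER TABLE foo DISABLE TRIGGER ALL;
  -- ... load data ...
  ALTER TABLE foo ENABLE TRIGGER ALL;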
Hi Dennis,
I already managed it. The problem was because I wasn't in the right folder...
rookie mistake.
Thank you for your email.
Regards,
José Santos
From: den...@kabonkulator.com
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Dump Database
Date: Mon, 17 Mar 2014 08:14:03
What does the shell command: pwd show when run immediately after you issue
a su postgres command?
You may want to try su - postgres to ensure you are in the postgres user's
home directory.
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Jos
pgsql-general-ow...@postgresql.org wrote on 17/03/2014 12:50:20:
> From: José Pedro Santos
> To: Postgres Ajuda ,
> Date: 17/03/2014 12:56
> Subject: [GENERAL] Dump Database
> Sent by: pgsql-general-ow...@postgresql.org
>
> Dear all,
>
> I'm trying to dump my database using the following comm
On Sun, Sep 1, 2013 at 10:35:57AM -0700, Jeff Davis wrote:
> [ late response, but might still be useful to someone ]
>
> You can work around the problem with a little effort if you call
> array_in directly. It takes the type output (cstring), element type
> (oid), and element typmod (integer).
>
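A sketch of such a direct call, assuming an int4 element type (the -1 is the
typmod; regtype implicitly casts to oid):

  SELECT array_in('{1,2,3}'::cstring, 'int4'::regtype, -1);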
On Mon, 2013-07-08 at 14:20 -0500, Jerry Sievers wrote:
> Planning to pg_upgrade some large (3TB) clusters using hard link
> method. Run time for the upgrade itself takes around 5 minutes.
> Nice!! Origin version 8.4 and destination version 9.1.
>
> Unfortunately the post-upgrade analyze of the
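For reference, the post-upgrade analyze can also be run by hand; a minimal
sketch, assuming the new cluster's client tools:

  vacuumdb --all --analyze-only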
For that matter, for the first time we tried enforcing some of the "rules"
of CFs this time, and I'd like to hear if
people think that helped. I think the merit of "fast promote" is allowing
quick connection by skipping the checkpoint and its demerit is taking a little
bit longer when crash-recove
Oh, sorry. I mixed up dumps...
I am migrating from mysql and by mistake I tried to apply a dump from "mysqldump
--compat=postgresql" that was named very similarly to the dump of the finally
converted database produced by pg_dump (for the purpose of copying from test to
main server). Bash completions and then
On 08/02/2013 05:03 PM, tot-to wrote:
I have two installations of postgresql-server-9.2.4 on Gentoo.
I am trying to just copy a database from one to the other.
According to the documentation
http://www.postgresql.org/docs/9.2/static/backup-dump.html I created dump file:
psql -U > dumpfile.sql
I would
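The approach that documentation page actually describes, sketched with
hypothetical names:

  pg_dump -U postgres dbname > dumpfile.sql
  psql -U postgres -d dbname -f dumpfile.sql

or, piping directly between the two installations:

  pg_dump -h host1 dbname | psql -h host2 dbname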
On 03/19/2013 01:02 PM, Luke Luke wrote:
Hi guys,
Is it possible to do a Postgres dump where I dump from the database the
complete schema [A] (tables + data, procedures) and from another schema [B]
only the stored procedures, with one dump command?
AFAIK there is no way to dump just stored procedures (functions)
I'm sorry, my first example was incomplete.
I need to migrate data from postgresql to oracle,
thus I have to use
pg_dump --column-inserts instead of copy
to have an output like this, but ordered by pk:
INSERT INTO test (id, note, id_father) VALUES (6, 'Homer Simpson ', 5);
INSERT INTO test (id, note, id_f
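One way to get ordered INSERTs without pg_dump is to generate them with a
query; a sketch against the thread's test table (format() assumes 9.1+, and %L
quotes each value, rendering NULLs as NULL):

  SELECT format('INSERT INTO test (id, note, id_father) VALUES (%L, %L, %L);',
                id, note, id_father)
  FROM test ORDER BY id;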
On 12/22/2012 09:29 AM, jo wrote:
Hi all,
I would like to know if it is possible to dump a table ordered by its
primary key.
Take a look at the this test table...
\d test
Table "public.test"
Column| Type | Modifiers
---+-+---
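For a plain data dump in key order, COPY can take a query (8.2+); a minimal
sketch:

  COPY (SELECT * FROM test ORDER BY id) TO STDOUT;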
Hello,
Did you try to set the constraints as deferrable
i.e.
begin;
set constraints all deferred;
...
commit;
Also, you can set this per foreign key; have a look at
http://www.postgresql.org/docs/9.0/static/sql-createtable.html
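A sketch of declaring a foreign key deferrable at creation time (table and
column names hypothetical):

  CREATE TABLE child (
      id        int PRIMARY KEY,
      parent_id int REFERENCES parent (id) DEFERRABLE INITIALLY IMMEDIATE
  );

With that in place, SET CONSTRAINTS ALL DEFERRED postpones the check to commit
time.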
Regards
From: jo
To: p
marian krucina writes:
> Example:
> in PG91:
> CREATE FUNCTION function_y(x INT) RETURNS INT AS $$ SELECT $1*$1 $$
> LANGUAGE SQL;
> CREATE FUNCTION function_x(x INT) RETURNS INT AS $$ SELECT
> function_y($1) $$ LANGUAGE SQL;
> CREATE SCHEMA schema_a;
> CREATE TABLE schema_a.table_a(i INT);
> CREA
Example:
in PG91:
CREATE FUNCTION function_y(x INT) RETURNS INT AS $$ SELECT $1*$1 $$
LANGUAGE SQL;
CREATE FUNCTION function_x(x INT) RETURNS INT AS $$ SELECT
function_y($1) $$ LANGUAGE SQL;
CREATE SCHEMA schema_a;
CREATE TABLE schema_a.table_a(i INT);
CREATE INDEX ON schema_a.table_a(function_x(i
marian krucina writes:
> pg_upgrade failed on our server, because we used functions from the public
> schema in an index. We install common functions (e.g. postgresql contrib)
> in the public schema. Tables and indexes are in another schema, and names
> of functions are without a schema name.
Are you sure that
> Any help in getting function argument names is appreciated. Thank you
take a look at pg_catalog.pg_get_function_arguments(oid)
regards, jan
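A sketch of using it against the catalog (the function name is hypothetical):

  SELECT p.oid::regprocedure,
         pg_get_function_arguments(p.oid)
  FROM pg_proc p
  WHERE p.proname = 'my_function';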
audit(v_IncidentId,'Modified',v_Message);
RETURN;
END; $BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION oms.om_change_i_division(integer, text, text) OWNER TO
oru;
From: Raghavendra [mailto:raghavendra@enterprisedb.com]
Sent: Wednesday, February 15, 2012 1
pgsql-general@postgresql.org
Subject: Re: [GENERAL] Dump functions alone
One more thing: you can also get it from the pg_get_functiondef() system
function.
---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/
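A sketch of that call (the function signature here is hypothetical):

  SELECT pg_get_functiondef('my_function(integer)'::regprocedure);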
On Wed, Feb 15, 2012 at 9:32 PM, Raghavendra
wrote
One more thing: you can also get it from the pg_get_functiondef() system
function.
---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/
On Wed, Feb 15, 2012 at 9:32 PM, Raghavendra <
raghavendra@enterprisedb.com> wrote:
> You have two options.
>
>- Use contri
You have two options.
- Use contrib module pg_extractor
https://github.com/omniti-labs/pg_extractor
- Use pg_proc catalog to get function definition
---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/
On Wed, Feb 15, 2012 at 6:59 PM, Rajan, Pavithra
"Bob Pawley" writes:
> I am attempting to dump a database using PostgreDAC.
> I am getting the following error message which I don't understand.
> Can someone shed some light on this?
> Error message from server: ERROR: column "tgisconstraint" does not exist
The pg_trigger.tgisconstraint co
On 11/04/11 10:22 AM, Bob Pawley wrote:
I am attempting to dump a database using PostgreDAC.
this PostgresDAC?
http://www.microolap.com/products/connectivity/postgresdac/
that's a commercial product; you should probably contact them for support.
--
john r pierce N 3
Dmitry Koterov writes:
> Is there any way (or hack) to dump the whole database, but to exclude the
> DATA from a table within this dump? (DDL of the table should not be
> excluded: after restoring the data the excluded table should look "empty".)
The pg_staging tool allows you to do that quite ea
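Worth noting: later pg_dump releases (9.2+) grew an option for exactly this; a
sketch with a hypothetical table name:

  pg_dump --exclude-table-data=big_log -f dump.sql mydb

The table's DDL is kept but its rows are skipped, so it restores empty.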
On Thursday, August 18, 2011 3:25:59 pm Dmitry Koterov wrote:
> Mmm, --disable-triggers is surely not enough - we also have RULEs and (much
> worse) INDEXes.
>
> If we create all indices and then restore all data, it is MUCH SLOWER than
> restoring the data first and then creating all indices.
> So
Mmm, --disable-triggers is surely not enough - we also have RULEs and (much
worse) INDEXes.
If we create all indices and then restore all data, it is MUCH SLOWER than
restoring the data first and then creating all indices.
So I think that there is no work-around really...
I propose to include an op
Thanks, "pg_dump --data-only --disable-triggers" is the king.
(Unfortunately it is not supported by pg_dumpall, but it is entirely another
story. :-)
On Fri, Aug 19, 2011 at 12:36 AM, Adrian Klaver wrote:
> On Thursday, August 18, 2011 1:23:25 pm Dmitry Koterov wrote:
> > 1. I need to shorten p
On Thursday, August 18, 2011 1:23:25 pm Dmitry Koterov wrote:
> 1. I need to shorten pg_dump results (for backup purposes), so pg_restore
> is too late for that..
>
> 2. If I use "pg_dump -s" separately, the data may not load (or load too
> slowly) after that, because all indices/foreign keys are al
1. I need to shorten pg_dump results (for backup purposes), so pg_restore is
too late for that...
2. If I use "pg_dump -s" separately, the data may not load (or load too slowly)
after that, because all indices/foreign keys are already there. Is there a
way to split "pg_dump -s" into 2 parts: the firs
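Later releases (9.2+) added exactly this split via --section; a minimal sketch:

  pg_dump --section=pre-data -f pre.sql mydb    # table definitions, no indexes/FKs
  pg_dump --section=post-data -f post.sql mydb  # indexes, constraints, triggers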
On Thursday, August 18, 2011 12:49:45 pm Dmitry Koterov wrote:
> Hello.
>
> Is there any way (or hack) to dump the whole database, but to exclude the
> DATA from a table within this dump? (DDL of the table should not be
> excluded: after restoring the data the excluded table should look "empty".)
On 07/05/11 4:31 AM, Condor wrote:
Are you using some kind of old file system and operating system that
cannot handle files bigger than 2GB? If so, I'd be pretty worried
about running a database server on it.
Well, I ran pg_dump on an ext3 fs with postgres 8.x and 9, and the sql file was
truncated.
On 5.7.2011 13:31, Condor wrote:
> On Tue, 05 Jul 2011 18:08:21 +0800, Craig Ringer wrote:
>> On 5/07/2011 5:00 PM, Condor wrote:
>>> Hello ppl,
>>> can I ask how to dump large DB ?
>>
>> Same as a smaller database: using pg_dump . Why are you trying to
>> split your dumps into 1GB files? What
On Tue, 05 Jul 2011 18:08:21 +0800, Craig Ringer wrote:
On 5/07/2011 5:00 PM, Condor wrote:
Hello ppl,
can I ask how to dump large DB ?
Same as a smaller database: using pg_dump . Why are you trying to
split your dumps into 1GB files? What does that gain you?
Are you using some kind of old fi
On 5/07/2011 5:00 PM, Condor wrote:
Hello ppl,
can I ask how to dump large DB ?
Same as a smaller database: using pg_dump . Why are you trying to split
your dumps into 1GB files? What does that gain you?
Are you using some kind of old file system and operating system that
cannot handle file
* Condor wrote:
Problem was when I ran: pg_dump dbname | split -b 1G - filename, and I was
unable to restore it correctly. When I started restoring the DB I got an error
from sql: it did not like one line. I investigated and the problem was in
the last line of the first file; the value field was something like '"This is a '
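The split pieces are not individually valid SQL (the cut can land mid-line, as
happened here), so they have to be reassembled before restoring; the documented
pattern is:

  pg_dump dbname | split -b 1G - filename
  cat filename* | psql dbname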
On 05/19/2011 10:10 AM, Craig de Stigter wrote:
1. I'm assuming that the dump format can contain arbitrary sql commands,
so a pg_restore of this nature should be run under an untrusted account
in its own restricted schema. Can someone confirm that this is the case?
Correct. You very definitely
Doug Kyle writes:
> I have this error from pg_dumpall:
> pg_dump: failed sanity check, parent table OID 27974 of pg_rewrite entry
> OID 28689 not found
> ...
> So I'm thinking I'll delete from pg_rewrite where oid=28689, but I'm not
> sure if I should do anything with pg_class or pg_depends - or
> Note that cluster on a randomly ordered large table can be
> prohibitively slow, and it might be better to schedule a
> short downtime to do the following (pseudo code)
> alter table tablename rename to old_tablename; create table
> tablename like old_tablename; insert into tablename select *
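A runnable version of that pseudo code, with hypothetical names (LIKE copies
column definitions; add INCLUDING INDEXES etc. as needed):

  BEGIN;
  ALTER TABLE tablename RENAME TO old_tablename;
  CREATE TABLE tablename (LIKE old_tablename INCLUDING DEFAULTS);
  INSERT INTO tablename SELECT * FROM old_tablename;
  COMMIT;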
On Wed, Feb 17, 2010 at 3:44 PM, karsten vennemann wrote:
>
> >>> vacuum should clean out the dead tuples, then cluster on any large tables
> >>> that are bloated will sort them out without needing too much temporary
> >>> space.
>
> Yes, OK, I am running a vacuum full on a large table (150GB) and
February 09, 2010 23:30
To: karsten vennemann
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] dump of 700 GB database
Hello
2010/2/10 karsten vennemann
I have to dump a 700 GB database to clean out a lot of
dead records on an Ubuntu server with postgres 8.3.8. Wh
karsten vennemann wrote:
I have to dump a 700 GB database to clean out a lot
of dead records on an Ubuntu server with postgres 8.3.8. What is the
proper procedure to succeed with this - last time the dump stopped at
3.8 GB size I guess. Should I combine the -Fc option of pg_dum
Hello
2010/2/10 karsten vennemann
> I have to dump a 700 GB database to clean out a lot of
> dead records on an Ubuntu server with postgres 8.3.8. What is the proper
> procedure to succeed with this - last time the dump stopped at 3.8 GB size I
> guess. Should I combine the -Fc
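A sketch of the custom-format route for a dump this size (custom format is
compressed by default and restores selectively via pg_restore):

  pg_dump -Fc -f dump.pgdump dbname
  pg_restore -d newdb dump.pgdump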
Sorry, I missed that. Thanks again.
Now to put this all into effect.
John
On Mon, 25 Aug 2008 14:25:12 -0700, Joshua Drake wrote:
>On Mon, 25 Aug 2008 17:05:53 -0400
>"John T. Dow" <[EMAIL PROTECTED]> wrote:
>
>> Joshua
>>
>> Thank you very much for answering these various questions.
>>
>> I
On Mon, 25 Aug 2008 17:05:53 -0400
"John T. Dow" <[EMAIL PROTECTED]> wrote:
> Joshua
>
> Thank you very much for answering these various questions.
>
> I guess the compressed format is the best overall solution, except
> for roles. I find myself having a table with other information about
> user
Joshua
Thank you very much for answering these various questions.
I guess the compressed format is the best overall solution, except for roles. I
find myself having a table with other information about users (application
specific user type, etc) so perhaps the thing to do is record enough
info
On Mon, 25 Aug 2008 13:37:13 -0400
"John T. Dow" <[EMAIL PROTECTED]> wrote:
> Joshua
>
> The TOC feature sounds good, as does converting a single table to
> plain text.
>
> But I can't find documentation for the TOC feature under pg_dump or
> pg_restore. I'm looking in postgresql-8.2.1-US.pdf.
Joshua
The TOC feature sounds good, as does converting a single table to plain text.
But I can't find documentation for the TOC feature under pg_dump or pg_restore.
I'm looking in postgresql-8.2.1-US.pdf.
Neither could I see anything about converting a single table to a plain text
dump.
Also,
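For what it's worth, both things asked about can be done with pg_restore
against a custom-format archive; a sketch with hypothetical names:

  pg_restore -l filename.pgdump > toc.list              # the TOC
  pg_restore -t mytable filename.pgdump > mytable.sql   # one table as plain SQL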
On Mon, 25 Aug 2008 10:21:54 -0400
"John T. Dow" <[EMAIL PROTECTED]> wrote:
> By "bad data", I mean a character that's not UTF8, such as hex 98.
>
> As far as I can tell, pg_dump is the tool to use. But it has
> serious drawbacks.
>
> If you dump in the custom format, the data is compressed (nic
Tom
My mistake in not realizing that 8.1 and later can dump large objects in the
plain text format. I guess when searching for answers to a problem, the posted
information doesn't always specify the version. So, sorry about that.
But the plain text format still has serious problems in that the
"John T. Dow" <[EMAIL PROTECTED]> writes:
> If you dump in plain text format, you can at least inspect the dumped
> data and fix it manually or with iconv. But the plain text
> format doesn't support large objects (again, not nice).
It does in 8.1 and later ...
> Also, neither of these methods ge
On Sun, Jun 22, 2008 at 3:11 AM, Stuart Luppescu <[EMAIL PROTECTED]> wrote:
> On Thu, 2008-06-19 at 11:57 +0200, David wrote:
>> > pg_restore: [tar archiver] could not open TOC file for input: No such
>> > file or directory
>>
>> It sounds like the tar file is no longer being created.
>>
>> Try ma
: Saturday, June 21, 2008 9:11 PM
Subject: Re: [GENERAL] Dump and restore problem
On Thu, 2008-06-19 at 11:57 +0200, David wrote:
> pg_restore: [tar archiver] could not open TOC file for input: No such
> file or directory
It sounds like the tar file is no longer being created.
Try manuall
On Thu, 2008-06-19 at 11:57 +0200, David wrote:
> > pg_restore: [tar archiver] could not open TOC file for input: No such
> > file or directory
>
> It sounds like the tar file is no longer being created.
>
> Try manually running the commands, and verify that the dump, restore,
> and rsync still w
On Wed, Jun 18, 2008 at 11:41 PM, Stuart Luppescu <[EMAIL PROTECTED]> wrote:
[...]
> pg_restore: [tar archiver] could not open TOC file for input: No such
> file or directory
It sounds like the tar file is no longer being created.
Try manually running the commands, and verify that the dump, res
Stefan Schwarzer wrote:
Don't know if the problem occurred because I hadn't dumped and restored
my data as postgres, but as another postgres-user. Now, it works. But I
re-installed/compiled postgres/postgis, so, can't really say why it
works now...
Since you were getting backend crashes, I'd
Stefan Schwarzer wrote:
Hi,
I am using Navicat to transfer data from one database to another. But it
soon gives me an error message like the following:
I think you'll probably have to ask the navicat people.
If you want to use pg_dump to transfer data from 8.1 to 8.2 though, use
the version
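A sketch of that cross-version transfer, with hypothetical hosts and install
path:

  /usr/lib/postgresql/8.2/bin/pg_dump -h oldhost mydb | psql -h newhost mydb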
[EMAIL PROTECTED] wrote:
Hi everyone, first thank you for helping me resolve the problem about complex types.
Now I have two other questions about complex type fields.
Firstly, I want to know whether complex type data can be loaded into the database
by the 'copy' command?
Secondly, I tried it but it failed. The follow
On 14/03/2008, brian <[EMAIL PROTECTED]> wrote:
> The version you dump it from is unlikely to be difficult to find ten
> years from now. I'd just make sure to append the pg version to the
> archive so it's obvious to any future data archaeologists what's needed
> to breathe life back into it.
Le
Ron Mayer wrote:
If one wanted to dump some postgres databases for long term
archival storage (maybe decades), what's the recommended
dump format? Is the tar or plain text preferred, or is
there some other approach (xml? csv?) I should be looking
at instead?
Or should we just leave these in so
Ron Mayer <[EMAIL PROTECTED]> writes:
> If one wanted to dump some postgres databases for long term
> archival storage (maybe decades), what's the recommended
> dump format?
Plain text pg_dump output, without question. Not only is it the most
likely to load without problems, but if necessary you
But I don't really get how the final command will look.
pg_restore -L -d
But somewhere I have to indicate the original file, no? I mean,
where do
all my data now come from? The original dump contains all data; the
newly created via
pg_restore -l geodataportal.public | grep
Stefan Schwarzer wrote:
> But I don't really get how the final command will look.
>
> pg_restore -L -d
>
> But somewhere I have to indicate the original file, no? I mean, where do
> all my data now come from? The original dump contains all data; the
> newly created via
>
> pg_resto
how can I dump a schema with all tables, but without the functions?
There's no built-in single command for that. You can accomplish
it by
using pg_restore -l to make a list of objects, then edit the list,
then pg_restore -L to restore only the objects in the edited list.
Hmmm.. I probably sh
Stefan Schwarzer wrote:
how can I dump a schema with all tables, but without the functions?
There's no built-in single command for that. You can accomplish it by
using pg_restore -l to make a list of objects, then edit the list,
then pg_restore -L to restore only the objects in the edited list
Stefan Schwarzer <[EMAIL PROTECTED]> writes:
> how can I dump a schema with all tables, but without the functions?
There's no built-in single command for that. You can accomplish it by
using pg_restore -l to make a list of objects, then edit the list,
then pg_restore -L to restore only the object
> how can I dump a schema with all tables, but without the functions? Is
> there a way to do it, or do I have to manually drop the functions
> later when having used the pg_restore?
Stef,
You can edit the data between dump and restore, to comment out the
function references. Or, you can use the "-
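Putting the thread's pieces together, a sketch of the edited-list workflow
(schema, database, and the grep pattern are assumptions):

  pg_dump -Fc -n myschema -f myschema.dump mydb
  pg_restore -l myschema.dump | grep -v ' FUNCTION ' > objects.list
  pg_restore -L objects.list -d newdb myschema.dump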
Gurjeet Singh wrote:
Why not give it a try once? Dump and restore once and see for
yourself. You'd have done that by now, but if you haven't do give it a
try instead of waiting any more. You may learn a thing or two in the
process...
On 11/29/07, *Norberto Delle* <[EMAIL PROTECTED]
Why not give it a try once? Dump and restore once and see for yourself.
You'd have done that by now, but if you haven't do give it a try instead of
waiting any more. You may learn a thing or two in the process...
Best regards,
--
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes |
\d my.table is showing me the primary key as:
"unitstat_pkey" PRIMARY KEY, btree (id)
But looking into the table structure produced by
pg_dump -s -n my -t table db
I'm getting the primary key shown as:
ADD CONSTRAINT unistat_pkey PRIMARY KEY (id);
That has been the name of it, but I alte
On Aug 8, 2007, at 14:18 , Emi Lu wrote:
Is there a way that I can only dump all objects under a specific
schema?
Have you checked the pg_dump documentation?
http://www.postgresql.org/docs/8.2/interactive/app-pgdump.html
In particular, the -n flag might interest you.
Michael Glaesemann
gr
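A minimal sketch of the -n flag mentioned above (names hypothetical):

  pg_dump -n myschema mydb > myschema.sql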