Mike Christensen writes:
> Is the TAR format just the raw SQL commands, just tar'ed and then sent
> over the wire?
Sorta. If you pull it apart with tar, you'll find there's a SQL
script that creates the database schema, and then a separate tar-member
file containing the data for each table.
Mike Christensen writes:
> Oh reading the online docs, it looks like what I may have wanted was:
> --format=custom
Right. That does everything tar format does, only better --- the only
thing tar format beats it at is you can disassemble it with tar. Back
in the day that seemed like a nice thing
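[Editor's note: for context on why the tar writer gives up on big tables, a classic ustar tar header records each member's size in an 11-digit octal field, which caps a single member (here, one table's data) at just under 8 GB. A quick sketch of that arithmetic, assuming the classic header layout:]

```python
# Why pg_dump's tar format can report "archive member too large":
# a classic ustar header stores each member's size as an 11-digit
# octal number, so one member is limited to 8 GiB - 1 bytes.
# Illustrative arithmetic only.
max_octal = "7" * 11           # largest value the 11-digit size field can hold
limit = int(max_octal, 8)      # 8**11 - 1
print(limit)                   # → 8589934591
print(limit == 8 * 1024**3 - 1)  # → True, i.e. just under 8 GiB
```

The custom format (`--format=custom`) has no such per-member ceiling, which is one more reason to prefer it.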
On Mon, Sep 10, 2012 at 10:06 PM, Mike Christensen wrote:
> On Mon, Sep 10, 2012 at 9:57 PM, Tom Lane wrote:
>> Jeff Janes writes:
>>> On Mon, Sep 10, 2012 at 5:27 PM, Mike Christensen wrote:
>>>> Is there something that can be done smarter with this error message?
>>>> pg_dump: dumping contents of table pages
On Mon, Sep 10, 2012 at 9:57 PM, Tom Lane wrote:
> Jeff Janes writes:
>> On Mon, Sep 10, 2012 at 5:27 PM, Mike Christensen wrote:
>>> Is there something that can be done smarter with this error message?
>>>
>>> pg_dump: dumping contents of table pages
>>> pg_dump: [tar archiver] archive member too large for tar format
Jeff Janes writes:
> On Mon, Sep 10, 2012 at 5:27 PM, Mike Christensen wrote:
>> Is there something that can be done smarter with this error message?
>>
>> pg_dump: dumping contents of table pages
>> pg_dump: [tar archiver] archive member too large for tar format
>> pg_dump: *** aborted because of error
Is there a place to download pgAdmin 1.16 for openSUSE (or a
repository I can add)?
All I can find are packages for 1.14; however, that version is unable to
connect to Postgres 9.2 databases. Thanks!
Mike
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription
On Mon, Sep 10, 2012 at 5:27 PM, Mike Christensen wrote:
> Is there something that can be done smarter with this error message?
>
>
> pg_dump: dumping contents of table pages
> pg_dump: [tar archiver] archive member too large for tar format
> pg_dump: *** aborted because of error
Maybe it could t
Is there something that can be done smarter with this error message?
pg_dump: dumping contents of table pages
pg_dump: [tar archiver] archive member too large for tar format
pg_dump: *** aborted because of error
If there are any hard limits (like memory or RAM) that can be checked
before it spen
On 10/09/2012 19:06, Kevin Grittner wrote:
> Edson Richter wrote:
>> this automatic compression applies to bytea fields?
> Yes, but keep in mind that anything which is already compressed or
> encrypted will probably not compress much if at all. Many of the
> binary objects you might want to store
On Thu, Aug 16, 2012 at 9:30 AM, Kevin Grittner wrote:
> Jeff Janes wrote:
>
>> So a server that is completely free of
>> user activity will still generate an endless stream of WAL files,
>> averaging one file per max(archive_timeout, checkpoint_timeout).
>> That comes out to one 16MB file per hour
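[Editor's note: Jeff's back-of-envelope can be made concrete. The values below are illustrative defaults (archive_timeout = 1h, checkpoint_timeout = 5min), not figures from the original mail:]

```python
# An idle server still emits one 16 MB WAL segment per
# max(archive_timeout, checkpoint_timeout).  Assumed example settings:
segment_mb = 16
archive_timeout_s = 3600        # e.g. archive_timeout = 1h
checkpoint_timeout_s = 300      # default checkpoint_timeout = 5min

interval = max(archive_timeout_s, checkpoint_timeout_s)
files_per_day = 86400 // interval
print(files_per_day, "files/day =", files_per_day * segment_mb, "MB/day")
# → 24 files/day = 384 MB/day
```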
Hi All,
Hope you can assist and that I am posting to the right forum.
We currently have multiple PostgreSQL 9 instances running with warm standby,
and the replication works wonderfully.
The problem is the following: we take the slave database out of recovery and it
works perfectly, but when we
Edson Richter wrote:
> this automatic compression applies to bytea fields?
Yes, but keep in mind that anything which is already compressed or
encrypted will probably not compress much if at all. Many of the
binary objects you might want to store in the database probably
already use compression.
On 10/09/2012 16:09, Edson Richter wrote:
> On 10/09/2012 15:35, Tom Lane wrote:
>> Edson Richter writes:
>>> I would like to know if there is any plan to implement compressed fields
>>> (just a "flag" in the field definition, like "not null") at database
>>> side (these fields are and will never be indexed
On 10/09/2012 15:35, Tom Lane wrote:
> Edson Richter writes:
>> I would like to know if there is any plan to implement compressed fields
>> (just a "flag" in the field definition, like "not null") at database
>> side (these fields are and will never be indexed neither used for search).
> Any field value over a couple kilobytes is compressed
Edson Richter writes:
> I would like to know if there is any plan to implement compressed fields
> (just a "flag" in the field definition, like "not null") at database
> side (these fields are and will never be indexed neither used for search).
Any field value over a couple kilobytes is compressed
Hi,
My application has a few binary fields that accept files. Most of them
are XML files archived for reference only.
I would like to know if there is any plan to implement compressed fields
(just a "flag" in the field definition, like "not null") at the database
side (these fields are not and will never be indexed nor used for search).
Craig Gibson writes:
> I get a daily CSV file of 6.5 million records. I create a temporary
> table and COPY them in. On completion I create an index on the mdnid
> column. This column is also indexed in table 2. This part is very
> fast. I had some 'checkpoint too often' issues, but that I have
>
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Craig Gibson
> Sent: Monday, September 10, 2012 12:34 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Performance issue with cross table updates
>
> Hi all
Hi all
I am no database wizard so I am hoping someone may be able to assist me :)
I get a daily CSV file of 6.5 million records. I create a temporary
table and COPY them in. On completion I create an index on the mdnid
column. This column is also indexed in table 2. This part is very
fast. I had some 'checkpoint too often' issues, but that I have
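[Editor's note: the load-then-index workflow Craig describes can be sketched end to end. This uses Python's stdlib sqlite3 purely for illustration; in PostgreSQL you would use COPY instead of executemany. The table and column names (staging, subscribers, mdnid, status) are assumptions, not Craig's actual schema:]

```python
# 1. bulk-load the daily CSV into an unindexed staging table,
# 2. build the index on the join column only after the load,
# 3. do one set-based cross-table UPDATE instead of row-by-row updates.
import csv, io, sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE subscribers (mdnid INTEGER PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO subscribers VALUES (1, 'old'), (2, 'old')")

conn.execute("CREATE TEMP TABLE staging (mdnid INTEGER, status TEXT)")
daily_csv = io.StringIO("1,active\n2,suspended\n")   # stand-in for the 6.5M-row file
conn.executemany("INSERT INTO staging VALUES (?, ?)", csv.reader(daily_csv))

conn.execute("CREATE INDEX staging_mdnid ON staging (mdnid)")

conn.execute("""UPDATE subscribers SET status =
    (SELECT s.status FROM staging s WHERE s.mdnid = subscribers.mdnid)""")
print(conn.execute("SELECT status FROM subscribers ORDER BY mdnid").fetchall())
# → [('active',), ('suspended',)]
```

Indexing after the load, rather than before, avoids maintaining the index during every insert, which is usually the cheaper order for bulk loads.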
On Sep 7, 2012, at 2:19 PM, David Johnston wrote:
>
> From: pgsql-general-ow...@postgresql.org
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Michael Sacket
> Sent: Friday, September 07, 2012 2:09 PM
> To: PG-General Mailing List
> Subject: [GENERAL] INSERT… RETURNING for copying r
I have observed that currently, when there is a network break between
master and standby, the walsender process gets terminated immediately, but
the walreceiver detects the breakage only after a long time.
I could see that there is a replication_timeout configuration parameter;
walsender checks for replic
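[Editor's note: the slow detection on the receiving side is what you would expect when a process has no application-level timeout of its own and falls back to TCP keepalive. The figures below are the usual Linux defaults (the net.ipv4.tcp_keepalive_* sysctls), shown only to give a sense of scale:]

```python
# Worst-case time for plain TCP keepalive to declare a dead peer,
# using typical Linux default sysctl values.
tcp_keepalive_time = 7200    # seconds of idle before the first probe
tcp_keepalive_intvl = 75     # seconds between probes
tcp_keepalive_probes = 9     # unanswered probes before giving up

worst_case = tcp_keepalive_time + tcp_keepalive_intvl * tcp_keepalive_probes
print(worst_case, "seconds, about", round(worst_case / 3600, 2), "hours")
# → 7875 seconds, about 2.19 hours
```

That is roughly two hours before the kernel tells the walreceiver anything, which matches the "detects the breakage after a long time" symptom.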
On 2012-09-10 08:19, Arvind Singh wrote:
> I need some help or even a simple link that is related to this subject
You might be interested in reading this: http://bucardo.org/wiki/Tail_n_mail
Roman Golis wrote:
> I am trying to load data into a rather simple table:
>
> CREATE TABLE "public"."files" (
> "id" SERIAL,
> "idchar" CHAR(32) NOT NULL,
> "content" BYTEA,
> CONSTRAINT "files_pkey" PRIMARY KEY("id")
> ) WITHOUT OIDS;
>
> with this command:
>
> copy files (idchar, content
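[Editor's note: the usual stumbling block when feeding bytea through COPY's text format is escaping. Since PostgreSQL 9.0, bytea accepts the hex input form \x..., which sidesteps the old octal-escape pitfalls. A sketch of producing one COPY-ready line; the column layout matches Roman's CREATE TABLE above, but using md5 for the 32-character idchar is my assumption:]

```python
# Build one line of COPY text-format data for (idchar, content).
# In COPY text format a literal backslash must be doubled, so the
# bytea hex form \x... is written as \\x... in the data file.
import hashlib

content = b"\x00\x01binary file bytes\xff"          # arbitrary example bytes
idchar = hashlib.md5(content).hexdigest()           # assumed 32-char key

line = f"{idchar}\t\\\\x{content.hex()}\n"          # tab-separated columns
print(line, end="")
```

Loading many rows this way avoids quoting every awkward byte individually, which is where plain text COPY of bytea usually goes wrong.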
Arvind Singh wrote:
> I am in a project that uses PostgreSQL v9.0. We are developing an
> application in C# to parse the PG server activity log installed on
> Windows 2003/XP or higher.
>
> Our application will:
> Scan the log for a given search text and post rows found
> Produce statistics re
On 09/09/12 11:19 PM, Arvind Singh wrote:
> I am in a project that uses PostgreSQL v9.0. We are developing an
> application in C# to parse the PG server activity log installed on
> Windows 2003/XP or higher.
> Our application will:
> Scan the log for a given search text and post rows found
> Produce