On Tue, Jun 4, 2024 at 7:36 PM Ron Johnson <ronljohnso...@gmail.com> wrote:

> On Tue, Jun 4, 2024 at 3:47 PM Gavin Roy <gav...@aweber.com> wrote:
>
>>
>> On Tue, Jun 4, 2024 at 3:15 PM Ron Johnson <ronljohnso...@gmail.com>
>> wrote:
>>
>>>
>>> But why tar instead of custom? That was part of my original question.
>>>
>>
>> I've found it pretty useful for programmatically accessing the data in a
>> dump of a large database, outside of the normal pg_dump/pg_restore
>> workflow: you don't have to seek through one large binary file just to
>> get at the data.
>>
>
> Interesting.  Please explain, though, since a big tarball _is_ "one large
> binary file" that you have to sequentially scan.  (I don't know the
> internal structure of custom format files, and whether they have file
> pointers to each table.)
>

Not if you untar it first.
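
For illustration, a rough sketch (the archive and database names here are
made up): a tar-format dump can be opened with Python's tarfile module and
each table's COPY data read as its own member, without going through
pg_restore at all.

# Sketch: poke at a pg_dump tar-format archive without restoring it.
# Assumes a dump created with something like:  pg_dump -Ft -f mydb.tar mydb
# ("mydb.tar" and "mydb" are hypothetical names).
import tarfile

with tarfile.open("mydb.tar") as archive:
    for member in archive.getmembers():
        # A tar-format dump carries a toc.dat, a restore.sql, and one
        # NNNN.dat member holding each table's data section.
        print(member.name, member.size)

    # Pull a single table's COPY data straight out of the archive;
    # "3104.dat" is a placeholder taken from the listing above.
    data = archive.extractfile("3104.dat").read()
    print(data[:200])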


> Is it because you need individual .dat "COPY" files for something other
> than loading into PG tables (since pg_restore --table=xxxx does that, too),
> and directory-format archives can be inconvenient?
>

In the past I've used it for data analysis outside of Postgres.
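
Something like this, roughly (the member name and handling are invented for
the example): once untarred, the .dat members are plain COPY text, i.e.
tab-separated with \N for NULL and a terminating "\." line, so they're easy
to feed into whatever analysis tooling you like.

# Sketch: count rows in one table's data file from an untarred dump.
# "3104.dat" is a hypothetical member name; COPY text format is
# tab-separated, uses \N for NULL, and ends with a lone "\." line.
# (This ignores COPY's backslash escaping, which is fine for a rough count.)
import csv

rows = 0
with open("3104.dat", newline="") as f:
    for row in csv.reader(f, delimiter="\t"):
        if row and row[0] == "\\.":   # end-of-data marker
            break
        rows += 1

print("rows:", rows)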
-- 
*Gavin M. Roy*
CTO
AWeber
