Yes, this is what we have been doing so far: back up with pg_dump, create
the new database at the destination, manually create the tables that cause
problems, and then run pg_restore. Another option for us is to back up
using the pg_dump that ships with pgAdmin (Windows), rsync the dump to the
destination server, and then run pg_restore there.
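A minimal sketch of that second workflow (host names, user, database name, and dump file name are placeholder assumptions, not taken from the thread):

```shell
# Take a custom-format dump on the source machine (pgAdmin's bundled
# pg_dump accepts the same flags). "mydb" and the hosts are placeholders.
pg_dump -Fc -h source-host -U postgres -d mydb -f mydb.dump

# Copy the dump to the destination server.
rsync -avz mydb.dump user@dest-host:/tmp/

# On the destination server: create an empty database, then restore into it.
createdb -U postgres mydb
pg_restore -U postgres -d mydb /tmp/mydb.dump

# Spot-check that generated columns were populated after the restore.
psql -U postgres -d mydb \
  -c "select count(*) from tbl_main where primary_bill_to_id is not null;"
```

Note that pg_dump does not include the contents of stored generated columns in the dump; the destination server recomputes them as rows are restored, which is why a spot-check like the last command is worth running. Using a pg_restore at least as new as the pg_dump that produced the archive also avoids version-mismatch surprises.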

On Tue, Feb 23, 2021 at 3:21 PM Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 2/23/21 12:57 PM, Santosh Udupi wrote:
> > It makes it a lot easier for the application logic to collect the JSON
> > fields and update a single column, "info", instead of including
> > multiple columns in the insert/update statements.
>
> I doubt it, but then again this is why I don't answer Postgres/JSON SO
> questions anymore. Trying to apply logic to the contortions people go
> through to make their lives more difficult left me with headaches. At any
> rate, this is getting off-topic for the dump/restore issue you have. When
> you look at the restored tables in each variation of the restore database,
> do they look the same as below? If you drop the table in the problem
> database, recreate it using the script below, and then populate it with
> data, does it work?
>
>
> >
> > On Tue, Feb 23, 2021 at 12:20 PM Adrian Klaver
> > <adrian.kla...@aklaver.com> wrote:
> >
> >     On 2/23/21 12:15 PM, Santosh Udupi wrote:
> >      > Here is my table structure. I will try to get the pg_dump output
> >      > for this table in both the versions.
> >      >
> >      > create table tbl_main(
> >      >
> >      >     item_id int GENERATED ALWAYS AS IDENTITY,
> >      >     -----------------------------------------------------
> >      >     operating_offices int [] GENERATED ALWAYS AS (
> >      >     nullif(array[(info->>'o')::int], '{NULL}') ) stored ,
> >      >     -----------------------------------------------------
> >      >     primary_bill_to_id int GENERATED ALWAYS as
> >      >     ((info->>'vp')::int) stored ,
> >      >     ----------------------------------------------
> >      >     item_status_array text [] GENERATED ALWAYS as ( array[
> >      >     coalesce(info->>'qr', info->>'s'), info->>'v'] ) stored ,
> >      >     -------------------------------------------------
> >      >     info jsonb
> >      >     ------------------------------
> >      >     ,is_complete bool GENERATED ALWAYS as ( coalesce(
> >      >     (info->>'lf')::bool = true or (info->>'lg')::bool = true,
> >      >     false) ) stored
> >      >     --------------------------------------------
> >      >     ,is_deleted bool GENERATED ALWAYS as ( coalesce(
> >      >     (info->>'cv')::bool, false) ) stored
> >      >     ------------------------------
> >      >     ,is_a_template bool GENERATED ALWAYS as ( coalesce(
> >      >     (info->>'cw')::bool, false) ) stored
> >      >     -------------------------------------------
> >      >     ,created_by_user_id int
> >      >     ,created_on timestamptz default now()
> >      >     ----------------------------------
> >      >     ,primary key(item_id, created_on)
> >      >
> >      >
> >      > ) partition by range (created_on) ;
> >
> >     Which generates (pun intended) the question: why? You are
> >     deconstructing info into its component parts after the fact; why
> >     not just input the data directly into the fields?
> >
> >      >
> >      >
> >      > ---=================================================================
> >      > -- *** index
> >      >
> >      >     CREATE INDEX tbl_main_idxgin ON tbl_main USING gin (info);
> >      >
> >      >
> >      >
> >      > ---=================================================================
> >      >   -- **** partitions
> >      >
> >      > -- default partition
> >      > create table tbl_main_partition_default
> >      > partition of tbl_main default;
> >      >
> >      > create table tbl_main_partition_2021
> >      > partition of tbl_main
> >      > for values from ('2020-01-01') to ('2022-01-01');
> >      >
> >      > create table tbl_main_partition_2022
> >      > partition of tbl_main
> >      > for values from ('2022-01-01') to ('2023-01-01');
> >      >
> >      > create table tbl_main_partition_2023
> >      > partition of tbl_main
> >      > for values from ('2023-01-01') to ('2024-01-01');
> >      >
> >      >
> >      > ---=================================================================
> >      >
> >      >
> >      >
> >      > On Tue, Feb 23, 2021 at 10:40 AM Tom Lane <t...@sss.pgh.pa.us>
> >      > wrote:
> >      >
> >      >     Santosh Udupi <em...@hitha.net> writes:
> >      >      > Both are different versions
> >      >      > The following works:
> >      >      > Version: pg_dump (pgAdmin Windows)  version:13.1
> >      >
> >      >      > The following does not work: (Does not populate the
> >     generated column
> >      >      > values)
> >      >      > Version: pg_dump (PostgreSQL v13.2 on Ubuntu 20.04)
> >     version 13.2
> >      >
> >      >     Hmm ... well, that would be a regression, but you're going to
> >     have
> >      >     to show us how to reproduce it.  I'm thinking there must be
> >     something
> >      >     odd about the way the table is declared.
> >      >
> >      >                              regards, tom lane
> >      >
> >
> >
> >     --
> >     Adrian Klaver
> >     adrian.kla...@aklaver.com
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>