Hi, Adrian.

The largest one is 8 GB after compression.

I have an 8-hour window to handle 30 GB of backups in total, in files of various sizes.
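
Assuming the dumps are in custom format (pg_dump -Fc), my rough plan is a
parallel restore per database to fit the window; the host, database, and file
names below are placeholders:

  # target database must already exist (or add --create);
  # -j restores data and builds indexes in parallel, which needs a
  # custom- or directory-format dump
  pg_restore -h newhost -U postgres -d mydb -j 4 mydb.dump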



On Thu, Aug 22, 2024, 11:36 AM Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 8/22/24 04:06, Vince McMahon wrote:
> > Hi,
> >
> > I have some questions when doing a pg_restore of a database backup to a
> > NEW server.
>
> How large a backup?
>
> >
> > Is there a way to ensure the data integrity is intact, and that user IDs and
> > access work as they did on the old server?
>
> As to user access, write tests that cover it and run them on the new cluster.
>
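
Would something along these lines be a reasonable test? The idea is to run the
same queries on the old and new cluster and diff the output; 'app_user' below is
just a placeholder role. (I assume the roles themselves are cluster-level, so
they come over with pg_dumpall --globals-only rather than with the per-database
dump?)

  -- role attributes
  SELECT rolname, rolsuper, rolcreaterole, rolcreatedb, rolcanlogin
  FROM pg_roles
  ORDER BY rolname;

  -- table-level privileges for one role
  SELECT table_schema, table_name, privilege_type
  FROM information_schema.role_table_grants
  WHERE grantee = 'app_user'
  ORDER BY table_schema, table_name, privilege_type;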
> Data is trickier; whether that is possible to a degree of certainty is going
> to depend on the answer to the first question above.
>
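
For the data, would a per-table row count comparison between the two servers be
a sensible first check? A sketch to run in psql on both sides and diff (coarse:
matching counts catch gross loss, not corruption):

  -- generate one count(*) per user table and execute the lot with \gexec
  SELECT format('SELECT %L AS table_name, count(*) AS n_rows FROM %I.%I',
                schemaname || '.' || tablename, schemaname, tablename)
  FROM pg_tables
  WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
  ORDER BY 1 \gexec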
> >
> > How do I properly handle materialized views when backing up and
> > restoring?
>
> create materialized view prj_mv(p_item_no, year) as select p_item_no,
> year from projection with data;
>
> pg_dump -d production -U postgres -h localhost -t projection -t prj_mv
> -f prj.sql
>
> In prj.sql:
>
> CREATE MATERIALIZED VIEW public.prj_mv AS
>   SELECT p_item_no,
>      year
>     FROM public.projection
>    WITH NO DATA;
>
> COPY public.projection ( ...
>
>
> [...]
>
> REFRESH MATERIALIZED VIEW public.prj_mv;
>
> It is done for you.
>
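
And since that dump is plain SQL, restoring it on the new server is just psql,
something like this (host and database names are placeholders)?

  # the REFRESH MATERIALIZED VIEW at the end of prj.sql repopulates prj_mv
  psql -h newhost -U postgres -d newdb -f prj.sql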
> >
> > Thanks.
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
