On Thu, Jul 18, 2019 at 10:09 PM Matthew Pounsett wrote:
> That would likely keep the extra storage requirements small, but still
> non-zero. Presumably the upgrade would be unnecessary if it could be done
> without rewriting files. Is there any rule of thumb for making sure one has
> enough space available for the upgrade?
On Fri, Jul 19, 2019 at 4:41 PM Matthew Pounsett wrote:
> My current backup plan for this database is on-site replication, and a
> monthly pg_dump from the standby to be copied off-site. Doing per-table
> backups sounds like a great way to end up with an inconsistent backup, but
> perhaps I mi…
Greetings,
* Matthew Pounsett (m...@conundrum.com) wrote:
> On Fri, 19 Jul 2019 at 11:25, Peter J. Holzer wrote:
> > On 2019-07-19 10:41:31 -0400, Matthew Pounsett wrote:
> > > Okay. So I guess the short answer is no, nobody really knows how to
> > > judge how much space is required for an upgrade? :)
On 2019-07-19 11:37:52 -0400, Matthew Pounsett wrote:
> On Fri, 19 Jul 2019 at 11:25, Peter J. Holzer wrote:
>
> On 2019-07-19 10:41:31 -0400, Matthew Pounsett wrote:
> > Okay. So I guess the short answer is no, nobody really knows how to
> > judge how much space is required for an upgrade? :)
Matthew Pounsett writes:
> [...] Is there any rule of thumb for making sure one has enough space
> available for the upgrade?
No, because it depends greatly on which version you are upgrading from
and which version you are upgrading to etc.
Perhaps you could carve out a slice of data, e.g. 1 GB…
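One way to act on that suggestion is to restore the slice into a scratch cluster, run the upgrade on it, measure the extra space it consumed, and scale up. A minimal sketch of the extrapolation step — every number below is a placeholder, not a measurement:

```shell
# Extrapolate the extra disk space a full upgrade would need from a
# small test upgrade. All values are placeholders -- substitute what
# you actually measure on the carved-out slice.
sample_gb=1           # size of the test slice
sample_extra_gb=0.12  # extra space the test upgrade consumed (measure this)
db_gb=23552           # full database size (~23 TB)

est_gb=$(awk -v s="$sample_gb" -v e="$sample_extra_gb" -v d="$db_gb" \
             'BEGIN { printf "%.0f", d * e / s }')
echo "estimated extra space for full upgrade: ${est_gb} GB"
```

The assumption baked in here is that upgrade overhead scales roughly linearly with data size, which holds better for bulk table data than for catalog-heavy schemas.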
Matthew Pounsett writes:
> On Thu, 18 Jul 2019 at 19:53, Rob Sargent wrote:
>
> Can you afford to drop and re-create those 6 indices?
>
> Technically, yes. I don't see any reason we'd be prevented from doing that.
> But, rebuilding them will take a long time. That's a lot of downtime to incur
On Fri, 19 Jul 2019 at 11:25, Peter J. Holzer wrote:
> On 2019-07-19 10:41:31 -0400, Matthew Pounsett wrote:
> > Okay. So I guess the short answer is no, nobody really knows how to
> > judge how much space is required for an upgrade? :)
>
> As I understand it, a pg_upgrade --link uses only negligible extra space…
On 2019-07-19 10:41:31 -0400, Matthew Pounsett wrote:
> Okay. So I guess the short answer is no, nobody really knows how to
> judge how much space is required for an upgrade? :)
As I understand it, a pg_upgrade --link uses only negligible extra
space. It duplicates a bit of householding information…
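The mechanism behind --link's negligible footprint is an ordinary filesystem hard link: the new cluster's file names point at the old cluster's data blocks. The effect can be demonstrated in miniature with plain shell, no PostgreSQL required:

```shell
# A hard link adds a second name for the same inode, so the "copy"
# consumes no additional data blocks -- the same trick pg_upgrade
# --link uses on each relation file.
tmpdir=$(mktemp -d)
dd if=/dev/zero of="$tmpdir/old_datafile" bs=1024 count=1024 2>/dev/null
ln "$tmpdir/old_datafile" "$tmpdir/new_datafile"

# Both names now report a link count of 2 and share one inode.
links=$(stat -c %h "$tmpdir/old_datafile" 2>/dev/null \
        || stat -f %l "$tmpdir/old_datafile")
echo "link count: $links"

rm -r "$tmpdir"
```

The corollary is the usual caveat: once the new cluster has been started, the old cluster is no longer safe to use, because both point at the same files.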
On Thu, 18 Jul 2019 at 09:44, Matthew Pounsett wrote:
>
> I've recently inherited a database that is dangerously close to outgrowing
> the available storage on its existing hardware. I'm looking for (pointers
> to) advice on scaling the storage in a financially constrained
> not-for-profit.
>
…
Hi Matt,
On Fri, Jul 19, 2019 at 10:41:31AM -0400, Matthew Pounsett wrote:
> On Fri, 19 Jul 2019 at 04:21, Luca Ferrari wrote:
>
> >
> > This could be trivial, but any chance you can partition the table
> > and/or archive unused records (at least temporarily)? An 18 TB table
> > quite frankly sounds like a good candidate to contain records no one is
> > interested in the near future.
On Fri, 19 Jul 2019 at 04:21, Luca Ferrari wrote:
>
> This could be trivial, but any chance you can partition the table
> and/or archive unused records (at least temporarily)? An 18 TB table
> quite frankly sounds like a good candidate to contain records no one is
> interested in the near future.
>
Par…
On Thu, 18 Jul 2019 at 21:59, Andy Colson wrote:
> >
>
> Now might be a good time to consider splitting the database onto multiple
> computers. Might be simpler with a mid-range database, then your plan for
> the future is "add more computers".
>
Hmm... yes. Range partitioning seems like a pos…
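For what it's worth, a declarative range-partitioning layout (PostgreSQL 10+) for a table like this is short to express. The table and column names below are hypothetical stand-ins, and the DDL is generated rather than executed so the shape is easy to inspect:

```shell
# Generate one yearly partition per range. "queries" and its
# "query_time" column are hypothetical names, not from the thread.
# The parent would be declared as:
#   CREATE TABLE queries (...) PARTITION BY RANGE (query_time);
gen_partitions() {
  for year in 2017 2018 2019; do
    next=$((year + 1))
    printf "CREATE TABLE queries_%s PARTITION OF queries\n" "$year"
    printf "  FOR VALUES FROM ('%s-01-01') TO ('%s-01-01');\n" "$year" "$next"
  done
}
ddl=$(gen_partitions)
echo "$ddl"
```

Older partitions can then be detached and archived (or dropped) independently, which is where the space relief comes from.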
On Thu, 18 Jul 2019 at 19:53, Rob Sargent wrote:
>
> >
> > That would likely keep the extra storage requirements small, but still
> non-zero. Presumably the upgrade would be unnecessary if it could be done
> without rewriting files. Is there any rule of thumb for making sure one
> has enough space available for the upgrade?
On 7/18/19 8:44 AM, Matthew Pounsett wrote:
I've recently inherited a database that is dangerously close to outgrowing the
available storage on its existing hardware. I'm looking for (pointers to)
advice on scaling the storage in a financially constrained not-for-profit.
The current size of the DB's data directory is just shy of 23TB…
>
> That would likely keep the extra storage requirements small, but still
> non-zero. Presumably the upgrade would be unnecessary if it could be done
> without rewriting files. Is there any rule of thumb for making sure one has
> enough space available for the upgrade? I suppose that wou…
On Thu, 18 Jul 2019 at 13:34, Kenneth Marshall wrote:
> Hi Matt,
>
Hi! Thanks for your reply.
> Have you considered using the VDO compression for tables that are less
> update intensive. Using just compression you can get almost 4X size
> reduction. For a database, I would forgo the deduplication…
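Before layering VDO underneath the cluster, it's worth estimating how compressible the data actually is. gzip is not VDO's algorithm, but piping a sample through it gives a quick first approximation of the achievable ratio. The demo below uses generated repetitive text in place of a real table dump:

```shell
# Rough compressibility probe: repetitive records compress far better
# than random bytes, so the achievable ratio is very data-dependent.
sample=$(mktemp)
i=0
while [ "$i" -lt 5000 ]; do
  echo "192.0.2.1 GET /index.html 200" >> "$sample"
  i=$((i + 1))
done

orig_bytes=$(wc -c < "$sample" | tr -d ' \t')
gz_bytes=$(gzip -c "$sample" | wc -c | tr -d ' \t')
echo "original: ${orig_bytes} bytes, gzipped: ${gz_bytes} bytes"
rm "$sample"
```

On the real database, something like `pg_dump --table=<big_table> | gzip | wc -c` run against the standby would give a more honest number, at the cost of a lot of I/O.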
Hi Matt,
On Thu, Jul 18, 2019 at 09:44:04AM -0400, Matthew Pounsett wrote:
> I've recently inherited a database that is dangerously close to outgrowing
> the available storage on its existing hardware. I'm looking for (pointers
> to) advice on scaling the storage in a financially constrained
> not-for-profit.
I've recently inherited a database that is dangerously close to outgrowing
the available storage on its existing hardware. I'm looking for (pointers
to) advice on scaling the storage in a financially constrained
not-for-profit.
The current size of the DB's data directory is just shy of 23TB. Whe…