... that.
Clusters in the petabyte range? We need to be able to substantiate that
with publicly documented cases. They also need to be pure PostgreSQL,
not PostgreSQL "with added tech", no?
Also, I can't see that the 1.6 TB per row figure is accurate, because
that would mean 1600 TOAST pointers at 20 bytes each, and pointers alone
at that size would not fit on a standard 8 kB page.
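A quick sanity check of that arithmetic, assuming the default 8 kB
block size (the exact TOAST pointer size is not stated above):

    -- 1600 TOAST pointers at ~20 bytes each vs. one heap page;
    -- the pointers alone come to roughly four pages' worth of data.
    SELECT 1600 * 20 AS toast_pointer_bytes,
           current_setting('block_size')::int AS page_bytes;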
ng" doc patches to head only, someone
> complains that it should be backpatched, so I did that in this case. If
> we want to change that idea, we need to agree on the criteria.
>
True, but it's not a wording patch; you're renaming a feature entirely.
I agree with the change.
> ... and use it
Yes, those docs can change now.
> in some cases we
> do not use the lock.
>
Which code does not use the lock?
--
Simon Riggs  http://www.2ndQuadrant.com/
Mission Critical Databases
> ... They should be located
> in "19.5. Write Ahead Log"/"19.5.1. Settings". Thoughts?
>
+1
Why are there two settings for the same thing (COW support)?
Would we really want to allow setting one but not the other?
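If the two settings in question are wal_init_zero and wal_recycle (an
assumption on my part; the quoted text does not name them), a sketch of
how they are normally toggled together on a COW filesystem such as ZFS:

    -- Assumed GUC names, not confirmed by the quoted discussion:
    ALTER SYSTEM SET wal_init_zero = off;  -- don't zero-fill new WAL segments
    ALTER SYSTEM SET wal_recycle = off;    -- don't rename old segments for reuse
    SELECT pg_reload_conf();               -- both can be changed at reload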
--
Simon Riggs  http://www.2ndQuadrant.com/
Mission Critical Databases
 sub_part | total_quantity
----------+----------------
 wheel    |              4
 bolt     |              5
(2 rows)

Proposed change:

 sub_part | total_quantity
----------+----------------
 wheel    |              4
 bolt     |             20
(2 rows)
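The corrected numbers imply multiplying quantities as the recursion
descends. A sketch of the corrected query against hypothetical sample
data (the real parts.sql attachment is not shown here; these two rows
reproduce the outputs above):

    -- Hypothetical data: the product uses 4 wheels, each wheel 5 bolts.
    CREATE TABLE parts (sub_part text, part text, quantity int);
    INSERT INTO parts VALUES
        ('wheel', 'our_product', 4),
        ('bolt',  'wheel',       5);

    -- Multiplying p.quantity by pr.quantity propagates the counts,
    -- so bolts total 4 * 5 = 20 rather than 5.
    WITH RECURSIVE included_parts(sub_part, part, quantity) AS (
        SELECT sub_part, part, quantity FROM parts
        WHERE part = 'our_product'
      UNION ALL
        SELECT p.sub_part, p.part, p.quantity * pr.quantity
        FROM included_parts pr, parts p
        WHERE p.part = pr.sub_part
    )
    SELECT sub_part, SUM(quantity) AS total_quantity
    FROM included_parts
    GROUP BY sub_part;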
Doc patch attached.
--
Simon Riggs  http://www.EnterpriseDB.com/
Attachment: parts.sql
Attachment: recursive_correction.v1.patch
> postgres=# select pg_column_size( row() );
>  pg_column_size
> ----------------
>              24
Yes, but it is MAXALIGNED, which is documented. So the value you see
is correct and matches the docs.
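A minimal illustration of the alignment effect (the result values
assume a 64-bit build where MAXALIGN is 8 bytes):

    -- The 23-byte composite header is padded up to MAXALIGN(23) = 24:
    SELECT pg_column_size(ROW());         -- 24
    -- A 4-byte integer is then stored after the padded header:
    SELECT pg_column_size(ROW(1::int4));  -- 28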
--
Simon Riggs  http://www.EnterpriseDB.com/