Alvaro,
* Alvaro Herrera (alvhe...@alvh.no-ip.org) wrote:
> For context: this was first reported in the Barman forum here:
> https://groups.google.com/forum/#!msg/pgbarman/3aXWpaKWRFI/weUIZxspDAAJ
> They are using Barman for the backups.
Ah, I see. I wasn't aware of that history.
> Stephen F
I just read the interesting article by Hans-Juergen Schoenig describing
how to speed up GROUP BY and JOIN. In the article, he mentions using an
"optimization barrier" where the SQL is
WITH x AS
Can somebody tell me where in the postgres docs I can find information
about this SQL?
TIA.
Paul
On Thu, Dec 28, 2017 at 9:22 AM, Paul Tilles wrote:
> I just read the interesting article by Hans-Juergen Schoenig describing
> how to speed up GROUP BY and JOIN. In the article, he mentions using an
> "optimization barrier" where the SQL is
>
> WITH x AS
>
> Can somebody tell me where in the po
The doc page you're seeking is at
https://www.postgresql.org/docs/current/static/queries-with.html
Once inside the page, you can switch to another version if you wish.
Best Regards,
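For reference, the "optimization barrier" the article describes is the pre-PostgreSQL-12 behavior of always materializing a common table expression, which keeps the planner from pushing outer-query predicates into it. A minimal sketch of the pattern (table and column names are hypothetical; since PostgreSQL 12 the MATERIALIZED keyword is needed to force the barrier):

```sql
-- Pre-aggregate inside the CTE; on PostgreSQL 11 and earlier the
-- planner will not push predicates from the outer query into it.
WITH x AS (
    SELECT dept_id, avg(salary) AS avg_salary
    FROM   emp
    GROUP  BY dept_id
)
SELECT d.name, x.avg_salary
FROM   x
JOIN   dept d ON d.id = x.dept_id;
```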
On Thu, Dec 28, 2017 at 4:32 PM, Melvin Davidson wrote:
>
>
> On Thu, Dec 28, 2017 at 9:22 AM, Paul Tilles
On 28/12/2017 10:16, Stephen Frost wrote:
Alvaro,
* Alvaro Herrera (alvhe...@alvh.no-ip.org) wrote:
For context: this was first reported in the Barman forum here:
https://groups.google.com/forum/#!msg/pgbarman/3aXWpaKWRFI/weUIZxspDAAJ
They are using Barman for the backups.
Ah, I see. I
Some thoughts
A tool to calculate a checksum of sorts based on the table (file) content would
provide better assurance of duplication than simply checking file size; for
example, differently vacuumed tables in each copy could have the same content
but different file sizes.
Something like these
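A sketch of the idea, checksumming a table's *logical* content (rows read in a deterministic order) rather than its data files, so physical differences such as vacuum bloat don't affect the result. SQLite stands in for PostgreSQL here to keep the example self-contained:

```python
import hashlib
import sqlite3

def table_checksum(conn, table):
    """Hash a table's rows in a deterministic order, ignoring
    physical storage details such as dead tuples or row order."""
    cur = conn.execute(f"SELECT * FROM {table} ORDER BY 1")
    h = hashlib.sha256()
    for row in cur:
        h.update(repr(row).encode())
    return h.hexdigest()

# Two copies with identical logical content but different histories:
a = sqlite3.connect(":memory:")
b = sqlite3.connect(":memory:")
for conn in (a, b):
    conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")
a.executemany("INSERT INTO t VALUES (?, ?)", [(1, "x"), (2, "y")])
# copy b gets the same rows in a different order, plus a deleted row
b.execute("INSERT INTO t VALUES (3, 'dead')")
b.executemany("INSERT INTO t VALUES (?, ?)", [(2, "y"), (1, "x")])
b.execute("DELETE FROM t WHERE id = 3")
print(table_checksum(a, "t") == table_checksum(b, "t"))  # True
```

The same approach works against PostgreSQL by checksumming the output of a deterministic query (e.g. COPY with an ORDER BY) on each side, instead of comparing file sizes.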
Greetings,
* Edson Carlos Ericksson Richter (rich...@simkorp.com.br) wrote:
> Would it be possible to include in future versions:
> 1) After start standby, standby run all WAL files until it is
> synchronized with master (current behavior)
> 3) Before getting into "accept read only queries", check if
Greetings Brent,
* Brent Wood (pcr...@yahoo.com) wrote:
> A tool to calculate a checksum of sorts based on the table (file) content
> would provide a better surety of duplication than simply checking file size -
> like differently vacuumed tables in each copy could have the same content but
> b
On 28/12/2017 16:06, Brent Wood wrote:
Some thoughts
A tool to calculate a checksum of sorts based on the table (file)
content would provide a better surety of duplication than simply
checking file size - like differently vacuumed tables in each copy
could have the same content but be
On 28/12/2017 16:26, Stephen Frost wrote:
Greetings Brent,
* Brent Wood (pcr...@yahoo.com) wrote:
A tool to calculate a checksum of sorts based on the table (file) content would
provide a better surety of duplication than simply checking file size - like
differently vacuumed tables in each
Since there have been a couple threads on the hackers list about
temporal features [1, 2], I thought I'd share an extension I've been
writing for temporal foreign keys:
https://github.com/pjungwir/time_for_keys
There is a big test suite, but right now it is still basically a
proof-of-concept, res
Thank you for the details, David and Stephen.
I am unable to recover the database associated with a user tablespace.
Please see the test case below and let me know if anyone has seen issues
while recovering a single database.
*Test case:*
1) created tablespace tblsp1.
2) created databa
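A sketch of what the first steps of such a test case might look like in SQL (the tablespace location path is hypothetical):

```sql
-- create a user tablespace at a hypothetical location
CREATE TABLESPACE tblsp1 LOCATION '/pgdata/tblsp1';

-- create a database whose default tablespace is the new one
CREATE DATABASE db1 TABLESPACE tblsp1;
```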
On 12/28/17 3:38 PM, chiru r wrote:
Thank you for the details, David and Stephen.
I am unable to recover the database associated with a user tablespace.
Please see the test case below and let me know if anyone has seen issues
while recovering a single database.
*Test case:*
1) created
Please find the details below.
postgres=# select datname, oid from pg_database;
  datname  |  oid
-----------+--------
 template0 |  13289
 postgres  |  13294
 template1 |      1
 db1       | 770161
 db2       | 770162
 db3       | 770169
(6 rows)
On Thu, Dec 28, 2017 at 4:26 PM, David Steele w
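For reference when matching these OIDs to directories, a sketch of PostgreSQL's standard on-disk layout: a database in the default tablespace lives under base/<db_oid>, while one in a user tablespace lives under pg_tblspc/<tblspc_oid>/<catalog-version dir>/<db_oid>. The data directory and tablespace OID below are hypothetical; the catalog-version directory name shown is the one PostgreSQL 9.5 uses:

```python
import os.path

def database_dir(data_dir, db_oid, tblspc_oid=None,
                 catalog_dir="PG_9.5_201510051"):
    """Return the directory holding a database's relation files."""
    if tblspc_oid is None:
        # default tablespace: <data_dir>/base/<db_oid>
        return os.path.join(data_dir, "base", str(db_oid))
    # user tablespace: resolved via the pg_tblspc symlink
    return os.path.join(data_dir, "pg_tblspc", str(tblspc_oid),
                        catalog_dir, str(db_oid))

print(database_dir("/var/lib/pgsql/data", 770161))
# /var/lib/pgsql/data/base/770161
```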
On 12/28/17 5:15 PM, chiru r wrote:
Please find the details below.
postgres=# select datname, oid from pg_database;
  datname  |  oid
-----------+--------
 template0 |  13289
 postgres  |  13294
 template1 |      1
 db1       | 770161
 db2       | 770162
 db3       | 770169
(6 rows)
That
I am unable to copy the complete backup.manifest file for security
reasons. Please find the contents below.
[backup:db]
db-catalog-version=201510051
db-control-version=942
db-id=1
db-system-id=6444557285095914282
db-version="9.5"
[backup:option]
option-archive-check=true
option-archive-copy=f
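An INI-style manifest like this can be read with Python's configparser; a sketch using only the keys shown in the excerpt above (note that quoted values keep their quotes, since configparser stores values verbatim):

```python
import configparser

manifest_text = """\
[backup:db]
db-catalog-version=201510051
db-control-version=942
db-id=1
db-system-id=6444557285095914282
db-version="9.5"

[backup:option]
option-archive-check=true
"""

parser = configparser.ConfigParser()
parser.read_string(manifest_text)
print(parser["backup:db"]["db-version"])   # "9.5" (quotes preserved)
print(parser["backup:db"]["db-id"])        # 1
```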