Hello,
Incremental backups in pgbackrest are done at the file level, which is not
very efficient. We have implemented parallelism, compression and page-level
increments (9.3+) in a fork of barman [1], but unfortunately the folks at
2ndquadrant-it are in no hurry to adopt it.
We're looking at page-level incremental backup
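For context, page-level increments work off the LSN stored in every page header: a backup only needs to copy pages whose LSN is newer than the start LSN of the previous backup. A minimal sketch of reading that LSN with the stock pageinspect extension (the table name is just a placeholder):

-- Assumes the pageinspect contrib extension; 'my_table' is hypothetical.
CREATE EXTENSION IF NOT EXISTS pageinspect;

-- LSN of the last change to block 0 of my_table; a page-level
-- incremental backup would skip this page if its LSN is older
-- than the previous backup's start LSN.
SELECT lsn FROM page_header(get_raw_page('my_table', 0));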
On 29.07.2016 08:30, Tomas Vondra wrote:
On 07/29/2016 08:04 AM, trafdev wrote:
Hi.
I have an OLAP-oriented DB (light occasional bulk writes and heavy
aggregated selects over large periods of data) based on Postgres 9.5.3.
The server runs FreeBSD 10.3 with 64GB of RAM and 2x500GB SSD (root on
On 19.09.2016 10:23, Mark Kirkwood wrote:
On 19/09/16 19:40, Job wrote:
Hello,
I would like some suggestions for optimizing Postgres 8.4 for a very
heavy volume of SELECT (with join) queries.
The queries read data; they very rarely write.
We probably need to see the schema and query
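For instance, something along these lines (the table and query here are made up) is usually enough for the list to work with; note that on 8.4 only plain EXPLAIN ANALYZE is available, the BUFFERS option only arrived in 9.0:

-- Hypothetical example of what to post: the table definition
-- plus the timed plan of one of the slow SELECTs.
\d orders

EXPLAIN ANALYZE
SELECT c.name, count(*)
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created >= '2016-01-01'
GROUP BY c.name;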
On 06.07.2016 17:06, trafdev wrote:
Wondering what your CPU/RAM characteristics are?
Intel Core i7-2600 Quad Core
32 GB DDR3 RAM
2x 3 TB SATA III HDD
HDD is:
Model Family: Seagate Barracuda XT
Device Model: ST33000651AS
Firmware Version: CC45
User Capacity: 3,000,592,982,016 bytes
On 05.07.2016 17:35, trafdev wrote:
> [..]
Without TIMESTAMP cast:
QUERY PLAN
HashAggregate  (cost=1405666.90..1416585.93 rows=335970 width=86) (actual time=4797.272..4924.015 rows=126533 loops=1)
  Group Key: subid, sid
  Buffers: shared hit=1486949
  ->  Index Scan using ix_feed_sub_aid_date
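For reference, a query of roughly this shape would yield that plan; the table and filter columns below are guesses based on the index name ix_feed_sub_aid_date:

-- Hypothetical reconstruction: an aggregate over (subid, sid) with the
-- aid/date filter satisfied by the ix_feed_sub_aid_date index scan.
SELECT subid, sid, count(*)
FROM feed_sub
WHERE aid = 3
  AND date >= '2016-06-01' AND date < '2016-07-01'
GROUP BY subid, sid;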
On 02.07.2016 02:54, trafdev wrote:
Hi.
I'm trying to build an OLAP-oriented DB based on PostgreSQL.
The user works with a paginated report in the web browser. The interface
allows fetching data for a custom date-range selection, displaying
individual rows (20-50 per page) and totals (for the entire
select
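One way to serve such a report in a single pass (a sketch only, with assumed table and column names) is to compute the page rows and the grand totals together using a window aggregate, which is evaluated over the whole filtered set before LIMIT applies:

-- Hypothetical sketch: one page of grouped rows plus totals over the
-- whole filtered date range, via a window function over the aggregate.
SELECT subid, sid,
       sum(clicks) AS clicks,
       sum(sum(clicks)) OVER () AS total_clicks
FROM feed_sub
WHERE date >= '2016-01-01' AND date < '2016-02-01'
GROUP BY subid, sid
ORDER BY subid, sid
LIMIT 50 OFFSET 0;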
Hello Ivan,
I have an application which stores a large amount of hex-encoded hash
strings (nearly 100 GB of them), which means:
* The number of distinct characters (alphabet) is limited to 16
* Each string is of the same length, 64 characters
* The strings are essentially random
Creatin
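With a fixed 16-character alphabet and a fixed 64-character length, one obvious saving (hypothetical table below) is to decode the hex into a bytea column, halving 64 characters to 32 raw bytes in both heap and index:

-- Hypothetical sketch: store the digest as 32 raw bytes, not 64 chars.
CREATE TABLE hashes (
    digest bytea PRIMARY KEY   -- 32 bytes instead of a 64-char string
);

-- md5() returns 32 hex chars, so two of them make a 64-char sample value.
INSERT INTO hashes (digest)
VALUES (decode(md5('a') || md5('b'), 'hex'));

-- Re-encode on the way out when the hex form is needed.
SELECT encode(digest, 'hex') FROM hashes;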
Hello Alessandro,
2014-12-15 17:54:07 GMT DETAIL: Failed process was running: WITH upsert
AS (update MSG set
(slot,MSG,HRV,VIS006,VIS008,IR_016,IR_039,WV_062,WV_073,IR_087,IR_097,IR_108,IR_120,IR_134,PRO,EPI,CLM,TAPE)
= (to_timestamp('201212032145',
'YYYYMMDDHH24MI'),2,'\xff','\xff','
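That statement looks like the classic pre-9.5 writable-CTE upsert. A trimmed, hypothetical reconstruction of the pattern (only two of the payload columns shown, names taken from the log line):

-- Hypothetical, shortened reconstruction of the upsert the failed
-- process was running: update first, insert only if nothing matched.
WITH upsert AS (
    UPDATE msg
    SET    hrv = '\xff', vis006 = '\xff'
    WHERE  slot = to_timestamp('201212032145', 'YYYYMMDDHH24MI')
    RETURNING *
)
INSERT INTO msg (slot, hrv, vis006)
SELECT to_timestamp('201212032145', 'YYYYMMDDHH24MI'), '\xff', '\xff'
WHERE NOT EXISTS (SELECT 1 FROM upsert);

On 9.5 and later this collapses into a single INSERT ... ON CONFLICT DO UPDATE.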
On 24.07.2012 14:51, Laszlo Nagy wrote:
* UFS is not journaled.
There is journal support for UFS as far as I know. Please have a look
at the gjournal manpage.
Yes, but gjournal works for disk devices.
That isn't completely correct! gjournal works with all GEOM devices,
which could be n