Maybe your configured memory budget exceeds the RAM available on the machine?
The problem is not in the query you are looking at, but in the settings you
are using for Postgres.
regards,
fabio pardi
On 20/05/2020 09:30, Piotr Włodarczyk wrote:
> Hi folks,
>
> We met an unexpected PostgreSQL shutdown
Maybe try slowly increasing the value of effective_io_concurrency.
Every workload is peculiar, so I suspect there is no silver bullet here. The
documentation also gives you directions along those lines...
regards,
fabio pardi
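For reference, a minimal sketch of how one might adjust this setting incrementally; the value shown is illustrative, not a recommendation, and the right number depends entirely on your storage:

```sql
-- Inspect the current value (the default is 1).
SHOW effective_io_concurrency;

-- Raise it gradually, then reload; benchmark between each step.
ALTER SYSTEM SET effective_io_concurrency = 4;
SELECT pg_reload_conf();
```

The setting can also be applied per tablespace with ALTER TABLESPACE ... SET, which is useful when fast and slow storage coexist on the same server.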
Hello,
I recently spent a bit of time benchmarking effective_io_concurrency on
Postgres.
I would like to share my findings with you:
https://portavita.github.io/2019-07-19-PostgreSQL_effective_io_concurrency_benchmarked/
Comments are welcome.
regards,
fabio pardi
a good remark, thanks. I did not think about it and I will keep it in
mind next time. Instead, I averaged the results over multiple runs, but setting
an explicit number of transactions is the way to go.
Results, by the way, were quite stable over all the runs (in terms of generated
WAL files and TPS).
regards,
fabio pardi
Hi,
Maybe of some interest for the past, present and future community, I
benchmarked the impact of wal_log_hints with and without wal_compression
enabled.
https://portavita.github.io/2019-06-14-blog_PostgreSQL_wal_log_hints_benchmarked/
comments are welcome.
regards,
fabio pardi
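For context, these are the two settings the benchmark compares, as a postgresql.conf fragment (illustrative, not tuning advice):

```
# Write full page images to WAL even for hint-bit-only changes
# (required by tools such as pg_rewind). Needs a server restart.
wal_log_hints = on

# Compress full-page images written to WAL, trading CPU for WAL volume.
wal_compression = on
```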
, but I think
that the --size-only option is probably there to speed up operations. In
that case I do not see a reason for it, since the data folder on the
standby is assumed to be empty
regards,
fabio pardi
to when the last checkpoint
occurred, if your standby allows that (hot_standby = on).
regards,
fabio pardi
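A small sketch of how to check this from the standby itself, assuming hot_standby = on so that read-only queries are allowed there:

```sql
-- true when run on a standby that is replaying WAL
SELECT pg_is_in_recovery();

-- timestamp of the last transaction replayed on the standby
SELECT pg_last_xact_replay_timestamp();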
> On Wednesday, May 29, 2019 at 11:20, Fabio Pardi
> <f.pa...@portavita.eu <mailto:f.pa...@portavita.eu>> wrote:
>
>
>
> On 5/29/19 9:
On 5/30/19 5:08 PM, Haroldo Kerry wrote:
> Hello,
>
> We are migrating our PostgreSQL 9.6.10 database (with streaming
> replication active) to a faster disk array.
> We are using this opportunity to enable checksums,
I would stay away from performing 2 big changes in one go.
mentioning
"restored log file", as many WAL files were produced since the last
checkpoint.
Hope it helps to clarify.
regards,
fabio pardi
>
> On Tuesday, May 28, 2019 at 13:54, Fabio Pardi
> <f.pa...@portavita.eu <mailto:f.pa...@portavita.eu>> wrote:
>
> H
the 30 minutes.
I would not be bothered if I were you, but you can always force a checkpoint on
the master by issuing:
CHECKPOINT;
at that stage, on the standby logs you will see the messages:
restartpoint starting: ..
restartpoint complete: ..
regards,
fabio pardi
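A minimal sketch of that sequence, run on the master; pg_control_checkpoint() is just one way to confirm the checkpoint actually completed:

```sql
-- Force an immediate checkpoint on the master.
CHECKPOINT;

-- Confirm when the last checkpoint completed.
SELECT checkpoint_time FROM pg_control_checkpoint();
```

Shortly afterwards the standby log should show the corresponding "restartpoint starting" and "restartpoint complete" lines.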
>
> On
and repmgr, and I might be
able to help you further.
regards,
fabio pardi
On 5/27/19 12:17 PM, Mariel Cherkassky wrote:
> standby_mode = 'on'
> primary_conninfo = 'host=X.X.X.X user=repmgr connect_timeout=10 '
> recovery_target_timeline = 'latest
Hi Mariel,
let's keep the list in cc...
settings look ok.
what's in the recovery.conf file then?
regards,
fabio pardi
On 5/27/19 11:23 AM, Mariel Cherkassky wrote:
> Hey,
> the configuration is the same as in the primary :
> max_wal_size = 2GB
> min_wal_size = 1GB
>
your whole configuration of your standby for easier
debug.
regards,
fabio pardi
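To compare such settings between primary and standby quickly, one can run the same query on both; the setting names listed are just examples of what is worth checking:

```sql
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('max_wal_size', 'min_wal_size', 'wal_keep_segments');
```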
On 5/27/19 10:49 AM, Mariel Cherkassky wrote:
> Hey,
> PG 9.6, I have a standalone configured. I tried to start up a secondary,
> run standby clone (repmgr). The clone process took 3 hours and during
>
From which folder are you running that? And what is the PGDATA of your
standby server?
regards,
fabio pardi
an serving them.
regards,
fabio pardi
On 1/21/19 12:24 PM, Rangaraj G wrote:
> Hi,
>
>
>
> Memory and hard ware calculation :
>
>
>
>
>
> How much memory required to achieve performance with the 6GB RAM, 8
> Core, Max connection 1100 concurrent
CREATE TABLE command you used,
you are right, there is only the command I used to transform the table
to jsonb.
Small detail, but I updated the post for clarity
and
> the complete size/structure of the JSON object, but really what a paper
> like this should include is a full script which creates all the tables,
> loads all the data, runs the analysis, calculates the results, etc.
>
Queries are shared, but without the data, sharing the rest is quite useless
in my opinion.
regards,
fabio pardi
That's a pity, I know, but I cannot do anything about it.
The queries we ran and the commands we used are mentioned in the blog
post but if you see gaps, feel free to ask.
regards,
fabio pardi
On 11/19/18 6:26 PM, Stephen Frost wrote:
> Greetings,
>
> * Fabio Pardi (f.pa...@porta
constructive feedback and perhaps fix mistakes or
inaccuracies you might find.
This is the link:
https://portavita.github.io/2018-10-31-blog_A_JSON_use_case_comparison_between_PostgreSQL_and_MongoDB/
We are open to any kind of feedback and we hope you enjoy the reading.
Regards,
Fabio Pardi and W
> What if one decides to increase block size, increasing storage
> space is not so significant, because it does not set minimum storage unit for
> a row.
>
ah, yes, correct. Now we are on the same page.
Good luck with the rest of things you are going to try out, and let us know
your findings.
On 28/09/18 11:56, Vladimir Ryabtsev wrote:
>
> > It could affect space storage, for the smaller blocks.
> But at which extent? As I understand it is not something about "alignment" to
> block size for rows? Is it only low-level IO thing with datafiles?
>
Maybe 'for the smaller blocks' was not clear.
through all this, I would first try to reload the data
with dump+restore into a new machine, and see how it behaves.
Hope it helps.
regards,
fabio pardi
>
>>> consecutive runs with SAME parameters do NOT hit the disk, only the
> first one does, subsequent ones read only from buffers
> with read speed ~130 MB/s,
> while with my query read at 1.8-2 MB/s.
> 2) iotop show higher IO % (~93-94%) with slower read speed (though it is not
> quite clear what this field is). A process from example above had ~55% IO
> with 130 MB/s while my process had ~93% with ~2MB/s.
>
I think that is because you are looking at the 'IO' column, which indicates
(from the manual):
'..the percentage of time the thread/process spent [..] while waiting on
I/O.'
> Regards,
> Vlad
>
regards,
fabio pardi
As Laurenz suggested (VACUUM FULL), you might want to move data around. You
can also try a dump + restore to narrow the problem down to the data or the disk.
- You might also want to watch the disk graph on Windows while you are
running your tests. It can show you whether data is actually being fetched
from disk or not (and, good to know, how much).
regards,
fabio pardi
data.
>
> Vlad
I think this is not accurate. If you fetch from an index, then only the
blocks containing the matching records are read from disk and therefore
cached in RAM.
regards,
fabio pardi
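One way to observe this is EXPLAIN with the BUFFERS option, which separates blocks read from disk from blocks already cached; the table and column names below are hypothetical:

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 42;
-- In the output, "Buffers: shared hit=N read=M" means M blocks came
-- from disk while N were already cached in shared_buffers.
```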
>
> "Small' range: disk read rate is around 10-11 MB/s uniformly across the
> test. Output rate was 1300-1700 rows/s. Read ratio is around 13% (why?
> Shouldn't it be ~ 100% after drop_caches?).
> "Big" range: In most of time disk read speed was about 2 MB/s bu
RAID
setup close to your needs and capabilities (more reads? more writes?
SSD? HDD? cache? ...?)
If you only have 2 disks, your only (redundant) choice is RAID 1.
regards,
fabio pardi
On 18/07/18 03:24, Neto pr wrote:
>
>> As side note: why to run a test on a setup you
On 07/17/2018 04:05 PM, Neto pr wrote:
> 2018-07-17 10:55 GMT-03:00 Fabio Pardi :
>> Also i think it makes not much sense testing on RAID 0. I would start
>> performing tests on a single disk, bypassing RAID (or, as mentioned, at
>> least disabling cache).
>>
>
disabling cache).
The findings should narrow the focus.
regards,
fabio pardi
On 07/17/2018 03:19 PM, Neto pr wrote:
> 2018-07-17 10:04 GMT-03:00 Neto pr :
>> Sorry.. I replied in the wrong message before ...
>> follows my response.
>> -
>>
>> Thanks all
running in the background?
Also, to benchmark disks I would not use a custom query, but pgbench.
Be aware: running benchmarks is a science, and therefore needs a scientific
approach :)
regards,
fabio pardi
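A minimal pgbench run looks like this; the database name, scale factor, and client counts are illustrative only:

```shell
# Initialize pgbench's test tables at scale factor 50 (roughly 750 MB).
pgbench -i -s 50 mydb

# Run a 60-second benchmark with 8 clients and 2 worker threads;
# pgbench reports the achieved TPS at the end.
pgbench -c 8 -j 2 -T 60 mydb
```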
On 07/17/2018 07:00 AM, Neto pr wrote:
> Dear,
> Some of you can help me understand