On 22/05/19 4:28 PM, Lev Kokotov wrote:
Is it efficient to use Postgres as a column store by creating one
table per column?
I would query it with something like `[...] UNION SELECT value AS
FROM WHERE value = UNION [...]` to build a row.
I'm thinking since Postgres stores tables in cont
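For illustration, a minimal sketch of the one-table-per-column layout being described (the quoted query is truncated, so the table and column names below are invented); reassembling a row would more naturally be a join on a shared id than a UNION:

-- one narrow table per "column", all keyed by the same row id
CREATE TABLE col_name  (id bigint PRIMARY KEY, value text);
CREATE TABLE col_price (id bigint PRIMARY KEY, value numeric);

-- rebuild the "row" with id 42 by joining the per-column tables
SELECT n.value AS name, p.value AS price
FROM col_name n
JOIN col_price p USING (id)
WHERE id = 42;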
On 15/04/19 2:26 PM, Mark Kirkwood wrote:
> On 10/04/19 8:20 PM, Andreas Kretschmer wrote:
>> On 10.04.19 at 07:40 Daulat Ram wrote:
>>> We have a two-node PostgreSQL 9.6 database with streaming
>>> replication which is running in a Docker
On 10/04/19 8:20 PM, Andreas Kretschmer wrote:
On 10.04.19 at 07:40 Daulat Ram wrote:
We have a two-node PostgreSQL 9.6 database with streaming replication
which is running in a Docker environment, OS Linux (Ubuntu), and we have
to migrate to PostgreSQL 11. I need your suggestions &
st
Hi, I can see two issues making you get variable results:
1/ Number of clients > scale factor
Using -c16 and -s 6 means you are largely benchmarking lock contention
for a row in the branches table (it has 6 rows in your case). So
randomness in *which* rows each client tries to lock will make f
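For context, the contention comes from pgbench's built-in TPC-B-like script: every transaction updates one row of pgbench_branches, and with -s 6 that table holds only 6 rows, so 16 clients spend much of their time queued on each other's row locks. The hot part of the default script looks roughly like this (abbreviated):

BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
UPDATE pgbench_tellers  SET tbalance = tbalance + :delta WHERE tid = :tid;
-- :bid takes only <scale factor> distinct values, so this row is the hot spot
UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
INSERT INTO pgbench_history (tid, bid, aid, delta, mtime)
  VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
END;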
If you have not amended any Postgres config parameters, then you'll get
checkpoints approx every 5 min or so. Thus using a Pgbench run time of
5min is going to sometimes miss/sometimes hit a checkpoint in progress -
which will hugely impact test results.
I tend to do Pgbench runs of about 2x chec
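One way to check the checkpoint interval, and to stretch it if you want longer runs to behave consistently (the 15min value is purely illustrative):

SHOW checkpoint_timeout;                        -- default is 5min
ALTER SYSTEM SET checkpoint_timeout = '15min';  -- example value only
SELECT pg_reload_conf();                        -- no restart needed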
Hi Neto,
In a great piece of timing my new experimental SSD arrived (WD Black 500
GB 3D NAND NVMe) [1]. Doing some runs of query 9:
- default config: 7 minutes
- work_mem=1G, max_parallel_workers_per_gather=4: 3-4 minutes
Increasing either of these much more got me OOM errors (16 GB of RAM).
Much of the time taken for this query is in writing and reading files for
sorting/hashing, so where pgsql_tmp is located hugely influences the
overall time.
regards
Mark
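For anyone reproducing this, the two settings above can be tried per session without editing postgresql.conf; the values are simply the ones quoted in the post, not recommendations:

SET work_mem = '1GB';
SET max_parallel_workers_per_gather = 4;
-- then run query 9 in the same session and compare timings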
On 20/07/18 12:33, Mark Kirkwood wrote:
FWIW:
re-running query 9 using the SSD setup as 2x crucial M550 RAID0: 10
minutes.
On 20/07/18 11:30, Mark Kirkwood wrote:
One more thought on this:
Query 9 does a lot of sorting to disk - so there will be writes for that
and all the reads for the table scans. Thus the location of your
instance's pgsql_tmp directory(s) will significantly influence results.
I'm wondering if in your HDD test the pgsql_tmp on the *S
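If you want those spill files on a particular (fast) device, temp_tablespaces controls where they land; the tablespace name and path below are examples only:

CREATE TABLESPACE fast_tmp LOCATION '/mnt/ssd/pg_temp';  -- directory must exist and be owned by the postgres user
SET temp_tablespaces = 'fast_tmp';                        -- or set it in postgresql.conf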
Ok, so you are using 1 instance and tablespaces. Also I see you are
restarting the instance between HDD and SSD tests, so all good there.
The point I made about having the OS on the SSDs means that if these
tests make your system swap, and your swap device is on the SSDs (which
it probably is
Yeah,
A +1 to telling us the model. In particular the later EVOs use TLC nand
with a small SLC cache... and when you exhaust the SLC cache the
performance can be worse than an HDD...
On 18/07/18 01:44, Nicolas Charles wrote:
Hi Neto,
You should list the SSD model also - there are plenty of
Ok, so dropping the cache is good.
How are you ensuring that you have one test setup on the HDDs and one on
the SSDs? i.e. do you have 2 Postgres instances? or are you using one
instance with tablespaces to locate the relevant tables? If the 2nd case
then you will get pollution of shared_buffer
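A sketch of the one-instance, two-tablespace setup being asked about (paths and the table name are examples only). Both copies still share the same shared_buffers, which is the cache pollution being warned about:

CREATE TABLESPACE hdd_ts LOCATION '/mnt/hdd/pg_tblspc';
CREATE TABLESPACE ssd_ts LOCATION '/mnt/ssd/pg_tblspc';

-- move the benchmark table to the device under test before each run
ALTER TABLE lineitem SET TABLESPACE ssd_ts;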
Can you post make and model of the SSD concerned? In general the cheaper
consumer grade ones cannot do sustained read/writes at anything like
their quoted max values.
regards
Mark
On 17/07/18 17:00, Neto pr wrote:
Dear,
Could some of you help me understand this?
This query plan is executed i
On 12/05/18 02:48, Evgeniy Shishkin wrote:
On May 11, 2018, at 15:11, Mark Kirkwood
<mark.kirkw...@catalyst.net.nz> wrote:
On 11/05/18 23:23, Andreas Joseph Krogh wrote:
On Wednesday 09 May 2018 at 22:00:16, Andreas Joseph Krogh
<andr...@visena.com> wrote:
On 11/05/18 23:23, Andreas Joseph Krogh wrote:
On Wednesday 09 May 2018 at 22:00:16, Andreas Joseph Krogh
<andr...@visena.com> wrote:
On Tuesday 10 April 2018 at 19:41:59, Craig James
<cja...@emolecules.com> wrote:
On Tue, Apr 10, 2018 at 12:21 AM, Andreas Josep
The 512 GB model is big enough that the SLC cache and performance is
gonna be ok. What would worry me is the lifetime: individual 512 GB 850
EVOs are rated at 150 TB over 5 years. Compare that to the Intel S3710 -
400 GB is rated at 8 PB over 5 years. These drives are fast enough so
that you *m
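A quick back-of-envelope view of what those endurance ratings allow per day, assuming even wear across the 5 years:

SELECT round(150e12 / (5 * 365) / 1e9) AS evo_850_gb_per_day,  -- ~82 GB/day
       round(8e15 / (5 * 365) / 1e9)   AS s3710_gb_per_day;    -- ~4384 GB/day, roughly 4.4 TB/day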
We have been using the Intel S3710 (or minor model variations thereof).
They have been great (consistent performance, power off safe and good
expected lifetime). Also 2 of them in RAID1 easily outperform a
reasonably large number of 10K spinners in RAID10.
Now you *can* still buy the S37xx series,
On 11/02/18 00:20, Robert Klemme wrote:
On Mon, Feb 5, 2018 at 5:22 PM, Andrew Kerber wrote:
Have them check the memory and CPU allocation of the hypervisor, make sure
it's not overallocated. Make sure the partitions for storage are aligned (see
here:
https://blogs.vmware.com/vsphere/2011/08/g
On 16/01/18 23:14, Neto pr wrote:
2018-01-15 20:04 GMT-08:00 Mark Kirkwood :
On 16/01/18 13:18, Fernando Hevia wrote:
The 6 Gb/s interface is capable of a maximum throughput of around 600
MB/s. None of your drives can achieve that so I don't think you are limited
to the interface
On 16/01/18 13:18, Fernando Hevia wrote:
The 6 Gb/s interface is capable of a maximum throughput of around 600
MB/s. None of your drives can achieve that so I don't think you are
limited to the interface speed. The 12 Gb/s interface speed advantage
kicks in when there are several drives in
On 28/11/17 07:40, Scott Marlowe wrote:
On Mon, Nov 27, 2017 at 11:23 AM, Don Seiler wrote:
Good afternoon.
We run Postgres (currently 9.2, upgrading to 9.6 shortly) in VMWare ESX
machines. We currently have effective_io_concurrency set to the default of
1. I'm told that the data volume is a
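For reference, effective_io_concurrency can be inspected and changed without a restart; it mainly affects prefetching for bitmap heap scans, and the value below is just an example:

SHOW effective_io_concurrency;
SET effective_io_concurrency = 100;  -- session-level example; put it in postgresql.conf to make it permanent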