Looks like Postgres will never "use" (visually) more than shared_buffers
worth of memory.
Change it to 48GB, and in your "top" output you will see memory usage
bump up to this new limit.
But it's just a "visual" change; I doubt you'll get any benefit from it.
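If you want to try that, a minimal sketch (48GB is just the figure
discussed above; shared_buffers only takes effect after a restart):

ALTER SYSTEM SET shared_buffers = '48GB';
-- restart the server, then verify:
SHOW shared_buffers;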
On 03/24/17 02:58, Pietro wrote:
Thanks Jov and Karl!
What do you think about:
primarycache=all
for SELECT queries over the same data sets?
Yes.
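For reference, primarycache is a per-dataset ZFS property (and "all" is
its default, so this mainly matters if it was previously set to
"metadata"); a minimal sketch using the dataset name quoted below:

zfs set primarycache=all dbms/ticker-9.5
zfs get primarycache dbms/ticker-9.5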
Non-default stuff...

NAME             PROPERTY       VALUE  SOURCE
dbms/ticker-9.5  compressratio  1.88x  -
dbms/ticker-9.5  mounted        yes    -
dbms/ticker-9.5  quota
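A listing like this comes from zfs get; one way to reproduce it (a
sketch, not necessarily how it was generated here):

zfs get all dbms/ticker-9.5           # every property
zfs get -s local all dbms/ticker-9.5  # only properties set on the dataset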
> - from what I can see, Postgres uses memory too carefully. I would like
> to somehow force it to keep accessed data in memory as long as possible.
> Instead I often see that even frequently accessed data is pushed out of
> the memory cache for no apparent reason.
>
This is probably a consequence of the shared_buffers limit described above.
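If you want to see what is actually being kept in shared buffers rather
than guessing from top, the pg_buffercache extension can show it; a
minimal sketch (run as a superuser):

CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- top 10 relations by number of 8KB buffers currently cached
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
 AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                           WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;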
Hi.
I have an OLAP-oriented DB (light occasional bulk writes and heavy
aggregated selects over large periods of data) based on Postgres 9.5.3.
The server is FreeBSD 10.3 with 64GB of RAM and 2x500GB SSD (root on ZFS,
mirror).
The largest table is 13GB (with a 4GB index on it); the other tables are
smaller.
Right, buffers are not rows, but that's still 8 times fewer...
The table I'm reading from is already aggregated on a daily basis (so
there is no way to aggregate it further).
Will extending the page size to, say, 128K improve performance?
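As an aside, PostgreSQL's page size is fixed at build time (8KB by
default, 32KB maximum), so 128K pages aren't an option; the ZFS
recordsize underneath is tunable at any time, though. A sketch of both
knobs (the dataset name is reused from the zfs output above purely for
illustration):

# PostgreSQL block size can only be set when building from source:
./configure --with-blocksize=32
# ZFS recordsize is a per-dataset property (128K is the ZFS default):
zfs set recordsize=128K dbms/ticker-9.5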
On 07/19/16 07:41, Jim Nasby wrote:
On 7/19/16 9:28 AM, trafdev wrote:
The difference is: you're fetching/grouping 8 times fewer rows than I am:
Huh? The explain output certainly doesn't show that.
Why not?
My output:
Buffers: shared hit=1486949
Torsten's output:
Buffers: shared hit=155711
This is the amount of rows fetched for further processing (when all data
is in memory).
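(For scale: shared buffer pages are 8KB by default, so those counters
correspond to roughly 1486949 x 8KB = ~11.3GB touched versus
155711 x 8KB = ~1.2GB, about a 9.5x difference.)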
So does that mean Postgres is not capable of scanning/aggregating fewer
than 10 mln rows and delivering the result in less than 2 seconds?
On 07/06/16 09:46, trafdev wrote:
Well, our CPU/RAM configs are almost the same...
The difference is: you're fetching/grouping 8 times fewer rows than I am.
You scan 2 mln rows; I'm scanning 16 mln (8 times more than you) in 1.8
seconds, and then spending the rest (2.3 seconds) on aggregation...
So please try to extend the date range 8 times and repeat your test.
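For example (a sketch; the table name appears later in the thread, the
aggregate and dates are hypothetical):

-- original test, one week:
SELECT date, count(*) FROM stats.feed_sub
WHERE date >= '2016-06-01' AND date < '2016-06-08'
GROUP BY date;

-- same test over 8 times the range:
SELECT date, count(*) FROM stats.feed_sub
WHERE date >= '2016-04-13' AND date < '2016-06-08'
GROUP BY date;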
On 07/06/16 08:27, Torsten Zuehlsdorff wrote:
On 06.07.2016 17:06, trafdev wrote:
Wondering, what are your CPU/RAM characteristics?
Intel Core i7
Wondering, what are your CPU/RAM characteristics?
On 07/06/16 01:35, Torsten Zuehlsdorff wrote:
On 05.07.2016 17:35, trafdev wrote:
[..]
Without TIMESTAMP cast:
QUERY PLAN
HashAggregate (cost=1405666.90..1416585.93 rows=335970 width=86)
(actual time=4797.272..4924.015 rows=126533 loops=1)
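Plans like this one (and the Buffers: lines quoted earlier) come from
EXPLAIN with the ANALYZE and BUFFERS options; a minimal sketch against
the table from this thread (the aggregate itself is hypothetical):

EXPLAIN (ANALYZE, BUFFERS)
SELECT date, count(*)
FROM stats.feed_sub
GROUP BY date;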
than 2-3 seconds) and there is no way to improve it.
On 07/05/16 04:39, Torsten Zuehlsdorff wrote:
On 02.07.2016 02:54, trafdev wrote:
> Hi.
>
> I'm trying to build an OLAP-oriented DB based on PostgreSQL.
>
> User works with a paginated report in the web-browser. Interface allows
> to fetch data for a custom date-range selection [..]
It's better, but still far from the "<2 secs" goal.
Any thoughts?
On 07/01/16 18:23, Tom Lane wrote:
trafdev writes:
CREATE INDEX ix_feed_sub_date
ON stats.feed_sub
USING brin
(date);
CREATE UNIQUE INDEX ixu_feed_sub
ON stats.feed_sub
USING btree
(date, gra
Hi.
I'm trying to build an OLAP-oriented DB based on PostgreSQL.
The user works with a paginated report in the web-browser. The interface
allows fetching data for a custom date-range selection,
displaying individual rows (20-50 per page) and totals (for the entire
selection, even rows not visible on the current page).
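A workload like this boils down to two queries per page view; a sketch
(table name from this thread, the column "clicks" and the dates are
hypothetical):

-- totals for the entire selection
SELECT count(*) AS rows_total, sum(clicks) AS clicks_total
FROM stats.feed_sub
WHERE date >= '2016-06-01' AND date < '2016-07-01';

-- one page of individual rows
SELECT *
FROM stats.feed_sub
WHERE date >= '2016-06-01' AND date < '2016-07-01'
ORDER BY date
LIMIT 50 OFFSET 0;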