On 17/07/2019 23:03, mayank rupareliya wrote:
[...]
The table and index are created with the following queries:
create table fields(user_id varchar(64), field varchar(64));
CREATE INDEX index_field ON public.fields USING btree (field);
[...]
Any particular reason for using varchar instead of text for these columns?
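In PostgreSQL, varchar(n) and text are stored identically; varchar(n) only adds a length check, so there is no performance benefit to the cap. A minimal sketch of the same table using text (assuming the 64-character limit is not a business requirement):

```sql
-- Same layout as the original table, but with text columns;
-- storage and index behavior are identical to varchar(64)
CREATE TABLE fields (
    user_id text,
    field   text
);
```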
Hi
On 2019-07-17 13:55:51 -0400, Alvaro Herrera wrote:
> Be careful with pg_buffercache though, as it can cause a hiccup in
> operation.
I think that's been fixed a few years back:
commit 6e654546fb61f62cc982d0c8f62241b3b30e7ef8
Author: Heikki Linnakangas
Date: 2016-09-29 13:16:30 +0300
On 2019-Jun-26, Justin Pryzby wrote:
> > Also, should pg_buffercache perhaps be run at the beginning and end of the
> > week, to see if there is a significant difference?
>
> Yes; buffercache can be pretty volatile, so I'd save it numerous times each at
> beginning and end of week.
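The snapshots discussed above can be taken with the summary query from the pg_buffercache documentation, which shows which relations occupy the most shared buffers (run it at the start and end of the week and compare):

```sql
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Top 10 relations by number of cached buffers in the current database
SELECT n.nspname, c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
 AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                           WHERE datname = current_database()))
JOIN pg_namespace n ON n.oid = c.relnamespace
GROUP BY n.nspname, c.relname
ORDER BY buffers DESC
LIMIT 10;
```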
Be careful with pg_buffercache though, as it can cause a hiccup in
operation.
On Wed, 26 Jun 2019 at 15:18, Tom Lane wrote:
> Alvaro Herrera writes:
> > On 2019-Jun-26, Hugh Ranalli wrote:
> >> From my research in preparing for the upgrade, I understood transparent
> >> huge pages were a good thing, and should be enabled. Is this not correct?
>
> > It is not.
>
> Yeah.
On Wed, Jul 17, 2019 at 4:04 AM mayank rupareliya wrote:
> create table fields(user_id varchar(64), field varchar(64));
> CREATE INDEX index_field ON public.fields USING btree (field);
>
> Any suggestions for improvement?
>
Reduce the number of rows by constructing a relationally normalized data
model.
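A hypothetical normalized layout for this data, as a sketch: store each distinct field value once and reference it by id (the uuid type and table/column names here are assumptions, not from the original schema):

```sql
-- Each distinct field value is stored exactly once
CREATE TABLE field_names (
    field_id serial PRIMARY KEY,
    field    text UNIQUE NOT NULL
);

-- Many-to-many link between users and field values
CREATE TABLE user_fields (
    user_id  uuid    NOT NULL,
    field_id integer NOT NULL REFERENCES field_names (field_id)
);

CREATE INDEX ON user_fields (field_id);
```

A text search then touches the small field_names table first, and the index on user_fields(field_id) only has to match compact integers rather than repeated strings.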
On 17.07.19 at 14:48, Tomas Vondra wrote:
> Either that, or try creating a covering index, so that the query can do an
> index-only scan. That might reduce the amount of IO against the table, and
> in the index the data should be located close to each other (same page or
> pages close to each other).
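For the table in question, a covering index along the lines Tomas suggests could look like this sketch (the INCLUDE clause requires PostgreSQL 11 or later; the index name is illustrative):

```sql
-- user_id is carried in the index leaf pages, so a lookup on "field"
-- can return user_id via an index-only scan without touching the heap
CREATE INDEX index_field_covering
    ON public.fields USING btree (field) INCLUDE (user_id);

-- Index-only scans depend on an up-to-date visibility map
VACUUM ANALYZE public.fields;
```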
On Wed, Jul 17, 2019 at 02:53:20PM +0300, Sergei Kornilov wrote:
> Hello
>
> Please recheck with track_io_timing = on in configuration. explain
> (analyze, buffers) with this option will report how much time we spend
> during I/O.
>
> > Buffers: shared hit=2 read=31492
>
> 31492 blocks / 65 sec ~ 480 IOPS, not bad if you are using HDD.
>
> Your query reads table data from disks (
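Sergei's suggestion can be tried directly in a session, as a sketch (track_io_timing may require superuser privileges to set; the search value is taken from the sample data below):

```sql
SET track_io_timing = on;

EXPLAIN (ANALYZE, BUFFERS)
SELECT user_id
FROM fields
WHERE field = 'Northern Arkansas College';
```

With track_io_timing enabled, plan nodes that hit storage additionally report an "I/O Timings: read=..." line in milliseconds, which separates time actually spent waiting on disk from time spent processing cached pages.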
My table holds 100M records of data like the sample below (all dummy
data). I have a btree index on the column ("field").
*Searching for any text in that column takes a long time (more than 1
minute).*

    user_id                               field
    d848f466-5e12-46e7-acf4-e12aff592241  Northern Arkansas College
    24c32757-e6a8