Another suggestion: try clustering the table on the index for the
"field" column, then run ANALYZE. If you're on a spinning disk, it will
also help to sort by your search "field" during bulk insert.
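Concretely, that sequence might look like this, using the table and
index names quoted later in this thread (note that CLUSTER takes an
ACCESS EXCLUSIVE lock and rewrites the table, so schedule it
accordingly):

CLUSTER public.fields USING index_field;
ANALYZE public.fields;

-- For bulk loads, feed rows pre-sorted on the search column;
-- staging_fields is a hypothetical source table here:
INSERT INTO fields (user_id, field)
SELECT user_id, field FROM staging_fields ORDER BY field;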
--
regards
marie g. bacuno ii
On Fri, Jul 19, 2019 at 11:39 AM Tomas Vondra wrote:
> On Fri, Jul 19, [...]
On Fri, Jul 19, 2019 at 07:43:26PM +0530, mayank rupareliya wrote:
Well, you haven't shown us the execution plan, so it's hard to check why
it did not help much and give you further advice.
This is the latest query execution plan (EXPLAIN) after adding indexes
on both columns.
Aggregate (cost=174173.57..174173.58 rows=1 width=8) [...]
On Fri, Jul 19, 2019 at 8:13 AM mayank rupareliya wrote:
> Well, you haven't shown us the execution plan, so it's hard to check why
> it did not help much and give you further advice.
>
>
> This is the latest query execution plan (EXPLAIN) after adding indexes
> on both columns.
>
> Aggregate (cost=174173.57..174173.58 rows=1 width=8) [...]
Well, you haven't shown us the execution plan, so it's hard to check why
it did not help much and give you further advice.
This is the latest query execution plan (EXPLAIN) after adding indexes
on both columns.
Aggregate (cost=174173.57..174173.58 rows=1 width=8) (actual
time=65087.657..65087.65 [...]
On 18/07/2019 23:51, mayank rupareliya wrote:
[...]
Thanks Andreas, David, Gavin
> Any particular reason for using varchar instead of text, for field?
No
> use UUID for the user_id.
Agreed
[...]
Use of text is preferred, but I can't see it making any significant
difference to performance [...]
On Thu, Jul 18, 2019 at 05:21:49PM +0530, mayank rupareliya wrote:
Please recheck with track_io_timing = on in configuration. explain
(analyze, buffers) with this option will report how much time we spend
during I/O.
> Buffers: shared hit=2 read=31492
31492 blocks / 65 sec ~ 480 IOPS, not bad if you are using HDD. [...]
Please recheck with track_io_timing = on in configuration. explain
(analyze, buffers) with this option will report how much time we spend
during I/O.
> Buffers: shared hit=2 read=31492
31492 blocks / 65 sec ~ 480 IOPS, not bad if you are using HDD.
Your query reads table data from disks [...]
On 17/07/2019 23:03, mayank rupareliya wrote:
[...]
The table and index are created using the following queries.
create table fields(user_id varchar(64), field varchar(64));
CREATE INDEX index_field ON public.fields USING btree (field);
[...]
Any particular reason for using varchar instead of text, for field? [...]
On Wed, Jul 17, 2019 at 4:04 AM mayank rupareliya wrote:
> create table fields(user_id varchar(64), field varchar(64));
> CREATE INDEX index_field ON public.fields USING btree (field);
>
> Any suggestions for improvement?
>
Reduce the number of rows by constructing a relationally normalized data
model. [...]
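One minimal sketch of such a normalization, assuming the repeated field
strings account for most of the table's bulk (these table and column
names are illustrative, not from the original post):

create table field_names (
    field_id serial primary key,
    field    text unique not null
);

create table user_fields (
    user_id  uuid not null,
    field_id integer not null references field_names (field_id)
);

create index on user_fields (field_id);

Each distinct string is then stored once, the wide table shrinks to
narrow fixed-width rows, and the search touches far fewer pages.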
On 17.07.19 at 14:48, Tomas Vondra wrote:
Either that, or try creating a covering index, so that the query can do
an index-only scan. That might reduce the amount of IO against the
table, and in the index the data should be located close to each other
(same page or pages close to each other [...]
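A sketch of such a covering index, assuming the query also has to return
user_id (the INCLUDE clause needs PostgreSQL 11 or later; on older
versions a plain two-column btree serves the same purpose):

CREATE INDEX index_field_covering ON public.fields
    USING btree (field) INCLUDE (user_id);

-- PostgreSQL 10 and older:
CREATE INDEX index_field_user ON public.fields USING btree (field, user_id);

-- Index-only scans also depend on a reasonably fresh visibility map:
VACUUM ANALYZE public.fields;

Without a current visibility map the executor still has to visit heap
pages to check tuple visibility, which defeats the point of the index.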
On Wed, Jul 17, 2019 at 02:53:20PM +0300, Sergei Kornilov wrote:
Hello
Please recheck with track_io_timing = on in configuration. explain
(analyze, buffers) with this option will report how much time we spend
during I/O.
Buffers: shared hit=2 read=31492
31492 blocks / 65 sec ~ 480 IOPS, not bad if you are using HDD. [...]
Hello
Please recheck with track_io_timing = on in configuration. explain
(analyze, buffers) with this option will report how much time we spend
during I/O.
> Buffers: shared hit=2 read=31492
31492 blocks / 65 sec ~ 480 IOPS, not bad if you are using HDD.
Your query reads table data from disks [...]
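For example (the SELECT below is an illustrative stand-in, since the
full query text isn't shown in this preview; track_io_timing can also be
set per session if you have the privilege, without editing
postgresql.conf):

SET track_io_timing = on;

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM fields WHERE field = 'Northern Arkansas College';

With the setting on, plan nodes that hit storage report an extra
"I/O Timings: read=..." line, separating time spent waiting on disk from
time spent in the executor.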
My table has data like below, with 100M records (it contains all dummy
data). I have a btree index on the column ("field").
Searching for any text in that column takes a long time (more than 1
minute).
user_id                               field
d848f466-5e12-46e7-acf4-e12aff592241  Northern Arkansas College
24c32757-e6a8 [...]
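Given the Aggregate node in the plan quoted earlier in the thread, the
search in question is presumably a query along these lines (the literal
is taken from the sample row above; the exact query text isn't shown in
this preview):

SELECT count(*) FROM fields WHERE field = 'Northern Arkansas College';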