Re: Searching in varchar column having 100M records

2019-07-19 Thread mgbii bax
Another suggestion: try to CLUSTER the table using the index on the "field" column, then ANALYZE. If you're on a spinning disk it will also help to sort the data by "field" during the bulk insert. -- regards marie g. bacuno ii On Fri, Jul 19, 2019 at 11:39 AM Tomas Vondra wrote: > On Fri, Jul 19,
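The CLUSTER suggestion above, sketched against the table and index names quoted elsewhere in the thread (fields, index_field):

```sql
-- CLUSTER physically rewrites the table in index order; it takes an
-- ACCESS EXCLUSIVE lock for the duration and is a one-off operation:
-- PostgreSQL does not maintain the ordering as new rows arrive.
CLUSTER public.fields USING index_field;
-- Refresh planner statistics afterwards, as suggested.
ANALYZE public.fields;
```

After clustering, rows with equal "field" values sit on adjacent heap pages, which cuts random I/O for range or equality scans on that column.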

Re: Searching in varchar column having 100M records

2019-07-19 Thread Tomas Vondra
On Fri, Jul 19, 2019 at 07:43:26PM +0530, mayank rupareliya wrote: Well, you haven't shown us the execution plan, so it's hard to check why it did not help much and give you further advice. This is the latest query execution with explain after adding indexing on both columns. Aggregate (cost=

Re: Searching in varchar column having 100M records

2019-07-19 Thread Michael Lewis
On Fri, Jul 19, 2019 at 8:13 AM mayank rupareliya wrote: > Well, you haven't shown us the execution plan, so it's hard to check why > it did not help much and give you further advice. > > > This is the latest query execution with explain after adding indexing on > both columns. > > Aggregate (co

Re: Searching in varchar column having 100M records

2019-07-19 Thread mayank rupareliya
> Well, you haven't shown us the execution plan, so it's hard to check why it did not help much and give you further advice. This is the latest query execution with explain after adding indexing on both columns. Aggregate (cost=174173.57..174173.58 rows=1 width=8) (actual time=65087.657..65087.65

Re: Searching in varchar column having 100M records

2019-07-18 Thread Gavin Flower
On 18/07/2019 23:51, mayank rupareliya wrote: [...] Thanks Andreas, David, Gavin /Any particular reason for using varchar instead of text, for field?/ No. /Use UUID for the user_id./ Agreed. /[...]/ /Use of text is preferred, but I can't see it making any significant difference to performance --
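A revised table definition along the lines discussed here (text instead of varchar(64), and a native uuid for user_id); this is a sketch of the suggestions, not a schema posted in the thread:

```sql
-- uuid is stored as 16 bytes rather than a 36-character string,
-- and text behaves like varchar without the length modifier.
CREATE TABLE fields (
    user_id uuid,
    field   text
);
CREATE INDEX index_field ON public.fields USING btree (field);
```

The smaller key type shrinks both the heap and any index that carries user_id, which matters at 100M rows.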

Re: Searching in varchar column having 100M records

2019-07-18 Thread Tomas Vondra
On Thu, Jul 18, 2019 at 05:21:49PM +0530, mayank rupareliya wrote: > Please recheck with track_io_timing = on in configuration. explain (analyze,buffers) with this option will report how much time we spend during I/O > Buffers: shared hit=2 read=31492 > 31492 blocks / 65 sec ~ 480 IOPS, not

Re: Searching in varchar column having 100M records

2019-07-18 Thread mayank rupareliya
> Please recheck with track_io_timing = on in configuration. explain (analyze,buffers) with this option will report how much time we spend during I/O > Buffers: shared hit=2 read=31492 > 31492 blocks / 65 sec ~ 480 IOPS, not bad if you are using HDD > Your query reads table data from disks (w

Re: Searching in varchar column having 100M records

2019-07-17 Thread Gavin Flower
On 17/07/2019 23:03, mayank rupareliya wrote: [...] Table and index are created using the following queries:

create table fields(user_id varchar(64), field varchar(64));
CREATE INDEX index_field ON public.fields USING btree (field);

[...] Any particular reason for using varchar instead of text, for

Re: Searching in varchar column having 100M records

2019-07-17 Thread David G. Johnston
On Wed, Jul 17, 2019 at 4:04 AM mayank rupareliya wrote: > create table fields(user_id varchar(64), field varchar(64)); > CREATE INDEX index_field ON public.fields USING btree (field); > > Any suggestions for improvement? Reduce the number of rows by constructing a relationally normalized data
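One way to read the normalization advice, as a hypothetical sketch (the table and column names below are illustrative, not from the thread):

```sql
-- Store each distinct field value once; the 100M-row table then holds
-- a small integer instead of repeating long strings.
CREATE TABLE field_names (
    field_id serial PRIMARY KEY,
    field    text NOT NULL UNIQUE
);
CREATE TABLE user_fields (
    user_id  uuid NOT NULL,
    field_id integer NOT NULL REFERENCES field_names (field_id)
);
CREATE INDEX ON user_fields (field_id);
```

A search then resolves the string to a field_id in the small table and scans the narrow linking table, touching far fewer pages than a btree over 100M varchar values.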

Re: Searching in varchar column having 100M records

2019-07-17 Thread Andreas Kretschmer
On 17.07.19 at 14:48, Tomas Vondra wrote: Either that, or try creating a covering index, so that the query can do an index-only scan. That might reduce the amount of IO against the table, and in the index the data should be located close to each other (same page or pages close to each other
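A covering index as suggested above might look like this (index name assumed; INCLUDE requires PostgreSQL 11 or later):

```sql
-- INCLUDE stores user_id in the index leaf pages, so a search on
-- "field" can return user_id via an index-only scan without visiting
-- the heap (provided the visibility map is reasonably current, i.e.
-- the table is vacuumed regularly).
CREATE INDEX index_field_covering
    ON public.fields USING btree (field) INCLUDE (user_id);

-- On versions before 11, a plain multicolumn index serves the same
-- purpose:
-- CREATE INDEX ON public.fields (field, user_id);
```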

Re: Searching in varchar column having 100M records

2019-07-17 Thread Tomas Vondra
On Wed, Jul 17, 2019 at 02:53:20PM +0300, Sergei Kornilov wrote: Hello Please recheck with track_io_timing = on in configuration. explain (analyze,buffers) with this option will report how much time we spend during I/O Buffers: shared hit=2 read=31492 31492 blocks / 65 sec ~ 480 IOPS, not

Re: Searching in varchar column having 100M records

2019-07-17 Thread Sergei Kornilov
Hello Please recheck with track_io_timing = on in configuration. explain (analyze,buffers) with this option will report how much time we spend during I/O > Buffers: shared hit=2 read=31492 31492 blocks / 65 sec ~ 480 IOPS, not bad if you are using HDD Your query reads table data from disks (
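The diagnostic steps above, spelled out; the WHERE clause is an assumed form of the slow search, since the exact query is not shown in these previews:

```sql
-- Enable I/O timing for the current session (can also be set in
-- postgresql.conf); EXPLAIN output then shows "I/O Timings" lines
-- alongside the shared hit/read buffer counts.
SET track_io_timing = on;

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM fields WHERE field = 'Northern Arkansas College';
```

Comparing the reported I/O time against the total runtime shows whether the query is disk-bound (as the ~480 IOPS estimate suggests) or spending its time elsewhere.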

Searching in varchar column having 100M records

2019-07-17 Thread mayank rupareliya
My table has data like the rows below, with 100M records (all dummy data). I have a btree index on the column ("field"). Searching for any text in that column takes a long time (more than 1 minute).

user_id                               field
d848f466-5e12-46e7-acf4-e12aff592241  Northern Arkansas College
24c32757-e6a8
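For reference, the setup as quoted later in the thread, together with an assumed form of the slow search (the exact query text does not appear in these previews):

```sql
-- Schema and index as posted by the original reporter.
create table fields(user_id varchar(64), field varchar(64));
CREATE INDEX index_field ON public.fields USING btree (field);

-- Assumed shape of the search that takes over a minute on 100M rows.
SELECT user_id FROM fields WHERE field = 'Northern Arkansas College';
```

With only a btree on "field", matching rows are scattered across the heap, so even an index scan can incur tens of thousands of random page reads, which is consistent with the ~65-second timings discussed above.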