found xmin x from before relfrozenxid y

2018-10-21 Thread Johannes Graën
Hi,

after upgrading to version 11, I see the error pattern "found xmin x
from before relfrozenxid y" in different databases on different hosts.

From https://www.postgresql.org/docs/10/static/release-10-3.html, I
learned that this was an error caused by pg_upgrade, which apparently
had been fixed in 10.3. This page also states that refreshing the
affected materialized view non-concurrently would fix the problem.

My question is now how to infer the affected materialized view from the
error message. Is there a way to tell which one to refresh from the xmin
or relfrozenxid value?

Best
  Johannes





Postgres 10, slave not catching up with master

2018-10-21 Thread Boris Sagadin
Hello,

I have a database running on i3.8xlarge (256GB RAM, 32 CPU cores, 4x 1.9TB
NVMe drive) AWS instance with about 5TB of disk space occupied, ext4,
Ubuntu 16.04.

Multi-tenant DB with about 4 tables, insert heavy.

I started a new slave with identical HW specs, SR. DB started syncing from
master, which took about 4 hours, then it started applying the WALs.
However, it seems it can't catch up. The delay is still around 3 hours
(measured with now() - pg_last_xact_replay_timestamp()), even a day later.
It fluctuates by a few hundred seconds either way, but seems to float
around the 3-hour mark.

Disk IO is low at about 10%, measured with iostat, no connected clients,
recovery process is at around 90% CPU single core usage.

Tried tuning various parameters, but to no avail. The only thing I found
suspicious is that stracing the recovery process constantly produces many
errors such as:

lseek(428, 0, SEEK_END) = 780124160
lseek(30, 0, SEEK_END)  = 212992
read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource temporarily
unavailable)
lseek(680, 0, SEEK_END) = 493117440
read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource temporarily
unavailable)
lseek(774, 0, SEEK_END) = 583368704

...[snip]...

read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource temporarily
unavailable)
lseek(774, 0, SEEK_END) = 583368704
read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource temporarily
unavailable)
lseek(277, 0, SEEK_END) = 502882304
lseek(6, 516096, SEEK_SET)  = 516096
read(6,
"\227\320\5\0\1\0\0\0\0\340\7\246\26\274\0\0\315\0\0\0\0\0\0\0}\0178\5&/\260\r"...,
8192) = 8192
read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource temporarily
unavailable)
lseek(735, 0, SEEK_END) = 272809984
read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource temporarily
unavailable)
lseek(277, 0, SEEK_END) = 502882304

ls -l fd/9
lr-x------ 1 postgres postgres 64 Oct 21 06:21 fd/9 -> pipe:[46358]


Perf top on recovery produces:

  27.76%  postgres            [.] pglz_decompress
   9.90%  [kernel]            [k] entry_SYSCALL_64_after_swapgs
   7.09%  postgres            [.] hash_search_with_hash_value
   4.26%  libpthread-2.23.so  [.] llseek
   3.64%  libpthread-2.23.so  [.] __read_nocancel
   2.80%  [kernel]            [k] __fget_light
   2.67%  postgres            [.] 0x0034d3ba
   1.85%  [kernel]            [k] ext4_llseek
   1.84%  postgres            [.] pg_comp_crc32c_sse42
   1.44%  postgres            [.] hash_any
   1.35%  postgres            [.] 0x0036afad
   1.29%  postgres            [.] MarkBufferDirty
   1.21%  postgres            [.] XLogReadRecord
[...]

Tried changing the process limits with prlimit to unlimited, but no change.

I can turn off the WAL compression but I doubt this is the main culprit.
Any ideas appreciated.

Regards,
Boris


Re: Postgres 10, slave not catching up with master

2018-10-21 Thread Andy Colson

On 10/21/18 2:06 AM, Boris Sagadin wrote:

Hello,

I have a database running on i3.8xlarge (256GB RAM, 32 CPU cores, 4x 1.9TB NVMe 
drive) AWS instance with about 5TB of disk space occupied, ext4, Ubuntu 16.04.

Multi-tenant DB with about 4 tables, insert heavy.

I started a new slave with identical HW specs, SR. DB started syncing from 
master, which took about 4 hours, then it started applying the WALs. However, 
it seems it can't catch up. Delay is still around 3 hours (measured with now() 
- pg_last_xact_replay_timestamp()), even a day later. It goes a few 100s up and 
down, but it seems to float around 3h mark.

Disk IO is low at about 10%, measured with iostat, no connected clients, 
recovery process is at around 90% CPU single core usage.

Tried tuning the various parameters, but with no avail. Only thing I found 
suspicious is stracing the recovery process constantly produces many errors 
such as:

lseek(428, 0, SEEK_END) = 780124160
lseek(30, 0, SEEK_END)  = 212992
read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource temporarily 
unavailable)
lseek(680, 0, SEEK_END) = 493117440
read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource temporarily 
unavailable)
lseek(774, 0, SEEK_END) = 583368704

...[snip]...

read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource temporarily 
unavailable)
lseek(774, 0, SEEK_END) = 583368704
read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource temporarily 
unavailable)
lseek(277, 0, SEEK_END) = 502882304
lseek(6, 516096, SEEK_SET)  = 516096
read(6, 
"\227\320\5\0\1\0\0\0\0\340\7\246\26\274\0\0\315\0\0\0\0\0\0\0}\0178\5&/\260\r"...,
 8192) = 8192
read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource temporarily 
unavailable)
lseek(735, 0, SEEK_END) = 272809984
read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource temporarily 
unavailable)
lseek(277, 0, SEEK_END) = 502882304

ls -l fd/9
lr-x-- 1 postgres postgres 64 Oct 21 06:21 fd/9 -> pipe:[46358]


Perf top on recovery produces:

  27.76%  postgres    [.] pglz_decompress
    9.90%  [kernel]    [k] entry_SYSCALL_64_after_swapgs
    7.09%  postgres    [.] hash_search_with_hash_value
    4.26% libpthread-2.23.so   [.] llseek
    3.64% libpthread-2.23.so   [.] __read_nocancel
    2.80%  [kernel]    [k] __fget_light
    2.67%  postgres    [.] 0x0034d3ba
    1.85%  [kernel]    [k] ext4_llseek
    1.84%  postgres    [.] pg_comp_crc32c_sse42
    1.44%  postgres    [.] hash_any
    1.35%  postgres    [.] 0x0036afad
    1.29%  postgres    [.] MarkBufferDirty
    1.21%  postgres    [.] XLogReadRecord
[...]

Tried changing the process limits with prlimit to unlimited, but no change.

I can turn off the WAL compression but I doubt this is the main culprit. Any 
ideas appreciated.

Regards,
Boris



I don't have an answer, but at a guess, decompress is taking the most time, and 
the process spewing "Resource temporarily unavailable" might just be sitting 
around waiting for the decompress to finish.

I'd say yea, definitely try disabling WAL compression.  I'm really curious what 
that would do.

I assume you are using streaming replication?

-Andy



Re: found xmin x from before relfrozenxid y

2018-10-21 Thread Tom Lane
Johannes Graën  writes:
> after upgrading to version 11, I see the error pattern "found xmin x
> from before relfrozenxid y" in different databases on different hosts.
> From https://www.postgresql.org/docs/10/static/release-10-3.html, I
> learned that this was an error caused by pg_upgrade, which apparently
> had been fixed in 10.3. This page also states that refreshing the
> affected materialized view non-concurrently would fix the problem.
> My question is now how to infer the affected materialized view from the
> error message. Is there a way to tell which one to refresh from the xmin
> or relfrozenxid value?

No :-(.  I wonder why in the world we didn't make that error message
include the relation and block number the tuple was found in.

(Well, I see the short answer: the code layer throwing the error
doesn't know.  But that could be fixed easily enough.)

In the meantime, the only answer I can think of offhand is to manually
do VACUUM FREEZE on each of your MVs, and then refresh anything that
shows up with an error.

regards, tom lane



Re: found xmin x from before relfrozenxid y

2018-10-21 Thread Johannes Graën
Thanks for your answer.

On 21/10/2018 16.24, Tom Lane wrote:
> In the meantime, the only answer I can think of offhand is to manually
> do VACUUM FREEZE on each of your MVs, and then refresh anything that
> shows up with an error.

Since I have so many of them, I decided to go for a quick-and-dirty
solution (what about REFRESH ALL MATERIALIZED VIEWS in the future?):

> DO
> $$
> DECLARE command text;
> BEGIN
> FOR command IN SELECT 'REFRESH MATERIALIZED VIEW '|| 
> nspname||'.'||relname||';'
> FROM pg_catalog.pg_class c
> LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
> WHERE c.relkind = 'm'
> LOOP
> RAISE NOTICE '%', command;
> EXECUTE command;
> END LOOP;
> END
> $$;







Re: [proposal] pg_stat_statements: extension timing instrumentation

2018-10-21 Thread legrand legrand
Another way is to use "waits" reporting ...
something like :

pgss_store (...)
pgstat_report_wait_start(PG_WAIT_EXTENSION);
...
pgstat_report_wait_end();

gives waits of type "Extension", name "Extension" when spending time in this
part of pgss.

This can help those working with waits sampling
as found in pg_stat_activity, pg_wait_sampling or pgsentinel.

note: some of the contentions are already reported as  "LWLock" /
"pg_stat_statements"

Regards
PAscal




--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: no queryId in post_parse_analyze hook when row is locked

2018-10-21 Thread legrand legrand
Problem in pgsentinel has been identified, and should be fixed soon.

it has nothing to do with post_parse_analyze hook.

Sorry for the noise.

Regards
PAscal






Re: archive items not in correct section order

2018-10-21 Thread Tim Clarke

On 28/08/2018 21:51, Tom Lane wrote:

Hm, could I trouble you for a self-contained test case? I tried to
flesh it out as attached, but I'm not seeing any error with this.
So there must be some other moving part ...

regards, tom lane





Tom, the materialized view in question was dependent on another 
materialized view (pointlessly). I've changed the underlying view to a 
simple one and the problem message "archive items not in correct section 
order" doesn't appear.


Simply nesting the materialized views doesn't cause the issue though so 
I'm no further forward, alas:


drop schema rating cascade;
drop schema r cascade;

create schema rating;
create table rating.cy (f1 int, cid int, ye int);

create schema r;

set search_path = r;

create table c(id int);
create table f(id int, cid int, ye int, ytext text);
create table i(gid int);
create table o(id int);
create table p(id int);
create table wb(cid int, fid int, prop float8);

create materialized view rating.d AS
 SELECT c.id as cid,
    f.id as fid,
    COALESCE(( SELECT sum(
  CASE
  WHEN i.gid = 3 THEN 1
  ELSE 0
    END)::double precision / count(p.id)::double precision AS c
   FROM o,
  p,
  i
      WHERE o.id = c.id AND f.id = p.id AND o.id = i.gid)) AS fp
   FROM rating.cy, c, f
  WHERE c.id = f.id AND f.id = cy.cid AND f.ye = cy.ye
  GROUP BY c.id, f.id;

CREATE materialized VIEW rating.b AS
 SELECT
    d.cid,
    f.ytext,
    min(coalesce(
  (select
    case
  when wb.prop >= 0.333 then 4
  when wb.prop >= 0.25 then 3
  when wb.prop >= 0.15 then 2
  when wb.prop >= 0.1 then 1
  else 0
    end
  FROM r.wb
  where
    wb.cid = d.cid and
    wb.fid = f.id), 0)) as score
  FROM
    rating.cy, rating.d, f
  WHERE
    d.cid = f.cid AND f.cid = cy.cid AND f.ye = cy.ye
  GROUP BY
    1, 2
  LIMIT 1;

Tim Clarke




Re: found xmin x from before relfrozenxid y

2018-10-21 Thread Andres Freund
Hi,

On 2018-10-21 10:24:16 -0400, Tom Lane wrote:
> Johannes Graën  writes:
> > after upgrading to version 11, I see the error pattern "found xmin x
> > from before relfrozenxid y" in different databases on different hosts.
> > From https://www.postgresql.org/docs/10/static/release-10-3.html, I
> > learned that this was an error caused by pg_upgrade, which apparently
> > had been fixed in 10.3. This page also states that refreshing the
> > affected materialized view non-concurrently would fix the problem.
> > My question is now how to infer the affected materialized view from the
> > error message. Is there a way to tell which one to refresh from the xmin
> > or relfrozenxid value?
> 
> No :-(.  I wonder why in the world we didn't make that error message
> include the relation and block number the tuple was found in.

Because it was a really complicated bugfix already, I don't think the
answer is more complicated than that.


> (Well, I see the short answer: the code layer throwing the error
> doesn't know.  But that could be fixed easily enough.)

I wonder if the better approach wouldn't be to add an errcontext for
vacuum, where we continually update the block number etc. There's plenty of
different sources of corruption that'd potentially cause debug messages
or errors, and that should get most of them.

Greetings,

Andres Freund



Re: Help with list partitioning on expression

2018-10-21 Thread David Rowley
On 19 October 2018 at 02:49, Dinko Papak  wrote:
> I have created table partitioned by list on expression using timestamp
> column, so for each timestamp value function returns int and each partition
> table is for single value (range of timestamp for which function result is
> the same). This helps me to simplify querying as I do not need to always
> write date ranges. Querying (partition pruning) works nice, however when I
> attach new partition it seems to always scan whole table, although I do have
> necessary check constraint on partitioned table. I have tried to make
> timestamp column both null and not null. Also, it takes longer to attach
> partition then to add constraint itself although per my understanding those
> 2 operations should do the same scan.

It's not all that obvious, but if you have PARTITION BY LIST
(extract(minute FROM ts)) and try to attach a partition like:

CREATE TABLE mypartition PARTITION OF parted FOR VALUES IN (1);

then the partition constraint is actually (extract(minute FROM ts) IS
NOT NULL AND extract(minute FROM ts) = 1). If your CHECK constraint is
just checking `extract(minute FROM ts) = 1` then the current code in
PartConstraintImpliedByRelConstraint() is not smart enough to know
that `extract(minute FROM ts) = 1` is strict and cannot match nulls.
Perhaps that could be improved, but that's how it is today.

Likely you'll have better luck with a check constraint that explicitly
checks the function IS NOT NULL.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



RE: Help with list partitioning on expression

2018-10-21 Thread Dinko Papak
Thank you! Thank you! Thank you!



I would not have figured it out, but the solution is so simple.



Sent from Mail for Windows 10




From: David Rowley 
Sent: Sunday, October 21, 2018 6:50:47 PM
To: Dinko Papak
Cc: pgsql-general@lists.postgresql.org
Subject: Re: Help with list partitioning on expression

On 19 October 2018 at 02:49, Dinko Papak  wrote:
> I have created table partitioned by list on expression using timestamp
> column, so for each timestamp value function returns int and each partition
> table is for single value (range of timestamp for which function result is
> the same). This helps me to simplify querying as I do not need to always
> write date ranges. Querying (partition pruning) works nice, however when I
> attach new partition it seems to always scan whole table, although I do have
> necessary check constraint on partitioned table. I have tried to make
> timestamp column both null and not null. Also, it takes longer to attach
> partition then to add constraint itself although per my understanding those
> 2 operations should do the same scan.

It's not all that obvious, but if you have PARTITION BY LIST
(extract(minute FROM ts)) and try to attach a partition like:

CREATE TABLE mypartition PARTITION OF parted FOR VALUES IN (1);

then the partition constraint is actually (extract(minute FROM ts) IS
NOT NULL AND extract(minute FROM ts) = 1). If your CHECK constraint is
just checking `extract(minute FROM ts) = 1` then the current code in
PartConstraintImpliedByRelConstraint() is not smart enough to know
that `extract(minute FROM ts) = 1` is strict and cannot match nulls.
Perhaps that could be improved, but that's how it is today.

Likely you'll have better luck with a check constraint that explicitly
checks the function IS NOT NULL.

--
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: BUG: Incorrect working with POSIX locale if database in UTF-8 encoding

2018-10-21 Thread Олег Самойлов
I think the correct behavior would be to get the whole locale from postgresql.conf 
(like the backend processes do) or from the environment. Which place to take the 
locale from may be a question, but it should obviously come from only one. Do not 
take LC_CTYPE from one place (postgresql.conf) while LC_MESSAGES comes from 
another (the environment). The bug is here.

> On 18 Oct 2018, at 19:29, Tom Lane  wrote:
> 
> Олег Самойлов  writes:
>> [ postmaster's localized messages are printed as garbage if LANG is C or 
>> unset ]
> 
> I'm not quite convinced that this is a bug.  The reason it's misbehaving
> is that in the postmaster process (and, probably, non-backend children)
> LC_MESSAGES gets set to whatever you said in postgresql.conf, but LC_CTYPE
> is never changed away from what it was in the postmaster's environment.
> So if the prevailing environment setting is C/POSIX, gettext() throws up
> its hands and substitutes "?" for non-ASCII characters, because it has
> no idea which encoding to render them in.
> 
> This is sort of intentional, in that the environment LC_CTYPE ought to
> reflect the "console encoding" that you're operating in; if you run your
> terminal in say KOI8R, then you set LC_CTYPE=ru_RU.koi8r and messages
> should get printed in the encoding the terminal is expecting.
> 
> We could maybe make a case for forcing gettext to use the encoding
> implied by LC_MESSAGES if LC_CTYPE is C/POSIX, but I'm not really
> convinced that there's anything principled about that.
> 
> On the other hand, the current behavior in this situation surely
> isn't useful to anybody.  Arguably, gettext() is being pretty
> unhelpful here, but I doubt we could get them to change.
> 
> Peter, any thoughts?
> 
>   regards, tom lane




Optimizing Postgresql ILIKE while query

2018-10-21 Thread aman gupta
Hi Team,

Greetings for the day!!

Platform:

PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5
20150623 (Red Hat 4.8.5-11), 64-bit


Issue:


We have a base table which contains 22M records, and we created a view on
top of it. Querying the view with an ILIKE clause took 44 seconds, versus
20 seconds with a LIKE clause.

Query:

fm_db_custom_db=# EXPLAIN (ANALYZE, TIMING OFF)
select
destination,hostname,inputfilename,inputtime,logicalservername,outputfilename,outputtime,processinglink,source,totalinputbytes,totalinputcdrs,totaloutputbytes,totaloutputcdrs
from mmsuper.test_20m_view  where inputfilename ilike
'%SDPOUTPUTCDR_4001_BLSDP09_ADM_4997_18-10-15-02549.ASN%';

Also attaching the comparison for both ILIKE and LIKE test performed.

Expectation:

How can we optimize our ILIKE query, since it is hardcoded in the
application and we can't use any keyword other than ILIKE.

BR//
Aman Gupta
+918447611183
amangp...@gmail.com
fm_db_custom_db=# \d+ test_20m_view
                            View "mmsuper.test_20m_view"
      Column       |            Type             | Modifiers | Storage  | Description
-------------------+-----------------------------+-----------+----------+-------------
 inputfilename     | character varying(300)      |           | extended |
 source            | character varying(80)       |           | extended |
 hostname          | character varying(300)      |           | extended |
 configname        | character varying(300)      |           | extended |
 logicalservername | character varying(300)      |           | extended |
 clustername       | character varying(80)       |           | extended |
 inputtime         | timestamp without time zone |           | plain    |
 totalinputbytes   | bigint                      |           | plain    |
 totalinputcdrs    | bigint                      |           | plain    |
 destination       | character varying(80)       |           | extended |
 outputfilename    | character varying(300)      |           | extended |
 processinglink    | character varying(80)       |           | extended |
 totaloutputbytes  | bigint                      |           | plain    |
 totaloutputcdrs   | bigint                      |           | plain    |
 outputtime        | timestamp without time zone |           | plain    |
View definition:
 SELECT a67.sourceid AS inputfilename,
a67.innodename AS source,
a67.hostname,
a67.configname,
a67.logicalservername,
ci.layered_clustername AS clustername,
a67.intime AS inputtime,
a73.bytes AS totalinputbytes,
a73.cdrs AS totalinputcdrs,
a68.outnodename AS destination,
a68.destinationid AS outputfilename,
a68.outnodeid AS processinglink,
a68.bytes AS totaloutputbytes,
a68.cdrs AS totaloutputcdrs,
a68.outtime AS outputtime
   FROM test_report_auditlog2 a67
     LEFT JOIN test_report_auditlog2 a73 ON a67.destinationid::text = a73.sourceid::text AND a67.event = 67 AND a73.event = 73
     LEFT JOIN test_report_auditlog2 a80 ON a73.sourceid::text = a80.sourceid::text AND a80.event = 80
     LEFT JOIN test_report_auditlog2 a68 ON a80.destinationid::text = a68.sourceid::text AND a68.event = 68
     LEFT JOIN clusterinfo ci ON a68.hostname::text = ci.hostname::text;

fm_db_custom_db=# 


fm_db_custom_db=# EXPLAIN (ANALYZE, TIMING OFF)
fm_db_custom_db-# select destination,
fm_db_custom_db-#        hostname,
fm_db_custom_db-#        inputfilename,
fm_db_custom_db-#        inputtime,
fm_db_custom_db-#        logicalservername,
fm_db_custom_db-#        outputfilename,
fm_db_custom_db-#        outputtime,
fm_db_custom_db-#        processinglink,
fm_db_custom_db-#        source,
fm_db_custom_db-#        totalinputbytes,
fm_db_custom_db-#        totalinputcdrs,
fm_db_custom_db-#        totaloutputbytes,
fm_db_custom_db-#        totaloutputcdrs
fm_db_custom_db-# from mmsuper.test_20m_view
fm_db_custom_db-# where inputfilename ilike '%SDPOUTPUTCDR_4001_BLSDP09_ADM_4997_18-10-15-02549.ASN%';

Re: Optimizing Postgresql ILIKE while query

2018-10-21 Thread Pavel Stehule
Hi

On Mon, 22 Oct 2018 at 7:57, aman gupta  wrote:

> Hi Team,
>
> Greetings for the day!!
>
> Platform:
>
> PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5
> 20150623 (Red Hat 4.8.5-11), 64-bit
>
>
> Issue:
>
>
> We have the base table which contains 22M records and we created a view on
> top of it while querying the view with ILIKE clause it took 44 seconds and
> with LIKE Clause 20 Seconds
>
> Query:
>
> fm_db_custom_db=# EXPLAIN (ANALYZE, TIMING OFF)
> select
> destination,hostname,inputfilename,inputtime,logicalservername,outputfilename,outputtime,processinglink,source,totalinputbytes,totalinputcdrs,totaloutputbytes,totaloutputcdrs
> from mmsuper.test_20m_view  where inputfilename ilike
> '%SDPOUTPUTCDR_4001_BLSDP09_ADM_4997_18-10-15-02549.ASN%';
>
> Also attaching the comparison for both ILIKE and LIKE test performed.
>
> Expectation:
>
> How can we optimize our ILIKE query, since it is hardcoded in the
> application and we can't use any other keyword than ILIKE .
>

look on trigram index

https://www.postgresql.org/docs/11/static/pgtrgm.html

Regards

Pavel

>
> BR//
> Aman Gupta
> +918447611183
> amangp...@gmail.com
>
>