Re: Tools to convert timestamp data to another time zone in PostgreSQL

2022-06-13 Thread Ilya Anfimov
On Mon, Jun 13, 2022 at 09:52:00PM +, Joel Rabinovitch wrote:
>Hi,
> 
> 
> 
>We have recently modified our application to work with PostgreSQL
>databases and schemas. We also support Oracle and SQL Server Databases.
[skipped]

> 
>As a result, we have hit an issue where we need to convert data in
>timestamp columns in existing records to reflect that the time is in UTC.
>The time zone is not specified in our timestamp columns (i.e. they are
>defined as timestamp without time zone). We need to do this for

 btw, it's not specified in timestamptz either.
 timestamptz always stores the time as UTC microseconds and displays
it in the time zone given by the session settings.
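
 For example (a sketch; both SELECTs show the same stored instant):

 SET timezone = 'UTC';
 SELECT now();
 SET timezone = 'America/New_York';
 SELECT now();  -- same value, displayed as New York local time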

>interoperability between the database engines we support.

 It's better to use the timestamptz type:

 https://wiki.postgresql.org/wiki/Don%27t_Do_This#Don.27t_use_timestamp_.28without_time_zone.29

 and this is a good time to make that change.

[skipped]

 Setting the session timezone to 'America/New_York' and then converting
the column to timestamptz should do it:

 set timezone = 'America/New_York';
 ALTER TABLE tablename ALTER COLUMN create_stamp TYPE timestamptz;

 That should work just fine (on reasonably sized tables; the table name
tablename is a placeholder).
 Other approaches are possible too, such as creating a new column and
renaming it after it has been properly filled in.
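
 A sketch of that new-column approach (again with tablename as a
placeholder; the drop and rename go in one transaction):

 ALTER TABLE tablename ADD COLUMN create_stamp_tz timestamptz;
 UPDATE tablename
    SET create_stamp_tz = create_stamp AT TIME ZONE 'America/New_York';
 BEGIN;
 ALTER TABLE tablename DROP COLUMN create_stamp;
 ALTER TABLE tablename RENAME COLUMN create_stamp_tz TO create_stamp;
 COMMIT;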

> 
>   set create_stamp = (create_stamp at time zone 'America/New_York' at
>time zone 'UTC')


> 
>where client_code = 'HOANA';




Re: lifetime of the old CTID

2022-07-06 Thread Ilya Anfimov
On Wed, Jul 06, 2022 at 02:26:00PM +0200, Matthias Apitz wrote:
> On Wednesday, July 6, 2022 at 11:45:14 +0200, Karsten Hilbert
> wrote:
> 
> > On Wed, Jul 06, 2022 at 08:18:42AM +0200, Matthias Apitz wrote:
> > 

[skipped]

> WHERE-clause of its interest ("WHERE d01status=4"), here all books which
> are lent to patrons. This gives the DB layer a CURSOR of, say, 100.000
> rows of the 3.000.000 in the table. Now the application fetches row
> by row and sees if something should be done with the row. If so, the
> DB layer must LOCK the row for update. It does so using the CTID. Of
> course there is a key in the row (d01gsi, the signature of the book),
> but it is not unique and can't be used to lock exactly this row for update.

 Then add the primary key to the result of your function.
 The primary key columns of a table can easily be found in pg_constraint.

 You could even collapse it into one field and name it CTID in the
result set, if your logic strictly requires that.
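
 A sketch of that lookup (d01buch is a hypothetical table name):

 SELECT a.attname
   FROM pg_constraint c
   JOIN pg_attribute a ON a.attrelid = c.conrelid
                      AND a.attnum = ANY (c.conkey)
  WHERE c.conrelid = 'd01buch'::regclass
    AND c.contype = 'p';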





Re: jsonb Indexing

2021-09-20 Thread Ilya Anfimov
On Mon, Sep 20, 2021 at 12:52:54PM +0200, rami...@gmail.com wrote:
> Hello Julien,
> 
> On 9/17/21 4:00 PM, Julien Rouhaud wrote:
> > Hi,
> > 
> > On Fri, Sep 17, 2021 at 9:55 PM  wrote:
> > > I was wondering what I'm doing wrong. There are steps what I've tried:
> > > 
> > > CREATE TABLE api (
> > >   jdoc jsonb
> > > );
> > > 
> > > INSERT INTO api (jdoc)
> > >   VALUES ('{
> > >   "guid": "9c36adc1-7fb5-4d5b-83b4-90356a46061a",
> > >   "name": "Angela Barton",
> > >   "is_active": true,
> > >   "company": "Magnafone",
> > >   "address": "178 Howard Place, Gulf, Washington, 702",
> > >   "registered": "2009-11-07T08:53:22 +08:00",
> > >   "latitude": 19.793713,
> > >   "longitude": 86.513373,
> > >   "tags": [
> > >   "enim",
> > >   "aliquip",
> > >   "qui"
> > >   ]
> > > }');
> > > 
> > > CREATE INDEX idxgintags ON api USING GIN ((jdoc->'tags'));
> > > 
> > > EXPLAIN ANALYZE SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc ->
> > > 'tags' ? 'qui';
> > > 
> > > And the result is
> > > 
> > > Seq Scan on api  (cost=0.00..1.02 rows=1 width=64) (actual
> > > time=0.019..0.021 rows=1 loops=1)
> > > Filter: ((jdoc -> 'tags'::text) ? 'qui'::text)
> > > 
> > > Planning Time: 0.115 ms
> > > 
> > > Execution Time: 0.047 ms
> > > 
> > > Do you know why Index Scan on idxgintag is not used?
> > Yes, because doing an index scan on a table containing a single row is
> > an order of magnitude less efficient than simply doing a sequential
> > scan.  You should try to simulate something close to your production
> > data to see something interesting.
> 
> Thank you for the tip. I've tried to generate more data. I have 2000 rows in
> the table but the query still uses sequential scan.
> 
> Seq Scan on api  (cost=0.00..131.00 rows=2000 width=64) (actual
> time=0.005..0.959 rows=2000 loops=1)
>   Filter: ((jdoc -> 'tags'::text) ? 'qui'::text)
> Planning Time: 0.064 ms
> Execution Time: 1.027 ms
> 
> Any thoughts?

 The planner expects an index selectivity of around 1 (all the rows to
be selected).
 And btw, it was right: all 2000 rows were selected.

 So trying to select something via the index would just be wasting
time, compared to a seq scan.
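
 To see the index actually being used, make the predicate selective.
A sketch, reusing the api table and idxgintags index from above:

 INSERT INTO api (jdoc)
   SELECT jsonb_build_object('tags', jsonb_build_array('other'))
     FROM generate_series(1, 100000);
 ANALYZE api;
 EXPLAIN ANALYZE
   SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc->'tags' ? 'qui';
 -- with only a handful of matching rows, the planner should now pick
 -- a Bitmap Index Scan on idxgintags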





Re: develop a extension with cpp?

2021-11-02 Thread Ilya Anfimov
On Tue, Nov 02, 2021 at 01:31:22AM +, huangning...@yahoo.com wrote:
>Hi
>Can I develop an extension in C++?
>regards!

 Generally, yes -- much like in any other language. Write the files,
use the calls and APIs available inside Postgres, compile it, and
install it into the postgres directory.

 The only C++ extension example I know of as of now is https://github.com/postgrespro/rusmorph

 And it's not an example of a well-documented extension, unfortunately.

 However, it works. It's possible that you can find some more
examples on pgxn.org

 Also, you would probably need to compile it with

USE_PGXS=1 make with_llvm=no

 While USE_PGXS=1 is the standard way to compile an extension out of
the postgres source tree, with_llvm=no is a workaround for a bug in
llvm-7 that doesn't allow JIT compilation of C++ code.
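
 Once the library is built and installed, the SQL side might look like
this (a sketch; the library name myext and the function add_one are
hypothetical, and the C++ function must be declared extern "C"):

 CREATE FUNCTION add_one(integer) RETURNS integer
     AS 'myext', 'add_one'
     LANGUAGE C STRICT;
 SELECT add_one(41);  -- 42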





Re: SELECT fails to present result rows depending on the columns to show

2021-11-23 Thread Ilya Anfimov
On Tue, Nov 23, 2021 at 09:15:06AM +0100, Matthias Apitz wrote:
> 
> Hello,
> 
> We encounter the following problem in a 13.1 server on Linux:
> 
> sisis=# select desk, feldnr from titel_worte where desk = '2' and feldnr = 
> 257;
>  desk | feldnr
> ------+--------
>  2    |    257
> (1 row)
> 
> 
> but:
> 
> sisis=# select * from titel_worte where desk = '2' and feldnr = 257;
>  desknr | feldnr | desk | deskorg | gesanz | aufanz | katkey1 | katkey2
> --------+--------+------+---------+--------+--------+---------+---------
> (0 rows)
> 
> sisis=# select desk, feldnr, deskorg from titel_worte where desk = '2' and 
> feldnr = 257;
>  desk | feldnr | deskorg
> ------+--------+---------
> (0 rows)
> 
> The table was created as:
> 
> create table titel_worte (
>   desknr  serial,
>   feldnr SMALLINT   ,
>   desk VARCHAR (245)  ,
>   deskorg VARCHAR (245)  ,
>   gesanz INTEGER   ,
>   aufanz INTEGER   ,
>   katkey1 INTEGER   ,
>   katkey2 INTEGER
>  )
>  ;
> 
> There are no messages in the serverlog when the SELECT fails to show the
> row.
> 
> What could be the reason for this? Thanks

 A broken index could. In that case the anomaly should be gone after a REINDEX TABLE.
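
 A quick way to check (a sketch: steer the planner away from the
indexes, then rebuild them):

 SET enable_indexscan = off;
 SET enable_bitmapscan = off;
 SELECT * FROM titel_worte WHERE desk = '2' AND feldnr = 257;
 -- if the row appears now, an index is at fault:
 RESET enable_indexscan;
 RESET enable_bitmapscan;
 REINDEX TABLE titel_worte;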

> 
>   matthias
> -- 
> Matthias Apitz, g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
> Public GnuPG key: http://www.unixarea.de/key.pub
> 




Re: Questions on Upgrading PostgreSQL from 15.0 to 15.9 and Setting Up Streaming Replication

2024-11-25 Thread Ilya Anfimov
On Sun, Nov 24, 2024 at 09:35:15PM +0530, Subhash Udata wrote:
>Dear PostgreSQL Community,
> 
>I have a production database setup with a primary server and a standby
>server. The database is currently running on PostgreSQL 15.0, and I plan
>to upgrade both servers to 15.9.
> 
>I have the following questions regarding the upgrade and replication
>process:
> 
> 1. Upgrade and Replication Compatibility:
> 
>   * My plan is to perform a failover, promote the standby server
> (currently 15.0) to primary, and then upgrade the old primary
> server to version 15.9.

 1) Why do you want to do a switchover first?
 You can upgrade the standby, then switch over to it.
 (You don't even need to switch back once the old primary
has been upgraded and synchronized.)


>   * After upgrading the old primary server to version 15.9, I want to
> configure it as a standby server and set up streaming replication
> with the new primary server, which will still be running version
> 15.0.
>   * Is it possible to establish streaming replication between these
> two versions (15.0 as primary and 15.9 as standby)?
> 2. Efficient Replication Setup:
> 
>   * The production database is around 1TB in size, and creating
> replication using pg_basebackup is taking more than 2-3 hours to
> complete.
>   * Is there an alternative method to set up replication without
> taking a full backup of the entire cluster but instead using only
> the WAL files that have changed on both servers?

 Well, there are some.

 pg_rewind is one of them (you should keep all the WAL files between
the switchover point and now on both servers; also, at most one
switchover/failover AFAIK; and it's a bit fragile nevertheless --
bad things can happen if you mix timelines beyond the very straight
scenario of one switchover plus pg_rewind on the old primary).

 However, I'd usually use rsync plus the low-level backup protocol

https://www.postgresql.org/docs/15/continuous-archiving.html#BACKUP-LOWLEVEL-BASE-BACKUP

 This requires some manual commands, writing backup_label and so
on -- but it looks more straightforward to me.
 (And yes, rsync uses block-level comparison and transfers only the
changed blocks.
 Setting the rsync block size to 8k could be beneficial.)
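
 The SQL side of the low-level backup is small (a sketch for
PostgreSQL 15; the label text is arbitrary):

 SELECT pg_backup_start('standby-resync', true);
 -- rsync the primary's data directory to the standby here
 SELECT * FROM pg_backup_stop();
 -- the returned labelfile contents become backup_label on the standby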

> 
>Your guidance and recommendations on these questions will be greatly
>appreciated.
> 
>Thank you for your time and support!
> 
>Best regards,
> 
>Subhash