Re: Help with writing a generate_series(tsmultirange, interval)

2021-08-01 Thread Alban Hertroys


> On 1 Aug 2021, at 3:30, Tom Lane  wrote:
> 
> =?utf-8?Q?Fran=C3=A7ois_Beausoleil?=  writes:
>> While mowing the lawn, I thought that since the syntax of multi ranges is 
>> similar to arrays, maybe I could use unnest(), but sadly, that was not to be 
>> the case:
>> # select 
>> unnest('{[2021-08-02,2021-08-04],[2021-08-07,2021-08-09)}'::tsmultirange);
>> ERROR:  function unnest(tsmultirange) does not exist
> 
> That's fixed for beta3:
> 
> regression=# select 
> unnest('{[2021-08-02,2021-08-04],[2021-08-07,2021-08-09)}'::tsmultirange);
>unnest 
> ---
> ["2021-08-02 00:00:00","2021-08-04 00:00:00"]
> ["2021-08-07 00:00:00","2021-08-09 00:00:00")
> (2 rows)
> 
> 
>   regards, tom lane

If what you need is behaviour similar to arrays, perhaps arrays of ranges suit 
your problem?

development=> select 
unnest(array['[2021-08-02,2021-08-04]'::tsrange,'[2021-08-07,2021-08-09)'::tsrange]);
unnest 
---
 ["2021-08-02 00:00:00","2021-08-04 00:00:00"]
 ["2021-08-07 00:00:00","2021-08-09 00:00:00")
(2 rows)

The drawback of that approach is probably in the operators and functions you 
have at your disposal. It seems to me, though, that several of those not available 
for arrays could be emulated using array functions such as array_position(…) for 
the contains operator, unnest() combined with tsrange functions for others, etc.
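
For instance, a containment test over an array of ranges can be emulated by 
unnesting the array and applying the regular range operators to each element. 
A minimal sketch, assuming a hypothetical table schedule(id int, periods tsrange[]):

select s.id
  from schedule s
 where exists (
   select 1
     from unnest(s.periods) as r
    where r @> timestamp '2021-08-03 12:00'
 );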


Another approach could be to store the “rules” of the schedule and generate the 
relevant portion of the multirange as a set of tsrange rows on-the-fly. That 
may well perform better than storing the entire range in a table of tsrange 
records.

I’ve done something like that for a hierarchical query on versioned items where 
I had to base how to slice through the hierarchy on a reference timestamp. That 
performed adequately on a production data warehouse, as long as you 
sufficiently constrained the inputs. You can join such a function (laterally) 
to some other data set too.
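
As a rough sketch of that idea (hypothetical names; assume the rule is simply 
"a 2-day range every week"), a set-returning function over generate_series() 
can produce the relevant tsranges on the fly and be joined laterally to some 
other data set:

create function schedule_ranges(p_from timestamp, p_to timestamp,
                                p_len interval, p_step interval)
returns setof tsrange
language sql stable as $$
  select tsrange(t, t + p_len, '[)')
    from generate_series(p_from, p_to, p_step) as t;
$$;

select e.id, r
  from events e
  cross join lateral schedule_ranges(e.valid_from, e.valid_to,
                                     interval '2 days', interval '7 days') as r;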

Regards,

Alban Hertroys
--
There is always an exception to always.








Re: postgres vacuum memory limits

2021-08-01 Thread Vijaykumar Jain
On Sun, 1 Aug 2021 at 10:27, Ayub M  wrote:
>
> Hello, when maintenance_work_mem and autovacuum_work_mem are set, my
understanding is that the vacuum and autovacuum sessions should be limited
to use the memory limits set by these parameters. But I am seeing more
memory being used than these limits by autovacuum sessions, any reason why
this would happen?
>
> Please see below examples, where maintenance_work_mem is set to 20mb and
shared_buffers is 128mb. When I see the memory for this session in top, it
shows 162mb. But when default_statistics_target is increased to 3000, the
session usage is 463mb, which is way more than 20mb maintenance_work_mem
and 128mb shared_buffer. Shouldn't the process memory be capped to 20+128mb?
>
>
> postgres=# show maintenance_work_mem ;
>  maintenance_work_mem
> --
>  20MB
> (1 row)
> postgres=# vacuum analyze mdm_context;
> VACUUM
> postgres=# show shared_buffers;
>  shared_buffers
> 
>  128MB
> (1 row)
>
>    PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
>
>  62246 postgres  20   0  422892 165996 139068 R 57.1 15.7  25:06.34 postgres: postgres postgres [local] VACUUM
>

Your assumption may be right, but I am not sure of the interpretation from
top. I have to admit I am not great at understanding top output (RES vs VIRT)
in general when it comes to limits.

I did a demo cgroup setup limiting max memory to 5MB, started psql using
cgexec, and ran vacuum with maintenance_work_mem = 1024 (1MB); it ran fine.
I am not sharing the results, as that may divert the conversation.

The vacuum process seems to collect dead tuples as a function of
maintenance_work_mem, and if it is small, it will loop many times (the run
may take longer) but respect that limit (+/- tolerance). Also, vacuum verbose
only prints CPU usage per iteration of removing dead tuples from pages, but
no memory usage, so I don't know.
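
A minimal way to reproduce that on a throwaway table (hypothetical names; the
point is only that a small maintenance_work_mem forces several index-vacuum
passes, which show up in the verbose/debug output):

create table t (col1 text);
create index on t (col1);
insert into t select md5(i::text) from generate_series(1, 200000) as i;
delete from t;                     -- leave plenty of dead tuples behind
set maintenance_work_mem = 1024;   -- 1MB, the allowed minimum
vacuum (verbose) t;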

DEBUG:  StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG:  CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG:  StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG:  vacuuming "public.t"
DEBUG:  launched 1 parallel vacuum worker for index vacuuming (planned: 1)
DEBUG:  scanned index "t_col1_idx" to remove 174518 row versions
DETAIL:  CPU: user: 0.04 s, system: 0.00 s, elapsed: 0.04 s
DEBUG:  CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG:  StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG:  starting parallel vacuum worker for bulk delete
DEBUG:  scanned index "t_col1_idx1" to remove 174518 row versions
DETAIL:  CPU: user: 0.01 s, system: 0.00 s, elapsed: 0.02 s
DEBUG:  CommitTransaction(1) name: unnamed; blockState: PARALLEL_INPROGRESS; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG:  "t": removed 174518 dead item identifiers in 1424 pages


I can be corrected, as I could not really get values from the source to
profile memory usage per function call.
From the source, I have a feeling shared_mem has nothing to do with vacuum,
but I may be wrong.

I think someone who is more aware of the process/code can throw more light on
this. But thanks for asking; I will learn something figuring this out.

FROM THE CODE COMMENTS:
 * The major space usage for LAZY VACUUM is storage for the array of dead tuple
 * TIDs.  We want to ensure we can vacuum even the very largest relations with
 * finite memory space usage.  To do that, we set upper bounds on the number of
 * tuples we will keep track of at once.
 *
 * We are willing to use at most maintenance_work_mem (or perhaps
 * autovacuum_work_mem) memory space to keep track of dead tuples.  We
 * initially allocate an array of TIDs of that size, with an upper limit that
 * depends on table size (this limit ensures we don't allocate a huge area
 * uselessly for vacuuming small tables).  If the array threatens to overflow,
 * we suspend the heap scan phase and perform a pass of index cleanup and page
 * compaction, then resume the heap scan with an empty TID array.
 *
 * If we're processing a table with no indexes, we can just vacuum each page
 * as we go; there's no need to save up multiple tuples to minimize the number
 * of index scans performed.  So we don't use maintenance_work_mem memory for
 * the TID array, just enough to hold as many heap tuples as fit on one page.
 *
 * Lazy vacuum supports parallel execution with parallel worker processes.  In
 * a parallel vacuum, we perform both index vacuum and index cleanup with
 * parallel worker processes.  Individual indexes are processed by one vacuum
 * process.  At the beginning of a lazy vacuum (at lazy_scan_heap) we prepare
 * the parallel context and initialize the DSM segment that contains shared
 * information as well as the memory space for storing dead tuples.  When
 * starting either index vacuum or index cleanup, w

ERROR: ImportError: No module named 'psutil'

2021-08-01 Thread Ganesh Korde
Hi Everyone,

Environment
OS : Linux ubuntu 4.4.0-87-generic
Database:  PostgreSQL 10.7 on x86_64-pc-linux-gnu,
Extension: plpython3u
Python version: Python 3.5.2

 I am trying to run a python function using the language plpython3u and I
am getting the below error, though I have installed psutil.

postgres=# select * from get_psutil_mem();
ERROR:  ImportError: No module named 'psutil'
CONTEXT:  Traceback (most recent call last):
  PL/Python function "get_psutil_mem", line 2, in 
from psutil import virtual_memory, swap_memory
PL/Python function "get_psutil_mem"

 Below, psutil imports and runs successfully from the interactive Python shell:

root@ubuntu:~# python
Python 3.5.2 (default, Jan 26 2021, 13:30:48)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import psutil
>>> psutil.cpu_times()
scputimes(user=139677.25, nice=755.4, system=53408.11, idle=10956992.84,
iowait=18110.06, irq=0.0, softirq=1294.34, steal=0.0, guest=0.0,
guest_nice=0.0)
>>>

Function details:

CREATE OR REPLACE FUNCTION get_psutil_mem(
    OUT total float8, OUT used float8, OUT free float8, OUT buff_cache float8,
    OUT available float8, OUT percent float8,
    OUT swap_total float8, OUT swap_used float8, OUT swap_free float8,
    OUT swap_percent float8
)
LANGUAGE plpython3u
AS $FUNCTION$
from psutil import virtual_memory, swap_memory
vm = virtual_memory()
sw = swap_memory()
return (vm.total, vm.used, vm.free, vm.buffers + vm.cached, vm.available,
        vm.percent, sw.total, sw.used, sw.free, sw.percent)
$FUNCTION$;

The above function is used by pgwatch2 to monitor memory.

*I tried installing python3.4.3 from source code, but I still get the same
error.* I also tried most of the solutions provided on the internet, but
nothing helped. The problem is with the psutil module only.

Please let me know if I am missing anything.

Any help will be much appreciated.

Regards,
Ganesh Korde.


Re: postgres vacuum memory limits

2021-08-01 Thread Tom Lane
"David G. Johnston"  writes:
> On Saturday, July 31, 2021, Ayub M  wrote:
>> But when default_statistics_target is increased to 3000, the session usage
>> is 463mb

> IIUC, the analyze process doesn’t consult maintenance_work_mem.  It simply
> creates an array, in memory, to hold the random sample of rows needed for
> computing the requested statistics.

Yeah.  A sample of N rows of the table is going to take X amount of
memory; playing with [maintenance_]work_mem isn't going to affect that.
If you're not happy with the memory consumption, the statistics target
is exactly the knob that's provided to adjust that.

In an ideal world maybe ANALYZE could work within a memory budget that's
smaller than the sample size, but I think that'd inevitably involve a
lot more I/O and significantly worse performance than what we do now.
In any case it'd require a massive rewrite that breaks a lot of
extensions, since the per-datatype APIs for ANALYZE presume in-memory
data.

Keep in mind also that large statistics targets translate to bloat
everywhere else too, since that implies larger pg_statistic entries
for the planner to consult.  So I'm not sure that focusing on ANALYZE's
usage in isolation is a helpful way to think about this.  If you can't
afford the amount of memory needed to run ANALYZE, you won't like the
downstream behavior either.
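
For example, the target can also be raised (or lowered) per column rather than
globally, which confines both the ANALYZE sample and the pg_statistic growth
to the columns that actually need it (hypothetical column name):

ALTER TABLE mdm_context ALTER COLUMN some_col SET STATISTICS 3000;
ANALYZE mdm_context;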

regards, tom lane




Re: ERROR: ImportError: No module named 'psutil'

2021-08-01 Thread Adrian Klaver

On 8/1/21 7:40 AM, Ganesh Korde wrote:

Hi Everyone,

Environment
OS : Linux ubuntu 4.4.0-87-generic
Database:  PostgreSQL 10.7 on x86_64-pc-linux-gnu,
Extension: plpython3u
Python version: Python 3.5.2

  I am trying to run a python function using the language plpython3u and 
I am getting the below error, though I have installed psutil.


postgres=# select * from get_psutil_mem();
ERROR:  ImportError: No module named 'psutil'
CONTEXT:  Traceback (most recent call last):
   PL/Python function "get_psutil_mem", line 2, in 
     from psutil import virtual_memory, swap_memory
PL/Python function "get_psutil_mem"




The above function is used by pgwatch2 to monitor memory.

*I tried installing python3.4.3 from source code but still the same 
error.* Also tried most of the solutions provided on the internet but 
nothing helped. Problem is with the psutil module only.


Please let me know if I am missing anything.


How did you install plpython3u?

How did you install psutil?
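
A quick way to check from SQL which PL/Python extensions are actually
installed (a sketch against the standard catalogs):

SELECT extname, extversion FROM pg_extension WHERE extname LIKE 'plpython%';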



Any help will be much appreciated.

Regards,
Ganesh Korde.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: ERROR: ImportError: No module named 'psutil'

2021-08-01 Thread Tom Lane
Adrian Klaver  writes:
> On 8/1/21 7:40 AM, Ganesh Korde wrote:
>> I am trying to run a python function using the language plpython3u and 
>> I am getting the below error, though I have installed psutil.
>> 
>> postgres=# select * from get_psutil_mem();
>> ERROR:  ImportError: No module named 'psutil'

> How did you install plpython3u?
> How did you install psutil?

Experimenting locally, I observe something that seems relevant:
I get that error message phrasing from python *2*.  In python 3,
an unknown module name draws "ModuleNotFoundError":

$ python3
Python 3.6.8 (default, Mar 18 2021, 08:58:41) 
[GCC 8.4.1 20200928 (Red Hat 8.4.1-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from z import virtual_memory, swap_memory
Traceback (most recent call last):
  File "", line 1, in 
ModuleNotFoundError: No module named 'z'

Now it's possible that the wording didn't change exactly at the 2/3
boundary, but I doubt it.  So it looks to me like the alleged plpython3u
language is actually invoking python 2.something, which is unlikely
to work well at all.

regards, tom lane




Re: ERROR: ImportError: No module named 'psutil'

2021-08-01 Thread Adrian Klaver

On 8/1/21 8:51 AM, Tom Lane wrote:

Adrian Klaver  writes:




Now it's possible that the wording didn't change exactly at the 2/3
boundary, but I doubt it.  So it looks to me like the alleged plpython3u
language is actually invoking python 2.something, which is unlikely
to work well at all.


In that vein the OP could run:

DO $$
import sys
plpy.notice(sys.version)
$$ LANGUAGE plpython3u;

in the database to see what plpython3u is actually pointing at?



regards, tom lane




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: ERROR: ImportError: No module named 'psutil'

2021-08-01 Thread Tom Lane
Adrian Klaver  writes:
> In that vein the OP could run:

> DO $$
>  import sys
>  plpy.notice(sys.version)
> $$ LANGUAGE plpython3u;

> in the database to see what plpython3u is actually pointing at?

+1 ... looking at sys.path in the same way would also be useful.
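
For instance (a sketch along the same lines as the DO block above):

DO $$
import sys
plpy.notice(str(sys.version))
plpy.notice(str(sys.path))
$$ LANGUAGE plpython3u;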

regards, tom lane




Re: postgres vacuum memory limits

2021-08-01 Thread Vijaykumar Jain
On Sun, 1 Aug 2021 at 20:04, Vijaykumar Jain <
vijaykumarjain.git...@gmail.com> wrote:

>
> On Sun, 1 Aug 2021 at 10:27, Ayub M  wrote:
> >
> > Hello, when maintenance_work_mem and autovacuum_work_mem are set, my
> understanding is that the vacuum and autovacuum sessions should be limited
> to use the memory limits set by these parameters. But I am seeing more
> memory being used than these limits by autovacuum sessions, any reason why
> this would happen?
> >
> > Please see below examples, where maintenance_work_mem is set to 20mb and
> shared_buffers is 128mb. When I see the memory for this session in top, it
> shows 162mb. But when default_statistics_target is increased to 3000, the
> session usage is 463mb, which is way more than 20mb maintenance_work_mem
> and 128mb shared_buffer. Shouldn't the process memory be capped to 20+128mb?
> >
> >
> > postgres=# show maintenance_work_mem ;
> >  maintenance_work_mem
> > --
> >  20MB
> > (1 row)
> > postgres=# vacuum analyze mdm_context;
> > VACUUM
> > postgres=# show shared_buffers;
> >  shared_buffers
> > 
> >  128MB
> > (1 row)
> >
> >    PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
> >
> >  62246 postgres  20   0  422892 165996 139068 R 57.1 15.7  25:06.34 postgres: postgres postgres [local] VACUUM
> >
>
> your assumption may be right, but i am not sure of the interpretation from
> top.
> I have to admit I am not great at understanding top output (RES vs VIRT)
> in general when it comes to limits.
>
>
I just tried a couple more things; maybe it helps.

Every connection (like psql in interactive mode) has an overhead of around
10MB.


postgres=# set maintenance_work_mem TO 1024;
SET
postgres=# -- 11284 this was showing in my RES mem on a fresh connection
postgres=# do $$
begin
for i in 1..20 loop
 update t set col1 = col1 || i::text;
commit;
end loop;
end; $$;
DO
postgres=# -- 394924 this was showing in my RES mem on a connection that did large updates, adding to connection cache?
postgres=# vacuum t;
VACUUM
postgres=# -- 395852 this was showing in my RES mem on a connection that did vacuum; the value is around 395M,
           -- but vacuum only took around ~1MB when maintenance_work_mem was set to 1024 (1MB)

PostgreSQL connections are process based, and a lot goes into what is held in
memory right from init. I did a pmap and lsof on the process id; it touches a
lot of files in datadir/base and datadir/global, basically the pages touched
during the session's activities.

Also, there is a huge chunk allocated at
7f233b839000 2164816K rw-s- /dev/zero (deleted)
which I think is an mmap to /dev/zero whose contents have been deleted, but
the connection has to be closed to reclaim that space. I don't know.


Re: postgres vacuum memory limits

2021-08-01 Thread Vijaykumar Jain
https://rhaas.blogspot.com/2012/01/linux-memory-reporting.html?m=1

I think this excellent blog post clears up a lot about interpreting top
output, in the context of PostgreSQL memory growth.


Automatic backup in windows

2021-08-01 Thread obi reddy
Hello everyone, I want to schedule an automatic backup on the Windows
PostgreSQL platform.

I have attached the script; if any changes are required, please correct it
and send it back. This is my humble request. Actually, I don't know how to
write the Windows script. Otherwise, if anyone already has such a script,
please send it to me.



Db name: rathi26
Backup file path: C:\Users\gangireddyobiredy\Desktop\case.sql



Thank you And Regards
Obireddy. G






@echo off
REM Build a date/time stamp for the backup file name.
REM Assumes the locale's %date% format is dd-mm-yyyy; adjust the substring
REM offsets below if your format differs.
SET day=%date:~0,2%
SET month=%date:~3,2%
SET year=%date:~6,4%
SET hh=%time:~0,2%
SET mm=%time:~3,2%

SET BACKUPDIR=C:\users_Backup
SET datestr=%day%-%month%-%year%_%hh%-%mm%
SET dir=%day%-%month%-%year%
REM Create a per-day backup directory
mkdir %BACKUPDIR%\%dir%

REM Provide database name here
SET db1=dbname1

echo datestr is %datestr%

SET BACKUP_rathi26=C:\Users\gangireddyobiredy\Desktop\case.sql
SET rathi26=%db1%%datestr%.sql


ECHO Backup file name is %rathi26%

SET PGPASSWORD=
echo on

REM Execute backup for the database (quote the path, it contains a space)
"C:\Program Files\PostgreSQL\13\bin\pg_dump.exe" -p 5432 -U postgres -v -d rathi26 > C:\Users\gangireddyobiredy\Desktop\case.sql
echo Backup Completed