Problem with connection to host (wrong host)

2018-03-31 Thread Mike Martin
Hi
I am just setting up a postgres server, details
Host 192.168.0.3
pg_hba.conf
# TYPE  DATABASE        USER            ADDRESS                 METHOD

# "local" is for Unix domain socket connections only
local   all             all                                     peer
# IPv4 local connections:
host    all             all             192.168.0.0/32          ident
# IPv6 local connections:
host    all             all             127.0.0.1/32            ident

host    all             all             ::1/128                 ident

postgresql.conf
listen_addresses = '*'

however when I try to connect from my laptop (ip 192.168.0.2) I get

psql -h 192.168.0.3 -U usevideo -W
Password for user usevideo:
psql: FATAL:  no pg_hba.conf entry for host "192.168.0.2", user "usevideo",
database "usevideo", SSL off

So I'm a bit confused: is psql ignoring the host parameter?

thanks
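(The archive carries no reply.) A hedged observation: the /32 netmask on the 192.168.0.0 line matches only that single address, so a connection from 192.168.0.2 finds no matching entry, which is exactly the error reported. A sketch of a line that would admit the whole subnet, assuming a /24 network:

```
# TYPE  DATABASE  USER  ADDRESS         METHOD
host    all       all   192.168.0.0/24  ident
```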


Regex on field value

2018-04-14 Thread Mike Martin
Is this possible, e.g.
substring(field, regex that includes another field's value)?
Thanks
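(No reply in the archive.) A hedged note: this does work, because the pattern argument of substring() is an ordinary text expression, so it can come from another column or be concatenated at run time. A minimal sketch with hypothetical table and column names:

```sql
-- hypothetical: extract the value named by k.keyname out of t.field,
-- building the regex per row from the other table's column
SELECT substring(t.field FROM k.keyname || '=(\w+)') AS extracted
FROM data_table t
JOIN key_table k ON k.id = t.key_id;
```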


Question about getting values from range of dates

2018-06-22 Thread Mike Martin
Hi
I am looking for suggestions about the best way to resolve this problem.

Scenario

I have entries for files in the database which lack specific values (their xml
files were not generated).
These values can be obtained from what are called DTC read files: the
values are the same in the last DTC read file before the file's date and in
the next DTC read file (by date).

This code works but looks horrendous, so would appreciate any ideas.

thanks

Mike

select max(a.recordingdate), max(b.recordingdate),
       a.registration, max(b.filename)
from create_tdms a
join (select registration, recordingdate, filename
      from create_tdms
      where filename not like 'DTC%') b
  on b.registration = a.registration
where b.recordingdate > a.recordingdate and a.filename like 'DTC%'
group by a.registration
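A hedged alternative (not from the thread): a LATERAL subquery can pick the nearest DTC read file on either side of each file's date directly, which tends to read less awkwardly than the grouped self-join. A sketch against the same create_tdms table, columns as in the post:

```sql
select f.registration, f.recordingdate, f.filename,
       prev_dtc.recordingdate as dtc_before,
       next_dtc.recordingdate as dtc_after
from create_tdms f
left join lateral (             -- last DTC read file before this file's date
    select recordingdate from create_tdms d
    where d.registration = f.registration
      and d.filename like 'DTC%'
      and d.recordingdate <= f.recordingdate
    order by d.recordingdate desc limit 1
) prev_dtc on true
left join lateral (             -- next DTC read file after this file's date
    select recordingdate from create_tdms d
    where d.registration = f.registration
      and d.filename like 'DTC%'
      and d.recordingdate > f.recordingdate
    order by d.recordingdate limit 1
) next_dtc on true
where f.filename not like 'DTC%';
```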


Advice on logging strategy

2018-10-11 Thread Mike Martin
I have a question on logging strategy

I have logging set to
log_statement = 'all' on a network database, with logging set to csv so I
can import it into a logging table.

However the database is populated via a nightly routine downloading data
via a REST API using prepared statements.

This results in enormous log files which take ages to import using COPY,
because each EXECUTE statement is logged with the parameters chosen.

Is there any way around this?

I can't find any way to filter DML statements.

thanks


Re: Advice on logging strategy

2018-10-11 Thread Mike Martin
I suppose the ideal would be to log the prepared statement once, with detail
only on error, rather than once per execution.

On Thu, 11 Oct 2018 at 11:33, Rob Sargent  wrote:

>
>
> > On Oct 11, 2018, at 4:26 AM, Mike Martin  wrote:
> >
> > I have a question on logging strategy
> >
> > I have logging set to
> > log_statement = 'all' on a network database with logging set to csv so I
> can import it to a logging table
> >
> > However the database is populated via a nightly routine downloading data
> via a REST API using prepared statements
> >
> > This results in enormous log files which take ages to import using copy
> because each execute statement is logged with the parameters chosen
> >
> > Is there any way around this?
> >
> > I can't find any way to filter DML statements
> >
> > thanks
> >
> Do you want all the log lines in your logging table?
> There was a thread yesterday (10.Oct.2018) on COPY which mentioned the
> possibility of multiple processes COPYing to the same table.


Re: Advice on logging strategy

2018-10-12 Thread Mike Martin
Thanks!

On Fri, 12 Oct 2018 at 14:33, David Steele  wrote:

> On 10/11/18 11:26 AM, Mike Martin wrote:
> >
> > This results in enormous log files which take ages to import using copy
> > because each execute statement is logged with the parameters chosen
> >
> > Is there any way around this?
> >
> > I can't find any way to filter DML statements
>
> pgAudit (https://github.com/pgaudit/pgaudit) gives you fine-grain
> control over what is logged by command type, table, or user as well as a
> lot more detail.
>
> --
> -David
> da...@pgmasters.net
>
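For reference, a hedged sketch of the setup David is pointing at (settings taken from the pgAudit README, not from this thread): pgAudit's log classes let you keep DDL and role auditing while excluding bulk read/write DML, which addresses the original complaint directly.

```
# postgresql.conf (sketch; assumes the pgaudit extension is installed)
shared_preload_libraries = 'pgaudit'
pgaudit.log = 'ddl, role'      # log DDL and role changes, skip READ/WRITE DML
pgaudit.log_parameter = off    # do not log bind parameters
```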


Stored procedure with execute and returning clause

2020-08-23 Thread Mike Martin
Hi
I am having difficulty with a RETURNING clause in a stored procedure. This is
an (edited) example of where I am:

CREATE OR REPLACE PROCEDURE public.arrcopy1(
    dataarr anyarray,
    tblname text,
    cols text DEFAULT NULL::text,
    selstr text DEFAULT NULL::text,
    INOUT outarr text[] DEFAULT NULL)
LANGUAGE plpgsql
AS $BODY$
DECLARE
    insstr text;
BEGIN

insstr := $q$INSERT INTO tagdata(fileid, tagname, tagvalue)
    SELECT arr[1]::integer, arr[2]::text, string_to_array(arr[3], E'\b')
    FROM (select array_agg(v order by rn) arr
          from unnest($1) with ordinality v(v, rn)
          group by (rn - 1) / array_length($1::text[], 2)) a
    JOIN tagfile ON fileid = arr[1]::int
    RETURNING *::text[]$q$;

Then called (still inside the procedure body) as

EXECUTE insstr INTO outarr USING (dataarr);
END;
$BODY$;

This compiles as a proc

But I then get an error (this is from Perl):

DBD::Pg::db selectall_arrayref failed: ERROR:  malformed array literal:
"3182753"
DETAIL:  Array value must start with "{" or dimension information

The procedure works perfectly without the INTO clause on EXECUTE.

If I change the returning clause to
RETURNING array[fileid]

it runs, but only returns the first fileid, not all the fileids inserted.
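A hedged aside (not from the thread): EXECUTE ... INTO keeps only the first row a dynamic statement returns, which would explain the single fileid. One way around this, sketched with the thread's table names, is to aggregate inside the dynamic statement with a CTE so that only one row (an array) comes back:

```sql
EXECUTE $q$
    WITH ins AS (
        INSERT INTO tagdata (fileid, tagname, tagvalue)
        SELECT ...            -- same SELECT as in the post
        RETURNING fileid
    )
    SELECT array_agg(fileid::text) FROM ins
$q$ INTO outarr USING dataarr;
```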


thanks


Return value of CREATE TABLE

2020-09-10 Thread Mike Martin
Is this possible?
Basically I want to manage temp table status for use in a procedure.

The procedure looks at pg_catalog to get information for processing.

So basically I would like to be able to know what namespace a temp table is
created in, so that I can constrain lookup.

example

CREATE TEMP TABLE tagdata (test int,test2 numeric(10,2));
SELECT relname,relpersistence ,relnamespace
,pa.atttypid,attname,attnum
,nspname
FROM pg_catalog.pg_class pc
JOIN pg_attribute pa ON pc.oid=pa.attrelid
JOIN pg_namespace pn ON pn.oid=relnamespace
WHERE relname = 'tagdata' AND attnum>0

Which returns (when it's run for the second time in different tabs in
pgAdmin):
relname    persistence  namespace  typeid  colname     colnum  schema
"tagdata"  "p"          "2200"     "23"    "fileid"    1       "public"
"tagdata"  "p"          "2200"     "25"    "tagname"   2       "public"
"tagdata"  "p"          "2200"     "1009"  "tagvalue"  3       "public"
"tagdata"  "t"          "483934"   "23"    "test"      1       "pg_temp_10"
"tagdata"  "t"          "538079"   "23"    "test"      1       "pg_temp_13"
"tagdata"  "t"          "538079"   "1700"  "test2"     2       "pg_temp_13"

So I would like some way of knowing exactly which schema the temp table has
been created in; I can't see anything obvious.

thanks

Mike


Re: Return value of CREATE TABLE

2020-09-10 Thread Mike Martin
Thanks , exactly what I was looking for

On Thu, 10 Sep 2020 at 13:16, Christoph Moench-Tegeder 
wrote:

> ## Mike Martin (redt...@gmail.com):
>
> > So basically I would like to be able to know what namespace a temp table
> is
> > created in, so that I can constrain lookup.
>
> pg_my_temp_schema() returns the OID of the session's temporary schema
> ("or 0 if none", according to the docs).
>
> Regards,
> Christoph
>
> --
> Spare Space
>
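Applied to the query from the original post, Christoph's suggestion reduces to a one-line predicate (hedged sketch, same catalog tables as above):

```sql
SELECT relname, pa.atttypid, attname, attnum
FROM pg_catalog.pg_class pc
JOIN pg_attribute pa ON pc.oid = pa.attrelid
WHERE relname = 'tagdata'
  AND attnum > 0
  AND pc.relnamespace = pg_my_temp_schema();  -- only this session's temp table
```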


Shell Command within function

2018-10-26 Thread Mike Martin
Is this possible?
I have a script which imports csv logs into a table, and it would be useful
to truncate the log files after import.

thanks

Mike
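(No reply in the archive.) A hedged sketch of one built-in option: COPY ... FROM PROGRAM (PostgreSQL 9.3+, superuser or a member of pg_execute_server_program) runs a shell command on the server, so the import and the truncate can happen in one step. The path here is illustrative only:

```sql
-- read the log through a shell pipeline, then empty the file;
-- runs on the server, not the client
COPY postgres_log_tmp
FROM PROGRAM 'cat /mnt/pgdata/data/log/postgresql-Mon.csv && truncate -s 0 /mnt/pgdata/data/log/postgresql-Mon.csv'
WITH csv;
```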


Problem with commit in function

2018-10-30 Thread Mike Martin
I have the following function
-- FUNCTION: public.update_log()

-- DROP FUNCTION public.update_log();

CREATE OR REPLACE FUNCTION public.update_log()
RETURNS void
LANGUAGE sql
COST 100
VOLATILE
AS $BODY$

truncate table postgres_log_tmp;
COPY postgres_log_tmp FROM '/mnt/pgdata/data/log/postgresql-Mon.csv' WITH csv;
COPY postgres_log_tmp FROM '/mnt/pgdata/data/log/postgresql-Tue.csv' WITH csv;
COPY postgres_log_tmp FROM '/mnt/pgdata/data/log/postgresql-Wed.csv' WITH csv;
COPY postgres_log_tmp FROM '/mnt/pgdata/data/log/postgresql-Thu.csv' WITH csv;
COPY postgres_log_tmp FROM '/mnt/pgdata/data/log/postgresql-Fri.csv' WITH csv;
COPY postgres_log_tmp FROM '/mnt/pgdata/data/log/postgresql-Sat.csv' WITH csv;
COPY postgres_log_tmp FROM '/mnt/pgdata/data/log/postgresql-Sun.csv' WITH csv;
INSERT INTO postgres_log SELECT * FROM postgres_log_tmp
    ON CONFLICT (session_id, session_line_num) DO NOTHING;
--COMMIT;
truncate table postgres_log_tmp;

$BODY$;

ALTER FUNCTION public.update_log()
OWNER TO postgres;

If I leave the second truncate statement in, nothing is written to
postgres_log. I assume the insert doesn't finish.

Any way to force it to finish before the truncation?
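A hedged aside (not from the thread): COMMIT is not allowed inside a function, but since PostgreSQL 11 it is allowed inside a procedure invoked with CALL outside an explicit transaction block. One option is therefore to recast the function as a procedure and commit between the insert and the final truncate. A sketch reusing the statements above:

```sql
CREATE OR REPLACE PROCEDURE public.update_log_proc()
LANGUAGE plpgsql
AS $$
BEGIN
    TRUNCATE TABLE postgres_log_tmp;
    COPY postgres_log_tmp FROM '/mnt/pgdata/data/log/postgresql-Mon.csv' WITH csv;
    -- ... one COPY per day, as in the original function ...
    INSERT INTO postgres_log
        SELECT * FROM postgres_log_tmp
        ON CONFLICT (session_id, session_line_num) DO NOTHING;
    COMMIT;  -- legal in a procedure, not in a function
    TRUNCATE TABLE postgres_log_tmp;
END;
$$;

CALL public.update_log_proc();
```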


Code for getting particular day of week number from month

2018-12-11 Thread Mike Martin
Hi
For a particular task (scheduling the 2nd Monday of each month for the
coming year) I created the following query:

select to_char(min(date::date) + interval '1 week', 'DD/MM/') date
  -- gets the first date for the day of week (Monday in this case),
  -- then adds a week and finally formats it to the desired date string
from generate_series(
  '2018-12-01'::date,    -- start date
  '2020-12-01'::date,    -- end date
  '1 day'::interval
) date
where extract(dow from date) = 1
  -- sets day of week
group by (extract(year from date)*100) + extract(month from date)
  -- groups by month and year
order by cast(min(date) as date)
  -- sets order back to date
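A hedged alternative (not from the thread) that produces the second Monday of each month without scanning every day or grouping: step month by month and compute the offset to the first Monday arithmetically (dow: Monday = 1):

```sql
SELECT mon_start::date
       + ((7 + 1 - EXTRACT(dow FROM mon_start)::int) % 7)  -- days to first Monday
       + 7 AS second_monday                                 -- plus one week
FROM generate_series('2018-12-01'::date, '2020-11-01'::date,
                     interval '1 month') AS mon_start;
```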

I couldn't see anything on google so thought I'd share it

Mike


How to compare dates from two tables with blanks values

2018-12-19 Thread Mike Martin
I have a situation where I need to update dates in a primary table from
regular imports of data, eg: this is the base select query

select d.row_id, fname||lname,
       'joineddate', d.joineddate, 'joineddate', s.joineddate, 0 as bool1
from import s
join members d on d.contact_id = s.contact_id
where cast(nullif(d.joineddate, NULL) as timestamp) !=
      cast(nullif(s.joineddate, '') as timestamp)

This gives zero records, however I can't seem to get a query that works.
For non-date fields I just use
coalesce(fieldprime,'') != coalesce(fieldimport,''), which works fine but
chokes on dates where there is a blank value.

thanks in advance


Re: How to compare dates from two tables with blanks values

2018-12-21 Thread Mike Martin
thanks for this. I did get it to work using coalesce and nullif on opposite
sides of the where condition, but IS DISTINCT FROM certainly sounds a
better approach. Coming from SQL Server until last year, I had never come
across it before.

Mike

On Wed, 19 Dec 2018 at 10:57, Arnaud L.  wrote:

> Le 19/12/2018 à 11:41, Mike Martin a écrit :
> > cast(nullif(d.joineddate,NULL) as timestamp) !=
> cast(nullif(s.joineddate,'') as timestamp)
> Try with
> d.joineddate IS DISTINCT FROM s.joineddate
>
> https://www.postgresql.org/docs/current/functions-comparison.html
>
> Cheers
> --
> Arnaud
>
>
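Putting Arnaud's suggestion together with the original query gives something like this (hedged sketch; column names from the thread, and assuming s.joineddate arrives as text with '' for blanks):

```sql
select d.row_id, fname||lname, d.joineddate, s.joineddate
from import s
join members d on d.contact_id = s.contact_id
-- NULL-safe comparison: blank import dates become NULL before the cast
where d.joineddate IS DISTINCT FROM nullif(s.joineddate, '')::timestamp;
```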


Array_agg and dimensions in Array

2019-01-13 Thread Mike Martin
I have come across a problem which I can't seem to solve in a nice way.
Basically I have a (small) table of tags.
What I need to do is combine two concatenated fields with a literal value as
an array element.

First thought was using array_agg with a pre-created array as

select array_agg(ARRAY['-metadata', optname||'='||optvalue]) metaopt
from encodeopts
where alias is not null and opttype in ('tag','tagn')
group by transref, fileid

However this results in a multi-dimensional array, rather than a single
dimensioned one, which makes it impossible to join with the rest of an
array created elsewhere in the query

This works, but is very kludgy:

select ARRAY['-map_metadata','-1']||array_agg(metaopt) from
(select unnest(array_agg(ARRAY['-metadata',optname||'='||optvalue]))
metaopt from encodeopts
where  alias is not null and opttype in ('tag','tagn')
group by transref,fileid ) a

So does this

select string_to_array(
         string_agg('-metadata' || '||' || optname || '=' || optvalue, '||'),
         '||') metaopt
from encodeopts
where alias is not null and opttype in ('tag','tagn')
group by transref, fileid

but again cludgy

Any ideas appreciated
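A hedged addition (not a reply from the thread): PostgreSQL has no built-in flattening aggregate, but one can be defined once on top of array_cat, after which the unnest step disappears. The anycompatiblearray type needs PostgreSQL 13+; older versions can declare it with anyarray instead.

```sql
-- one-off definition: an aggregate that concatenates arrays end to end
CREATE AGGREGATE array_cat_agg(anycompatiblearray) (
    SFUNC = array_cat,
    STYPE = anycompatiblearray
);

-- the original query then flattens in one pass
SELECT ARRAY['-map_metadata','-1']
       || array_cat_agg(ARRAY['-metadata', optname || '=' || optvalue])
FROM encodeopts
WHERE alias IS NOT NULL AND opttype IN ('tag','tagn')
GROUP BY transref, fileid;
```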

Mike


Is it possible to have a cascade upwards query?

2019-03-27 Thread Mike Martin
As per title, is this possible?

The scenario is two tables with a linking pair of fields, where one table is
a superset of the other and the key is generated by the second table.

I would like the record in table two to be deleted when I delete the
record in table 1.
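(No reply in the archive.) A hedged sketch of the usual options, with hypothetical table and column names: if the dependency really runs "upwards", a trigger on table 1 can remove the matching row in table 2; where the foreign key can point the other way, ON DELETE CASCADE does it declaratively.

```sql
-- hypothetical schema: table2.link_id matches table1.id
CREATE FUNCTION delete_table2_row() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- remove the dependent row whenever a table1 row is deleted
    DELETE FROM table2 WHERE link_id = OLD.id;
    RETURN OLD;
END;
$$;

CREATE TRIGGER cascade_up
AFTER DELETE ON table1
FOR EACH ROW EXECUTE FUNCTION delete_table2_row();
```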

Thanks


Problem with connecting with named host parameter

2021-08-27 Thread Mike Martin
Hi
I am getting the following error when I connect with hostname on localhost

psql -h capture -U  -p 5432 
psql: error: FATAL:  no pg_hba.conf entry for host
"fe80::7285:c2ff:feb0:cd4%eth0", user "usevideo", database "usevideos", SSL
off

The pg_hba.conf is

local   all  all                   password
# IPv4 local connections:
host    all  all  127.0.0.1/24     password
host    all  all  192.0.0.0/0      password
# IPv6 local connections:
host    all  all  ::1/128          trust

If I change -h to ip address then it works, just not with hostname.
Connecting from another host also works.

The other thing I can't understand is the
"fe80::7285:c2ff:feb0:cd4%eth0"

This looks like an IPv6 host or block ID, which I can't make sense of.
This is with PG 11 and 12 on Fedora 34.


Re: Problem with connecting with named host parameter

2021-08-27 Thread Mike Martin
On Fri, 27 Aug 2021 at 15:00, Tom Lane  wrote:

> Mike Martin  writes:
> > I am getting the following error when I connect with hostname on
> localhost
>
> > psql -h capture -U  -p 5432 
> > psql: error: FATAL:  no pg_hba.conf entry for host
> > "fe80::7285:c2ff:feb0:cd4%eth0", user "usevideo", database "usevideos",
> SSL
> > off
>
> Evidently, your DNS setup is resolving the machine's name as a IPv6
> address, whereupon PG quite legitimately doesn't find a match in
> its pg_hba table.  (I suppose you are using listen_addresses = "*",
> else the server wouldn't even be listening on this address.)
>
> > This is with pg 11 and 12 on fedora 34
>
> Looking at the "hosts" entry in /etc/nsswitch.conf might help figure out
> exactly what's going on, but I'll bet a nickel that this is some
> unhelpful systemd behavior.
>
> regards, tom lane
>

Very possibly.

I seem to have resolved the issue with just the following active lines in
pg_hba.conf

host    all  all  samehost  password
host    all  all  samenet   password