Hi,
I am just wondering: when DEFAULT CURRENT_DATE is used in the table
definition, why is the function used again in the INSERT statement
instead of relying on the default? Here is a sample:
edb=# create table date_test (id int, hiredate date default current_date);
CREATE TABLE
edb=# insert into date_tes
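The INSERT above is cut off; presumably the comparison is between supplying
the value explicitly and letting the column default fill it in. A minimal
sketch of the forms in question (values made up):

edb=# INSERT INTO date_test (id, hiredate) VALUES (1, CURRENT_DATE);  -- explicit value
edb=# INSERT INTO date_test (id) VALUES (2);                          -- hiredate falls back to the column default
edb=# INSERT INTO date_test (id, hiredate) VALUES (3, DEFAULT);       -- DEFAULT keyword, same effect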
Thanks for the clarification.
Regards
On Thu, 5 Sep 2024 at 11:15, Abdul Qoyyuum
wrote:
> I believe that's the only way to do it.
>
> On Thu, Sep 5, 2024 at 12:09 PM Muhammad Ikram wrote:
>
>>
>> Hi,
>>
>> I want to change email for my PostgreSQL c
Hi,
I want to change the email address for my PostgreSQL community subscriptions. Is there
a way to do it without unsubscribing and then subscribing with a new email?
--
Regards,
Muhammad Ikram
subscribers vs
what exists on the publisher, then here is something from my previous
experience. We used to have a data-validation tool for checking tables/rows
across the publisher and subscriber. We also used pg_dump for another tool that was
meant for making copies of schemas.
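For reference, a schema-only copy of the kind such a tool would take can be
produced with pg_dump; the host, database and schema names below are
placeholders:

  pg_dump -h publisher_host -U postgres -d mydb \
      --schema-only --schema=app_schema -f app_schema.sql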
Regards,
Muhammad Ikram
On Mo
on to catch up.
Regards,
Muhammad Ikram
On Sun, Sep 1, 2024 at 9:22 PM Shaheed Haque wrote:
> Since nobody more knowledgeable has replied...
>
> I'm very interested in this area and still surprised that there is no
> official/convenient/standard way to approach this (see
> h
erver package" is extremely common though.
>
> Correct. The missing part is that in Debian/Ubuntu packaging when you
> use psql you are actually doing:
>
> ls -al /usr/bin/psql
> lrwxrwxrwx 1 root root 37 Aug 8 07:37 /usr/bin/psql ->
> ../share/postgresql-common/pg_wrapper
>
> The Debian packaging routes most things through
> pg_wrapper/postgresql-common, a Perl script that does the magic of
> finding the correct binaries for each Postgres version.
>
> >
> > regards, tom lane
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>
>
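As a side note, postgresql-common also installs cluster-aware helpers, so a
specific version/cluster can be addressed explicitly; a small sketch (the
cluster name 16/main is an assumption):

  pg_lsclusters                      # list installed clusters with their versions and ports
  psql --cluster 16/main -d mydb     # tell pg_wrapper which cluster's psql to run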
--
Muhammad Ikram
Fire Fox and Thunderbird. ArbolOne is composed of
>> students and volunteers dedicated to providing free services to charitable
>> organizations. ArbolOne on Java Development is in progress
>>
>
--
Muhammad Ikram
It will record all statements in the logs. If you are concerned about query
times, then you may use pg_stat_statements.
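For example, with the extension loaded via shared_preload_libraries,
aggregated timings can be queried instead of logging every statement; a
minimal sketch (column names as of PostgreSQL 13+):

  CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

  -- top 10 statements by total execution time
  SELECT query, calls, total_exec_time, mean_exec_time
  FROM pg_stat_statements
  ORDER BY total_exec_time DESC
  LIMIT 10;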
Muhammad Ikram
On Tue, 20 Aug 2024 at 12:19, sud wrote:
>
> On Mon, Aug 19, 2024 at 1:25 PM Muhammad Ikram wrote:
>
>> Hi Sud,
>>
>> Please ma
ny possible way(example query tracing etc) to get the
> underlying system queries which gets triggered as part of the main insert
> query? For example in above scenario, postgres must be executing some query
> to check if the incoming row to the child table already exists in the
> parent table or not?
>
>
> Regards
>
> Sud
>
--
Muhammad Ikram
tgresql.conf
primary_conninfo = 'host=primary_server_ip port=5432 user=repl password=your_password'
restore_command = 'copy \\path_to_archive\\%f %p'   # if you are using archive logs
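The standby data directory itself is usually created beforehand with
pg_basebackup; a sketch reusing the connection details above, with an assumed
data directory path:

  pg_basebackup -h primary_server_ip -p 5432 -U repl \
      -D /var/lib/postgresql/data -R -X stream -P
  # -R writes standby.signal and primary_conninfo into the new data directory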
Hope I did not forget any step.
Regards,
Muhammad Ikram
On Wed, Aug 7, 2024 at 6:11 AM Va
> # replication privilege.
> local   replication   all                   peer
> host    replication   all   10.10.20.7/32   scram-sha-256
> host    replication   all   127.0.0.1/32    ident
> host    replication   all
.
Regards,
Ikram
On Fri, Jul 26, 2024 at 11:05 AM sivapostg...@yahoo.com <
sivapostg...@yahoo.com> wrote:
> I've only one instance of PG in that server. Means only one
> postgresql.conf for both databases.
>
>
> On Friday, 26 July, 2024 at 11:12:34 am IST, Muhammad Ikram &l
-> Index Scan using
> cl_student_name_ix4 on cl_student_name b (cost=0.41..284.75 rows=166
> width=97) (actual time=0.049..0.184 rows=313 loops=1)"
> " Index Cond:
> (((companycode)::text = '100'::text) AND ((departmentheaderfk)::text =
> '04DF8BD89D0844DD4D8AA151EFB28657'::text) AND ((levelfk)::text =
> '37A9BEC2638844FFD5B1422D83E70EF3'::text) AND (status = 'A'::bpchar))"
> "-> Index Scan using
> ""cl_student_exam_subject_IX1"" on cl_student_exam_subject a
> (cost=0.42..10.15 rows=2 width=89) (actual time=0.010..0.011 rows=1
> loops=313)"
> " Index Cond: (((companycode)::text =
> '100'::text) AND ((examheaderfk)::text =
> 'BA80952CFF8F4E1C3F9F44B62ED9BF37'::text) AND ((registrationnumber)::text =
> (b.registrationnumber)::text))"
> " Filter: ((examstudentstatus IS
> NULL) OR (examstudentstatus = ANY ('{R,S,W}'::bpchar[])))"
> " -> Index Scan using cl_subject_ix3 on
> cl_subject c (cost=0.28..0.32 rows=1 width=53) (actual time=0.006..0.006
> rows=1 loops=326)"
> "Index Cond: (((companycode)::text =
> '100'::text) AND ((subjectcode)::text = (a.subjectcode)::text))"
> "-> Index Scan using ""cl_student_internal_mark_IX""
> on cl_student_internal_mark m (cost=0.42..2.94 rows=1 width=97) (actual
> time=0.010..0.010 rows=1 loops=326)"
> " Index Cond: (((companycode)::text =
> (a.companycode)::text) AND ((companycode)::text = '100'::text) AND
> ((subjectcode)::text = (a.subjectcode)::text) AND
> ((registrationnumber)::text = (a.registrationnumber)::text))"
> " Filter: (((departmentheaderfk)::text =
> '04DF8BD89D0844DD4D8AA151EFB28657'::text) AND ((levelfk)::text =
> '37A9BEC2638844FFD5B1422D83E70EF3'::text))"
> " -> Index Scan using ""cl_student_semester_subject_IX1"" on
> cl_student_semester_subject p (cost=0.42..3.09 rows=1 width=60) (actual
> time=0.010..0.010 rows=1 loops=326)"
> "Index Cond: (((companycode)::text = '100'::text) AND
> ((subjectcode)::text = (a.subjectcode)::text) AND
> ((registrationnumber)::text = (a.registrationnumber)::text) AND (semester =
> a.semester))"
> "Filter: ((examheaderfk)::text =
> 'BA80952CFF8F4E1C3F9F44B62ED9BF37'::text)"
> "-> Seq Scan on co_company_branch q (cost=0.00..1.01 rows=1
> width=276) (actual time=0.000..0.000 rows=1 loops=326)"
> "Planning Time: 15.581 ms"
> "Execution Time: 19.546 ms"
>
>
> The query when run against DB1 takes around 7 min 32 seconds.
> The same query when run against DB2 takes around 124 msec.
>
> Same computer, same PG cluster, same query.
> Why does it take so much time when run against DB1 (client_db)?
>
> Already executed vacuum against client_db database.
>
> Any help is really appreciated.
>
> Happiness Always
> BKR Sivaprakash
>
>
--
Muhammad Ikram
2703F_20240719-123408D, 20240719-122703F_20240723-110212I
>
> incr backup: 20240719-122703F_20240724-094727I
> timestamp start/stop: 2024-07-24 09:47:27+05:30 / 2024-07-24
> 09:47:30+05:30
> wal start/stop: 0009003D /
> 0009003D
> database size: 62MB, database backup size: 11.5MB
> repo1: backup size: 72KB
> backup reference list: 20240719-122703F,
> 20240719-122703F_20240719-123408D, 20240719-122703F_20240723-110212I,
> 20240719-122703F_20240723-141818
>
>
>
>
--
Muhammad Ikram
operations are more update/delete intensive. Dead tuples and bloat will
take longer to be cleaned up.
It can be beneficial for the performance of other database operations,
especially if your system is I/O-bound, and it can also be helpful when
system resources are limited.
Regards,
Muhammad Ikram
Bitnine Global.
On
,
Muhammad Ikram
On Fri, Jul 5, 2024 at 3:03 PM Josef Šimánek
wrote:
> On Fri, Jul 5, 2024 at 11:08 AM sud wrote:
> >
> > Hello all,
> >
> > Its postgres database. We have option of getting files in csv and/or in
> avro format messages from another system to load i
Hi,
Sorry for the late reply. I think you will be having some script that drops and
creates the daily partitions; the command for disabling/removing any constraint
can be placed before those statements so that when the script runs it does
the needful.
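Something along these lines, placed around the partition-maintenance step,
should work; the constraint, table and column names here are made up for
illustration:

  -- before the daily partitions are dropped/created
  ALTER TABLE child_table DROP CONSTRAINT IF EXISTS child_table_parent_fk;

  -- ... partition maintenance runs here, e.g. SELECT partman.run_maintenance(); ...

  -- recreate afterwards; NOT VALID skips re-checking the existing rows right away
  ALTER TABLE child_table
      ADD CONSTRAINT child_table_parent_fk FOREIGN KEY (parent_id)
      REFERENCES parent_table (id) NOT VALID;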
Regards,
Muhammad Ikram
Bitnine global
On Tue, Jul 2
Hi Yudhi,
I think disabling foreign keys before maintenance will help.
Regards,
Muhammad Ikram
Bitnine global
On Tue, Jul 2, 2024 at 11:41 AM yudhi s wrote:
> Hello All,
> In postgres we are seeing issues during automatic partition maintenance
> using pg_partman extension. So bas
Hi,
Please reinitialize using pg_basebackup, or use pg_rewind.
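For instance, after stopping the former primary cleanly, something like this
(the data directory and connection details are placeholders):

  pg_rewind --target-pgdata=/var/lib/postgresql/data \
      --source-server='host=new_primary_ip port=5432 user=postgres dbname=postgres' \
      --progress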
Muhammad Ikram
Bitnine Global
On Tue, 25 Jun 2024 at 22:20, Yongye Serkfem wrote:
> Hello Engineer,
> Below is the error message I am getting after failing over to the standby
> and reconfiguring the former master a
ing lower case
won't cause any issue (IMO) as this is the default PG case.
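For illustration, unquoted identifiers are folded to lower case, so mixed-case
SQL still resolves to the same objects (table/column names made up):

  CREATE TABLE MyTable (Id int);   -- stored as mytable / id
  SELECT id FROM mytable;          -- works
  SELECT ID FROM MYTABLE;          -- also works, folded to lower case
  SELECT "Id" FROM "MyTable";      -- fails: quoted names are case-sensitive and were never created as such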
Regards,
Muhammad Ikram,
Bitnine Global
On Mon, Jun 24, 2024 at 8:36 AM 毛毛 wrote:
>
>
> Thank you for your advice.
>
> .pgpass file would help a lot.
>
>
> I recently started writing SQL
ined slowness, and performance issues were only
> resolved via a downgrade. Do you have any upgrade docs or recommendations
> you can share about the upgrade process before I try to go on the journey?
>
>
> Thank you,
> -Doron
>
>
--
Muhammad Ikram
t and fails on
another; if there were some locale difference, then it must have failed with
a different reason, not the duplicates.
Perform a clean-up and try again. Hope it succeeds.
Regards,
Muhammad Ikram
Bitnine Global.
On Sun, Jun 23, 2024 at 2:59 AM Adrian Klaver
wrote:
> On 6/22/24 10:01,
Hi Shenavai,
Here are some more options:
VACUUM
VACUUM FULL
You may also reindex to reclaim space:
REINDEX TABLE
REINDEX INDEX
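Roughly, with placeholder table/index names:

  VACUUM my_table;            -- marks dead space reusable, no exclusive lock
  VACUUM FULL my_table;       -- rewrites the table and returns space to the OS, but takes an exclusive lock
  REINDEX TABLE my_table;     -- rebuilds all indexes on the table
  REINDEX INDEX my_table_idx; -- rebuilds a single index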
Regards,
Muhammad Ikram
Bitnine
On Thu, Jun 13, 2024 at 1:09 PM Kashif Zeeshan
wrote:
> Hi
>
> You can use the CLUSTER command, which will p
=#
postgres=#
postgres=# INSERT INTO idev.assessment_result_2023_dab_part
SELECT * FROM idev.temp_assessment_result_2023_dab;
INSERT 0 0
postgres=#
postgres=# DROP TABLE idev.temp_assessment_result_2023_dab;
DROP TABLE
postgres=#
postgres=#
Regards,
Muhammad Ikram
Bitnine
On Thu, Jun 6, 2024 at 11
Hi Rich,
Based on what I could understand, here is an example:
UPDATE employees
SET salary = salary + 500
WHERE department_id = 'Sales';
Sorry if I misunderstood your question.
Regards,
Muhammad Ikram
Bitnine
On Sun, May 19, 2024 at 9:54 PM Rich Shepard
wrote:
> Searching t
Thanks for the advice.
On Fri, 3 May 2024 at 16:09, Peter Eisentraut wrote:
> On 03.05.24 12:57, Muhammad Ikram wrote:
> > Tables which have an identity column in Oracle when migrated to
> > PostgreSQL, the data type of Identity column is changed to bigint from
> > numb
),
transaction_date DATE,
store_id NUMBER
);

CREATE TABLE Sales_Details (
    transaction_id NUMBER,
    item VARCHAR2(100),
    quantity NUMBER,
    price NUMBER,
    CONSTRAINT fk_transaction_id FOREIGN KEY (transaction_id)
        REFERENCES Sales (transaction_id)
);
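For comparison, a minimal sketch of how the same pair might be declared
natively in PostgreSQL, with the parent key as an identity column (the column
types here are assumptions):

  CREATE TABLE sales (
      transaction_id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
      transaction_date date,
      store_id         integer
  );

  CREATE TABLE sales_details (
      transaction_id   bigint REFERENCES sales (transaction_id),
      item             varchar(100),
      quantity         integer,
      price            numeric
  );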
--
Regards,
Muhammad Ikram
s
> implemented? (It wasn't when I last used it in 2022.)
>
>
--
Muhammad Ikram