I tried adding a dummy column with different values and then deleting one, as
you suggested. Got the same message.
Mark Brady, Ph.D.
Deputy Chief Data Officer, TRMC
amazon.com/author/markjbrady
From: mark bradley
Wouldn't that be nice 🙂
Best regards,
Mark Brady
amazon.com/author/markjbrady
From: Adrian Klaver
Sent: Friday, March 7, 2025 10:55 AM
To: mark bradley
Cc: pgsql-general
Subject: Re: Duplicate Key Values
On 3
Here are some of the references from Copilot
https://dba.stackexchange.com/questions/62675/why-does-my-table-hold-duplicate-primary-keys
https://stackoverflow.com/questions/1461/having-duplicate-rows-on-a-primary-key-and-unique-constraints-in-postgres
Best regards,
Mark Brady
amazon.com
I hesitate to just delete my tables and start over because this
error will reoccur.
Best regards,
Mark Brady
amazon.com/author/markjbrady
From: Adrian Klaver
Sent: Thursday, March 6, 2025 3:34 PM
To: mark bradley
Cc:
Here is the table definition:
[inline image: table definition]
And here is the error message I get when I try to delete a duplicate:
[inline image: error message]
Mark Brady,
amazon.com/author/markjbrady
Although I did not enter them, somehow duplicate primary key values have
appeared in one of my tables. I can't remove the offending rows because the
key is a foreign key in another table.
What to do?
Mark Brady,
amazon.com/author/markjbrady
I'm getting a strange error message when I try to insert a date using the
view/edit grid in pgAdmin. See below. I've tried quotes, no quotes, and
various formats. The column type is clearly "date."
[inline image: pgAdmin error message]
Mark Brady, Ph.D.
Deputy Chie
Thank you for clarifying this. I missed that even though it is there in the
second paragraph.
- Mark, out and about.
> On Nov 14, 2024, at 1:57 AM, Laurenz Albe wrote:
>
> On Wed, 2024-11-13 at 17:33 -0800, Mark Phillips wrote:
>> Given a database table with one policy state
are greatly appreciated.
- Mark
Here are two tests I ran using pg 12 (upgrade on the schedule).
Given a table “customer” with a column “deadfiled” with a default of false: if
deadfiled is true, then exclude the row from queries executed by role “staff”.
Test 1
CREATE POLICY
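A complete policy along the lines described might look like this (a sketch;
the policy name is an assumption, with the customer/deadfiled/staff names
taken from the description above):

```sql
-- Policies have no effect until RLS is enabled on the table.
ALTER TABLE customer ENABLE ROW LEVEL SECURITY;

-- Staff only see rows that are not dead-filed.
-- coalesce() treats a NULL deadfiled as false, so the row stays visible.
CREATE POLICY staff_select ON customer
    AS PERMISSIVE
    FOR SELECT
    TO staff
    USING (coalesce(deadfiled, false) = false);
```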
Thank you. I will revisit my test cases to be sure I have the use cases covered.
- Mark, out and about.
> On Nov 13, 2024, at 5:36 PM, David G. Johnston wrote:
>> On Wednesday, November 13, 2024, Mark Phillips <mphill...@mophilly.com> wrote:
>>> Given a database table with one policy statement FOR
Given a database table with one policy statement FOR SELECT applied, is it
necessary to apply additional policy statements for insert, update, and delete
operations?
My testing indicates that this is the case, but I haven’t found an explanation
of this requirement in the documentation.
- Mark
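For reference, a FOR SELECT policy covers only SELECT; a single FOR ALL policy
is the alternative to writing one policy per command. A minimal sketch, reusing
the customer/deadfiled/staff names from the earlier description:

```sql
-- FOR ALL applies the policy to SELECT, INSERT, UPDATE, and DELETE.
-- USING filters visible/affected rows; WITH CHECK constrains new rows.
CREATE POLICY staff_all ON customer
    FOR ALL
    TO staff
    USING (coalesce(deadfiled, false) = false)
    WITH CHECK (coalesce(deadfiled, false) = false);
```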
the column, and it now works fine. The better form of the USING clause
certainly helped. I am happy to share my notes if someone would like to see
them.
As for pg 12, an update to the current stable release is on the project roadmap.
Cheers,
- Mark
> On Nov 12, 2024, at 12:48 AM, Laurenz Alb
er
AS PERMISSIVE
FOR SELECT
TO prm_staff
USING (coalesce(deadfiled,false)=false);
So my question is specifically about the USING clause, but also more broadly
about this attempted application of RLS.
Links and advice accepted with gratitude.
Mark
Will there be a minor version update to Postgres 12 this Thursday, 11/7/24,
going to version 12.21?
Thanks, Mark
While installing PostgreSQL I am getting this error message during the PEM
server portion. Do I need the PEM server? If so, what is the solution?
Thanks!
[inline image: PEM server error message]
I am getting the following error message during install of PEM while installing
Postgres.
[inline image: PEM installer error message]
How can I fix this?
ing of the standby wal-receiver and that the pg_waldump
showed the failing lsn was a commit.
Thanks
UNOFFICIAL
-----Original Message-----
From: Kyotaro Horiguchi
Sent: Wednesday, 28 February 2024 5:09 PM
To: Mark Schloss
Cc: pgsql-general@lists.postgresql.org
Subject: Re: walreceive
mmit (0/08000FF8 and 1/AC000D78) lsn did not get streamed to either the
replicas or barman server
Thanks in advance.
-----Original Message-----
From: Kyotaro Horiguchi
Sent: Monday, 26 February 2024 7:27 PM
To: Mark Schloss
Cc: pgsql-general@lists.postgresql.org
Subject: Re: walreceiver f
Hello,
We have the following setup in our DEV environment -
- Primary/Replica using asynchronous streaming replication
- Servers run Ubuntu Linux 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 GNU/Linux
- Postgres version
postgr
an oversight?
Btw, the same holds for ODBC 12.02.
Thanks, Mark
From: Adrian Klaver
Sent: Sunday, July 16, 2023 11:35 AM
To: Baskar Muthu ; pgsql-general@lists.postgresql.org
Subject: Re: Installation Issue
On 7/15/23 07:30, Baskar Muthu wrote:
> Hi sir/mam
>
> I installed postre - sql but it is not connected with the serv
ou currently hit, but then you are still at an
impasse. Would a factor or 2x or 4x be enough for your needs?
—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
extension/uuid-ossp--unpackaged--1.0.sql
/share/extension/uuid-ossp--1.0--1.1.sql
I need a Windows-specific install of uuid-ossp for the Postgres build to use,
for both 32bit and 64bit Windows.
Thanks, Mark
-----Original Message-----
From: Daniel Gustafsson
Sent: Wednesday, March 15, 2023
the source so I
can build it or
prebuilt libraries?
Thanks, Mark
Hi, thanks for your reply.
So to confirm, EXPLAIN ANALYZE does not detoast rows? The original goal of
these queries was to see the effect of fetching from toast tables on query
performance.
On Thu, 27 Oct 2022 at 15:43, Tom Lane wrote:
> Mark Mizzi writes:
> > When I run
>
> &g
As an example, let's take the following simple table:
CREATE TABLE unary(a VARCHAR);
-- simple way to make table large
ALTER TABLE unary
ALTER COLUMN a SET STORAGE EXTERNAL;
-- insert one million large rows
INSERT INTO unary
SELECT repeat('a', 8000)
FROM generate_series(0, 1000000);
-- update
this part of the query.
Yes, I saw those, hence asking on the list if there was a way to do it.
I'll handle it with multiple statements.
--
Mark Raynsford | https://www.io7m.com
On 2022-10-19T11:58:07 -0700
"David G. Johnston" wrote:
> On Wed, Oct 19, 2022 at 10:36 AM Mark Raynsford
> wrote:
>
> > insert into t (y) values (t.x * 2);
> >
> > I can think of various ways to do it with multiple statements, but a
> > single st
with multiple statements, but a
single statement would be preferable.
--
Mark Raynsford | https://www.io7m.com
e docs and could
find no mention, however).
Is my understanding correct? Am I missing something? I just want to make
sure I'm not screwing something up.
Thanks,
Mark
ing to the config.pl script:
$main::config->{'nls'}="C:/gettext";
my $nls = $main::config->{'nls'};
Anybody have experience with this?
Thanks, Mark
h-includes="$BUILD_DIR"/openssl/include/openssl/ \
--with-libraries="$BUILD_DIR"/openssl/lib/ \
"$ADDITIONAL_FLAGS"
However, when I try to create a database I'm getting an out of memory error
ink without using a 3rd party lib?
Thanks, Mark
the publisher, the subscriber might not
bother creating indexes over that data, even if such indexes exist on the
publisher.
—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
executed. Instead, the default value will be returned the next
time the row is accessed, and applied when the table is rewritten, making the
ALTER TABLE very fast even on large tables.
—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
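The fast-default behavior described above can be sketched as follows (the
table and column names here are hypothetical):

```sql
-- Since PostgreSQL 11, adding a column with a constant default does not
-- rewrite the table: the default is stored in the catalog and returned
-- for existing rows on read, so this is fast even on very large tables.
ALTER TABLE big_table ADD COLUMN flags integer DEFAULT 0;
```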
and casting the json column to jsonb before comparing for equality.
—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
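The cast-before-compare approach mentioned above, as a minimal sketch:

```sql
-- json has no equality operator; jsonb does, and jsonb equality ignores
-- key order and insignificant whitespace.
SELECT '{"a": 1, "b": 2}'::json::jsonb = '{"b":2,"a":1}'::json::jsonb;  -- true
```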
> On Oct 26, 2021, at 3:39 PM, Mitar wrote:
>
> On Tue, Oct 26, 2021 at 10:55 PM Mark Dilger
> wrote:
>> Note that there is a performance cost to storing the old rows using the
>> REFERENCING clause of the trigger
>
> Yea, by moving the trivial update check
> On Oct 26, 2021, at 1:34 PM, Mitar wrote:
>
> Hi!
>
> On Tue, Oct 26, 2021 at 10:17 PM Mark Dilger
> wrote:
>> I can't tell from your post if you want the trivial update to be performed,
>> but if not, would it work to filter trivial updates as:
>
ead nothing;
CREATE RULE
rules=# update my_table set j = '{"key":2}'::jsonb;
UPDATE 0
rules=# update my_table set j = '{"key":3}'::jsonb;
WARNING: [old.i=1, old.j={"key": 2}] => [new.i=1, new.j={"key": 3}]
WARNING: [old.i=2, old.j={"key": 2}] => [new.i=2, new.j={"key": 3}]
WARNING: [old.i=3, old.j={"key": 2}] => [new.i=3, new.j={"key": 3}]
UPDATE 3
—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
reset the seed for
your random generator. I haven't looked specifically at your uuid generator,
and I don't know if it gets nondeterministic randomness from /dev/random or
similar, but deterministic random generators can be made to produce the same
sequence again if the seed
all regexp_replace multiple times until all repeats are eradicated, but I
don't think such workarounds should be necessary.
—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
told to grab and not try to grab the ",someone", but
other than that, they should be performing the same work. I don't see why the
performance should be so different.
—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
I have some SQL functions written in C code using the version 0 calling
convention. They work fine on PostgreSQL 9.6, but I'm migrating to
v13. Has this calling convention been removed?
My function creation gives an error:
SQL Error [42883]: ERROR: could not find function information for function
> On Jun 29, 2021, at 11:02 AM, Ron wrote:
>
> What's an IOS?
An Index Only Scan. See
https://www.postgresql.org/docs/14/indexes-index-only-scans.html
—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
You might want to run VACUUM FREEZE and then retry your test query using
EXPLAIN. See if it switches to an index only scan after the VACUUM FREEZE.
—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
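The suggestion above can be sketched as follows (table, column, and WHERE
clause are hypothetical):

```sql
-- VACUUM FREEZE marks heap pages all-visible; the planner can then choose
-- an index-only scan, since it need not visit the heap for visibility.
VACUUM FREEZE my_table;
EXPLAIN SELECT id FROM my_table WHERE id < 100;
```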
> On Jun 4, 2021, at 11:55 AM, Laura Smith
> wrote:
>
> That seems to have done the trick. Thanks again Mark
Glad to hear it. Good luck.
—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
XCLUDE USING gist (t_val WITH=, t_range WITH &&) DEFERRABLE INITIALLY
> DEFERRED
> );
> INSERT INTO test(t_val) values(p_val);
This will insert a t_range of ('-infinity','infinity'), won't it? Wouldn't you
want to instead insert with t_range
space they need has not been created. My problem is that I
> would like to avoid creating them.
>
> Thanks
> João
>
> --
- Mark
Since INSERT /*+APPEND*/ is generally used when bulk loading data into
Oracle from external files you should probably look at the PostgreSQL COPY
command (https://www.postgresql.org/docs/13/sql-copy.html) and additional
utilities like pg_bulkload (https://github.com/ossc-db/pg_bulkload) .
On Thu
thoughts, experiences or other thoughts greatly appreciated.
- Mark Phillips
Don't you have to select into a variable and then return the variable to
the client per [1]?
Consider the following example from my Oracle system:
beginning code ...
V_SQL := 'SELECT COUNT(*) FROM ' || V_TAB;
EXECUTE IMMEDIATE V_SQL INTO V_CNT;
ending code ...
[1]
https://www.postgresql.org/
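The PL/pgSQL equivalent of the Oracle snippet above uses EXECUTE ... INTO;
here is a sketch, with variable and table names mirroring the Oracle example
(the table name itself is hypothetical):

```sql
DO $$
DECLARE
    v_tab text := 'my_table';  -- hypothetical table name
    v_cnt bigint;
BEGIN
    -- format() with %I quotes the identifier safely, replacing the
    -- string concatenation used in the Oracle version.
    EXECUTE format('SELECT count(*) FROM %I', v_tab) INTO v_cnt;
    RAISE NOTICE 'count = %', v_cnt;
END $$;
```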
n app on MacOS and bundling Postgres they're going
to have a bad time with searching.
Please let me know if there's anything I can do to help. Will gladly test
patches.
Thanks,
--
Mark Felder
ports-secteam & portmgr alumni
f...@freebsd.org
where to get pg_dump 13.0?
On Sat, Nov 28, 2020 at 9:47 PM Adrian Klaver
wrote:
> On 11/28/20 8:12 AM, mark armon wrote:
> >
> > OS: Windows
> > issue:
> >
> https://stackoverflow.com/questions/14168920/how-to-fix-pg-dump-version-mismatch-errors
> > <
OS: Windows
issue:
https://stackoverflow.com/questions/14168920/how-to-fix-pg-dump-version-mismatch-errors
pg_dump: error: server version: 13.0; pg_dump version: 12.4
pg_dump: error: aborting because of server version mismatch
other hand, one could argue that a higher limit in postgres is sensible
these days.
Additional Information
1. symmetricds version 3.12.4
2. Postgresql Version 12
3. OS's Windows 10, Mac Catalina and SME Server
Regards,
Mark Phillips
Mophilly Technology Inc.
Telephone: (619) 296-0114
On th
This all sounds like a previous discussion on pg hackers about a progress
meter for pg_dump. Search the archives for that discussion. Also, search
the web for something like "pg_dump progress meter" and you'll get a few
suggestions like pipe to pv, although that does not appear to work with all
o
l in /var/spool/mail/root
Careful when stripping out the false positives. PostgreSQL installed from
source can be pretty much anywhere including a Downloads directory, /tmp,
and so on. In my case, the Downloads directory has a src sub-directory,
which tells me it is just a staging area for source cod
Like, I create a schema test, and I want the default date to be 2020-01-01, so
that when I do
select test.now;
the result is 2020-01-01.
On Mon, Nov 9, 2020 at 9:44 PM David G. Johnston
wrote:
> On Mon, Nov 9, 2020 at 9:08 AM mark armon <1994hej...@gmail.com> wrote:
>
>>
>> How to
How to set up a schema default date (now) to '2020-01-01'? Whatever
timezone would be OK.
User managed backups in PostgreSQL work very similar to what you know from
Oracle. You first place the cluster in backup mode, then copy the database
files, and lastly take the cluster out of backup mode. The first and last
steps are done using functions pg_start_backup('label',false,false) and
p
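The sequence described above can be sketched as SQL (this is the
non-exclusive form on pre-15 servers; PostgreSQL 15 renamed these functions
to pg_backup_start/pg_backup_stop, and the backup label here is arbitrary):

```sql
-- 1. Place the cluster in (non-exclusive) backup mode. The session that
--    runs this must stay connected until the backup is finished.
SELECT pg_start_backup('nightly', false, false);

-- 2. Copy the database files with an external tool (rsync, tar, ...).

-- 3. Take the cluster out of backup mode; this returns the backup label
--    and tablespace map that must be stored with the copied files.
SELECT * FROM pg_stop_backup(false);
```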
o accomplish
the same by adding pairs of timestamp columns to each table and then
writing our own code to handle row filtering. Partitioning isn't needed.
Certainly partitioning by range could be used, but it would still require
some manual efforts.
-Mark
On Wed, Oct 7, 2020 at 10:41 AM St
ian Klaver writes:
> > On 10/5/20 7:55 AM, Mark wrote:
> >> I followed one PostgreSQL tutorial step by step. One session to use
> PSQL
> >> to execute sql files to create a new database in PostgreSQL.
> >> 1. copy paste the sql file within "C:\Program Files\Pos
Stackoverflow question link:
https://stackoverflow.com/questions/64210281/using-psql-executing-sql-format-file-shows-permission-denied-on-windows-platform
I followed one PostgreSQL tutorial step by step. One session to use PSQL to
execute sql files to create a new database in PostgreSQL.
1.
Thanks to everyone who replied. All helpful. I learned and have new ideas to
work with.
> On Aug 11, 2020, at 10:42 AM, Adam Brusselback
> wrote:
>
> I mentioned this in another email thread yesterday about a similar topic, but
> I'd highly suggest if you do go the UUID route, do not use the
Thank you for the reply.
The article is a good one. I posed the question on the chance things had
evolved since 2012, specifically as it relates to postgres.
> On Aug 10, 2020, at 3:21 PM, Christophe Pettus wrote:
>
>
>
>> On Aug 10, 2020, at 15:19, Mark Phillips
tales, suggestions and such will be warmly received.
- Mark
On 4/8/2020 10:28 AM, Adrian Klaver wrote:
> On 4/8/20 6:39 AM, Mark Bannister wrote:
>> I am converting an application to postgresql. On feature I have is
>> functions that return custom displays of a table row. For instance
>> the company display function might display
to reproduce this and take advantage of postgresql
caching? These functions can be called a lot.
--
Mark B
Sorry I've failed to mention which postgres version this is with:
PostgreSQL 10.11.
On Fri, Mar 6, 2020 at 3:39 PM Mark Haylock wrote:
>
> Hi,
>
> We have an autovacuum process that has been running for almost 27 hours:
>
> SELECT * FROM pg_stat_activity WHERE pid
[truncated query output showing max_dead_tuples and num_dead_tuples]
I see nothing in the documentation to suggest that this is an expected
state - what does it mean?
Thanks,
Mark.
has ECC RAM?
>
> Yes, it appears that Linode uses ECC and other server grade hardware for
their machines.
Thanks,
Mark
the problem. For good measure I
restarted the machine as well.
I need to decide what to do next, if anything. We have a hot standby that
we also run queries against, and it hasn't shown any errors. I can switch
over to that as the primary. Or I can move the main database to a different
physical host.
Thoughts appreciated.
Thanks,
Mark
't find much on-line about this. How concerned should I be? Would you
move the instance to a different physical host?
Thanks,
Mark
You don’t tell us if other users will be concurrently changing any of the
records involved. If you could guarantee that the table won’t be changed, you
might be better off doing a CREATE TABLE table_new as SELECT … FROM table_old,
dropping table_old, and finally renaming table_new. Given the
.
Regards,
Mark Z.
From: Israel Brewster
Sent: Monday, January 6, 2020 1:24 PM
To: Michael Lewis
Cc: Rob Sargent ; Alban Hertroys ;
Christopher Browne ; pgsql-generallists.postgresql.org
Subject: Re: UPDATE many records
On Jan 6, 2020, at 11:54 AM, Michael Lewis <mle...@entrata.com>
As I recall, if the listening address is set to '*' but is showing
localhost, then the problem you describe is likely due to missing an IPv6
address in pg_hba.conf. For me, I just added a line to pg_hba.conf like
this:
host    all    all    ::1/128    md5
So, even t
y in making changes to
functions that already exist in production deployments. An additional function
like this would be helpful to many.
--
Mark Felder
ports-secteam & portmgr alumni
f...@freebsd.org
ication.
>
> [2]:
> https://www.postgresql.org/message-id/flat/qfkua9$2q0e$1...@blaine.gmane.org
>
> [3]: https://git.pleroma.social/pleroma/pleroma/issues/1324 is an
> example of data loss induced by this issue.
>
> Ariadne
>
This should be directed towards the h
Hi,
The row_to_json seems to strip white space:
select ROW_to_json(q) from (select 'fooa bar' as TEST) q;
any help on how to preserve?
Thanks!
-mark-
s?
>
> We ended up rolling our own. We do schema dumps and then use
https://www.apgdiff.com/ to diff them. For our relatively simple schemas,
it's worked fine. One thing to note, apgdiff doesn't support `ALTER TABLE
ONLY [a-z\.]+ REPLICA IDENTITY FULL;` lines, which we just remove before
diffing.
Cheers,
Mark
and I haven't yet figured out how to reset that without dropping the procedure
and re-defining it. For my purposes, that is "good enough" -- I can promise
not to run such procedures against the temporary table.
Thanks for the reply,
Mark Z.
-----Original Message-----
From: Laurenz A
One of the issues I have run into in migrating from Oracle to PostgreSQL is the
difference in temporary table behavior. I've come up with what I think is a
cunning plan, but I wonder if anyone with more PostgreSQL understanding can
shoot holes in it.
Oracle defines their temporary tables stati
in a CASE statement, you have to use the actual column name.
Modify each of the queries throwing errors, and replace the lochierarchy
alias with the actual column name you see in the SELECT statement.
-Mark
On Mon, Mar 11, 2019 at 4:00 AM Tatsuo Ishii wrote:
> > Hi,
> >
> >
Andreas, Sameer,
Thank you for replying. I did not understand the purpose of
hot_standby_feedback, and your explanations helped. I turned it on, and the
pausing stopped.
Thanks,
Mark
at for us. I'm worried
about the streaming replication being paused while this table scan is
running. If the table scan takes several minutes, then the replica becomes
several minutes out of sync with the master. I'd prefer that not to happen
and I'm wondering if there's a way to do that.
Thanks,
Mark
do to prevent the pausing?
Thanks,
Mark
I wish more people would ask this question, to me, it is the true mark of
experience. In general, I think of PostgreSQL as the leading Relational
Database. The farther you get away from relational data and relational
queries, the more I would say, you should look for other products or
solutions
a backups, which decrease the time/load
required for a backup immensely in our case.
Cheers,
Mark
On Wed, Jan 9, 2019 at 12:58 PM github kran wrote:
>
> Mark - just curious to know on the logical replication. Do you think I can
> use it for my use case where i need to publish data to a subscriber when
> there is a change in the data updated for a row or any new inserts
> h
On Wed, Jan 9, 2019 at 10:10 AM github kran wrote:
> Mark - We are currently on 9.6 version of postgres and cant use this
> feature of logical replication.Answering to your question we are looking
> for any changes in the data related to a specific table ( changes like any
>
em set up to stream changes from the database into an
elastic search cluster, and it works great.
Mark
> [truncated psql output: … │ 0 bytes │ this is a table comment │]
> (1 row)
>
I was so close! That works; thank you very much for your help, Pavel.
--
Mark Jeffcoat
Austin, TX
'test_table');
 objoid | classoid | objsubid |       description
--------+----------+----------+-------------------------
 714760 |     1259 |        0 | this is a table comment
(1 row)
--
Mark Jeffcoat
Austin, TX
so I will only see the problem if ANALYZE runs during
this smaller time window. Looks like your patch will help if this happens.
Then again, it also seems no one has had a problem with its current
behaviour (for 11 years!).
Thanks,
Mark
On Wed, 2 Jan 2019 at 16:11 Tom Lane wrote:
> Mark
ED.
>
>
> On 1/2/19 9:28 AM, Mark wrote:
>
> Hi Ron,
>
> Yes, my process will commit the transaction (or roll it back) eventually.
> It's the window where
> one transaction has deleted all the rows (before committing) and an
> ANALYSE has ran.
>
> The d
Jan 2019 at 14:04 Ron wrote:
> But Jeff said "left open indefinitely without either committing or rolling
> back". Your process is committing the transaction.
>
>
> On 1/2/19 6:15 AM, Mark wrote:
>
> Hi Jeff,
>
> Thanks for your help. That is exactly what is
on_student_sid (cost=0.00..78.83
rows=1388 width=0) (actual time=0.077..0.077 rows=1388 loops=1)
Index Cond: (school_id = 36)
Planning time: 0.327 ms
Execution time: 2.311 ms
(7 rows)
On Sun, 23 Dec 2018 at 15:28 Maxim Boguk wrote:
> Hi Mark,
>
> It's look very weird.
Hi Jeff,
Thanks for your help. That is exactly what is happening.
I have a long running job which deletes all of the common_student table and
then repopulates it. It takes long time to load all the other data and
commit the transaction. I didn't think the delete inside the transaction
would have
I changed it to be just the single float value I needed to extract out of
the JSON object, but originally it was a text column that held the entire
JSON object.
On Mon, Dec 31, 2018 at 3:52 PM Adrian Klaver
wrote:
> On 12/31/18 9:36 AM, Mark Mikulec wrote:
> > Hi,
> >
> >
Thanks Rob,
Since I'm on Windows and Windows batch sucks I just ended up doing the JSON
parsing with node.js
To be honest this whole affair with COPY FROM program seems like a bug to
me though.
On Mon, Dec 31, 2018 at 1:59 PM Rob Sargent wrote:
>
>
> On Dec 31, 2018, at 10:36 AM