Hi all,
What is a reliable way to programmatically and generically populate an
empty schema with all of the objects from the public schema, using it as a
template?
We are using the multi-tenancy Ruby gem Apartment (
https://github.com/influitive/apartment ), which was recently broken
by the changes made to pg_dump
ema-dumps
[1] This is outside the realm of this mailing list, though.
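One generic, pg_dump-based approach (a sketch only, not what Apartment itself
does; "mydb" and "tenant1" are placeholder names) is to dump the public
schema's DDL and replay it under the new schema name:

```shell
# dump DDL only for the public schema ("mydb" and "tenant1" are placeholders)
pg_dump --schema-only --schema=public mydb > template.sql
psql mydb -c 'CREATE SCHEMA tenant1'
# naive rename: this also rewrites the word "public" inside comments and
# string literals, so a real implementation needs something more careful
sed 's/\bpublic\b/tenant1/g' template.sql | psql mydb
```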
Cheers,
Matt.
- Original Message -
From: "Adrian Klaver"
To:,
Cc:
Sent:Mon, 12 Mar 2018 21:22:01 -0700
Subject:Re: Programmatically duplicating a schema
On 03/12/2018 08
On Wed, May 9, 2018 at 3:05 AM, Pavel Stehule
wrote:
>
>
> 2018-05-09 9:59 GMT+02:00 John McKown :
>
>> I just wanted to throw this out to the users before I made a complete
>> fool of myself by formally requesting it. But I would like what I hope
>> would be a minor change (enhancement) to the p
explaining why this works or what this argument does when
using a replication slot. Any clarification would be appreciated.
Thanks,
-Matt
On Tue, Jul 31, 2018 at 9:21 AM, Tom Lane wrote:
> "Lu, Dan" writes:
> > Question:
> > Can a variable like `hostname` be derived from the Unix shell, or do
> I have to hardcode the name of the host like " include
> /nfs/global/postgres-host123.cnf
>
> Nope, you'd have to hard-code it.
>
> Of course, yo
age, followed by an
XLogData message, such that the Keepalive's LSN is higher than the
XLogData's? This could cause problems: if my application committed the
Keepalive's LSN and then restarted, it might miss the XLogData when it comes
back up, since a higher LSN was already committed.
Thanks, and I appreciate any pointers you can provide!
-Matt
t want to manipulate the
returned data based on the types/constraints of the underlying columns of
the view.
It seems like mapping view columns to their source columns should be
something that's been done before. Is it possible?
Matt Andrews
0400 990 131
which means I will be able to use the system catalogs
in a roundabout way.
Incidentally, I have become interested in the pg_node_tree type. I can't
find much info on it. Would that be in the source? What would be the first
steps for writing an extension for this sort of thing?
Matt
0400 990 13
app users through a
connection pool.
I’m thinking that any queries should start with SET ROLE ‘username’ and end
with RESET ROLE. Is this how it could work?
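That pattern can work. A sketch of one transaction-scoped variant (the role
name is assumed), which avoids leaking the role if a RESET is ever skipped:

```sql
BEGIN;
SET LOCAL ROLE app_alice;  -- assumed role name; reverts at COMMIT/ROLLBACK
SELECT current_user;       -- now reports app_alice
COMMIT;
```

Because SET LOCAL lasts only for the transaction, the pool's login role is
restored automatically even when a statement errors out.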
Any help would be greatly appreciated.
--
Matt Andrews
0400 990 131
, maybe earlier.
>
> While this seems plausible- I'd love to hear about exactly what you've
> seen start to be a problem when getting up to that many users. Are you
> just referring to things like \du? Or..?
>
> Thanks,
>
> Stephen
>
--
Matt Andrews
0400 990 131
privileges for many people, but many different
privileges for a few people.
Does it come down to performance issues when there are many roles assigned to users?
On Wed, 2 Oct 2019 at 21:03, Rob Sargent wrote:
>
>
> On Oct 2, 2019, at 3:41 AM, Matt Andrews
> wrote:
>
> I have little exper
Greetings,
I've searched the internet and also the pgsql-general list for an answer to
this question, but couldn't find one.
Is there a continuation character for meta commands in psql?
I've seen mention that a caret (^) could be used, but it didn't work
correctly and the psql man page makes no
+1
On Wed, Feb 5, 2020 at 9:15 PM Ahmed, Nawaz (Fuji Xerox Australia)
wrote:
>
> +1
>
> -Original Message-
> From: Vik Fearing
> Sent: Thursday, 6 February 2020 1:55 PM
> To: pgsql-general@lists.postgresql.org
> Subject: POLL: Adding transaction status to default psql prompt
>
> Hello,
>
data_a added as a covering column… that is, it’s only in the index
so I can get some index-only results with that column. Is the partial index
possibly an issue in this configuration?
Thanks for any insight,
Matt
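For reference, a true covering column can also be spelled with the INCLUDE
clause (PostgreSQL 11+); a sketch using names from this thread, assuming
jdata_a should not be part of the key or the uniqueness check:

```sql
CREATE UNIQUE INDEX da_datum_x_acc_idx
  ON solardatum.da_datum (node_id, source_id, ts DESC)
  INCLUDE (jdata_a)           -- carried in the index for index-only scans only
  WHERE jdata_a IS NOT NULL;  -- partial, as in the original index
```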
> On 21/03/2020, at 4:00 AM, Adrian Klaver wrote:
>
> On 3/20/20 2:17 AM, Matt Magoffin wrote:
>> Hello,
>> Indexes:
>> "da_datum_pkey" UNIQUE, btree (node_id, ts, source_id) CLUSTER,
>> tablespace "solarindex"
>> "d
> On 21/03/2020, at 8:10 AM, Adrian Klaver wrote:
>
>> The _hyper_1_1931_chunk_da_datum_x_acc_idx index has the same definition as
>> the da_datum_x_acc_idx above (it is defined on a child table). That is, they
>> are both essentially:
>> UNIQUE, btree (node_id, source_id, ts DESC, jdata_a) WH
> On 22/03/2020, at 8:11 AM, Adrian Klaver wrote:
>
>> I was thinking more about this:
>> "INSERT INTO solardatum.da_datum(ts, node_id, source_id, posted, jdata_i,
>> jdata_a, jdata_s, jdata_t)
>> VALUES (…) ..."
>> from your OP. Namely whether it was:
>> VALUES (), (), (), ...
>> and if so
> On 23/03/2020, at 9:44 AM, Adrian Klaver wrote:
> Is there a chance the BEFORE trigger functions are doing something that could
> be leading to the error?
>
> In the error log is there a line with the actual values that failed?
The error log does not show the literal values, no. Here is a li
> On 23/03/2020, at 1:10 PM, Adrian Klaver wrote:
>
> So the query is in the function solardatum.store_datum()?
>
> If so what is it doing?
Yes. This function first performs the INSERT INTO the solardatum.da_datum table
that we’re discussing here; then it inserts into two different tables. I
> On 27/03/2020, at 5:26 AM, Adrian Klaver wrote:
>
> Well morning and coffee helped some, but not enough to offer blinding
> insight. Reviewing the function above, the TimescaleDB insert block function
> and the overview of the TimescaleDB hypertable architecture leads me to
> believe there
On Thu, Jun 4, 2020 at 12:56 AM Laurenz Albe
wrote:
> On Thu, 2020-06-04 at 16:41 +1200, Thomas Munro wrote:
> > There's no doubt it's useful, and it's also part of the SQL spec,
> > which says you can do catalog.schema.table. I would guess that we
> > might get that as a byproduct of any projec
ter than a CTE, but I don’t
know how to insert three rows on two tables using a subquery.
Any advice is appreciated, thank you in advance.
Matt
INPUT:
(The CTE illustrates all the properties and types)
[
{
“bank_account_id”: 1324,
“transaction_id”: “abc123”,
“transaction_date”: “2
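A data-modifying CTE can feed one insert from another, so "three rows on two
tables" fits in a single statement; a sketch with assumed table and column
names (the literal values are placeholders):

```sql
WITH txn AS (
  INSERT INTO transactions (bank_account_id, external_id, transaction_date)
  VALUES (1324, 'abc123', DATE '2024-01-01')   -- placeholder date
  RETURNING id
)
INSERT INTO ledger_entries (transaction_id, amount)
SELECT txn.id, v.amount
FROM txn, (VALUES (100.00), (-100.00)) AS v(amount);
```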
= 1GB
min_wal_size = 80MB
shared_buffers = 128MB
work_mem is not configured so presumably it’s the default of 4MB.
I’ll try the temp tables. That seems similar to what I found searching online
- are you suggesting that as a permanent solution, or just as a means to better
analyze performance?
Thanks,
Hello,
I see the --data-checksums option for initdb. Is it possible to use
--data-checksums after the cluster has been initialized? I'm guessing
"not", but thought I'd ask.
I'm running v12 on Debian.
Thanks for any help!
-m
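For what it's worth: since v12 the bundled pg_checksums tool can enable
checksums on an existing cluster, but only while the cluster is cleanly shut
down, and it rewrites every block. A sketch assuming Debian's default cluster
layout:

```shell
pg_ctlcluster 12 main stop
/usr/lib/postgresql/12/bin/pg_checksums --enable -D /var/lib/postgresql/12/main
pg_ctlcluster 12 main start
```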
Greetings,
Is there a preferred method of counting rows?
count(*) vs count(field_name)
I have used count(*) for a long time and am hitting an inconsistency. At
least it is an inconsistency to me (I'm sure there is an explanation)...
INCORRECT OUTPUT (not desired results)
$ select calendar.entry
On Mon, Feb 1, 2021 at 5:57 PM Rob Sargent wrote:
[...]
>
> You got one null from count(*) likely.
>
What is count(*) counting then? I thought it was rows.
-m
On Mon, Feb 1, 2021 at 6:35 PM Tom Lane wrote:
> Matt Zagrabelny writes:
> > On Mon, Feb 1, 2021 at 5:57 PM Rob Sargent
> wrote:
> >> You got one null from count(*) likely.
>
> > What is count(*) counting then? I thought it was rows.
>
> Yeah, but count(id)
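The distinction Tom is drawing can be seen in a self-contained example:
count(*) counts rows, while count(expr) counts only the rows where expr is
not NULL.

```sql
SELECT count(*) AS all_rows, count(id) AS non_null_ids
FROM (VALUES (1), (2), (NULL)) AS t(id);
-- all_rows = 3, non_null_ids = 2
```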
On Mon, Feb 1, 2021 at 7:19 PM Guillaume Lelarge
wrote:
>
>
> You're doing a left join, so I guess there's no row where
> call_records.timestamp::date = 2020-08-30. That would result with a NULL id.
>
>
Thanks for the excellent analysis everyone. I appreciate it!
Here is the documentation (for a
Greetings PG-users,
Scenario:
I am using FreeRADIUS to insert data to a PG database.
One of the pieces of data is an array of key value pairs. The array is
encoded with quoted-printable [0].
I'd like to ideally have the key-value pair put into a JSONB field.
Here is an example of the data enco
At the risk of starting a +1 snowball or a divergent argument...
On Tue, Jan 31, 2023 at 10:16 AM Raymond Brinzer
wrote:
> Greetings,
>
> There is (for me) a small speed bump in psql. I think it's worth
> mentioning, minor though it is, because psql is such a polished tool
> generally, and bec
Greetings,
I'm setting up my workstation to use "gss" for auth to a variety of Pg
systems on different hosts.
I'd rather not have to specify the "-h" for a connection:
psql -h db-host-1.example.com foo
I'd rather do:
psql foo
and have it know that I connect to foo on host db-host-1.example.co
On Thu, Dec 6, 2018 at 4:24 PM Alvaro Herrera
wrote:
> On 2018-Dec-06, Matt Zagrabelny wrote:
>
> > I'd rather do:
> >
> > psql foo
> >
> > and have it know that I connect to foo on host db-host-1.example.com.
> >
> > Is this possible with psq
On Fri, Dec 7, 2018 at 7:42 AM Arthur Zakirov
wrote:
> On 07.12.2018 01:34, Matt Zagrabelny wrote:
> >
> >
> > On Thu, Dec 6, 2018 at 4:24 PM Alvaro Herrera > <mailto:alvhe...@2ndquadrant.com>> wrote:
> >
> > Sure, just define a pg_service.conf
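A minimal sketch of such a service file (the filename is the default lookup
location; host and database names are taken from the thread):

```
# ~/.pg_service.conf
[foo]
host=db-host-1.example.com
dbname=foo
```

With that in place, `psql service=foo` (or `PGSERVICE=foo psql`) connects
without needing -h.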
Hey Rich,
On Tue, Feb 12, 2019 at 4:24 PM Rich Shepard
wrote:
> The query is to return the latest next_contact date for each person. Using
> the max() aggregate function and modeling the example of lo_temp on page 13
> of the rel. 10 manual I wrote this statement:
>
I use DISTINCT ON and ORDER
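A sketch of the DISTINCT ON shape (table and column names assumed from the
thread):

```sql
SELECT DISTINCT ON (person_id)
       person_id, next_contact
FROM contacts
ORDER BY person_id, next_contact DESC;  -- keeps the latest row per person
```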
n:Could not connect to server(#3) -
'PostgreSQL 15'.
Error: connection timeout expired
I have installed the latest pgAdmin4 (v7.6) but this gives the same error.
Any suggestions as to a solution would be appreciated.
Regards,
Matt
Hi,
Thanks for the messages. It looks like the problem lay in spurious "junk" in
the PATH environment variable.
Tidying that up results in pgAdmin being perfectly happy post migration.
Matt
-Original Message-
From: Adrian Klaver
Sent: 14 Sept
Greetings Pg folks,
I have a database schema (not mine) that uses bytea to represent a MAC
address. I'd like to create a view that casts that field into a MACADDR
type.
# \d lease4
Table "public.lease4"
Column │ Type │ Collation │ Nullable │
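One way is to go through the hex text form, since macaddr's input function
accepts a bare 12-hex-digit string; a sketch assuming the bytea column is
named hwaddr:

```sql
CREATE VIEW lease4_mac AS
SELECT encode(l.hwaddr, 'hex')::macaddr AS mac,
       l.*
FROM lease4 AS l;
```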
Hi Boris,
[bringing this email back to the mailing list]
A couple of questions:
1. What do the double parentheses '(( Values ))' mean?
2. What is the 'x(field)' in the column alias?
Thanks for educating me!
-m
On Thu, Sep 28, 2023 at 10:34 AM Boris Zentner wrote:
Greetings,
I have a database that I recently changed the ownership for and now I am
attempting to drop the previous owner of the database:
test_db=# drop ROLE legacy_owner;
ERROR: role "legacy_owner" cannot be dropped because some objects depend
on it
DETAIL: privileges for schema public
I don
Hi Adrian,
Thanks for the reply!
On Mon, Jun 24, 2024 at 7:03 PM Adrian Klaver
wrote:
> On 6/24/24 17:00, Matt Zagrabelny wrote:
> > Greetings,
> >
> > I have a database that I recently changed the ownership for and now I am
> > attempting to drop the pre
I am trying to model a stateful `event` table using partitions. An event
has date_raised (not null) and date_cleared (nullable) columns. An event
is in the "open" state when it has a null date_cleared; it is in the
"closed" state when date_cleared is set. Once date_cleared is set, it
won't chang
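One possible sketch of that model (column names are from the message;
everything else is assumed) partitions on the open/closed expression
directly, so an UPDATE that sets date_cleared moves the row between
partitions (supported since PostgreSQL 11):

```sql
CREATE TABLE event (
  id           bigint      NOT NULL,
  date_raised  timestamptz NOT NULL,
  date_cleared timestamptz
) PARTITION BY LIST ((date_cleared IS NULL));

CREATE TABLE event_open   PARTITION OF event FOR VALUES IN (true);
CREATE TABLE event_closed PARTITION OF event FOR VALUES IN (false);
```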
-02';
)
select * from openAndREcentlyRaisedEvents
On Fri, Jul 5, 2024 at 11:24 AM Matt Hughes wrote:
> I am trying to model a stateful `event` table using partitions. An event
> has date_raised (not null) and date_cleared (nullable) columns. An event
> is in the "open" state when it has a nul
Greetings,
I am using the \C command in psql to set a title for my table.
psql is echoing 'Title is "foo"'.
$ \C foo
Title is "foo".
How do I suppress the echo of "Title is..." from psql?
Thanks for any help!
Cheers,
-m
On Tue, Aug 31, 2021 at 11:10 AM Tom Lane wrote:
>
> Matt Zagrabelny writes:
> > $ \C foo
> > Title is "foo".
>
> > How do I suppress the echo of "Title is..." from psql?
>
> The -q switch would do it ...
Thank you, Tom!
-m
terminology might be off.
Cheers,
Matt Magoffin
[1] https://github.com/pjungwir/aggs_for_vecs
[2]
https://github.com/SolarNetwork/aggs_for_vecs/blob/feature/numeric-stats-agg/vec_to_mean_numeric.c
[3]
https://github.com/SolarNetwork/aggs_for_vecs/blob/7c2a5aad35a814dca6d9f5a
ctions as I’m trying to, does that mean in theory there isn’t anything
inappropriate about doing this as far as you know?
Cheers,
Matt
On 5/12/2021, at 9:04 AM, Tom Lane wrote:
> So that probably means that you weren't careful about allocating your
> own state data in the long-lived context (agg_context), and so it
> got freed between calls.
It turns out I wasn’t careful about setting isnull on the passed in state
argument. Aft
ork/aggs_for_vecs/blob/9e742cdc32a113268fd3c1f928c8ac724acec9f5/vec_agg_mean.c>
Cheers,
Matt Magoffin
On 15/12/2021, at 11:51 AM, Tom Lane wrote:
> Hmm, I think you're abusing the ArrayBuildState API. In particular,
> what guarantees that the astate->dvalues and astate->dnulls arrays
> are long enough for what you're stuffing into them?
The length is defined in the aggregate transition function
On 15/12/2021, at 11:51 AM, Tom Lane wrote:
> You should
> probably palloc temp arrays right here and then use construct_md_array
> directly instead of dealing with an ArrayBuildState.
OK, I gave that a go [1] this time in a vec_agg_sum() aggregate, that operates
much the same as the vec_agg_mea
On 16/12/2021, at 2:43 PM, Tom Lane wrote:
> Of course what this function is actually returning is numeric[].
> There is some code such as array_out that will look at the
> element type OID embedded in the array value, and do the right
> thing. But other code will believe the function's declared
ed to call pfree() like this. My actual function is
dealing with array input values, and for large sets of inputs I didn’t want to
grow memory use as large as the entire data set being aggregated.
Kind regards,
Matt
On 21/12/2021, at 10:25 AM, Tom Lane wrote:
> Not quite like that. Look into nodeAgg.c, which solves a similar problem
> for the transvalues themselves with code like
>
>/* forget the old value, if any */
>if (!oldIsNull && !pertrans->inputtypeByVal)
>pfre
On Wed, Dec 20, 2017 at 9:30 PM, support-tiger
wrote:
>
> No, the docs for understanding hba_conf are not good (yes we can read and
> are fairly smart) - we made suggestions the last time for several case
> examples and were ignored - okay, simplicity of pouchdb/couchdb is
> getting our atten
On Sat, Oct 5, 2024 at 11:26 AM Adrian Klaver
wrote:
> On 10/5/24 09:04, Matt Zagrabelny wrote:
> >
> >
> > On Sat, Oct 5, 2024 at 10:27 AM Adrian Klaver > <mailto:adrian.kla...@aklaver.com>> wrote:
> >
> > On 10/5/24 07:13, Matt Zagr
Hello,
I'd like to have a read-only user for all databases.
I found the pg_read_all_data role predefined role, which I granted to my RO
user:
GRANT pg_read_all_data TO ro_user;
...but I cannot connect to my database(s).
I'd like to not have to iterate over all the databases and "GRANT
CONNECT.
hieve this desire?
Thanks for the help!
-m
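For context: pg_read_all_data covers SELECT and USAGE, but CONNECT is a
separate, per-database privilege (normally held by PUBLIC unless it was
revoked), and there is no single grant that covers all databases at once. A
sketch with an assumed database name:

```sql
GRANT pg_read_all_data TO ro_user;
GRANT CONNECT ON DATABASE app_db TO ro_user;  -- repeated per database
```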
On Sat, Oct 5, 2024 at 9:02 AM David G. Johnston
wrote:
> On Saturday, October 5, 2024, Matt Zagrabelny wrote:
>
>> Hello,
>>
>> I'd like to have a read-only user for all databases.
>>
>> I found the pg_read_
On Sat, Oct 5, 2024 at 10:27 AM Adrian Klaver
wrote:
> On 10/5/24 07:13, Matt Zagrabelny wrote:
> > Hi David (and others),
> >
> > Thanks for the info about Public.
> >
> > I should expound on my original email.
> >
> > In our dev and test
On Sat, Oct 5, 2024 at 3:12 PM Adrian Klaver
wrote:
> On 10/5/24 11:24, Matt Zagrabelny wrote:
> >
>
> >
> > Nope. I create the role (via puppet) and then add the GRANT
> > pg_read_all_data TO (via puppet).
> >
>
> > What is \drgS? I don't beli
On Sat, Oct 5, 2024 at 5:43 PM Adrian Klaver
wrote:
> On 10/5/24 15:04, Matt Zagrabelny wrote:
>
> > psql (15.8 (Debian 15.8-0+deb12u1))
> >
> >
> > Where did you install it from or where are you running it?
> >
> >
> > Installed from Debian
On Tue, Nov 5, 2024 at 4:44 PM Ron Johnson wrote:
> On Tue, Nov 5, 2024 at 4:45 PM Matt Zagrabelny wrote:
>
>> Greetings Pg folk,
>>
>> I've got a legacy app connecting to a legacy Pg database (9.4).
>>
>> I'd like to upgrade the database to Pg 15
Greetings Pg folk,
I've got a legacy app connecting to a legacy Pg database (9.4).
I'd like to upgrade the database to Pg 15, but when I connect and perform
some tasks in the app, I get:
ERROR: column d.adsrc does not exist at character 331
Some searching online yields that adsrc was removed i
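pg_attrdef.adsrc was removed in PostgreSQL 12; the documented replacement is
to render the stored default expression with pg_get_expr(). A sketch of the
rewritten lookup:

```sql
SELECT a.attname,
       pg_get_expr(d.adbin, d.adrelid) AS default_expr  -- replaces d.adsrc
FROM pg_attrdef d
JOIN pg_attribute a
  ON a.attrelid = d.adrelid
 AND a.attnum   = d.adnum;
```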