------
[{"a": 1, "b": -2, "c": 1}, {"a": 2, "b": -2, "c": 2}, {"a": 2, "b": -1, "c":
3}, {"a": 2, "b": -2, "c": 4}]
(1 row)
To understand both queries better, it probably helps to take out the
jsonb_agg calls and look at the separate objects from the array. Add the
original obj back in for comparison, if you like.
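As a minimal sketch of that, with inline data standing in for the real table
(the column name obj is assumed from the thread):

select elem
from (values ('[{"a": 1, "b": -2, "c": 1}, {"a": 2, "b": -2, "c": 2}]'::jsonb)) t(obj)
cross join lateral jsonb_array_elements(obj) e(elem);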
I typically use the documentation pages for the JSON functions and the one on
aggregate functions, where the JSONB aggregates are located:
https://www.postgresql.org/docs/16/functions-json.html
https://www.postgresql.org/docs/16/functions-aggregate.html
And if you’re not familiar with dollar quoting:
https://www.postgresql.org/docs/16/sql-syntax-lexical.html#SQL-SYNTAX-DOLLAR-QUOTING
Alban Hertroys
--
There is always an exception to always.
"2023-06-12T19:54:39Z"
}
]$$::jsonb) replacement
)
select *
from dollar6
cross join lateral jsonb_array_elements(replacement) r
where (r->>'start')::timestamptz <= current_timestamp;
There are probably other ways to attack this problem; this is the one I came up
with.
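For reference, a self-contained sketch of that pattern with made-up
replacement data (the real query had more context before this snippet):

with dollar6(replacement) as (
    select $$[
        {"value": "old", "start": "2023-01-01T00:00:00Z"},
        {"value": "new", "start": "2023-06-12T19:54:39Z"}
    ]$$::jsonb
)
select r->>'value', (r->>'start')::timestamptz
from dollar6
cross join lateral jsonb_array_elements(replacement) r
where (r->>'start')::timestamptz <= current_timestamp;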
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
‘default’ partition that
gets detached at step 7, after which you can insert+select those from the
default into the appropriate partitions?
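A rough sketch of what I mean, with made-up names:

alter table t detach partition t_default;
-- create or attach the proper partitions here, then:
insert into t select * from t_default;
drop table t_default;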
But you were going to test that first anyway, obviously.
Alban Hertroys
--
There is always an exception to always.
The documentation for PostgreSQL says this: "You should ensure that the
join produces at most one candidate change row for each target row.", which
also seems to imply that you shouldn’t have duplicates.
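If duplicates in the source are the problem, one way to satisfy that
requirement is to de-duplicate in the USING clause; a sketch with made-up
names:

merge into target t
using (
    select distinct on (id) id, val
    from source
    order by id, updated_at desc
) s on t.id = s.id
when matched then update set val = s.val
when not matched then insert (id, val) values (s.id, s.val);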
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
w format, while
you can add any new partitions in the new format.
I suspect it’s not allowed, but perhaps worth a try.
Alban Hertroys
--
There is always an exception to always.
nd), or something in
your data changed. My bet is on the latter.
For example, in FY25 the value of NCD_EXCL."Order Quantity" is 'NCD Valid
FY25', which doesn’t match your LIKE expression. Even something like a
trailing space in the value could be enough.
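To illustrate (your actual pattern may differ):

select 'NCD Valid FY25 ' like 'NCD Valid FY25';         -- false: the trailing space breaks the match
select rtrim('NCD Valid FY25 ') like 'NCD Valid FY25';  -- true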
Alban Hertroys
--
There is always an exception to always.
constraints (which you
should probably have on this table anyway), which would allow you to handle
such cases in the application. Such constraints can raise exceptions in your
code that need handling.
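A sketch of such a constraint, assuming a range column named period and a
room_id to scope it (your names will differ):

create extension if not exists btree_gist; -- needed for the = operator below
alter table bookings
    add constraint bookings_no_overlap
    exclude using gist (room_id with =, period with &&);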
So I say, at least put an exclusion constraint on that table if you didn’t
already, and then decide what appro
e transactions get aborted and rolled back,
putting us back at the 1/4th of space in use situation.
Have you been able to catch your shared memory shortage in the act? I suspect
that the stats you showed in your message were those after rollback.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
>> Craig
>
Is this a single INSERT statement with multiple tuples after VALUES? Then
perhaps an earlier line (my bet would be on line 487) accidentally ends with a
semi-colon instead of a comma?
Something like this:
INSERT INTO table (col1, col2, ..., coln) VALUES
(..., ..., ),
(..., ..., ),
(..., ..., ); -- <-- This terminates the INSERT
(..., ..., ); -- <-- Now this line makes no sense
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
Unfortunately, there’s no mode in psql that allows you to import
an SQL file and step through the statements one by one. That would be helpful
in your case I think. But maybe someone on the list has ideas about that?
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
entire database from that backup
> and then insert all new table rows since I have saved all the scripts.
If you end up in the same situation again after doing that, then you know at
least it’s repeatable and can analyse how you got there.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
I don't think there are any open-source initiatives (unfortunately); they’re
all commercial products AFAIK, and not cheap. With a suitable use-case they can
be rather valuable tools too though.
Regards,
Alban Hertroys
--
If you want to poke fun at someone,
it helps
if they have a pointy head.
OR (coalesce(upper(i.dates), 'infinity') =
coalesce(upper(e.dates), 'infinity')
AND coalesce(lower(i.dates), '-infinity') >
coalesce(lower(e.dates), '-infinity'))
)
);
 id | value | dates
----+-------+-------------------------
  1 | b     | [2010-01-01,)
  2 | d     | [2010-01-01,2021-01-01)
  3 | g     | [2013-01-01,)
  4 | j     | [2010-01-01,2015-01-01)
(4 rows)
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
the next attempt.
In PostgreSQL this also works for almost all DDL statements (CREATE TABLE, DROP
TABLE, TRUNCATE TABLE, etc.), which is one of the features about this database
that I really appreciate - some big names don’t have that.
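For example:

begin;
create table ddl_demo (id integer);
rollback;
-- ddl_demo is gone, as if it was never created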
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
On Fri, 22 Mar 2024 at 15:01, Nick Renders wrote:
>
> We now have a second machine with this issue: it is an Intel Mac mini
> running macOS Sonoma (14.4) and PostgreSQL 16.2.
> This one only has a single Data directory, so there are no multiple
> instances running.
>
I don't think that having a
On Mon, 4 Mar 2024 at 13:46, Francisco Olarte
wrote:
> On Mon, 4 Mar 2024 at 13:10, wrote:
> > According to the documentation, Table 9.31, IMHO both comparisons should
> > produce the same results, as
>
> > timestamp - timestamp → interval
> > timestamp + interval → timestamp
> Your problem may
mn.
> So, creating the index itself took ~2hrs+ and the index size now shows as
> ~116GB.
>
> Create index idx1 on TAB1(ID)
Are your duplicates exact duplicates? Or is there an order of preference among
them?
And if so, what really makes those rows unique?
That matters for soluti
de to recreate the caches from
scratch from source data. Having custom code in there not under control of the
3rd party application breaks that guideline.
If they’re necessary, then so be it, but I can’t shake the feeling that we can
achieve this without custom code in the database.
Regards,
Alban Hertroys
--
There is always an exception to always.
t; "
I didn’t succeed in calling SET LOCAL TIMEZONE from within the function. Could
be I missed something, then Google (stackoverflow) pointed me to set_config().
I did manage to apply it to the second function header, which I think behaves
such that the time zone change stays within func
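A minimal sketch of the function-header variant (the function itself is made
up); the setting reverts automatically when the function returns:

create or replace function local_time_in_utc() returns text
language sql
set timezone to 'UTC'
as $$ select now()::timestamp::text $$;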
migrated onto a new ARM-based architecture? In that case the Homebrew uninstall
scripts won’t even work anymore - at least not w/o Rosetta 2 - as they’re
Intel-based too.
A migration assistant can also work too well, I found.
Alban Hertroys
--
There is always an exception to always.
obviously.
Now of course there are only 2 hours a year where this happens. Our data
scientists chose to ignore the problem for simplicity’s sake and be slightly
off with their numbers on those dates.
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
calendar and a (quarter of an) hour per UTC day
table, but materialising that with some indexes may perform better (at the cost
of disk space). I do materialise that currently, but our database server
doesn’t have a lot of memory so I’m often not hitting the cache and performance
suffers a bit (infrastructure is about to change for the better though).
Regards,
Alban Hertroys
--
There is always an exception to always.
o i get that all these columns that are joined are aligned, meaning if
> it starts with 1 in one column it must be 1 in the other columns. Or how
> would you assign unique keys in Postgres?
Are you perhaps asking how to define FOREIGN KEY CONSTRAINTs?
https://www.postgresql.org/docs/15/
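In case that is what you mean, a minimal sketch:

create table customers (
    id integer primary key
);
create table orders (
    id integer primary key,
    customer_id integer not null references customers (id)
);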
_administrative takes up a large part of those 164G, then you probably
don’t have enough space for a 10x multiplication in size from the original
table to the new table. And that happening looks entirely possible from the
information you provided.
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
> On 26 Aug 2023, at 11:31, pan snowave wrote:
(…)
> pg_indent.conf
>
> test rootcce
If that is indeed the name of the file, that would explain your problem. No
doubt that it should be named pg_ident.conf instead, without the 'n'.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
could
get a higher cost than using a less optimal (costlier) index that’s already
cached.
Regarding lowering random_page_cost: if your index files are on SSD storage,
lowering it to a realistic value could sufficiently reduce the cost of loading
that uncached index into memory, evicting the index it was using in the above
plan to make room (unless other active sessions are using it).
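A session-local way to test that, without touching the server configuration
(table, column and value are placeholders):

begin;
set local random_page_cost = 1.1; -- a value often suggested for SSD storage
explain (analyze, buffers) select * from your_table where your_column = 42;
rollback; -- SET LOCAL does not survive the transaction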
Alban Hertroys
--
There is always an exception to always.
If you’re simply trying to connect to the database named postgres as database
user postgres, you can instead use the command 'psql -U postgres postgres'.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
he data changes seldom.
Not in practice, AFAIR, but I was thinking of a solution like this for small
matrices (Sudokus; I hate the things, but I need solutions to a few of them to
help me with a number of test scripts).
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
current/2023-April/003446.html
Regards,
Alban Hertroys
--
There is always an exception to always.
your statement violating a not null
constraint.
It doesn’t matter whether you insert first or update first, either operation is
going to violate that constraint. You’re specifying a NULL value for a column
that doesn’t accept that because it has a NOT NULL constraint. That is your
problem.
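In miniature:

create table t (col text not null);
insert into t (col) values (null);
-- ERROR:  null value in column "col" of relation "t" violates not-null constraint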
Alban
e: Fix your design.
Regards,
Alban Hertroys
--
There is always an exception to always.
the same minimum number (namely 0).
Should the result be just the first of the maximums (or minimums) through some
selection criterion (such as their alphabetical order), should that give each
of the tied results, or should there be a means to define that behaviour?
I suppose a combination with FIRST and LAST could solve that issue?
Regards,
Alban Hertroys
--
There is always an exception to always.
> On 3 Mar 2023, at 20:32, Thorsten Glaser wrote:
>
> On Fri, 3 Mar 2023, Alban Hertroys wrote:
>
>> You can rewrite that into something like this:
>>
>> select jsonb_build_object('opening_times',
>> obj
>> ORDER BY
>>
> On 3 Mar 2023, at 0:02, Thorsten Glaser wrote:
>
> On Tue, 28 Feb 2023, Alban Hertroys wrote:
>
>> Perhaps you can use a lateral cross join to get the result of
>> jsonb_build_object as a jsonb value to pass around?
>
> I don’t see how. (But then I’ve not
;from_hour',
> jsonb_build_object(
> 'weekday', cot.weekday,
> 'from_hour', cot.from_hour,
> 'to_hour', cot.to_hour)->>'to_hour')
> )
>
> Isn’t that more like it?
Perhaps you can use a lateral cross join to get the result of
jsonb_build_object as a jsonb value to pass around?
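A sketch of that suggestion; the table name is a guess, the columns are taken
from your query:

select obj
from customer_opening_times cot
cross join lateral jsonb_build_object(
    'weekday',   cot.weekday,
    'from_hour', cot.from_hour,
    'to_hour',   cot.to_hour
) o(obj);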
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
> On 9 Feb 2023, at 18:35, Dominique Devienne wrote:
>
> On Thu, Feb 9, 2023 at 5:37 PM David G. Johnston
> wrote:
> On Thu, Feb 9, 2023 at 9:28 AM Alban Hertroys wrote:
> > On 9 Feb 2023, at 16:41, Dominique Devienne wrote:
> > Now we'd like to do the same
libpq, but a pure SQL or PL/pgSQL demo would
> still help (I think).
This works:
=> select (1, 'one'::text) in ((1, 'two'::text), (2, 'one'::text), (1,
'one'::text), (2, 'two'::text));
 ?column?
----------
 t
(1 row)
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
t a 'message' was sent or none received
it.
Algorithms like Tarry, Lai-Yang or the Echo algorithm solve this by adding
communication between those processes about messages in transit.
Alban Hertroys
--
There is always an exception to always.
You could just calculate them and even omit storing
them in a physical table.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
enised version after the upgrade.
It is a bit of a hassle, as you need to remember to do that before an upgrade,
but at least you’d have something…
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
after of the output of pg_get_functiondef, applied to
the stored diff?).
I’m not so sure that would work for auditing, but that seems to have been
tackled down-thread.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
convert that to a character string yourself.
See for example:
https://stackoverflow.com/questions/341384/how-to-convert-an-interval-like-1-day-013000-into-253000
That seems unnecessarily complicated, perhaps there is/could be a more
convenient method? I’m sort of thinking of a "relative timestamp offset" type,
that tracks an exact difference relative to a given timestamp?
Alban Hertroys
--
There is always an exception to always.
er1 to
> Server2.
Bit of a wild idea; I haven’t used SSH tunnels in years, so I’m a bit fuzzy on
the details:
Can you create an SSH tunnel to the new machine from the VM, then pipe that to
an SSH connection from a machine that does have enough space to dump?
And then vice versa to the new mac
You could either pass them on, or they (probably) don’t make sense in
the context of the generic function.
Alban Hertroys
--
There is always an exception to always.
See section 9.16.2 on:
https://www.postgresql.org/docs/current/functions-json.html#FUNCTIONS-SQLJSON-OP-TABLE
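For a taste of the path operators described there (made-up data):

select jsonb_path_query('{"prices": [10, 25, 40]}', '$.prices[*] ? (@ > 20)');
-- returns 25 and 40, one row each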
A recursive query is another possible solution. It would probably perform far
worse, but I find them more rewarding to write. Some people prefer Sudoku.
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
select *
from cte
order by hierarchy;
Function parent() is made-up. It would return the parent node of a node, so
that there is some way to connect the different parts in the hierarchy. I guess
xpath() could fulfil that purpose, but I have no way of testing that hypothesis.
I hope that’s a good enough starting point for you?
Alban Hertroys
--
There is always an exception to always.
> On 18 Apr 2022, at 11:56, Pól Ua Laoínecháin wrote:
(…)
> All of the code below is available on the fiddle here:
>
> https://dbfiddle.uk/?rdbms=postgres_13&fiddle=0cc20c9081867131260e6e3550bd08ab
(…)
> OK, grand, now I wish to perform a RECURSIVE CTE on it. So, I start by
> trying someth
be a good idea
to store that in a way optimised for that. TimescaleDB comes to mind, or arrays
as per Pavel’s suggestion at
https://stackoverflow.com/questions/68440130/time-series-data-on-postgresql.
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
s:
=> with x as (
select '{ "x": 1, "y": 2 }'::jsonb
union all
select '{ "y": 2, "x": 1 }'::jsonb
)
select row(x.jsonb)::text, md5(row(x.jsonb)::text) from x;
           row            |               md5
--------------------------+----------------------------------
 ("{""x"": 1, ""y"": 2}") | d5a6dbdec7a5bfe0dc99e090db30322e
 ("{""x"": 1, ""y"": 2}") | d5a6dbdec7a5bfe0dc99e090db30322e
(2 rows)
(2 rows)
Alban Hertroys
--
There is always an exception to always.
Don’t use this approach with JSON (as opposed to JSONB) type fields though; a
single extra space in the JSON structure would already lead to a difference, as
would other formatting differences.
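A quick demonstration:

select '{ "x": 1 }'::json::text  = '{"x":1}'::json::text;  -- false: json keeps the input text
select '{ "x": 1 }'::jsonb::text = '{"x":1}'::jsonb::text; -- true: jsonb normalises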
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
       | instant                                       | empty range result | instant range result
-------+-----------------------------------------------+--------------------+----------------------
 empty | ["2000-01-10 00:00:00","2000-01-10 00:00:00"] | false              | true
(1 row)
As I re
on downloading this data to the
R script, would it help to rewrite it in PL/R and do (part of) the ML
calculations at the DB side?
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
single-row example, but if you had a table full of data, you
would now have locked all rows with the value t1c2 = 100 for update. If that
update never happens (or the locking doesn’t get rolled back), well…
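The pattern in question, sketched (the table name t1 is assumed from the
thread):

begin;
select * from t1 where t1c2 = 100 for update;
-- all matching rows are now locked; concurrent writers block here until this
-- transaction commits or rolls back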
Regards,
Alban Hertroys
--
There is always an exception to always.
hierarchy on a reference timestamp. That
performed adequately on a production data warehouse, as long as you
sufficiently constrained the inputs. You can join such a function (laterally)
to some other data set too.
Regards,
Alban Hertroys
--
There is always an exception to always.
> On 26 Jul 2021, at 17:52, Alban Hertroys wrote:
> Something like this:
>
> with recursive foo (id, parent, children_ids) as (
> select id, parent, null::text
> from tree t
>where not exists (
> select 1 from tree
t.id, t.parent
, f.id || case f.children_ids when '' then '' else ',' end ||
f.children_ids
from foo f
join tree t on f.parent = t.id
where f.parent <> 0
;
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
e, but that’s the gist of it.
If that project_csv column gets populated by some external application, you
could keep the link-table updated with insert/update/delete triggers.
Alternatively, a writable view replacing public.projects may be a possibility.
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
On 2021-07-08 13:30, Ron wrote:
> Thus, the bigTable PK must be on id, columnX, (No, I don't like it
> either.)
That's not entirely true. You can keep the PK on id if you additionally create
a unique constraint on (id, columnX). That way, you can at least be certain
that the uniqueness of the PK remai
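The suggestion in SQL, using the names from the thread:

alter table bigTable add constraint bigtable_id_columnx_key unique (id, columnX);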
ot;,"2021-06-20 16:00:00+01")
You could probably achieve this by using window function lag() over (order by
booking_id), in combination with a case statement when the range from the
previous row overlaps the current range.
That would only solve the case for immediately subsequent ro
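A sketch of that, assuming a bookings table with a tstzrange column named
booked:

select booking_id, booked,
       case when lag(booked) over (order by booking_id) && booked
            then 'overlaps previous booking'
       end as remark
from bookings;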
select qw.*, coalesce(substring(qw.mm, 1, 1), '') as mm
Or even:
select qw.*, coalesce(left(qw.mm, 1), '') as mm
Regards,
Alban Hertroys
--
There is always an exception to always.
y key column
candidate.
Now, of course, the OP could have a case where their geometries are guaranteed
to be unique regardless, but they’d better make sure before adding them to the
PK.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
---
PostgreSQL 11.11 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5
20150623 (Red Hat 4.8.5-44), 64-bit
(1 row)
Regards,
Alban Hertroys
P.S. Sorry about the company disclaimer below; there is nothing I can do
about that.
Alban Hertroys
D: 8776 |M: |T: +31 (0)53 4888 888 | E
foo;
>
> This test is equivalent to "col1 = null" which will always fail.
> You could try something like
>
> CASE WHEN col1 IS NULL THEN ... ELSE ... END
>
> Although I think the particular thing you're doing here would
> be better solved with COALESCE(col1, 'z').
>
> regards, tom lane
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
ary, I'm looking for a generic approach here.
We're on PG 11(.9), so the enhancements made to JSONB operations in PG12
are not (yet) available to us. This is a 3TB database w/o replicas, so
upgrading it is a little scary and requires a maintenance window that
would be sufficient t
ain, this is what I
>> get:
>>
>> CREATE OR REPLACE VIEW myview
>> AS SELECT product_id,
>>        product_acronym
>>   FROM products
>>  WHERE product_acronym = 'ABC'::text;
>>
>> So, the formatting changed, keywords are capitalized, the comment I added in
>> the from-part has gone and the text constant 'ABC' changed to 'ABC'::text.
>
Alban Hertroys
--
There is always an exception to always.
ly trivial to
> create 50 or 1000 partitions to break up the range of values allowed by your
> field definition.
Alban Hertroys
--
There is always an exception to always.
know
about the existence of said column; overwriting an existing value that some
other application put there looks like a problem to me. But of course, that
depends a lot on what you’re trying to achieve.
What is your use-case where that is not the desired behaviour? Or are we
talking about a mixe
the common approach
> to this problem?
>
> BR
> Dirk
Can't you use column defaults to handle these cases?
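For instance, if the goal is to fill audit-style columns automatically (names
made up):

alter table t
    alter column created_at set default now(),
    alter column created_by set default current_user;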
Alban Hertroys
--
There is always an exception to always.
lap and assign a group number but i'm not sure how to accomplish
> that.
You could date_trunc those timestamps to the minute and group on that.
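For instance (assuming a samples table with a sampled_at timestamp):

select date_trunc('minute', sampled_at) as minute, count(*)
from samples
group by 1
order by 1;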
> there's also no guarantee an entire rack is full of samples so some "cells"
> of display might be null. i think that makes the use of tablefunc crosstab a
> little harder. if i remember correctly, it does not handle missing values
> well. i'm open to any pivoting strategy.
Many reporting tools have features to support just that. We use WebFOCUS, which
calls those ACROSS columns. It’s a common requirement in reporting.
Alban Hertroys
--
There is always an exception to always.
pg_hab_file_rules:
>
> postgres=# table pg_hba_file_rules;
Also:
postgres=# show listen_addresses;
postgres=# show port;
Those will tell you whether the server is listening on the network and on the
expected port.
Alban Hertroys
--
There is always an exception to always.
a function or in a do-block.
We do this in plpgsql, but that’s mostly because this code is part of our ETL
process and it has to perform some other logic on the same data anyway. I could
look up our code for you tomorrow, but that looks to be a busy day, so I can’t
promise.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
On 11 Nov 2020, at 11:15, Jitendra Loyal wrote:
>
>
> Thanks Nikolay
>
> I read that but is there a way to meet the above requirement. And I will like
> to add that IS NULL and IS NOT NULL should evaluate to true/false. These
> operators are made for this and should not be returning NULL.
You could also do this using subqueries instead of CTEs; those may perform
better, as CTEs act as optimisation fences.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
"Srinivasa T N" wrote on 01/10/2020 11:47:33:
> On Thu, Oct 1, 2020 at 2:47 PM Alban Hertroys <
> alban.hertr...@apollovredestein.com> wrote:
> Hi all,
>
> We're seeing the FATAL error message from the subject pop up in our
> logs at regular intervals, b
r 15 30646 postmaster
6482 postmaster 15 30645 postmaster
6482 postmaster 15 30659 postmaster
6482 postmaster 15 30658 postmaster
6482 postmaster 15 30657 postmaster
6482 postmaster 15 30656 postmaster
6482 postmaster 15 30655 postmas
On Thu, 3 Sep 2020 at 20:59, Michael Lewis wrote:
> It seems like you are maybe wanting this- If the previous row is the same,
> then get the previous row's run_nr. If it is different, then increment.
>
> case when lag( property_A ) over() = property_A and lag( property_B )
> over() = property_B
On Thu, 3 Sep 2020 at 16:01, Tom Lane wrote:
> Alban Hertroys writes:
> > As stated above, I want to enumerate the runs, starting at 1 and
> > incrementing by 1 every time a partition from the 'run' window closes,
> > Is there a way to achieve this through win
9 | coffee | cappucino | 15:09 | 3 | 0.45
2020-09-03 15:10 | coffee | cappucino | 15:09 | 3 | 0.43
etc.
Is there a way to achieve this through window functions, or do we need to
wrap the thing in a subquery to achieve this?
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
mestamps.
> I've looked into this topic and found statements about that one
> shouldn't put too many rows into the index[1] and stuff like that or
> it will be ignored at all. But that doesn't seem to be the case for me
> according to the plan. OTOH, my index really simply is about the
> column containing the timestamp, no function reducing things to dates
> or stuff like that to reduce the number of rows.
>
>> CREATE INDEX idx_clt_rec_captured_at ON clt_rec USING btree ( captured_at );
Try this:
CREATE INDEX idx_clt_rec_captured_at ON clt_rec USING btree ( captured_at, id );
Alban Hertroys
--
There is always an exception to always.
ls_price_and_date, 7, 13 )
from articulos art
cross join lateral f_art_get_price_str( art.id ) p(ls_price_and_date);
Alban Hertroys
--
There is always an exception to always.
it also makes
the identifier case-sensitive. That’s probably why Paul suggested renaming the
table to no longer require identifier quoting - many people consider it a PITA,
but it can be used to get out of trouble like yours - and some people insist on
it, for example because it makes using camel-caps in identifiers meaningful.
Regards,
Alban Hertroys
--
There is always an exception to always.
;table function". Can you kindly clarify?
Correct. Thomas already explained the return type, but the plural form of the
function name is also an indication that it returns multiple results.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
idea
how to write that in Django):
select snapshot->'pay_definition'->k.value->'name'
from MyModel
join lateral jsonb_object_keys(snapshot->'pay_definition') k(value) on true
I don’t know how that compares performance-wise to using jsonb_each, but
perhaps worth a try. Obviously, the way it’s written above it doesn’t return
distinct values of ’name’ yet, but that’s fairly easy to remedy.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
ready to commit, and the central machine returns an ‘acknowledged’ once the
last server sent its ‘done’ event.
The challenge there is that the ‘ack’ needs to be caught and processed within
the same waiting transaction… Not sure how to do that right now - maybe through
web services, MQTT or similar.
A
uery in a more readable
and consistent way, they would spot their error pretty quickly. It’s a simple
typo.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
hat to do about that; if it
fails it will probably fail fast, and if not, then you’re looking at a one-off
situation that won’t require more than a few workarounds - after which you can
just run the update again.
Ad 1). No harm has been done, it’s a single transaction that rolled back.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
> FROM land
LEFT JOIN devisen ON land.wkz = devisen.wkz AND land.brgroup =
devisen.brgroup
> WHERE land.brgroup = :brgroupHost_for_helpland_cursor
> ORDER BY stammprio, landbez;
> #endif
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
On Mon, 2 Dec 2019 at 12:11, Laura Smith
wrote:
>
> My initial idea was something along the lines of :
> select (select sum(statcount) from stats_residence) as
> aggstat,statcount,short_name_en from stats_residence where
> aggstat>some_number;
>
One option is to move the aggregate to the where-
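Along these lines (some_number as in your example):

select (select sum(statcount) from stats_residence) as aggstat,
       statcount, short_name_en
from stats_residence
where (select sum(statcount) from stats_residence) > some_number;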
ent_timestamp <= t from (values
('-infinity'::timestamp), ('infinity'::timestamp)) x(t);
     t     | ?column? | ?column?
-----------+----------+----------
 -infinity | t        | f
 infinity  | f        | t
(2 rows)
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
recreate table files, such as CLUSTER.
Then again, if you’re just looking for the table you created last Wednesday,
that’s probably not a major concern.
Another option is to add a DDL Event trigger on create table statements and log
that to some table[3].
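A sketch of such a trigger, with made-up names:

create table ddl_log (
    logged_at   timestamptz not null default now(),
    command_tag text
);

create function log_create_table() returns event_trigger
language plpgsql as $$
begin
    insert into ddl_log (command_tag) values (tg_tag);
end $$;

create event trigger log_ct
    on ddl_command_end
    when tag in ('CREATE TABLE')
    execute function log_create_table();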
Regards,
Alban Hertroys
[1]: ht
ing an attribute of that
structure. Personally, I would invert the test like so (and then move it
before the idxused test):
if (idxcd == NULL) {
    elog( INFO, "idxcd IS NULL" );
    continue; /* Or is that fatal enough to break instead? */
}
if (!idxcd->idxused)
    continue;
Alban Hertroys
--
There is always an exception to always.
> On 25 Sep 2019, at 22:50, Alban Hertroys wrote:
>
>
>> On 25 Sep 2019, at 22:25, David Salisbury wrote:
>> db=# select name from table_name_ds_tmp where ARRAY[categoryid] = ANY (
>> select string_to_array( '200,400', ',')::bigint[] );
>
. Since the left-hand array has only 1
item and the right-hand one has two, there’s not much equality between them.
You probably meant:
select name from table_name_ds_tmp where categoryid = ANY ( select
string_to_array( '200,400', ',')::bigint[] );
Alban Hertroys
--
There is always an exception to always.
> 2) "tcp:postgresql://localhost?user=myuser&password=password" looks like
>
> "tcp:postgresql://localhost?user=myuser&password=my&pwd"
>
> and password is parsed on the & and you also end up with an extra parameter
> pwd
Perhaps it he
> On 13 Aug 2019, at 15:19, David G. Johnston
> wrote:
>
> On Tuesday, August 13, 2019, Alban Hertroys wrote:
>
> > On 13 Aug 2019, at 13:10, stan wrote:
> >
> > select
> > project.proj_no ,
>
> Removed columns that get in the way o
From experience, that either results in people reporting the wrong
financial results (several orders of magnitude too high) or blaming your query.
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
on its lag. You probably have at least that much down-time to
change the connections anyway.
Regards,
Alban.
> From: Alban Hertroys
> Sent: Monday, August 5, 2019 5:01 AM
> To: Julie Nishimura
> Cc: Adrian Klaver ;
> pgsql-general@lists.postgresql.org ;
> pgsql-genera
ons
I hope it all makes sense...
Thank you
It does to me. Now would be a good time for people to chime in if they
don't agree ;)
From: Alban Hertroys
Sent: Saturday, August 3, 2019 3:15 AM
To: Julie Nishimura
Cc: Adrian Klaver ;
pgsql-general@lists.postgresql.org ;
pgsql-general
Subject:
ly removed a database from both a and c,
you still have replicas to recover it from. And the backups, of course, but
that will not contain the data that came in after replication was paused.
I do hope the remaining 3% disk space is enough to cover all that, though...
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
your transaction cannot be allowed to commit
without a response from the other peer. In such cases it is appropriate to use
plpython, plperl, etc
All that said, I am talking about PostgreSQL here. If you’re instead using
EnterpriseDB, which does have an Oracle compatibility layer that could pe