lot of data and a lot of users.
I am not looking for an answer - I know that I should create dummy data
and run some timing tests. I was just wondering if someone more
experienced would wince when they look at the second SELECT, or if they
would shrug and think that it looks fine.
Any input will be appreciated.
Frank Millman
On 2019-03-24 9:25 AM, Ron wrote:
On 3/24/19 1:42 AM, Frank wrote:
Hi all
As I understand it, a general rule of thumb is that you should never
create a physical column if the data could be derived from existing
columns. A possible reason for breaking this rule is performance.
Good advice - much appreciated.
Frank
On 2019-03-24 2:41 PM, Peter J. Holzer wrote:
On 2019-03-24 10:05:02 +0200, Frank wrote:
Many thanks to Peter et al for their valuable insights. I have learned a
lot.
> So the important part here is not whether data is added, but whether
> data is changed. Sure, new transactions are
On 2019-03-25 4:06 PM, Ron wrote:
On 3/25/19 8:15 AM, Frank wrote:
It would be interesting to see what the query planner tries to do with
this:
WHERE
CASE
WHEN a.tran_type = 'ar_rec' THEN y.posted
WHEN a.tran_type = 'cb_rec' THEN w.posted
On 2019-03-25 5:11 PM, Frank wrote:
On 2019-03-25 4:06 PM, Ron wrote:
On 3/25/19 8:15 AM, Frank wrote:
It would be interesting to see what the query planner tries to do with
this:
WHERE
CASE
WHEN a.tran_type = 'ar_rec' THEN y.posted
WHEN a.tran_type
On 2019-03-25 5:44 PM, Frank wrote:
On reflection, I have not been consistent with my use of indexes, and I
think that will affect the query plan.
There are at least two issues -
1. The first table defined in the VIEW is ccc.ar_tran_inv. It has the
following index
k for.
I can supply it if that would help.
Thanks for any advice.
Frank Millman
On 2020-05-26 9:32 AM, Olivier Gautherot wrote:
Hi Frank,
On Tue, May 26, 2020 at 9:23 AM Frank Millman <fr...@chagford.com> wrote:
Hi all
I have a SELECT that runs over 5 times slower on PostgreSQL compared
with Sql Server and sqlite3. I am trying to under
On 2020-05-26 11:10 AM, Charles Clavadetscher wrote:
Hello
On 2020-05-26 10:38, Frank Millman wrote:
On 2020-05-26 9:32 AM, Olivier Gautherot wrote:
Hi Frank,
On Tue, May 26, 2020 at 9:23 AM Frank Millman <fr...@chagford.com> wrote:
Hi all
I have a SELECT that ru
On 2020-05-26 12:02 PM, Christian Ramseyer wrote:
Hi
On 26.05.20 09:22, Frank Millman wrote:
I have looked at the EXPLAIN, but I don't really know what to look for.
I can supply it if that would help.
My favorite approach to tuning Postgres queries is:
1. Run EXPLAIN ANALYZE
2.
On 2020-05-26 11:27 AM, Charles Clavadetscher wrote:
On 2020-05-26 11:10, Charles Clavadetscher wrote:
Hello
On 2020-05-26 10:38, Frank Millman wrote:
On 2020-05-26 9:32 AM, Olivier Gautherot wrote:
Hi Frank,
On Tue, May 26, 2020 at 9:23 AM Frank Millman <fr...@chagford.
On 2020-05-26 12:04 PM, David Rowley wrote:
On Tue, 26 May 2020 at 19:23, Frank Millman wrote:
The table sizes are -
my_table : 167 rows
table_1 : 21 rows
table_2 : 11 rows
table_3 : 3 rows
table_4 : 16 rows
Therefore for each tran_date in my_table there are
On 2020-05-26 1:45 PM, David Rowley wrote:
On Tue, 26 May 2020 at 22:31, Frank Millman wrote:
Thank you David. I tried that and it produced the correct result in
53ms, which is what I am looking for.
It will take me some time to understand it fully, so I have some
homework to do!
The
table
WHERE deleted_id = 0
) as a
WHERE a.row_num = 1
Do you see any problem with this?
Thanks
Frank
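The `row_num = 1` filter in the fragment above is the usual pick-one-row-per-group pattern: a ROW_NUMBER() window over a partition, keeping only the first row of each. A minimal Python sketch of the same idea, with invented customer/date columns (the real partition and ordering columns are not visible in the snippet):

```python
# Hypothetical rows: (cust_id, tran_date, balance). Keep the latest row
# per customer -- what ROW_NUMBER() OVER (PARTITION BY cust_id
# ORDER BY tran_date DESC) = 1 would select.
rows = [
    (1, "2018-02-10", 100),
    (1, "2018-03-01", 150),
    (2, "2018-01-15", 80),
]

latest = {}
for cust_id, tran_date, balance in sorted(rows, key=lambda r: r[1]):
    latest[cust_id] = (tran_date, balance)  # later dates overwrite earlier
```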
with user postgres and some password, you can set one using the method
described by Reid Thompson.
Regards,
Frank
My proposal:
https://dbfiddle.uk/?rdbms=postgres_14&fiddle=7daef32ae39b2ec7c38a83cf9e19d4ae
select id, name, size
, sum(size) over (order by id) as size_total
, ((sum(size) over (order by id) - 1) / 600) + 1 as size_group
from account
order by id, name;
Best regards
Frank
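The `size_group` arithmetic in the query can be sanity-checked outside the database; a short Python sketch using integer division (as in Postgres) and made-up rows:

```python
# Check the bucketing formula ((running_total - 1) / 600) + 1 from the
# query above: a new group starts each time the running total of `size`
# crosses a multiple of 600. The rows here are invented.
accounts = [(1, "a", 250), (2, "b", 250), (3, "c", 250), (4, "d", 500)]

total = 0
groups = []
for acc_id, name, size in accounts:
    total += size                          # sum(size) OVER (ORDER BY id)
    groups.append(((total - 1) // 600) + 1)

print(groups)                              # [1, 1, 2, 3]
```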
TS "SRP-27097" WITH FORCE;
Best regards
Frank
Hi all
Apologies if this is an FAQ, but I could not find the answer I was
looking for.
I want to run two SELECTs, one immediately after the other.
I want to ensure that no other process updates the database in between
the two.
What is the best way to achieve this?
Thanks
Frank Millman
On 2022-08-07 7:54 AM, Ron wrote:
On 8/7/22 00:30, Frank Millman wrote:
Hi all
Apologies if this is an FAQ, but I could not find the answer I was
looking for.
I want to run two SELECTs, one immediately after the other.
I want to ensure that no other process updates the database in
ing command line options.
> […] ^^
And I'm wondering, if it's what you want. Possibly your errors could come from
there…
Cheers,
Frank.
Hi Tom,
On 20.10.22 at 07:12, Tom Lane wrote:
Frank Gard writes:
On 20.10.22 at 02:58, Ron wrote:
pg_restore -v --create --clean --jobs=`nproc` -Fd --dbname=template1 $DB
What database do you want to restore your data into? As far as I know your
pg_restore command would import the data
need to do to get it to return the character(30) type?
--
Frank.
Frank Cazabon
On 15/11/2022 2:48 pm, Adrian Klaver wrote:
On 11/15/22 10:43 AM, Frank Cazabon wrote:
Please reply to list also.
Ccing list
Sorry about that; it's my first time using this list and I just assumed
I was replying to the list and that the list would then notify you
SELECT * FROM public.testFunction
On 15/11/2022 2:44 pm, Tom Lane wrote:
Frank Cazabon writes:
If however I have a function defined like this
CREATE OR REPLACE FUNCTION public.testfunction(
)
RETURNS TABLE
(
Firstname character(30)
)
LANGUAGE 'plpgsql'
AS $BODY$
BEGIN
I don't think that's necessary. I'm 100% certain that it's VFP being unable to
interpret the size of what is coming back to it, so it just gives it the biggest
type it can.
Thanks
On 15 Nov 2022 14:59:59, Ron wrote:
> On 11/15/22 12:54, Frank Cazabon wrote:
>>
>> On
On 15/11/2022 2:58 pm, Adrian Klaver wrote:
On 11/15/22 10:54 AM, Frank Cazabon wrote:
On 15/11/2022 2:44 pm, Tom Lane wrote:
Frank Cazabon writes:
If however I have a function defined like this
CREATE OR REPLACE FUNCTION public.testfunction(
)
RETURNS TABLE
I have read that PostgreSQL listens on localhost. Is it possible to have it
listen on an IP address?
Thank you.
FDG
09:12, Frank Gunseor wrote:
> > I have read that PostgreSQL listens on localhost. Is it possible to have
> > it listen on an IP address?
>
> Read:
>
>
> https://www.postgresql.org/docs/current/runtime-config-connection.html#RUNTIME-CONFIG-CONNECTION-SETTINGS
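The setting the linked docs describe is `listen_addresses` in postgresql.conf; a minimal sketch (the address values are examples, and a server restart is required for this setting to take effect):

```ini
# postgresql.conf
listen_addresses = '192.168.1.50, localhost'  # or '*' for all interfaces
port = 5432                                   # the default
```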
%p]: user=%u,db=%d,app=%a,client=%h '
log_lock_waits = yes
log_min_duration_statement = 100
log_temp_files = 0
---cut
Any idea what I missed or did wrong?
Cheers,
Frank
Hello,
Sorry for the late response.
On 06.03.24 at 16:40, Greg Sabino Mullane wrote:
On Tue, Mar 5, 2024 at 3:14 AM Frank Lanitz <fr...@frank.uvena.de> wrote:
$ pgbadger --journalctl "journalctl -u postgresql.service"
You could try adding --verbose to see if it
edger_row_id = 1))
(24 rows)
Maybe SQL Server has a way of optimising this, and there is nothing more
I can do. I can live with that. But I just thought I would ask the question.
Thanks for any advice.
Frank Millman
On 2021-03-19 10:29 AM, Thomas Kellerer wrote:
Frank Millman wrote on 19.03.2021 at 09:19:
This may be a non-issue, and I don't want to waste your time. But perhaps
someone can have a look to see if there is anything obvious I have missed.
I am writing a cross-platform accounting app
On 2021-03-19 11:04 AM, Thomas Kellerer wrote:
Frank Millman wrote on 19.03.2021 at 09:52:
I am writing a cross-platform accounting app, and I test using Sql
Server on Windows 10 and PostgreSql on Fedora 31. Performance is
usually very similar, with a slight edge to PostgreSql. Now I have a
On 2021-03-19 10:56 AM, Pavel Stehule wrote:
On Fri, 19 Mar 2021 at 9:53, Frank Millman <fr...@chagford.com> wrote:
On 2021-03-19 10:29 AM, Thomas Kellerer wrote:
> Frank Millman wrote on 19.03.2021 at 09:19:
>> This may be a non-issue, and I d
On 2021-03-19 12:00 PM, Pavel Stehule wrote:
In this query the slowest operation is query planning. You are running
tests on almost empty tables, which has no practical value. You should
test queries on tables with sizes similar to production.
Sorry about that. I hope this one is better
On 2021-03-19 12:58 PM, Frank Millman wrote:
QUERY PLAN
Merge Left Join (cost=1401.00..1401.12 rows=1 width=132) (actual
time=3.595
On 2021-03-19 4:38 PM, Tom Lane wrote:
Frank Millman writes:
However, the bizarre thing is that I have simply restored the index to
what it was in the first place. If you look at the table definition in
my original message you can see that all the columns were included in
the index. But the
On 2021-03-19 5:32 PM, Jehan-Guillaume de Rorthais wrote:
On Fri, 19 Mar 2021 14:28:27 +0200
Frank Millman wrote:
[...]
Execution of my main query has improved from 50ms to 33ms. Sql Server
takes 25ms, but this is much better than it was.
[...]
Here is the new EXPLAIN ANALYSE
On 2021-03-19 7:11 PM, Thomas Kellerer wrote:
Frank Millman wrote on 19.03.2021 at 10:16:
cl_bal selects WHERE tran_date <= '2018-03-31'.
op_bal selects WHERE tran_date < '2018-03-01'.
The second one could be written as WHERE tran_date <= '2018-02-28
We cancelled a web application upgrade part way through when it was realized
Postgres needed a version upgrade first. Using the dump file extracted from
a full dump with pg_dumpall the DB was restored with a line like :
psql -d dbname -f dump.sql
The file dump.sql was extracted from the full du
"en-US-x-icu" AS c8,
'ΣΣ Μ' ILIKE 'σς Μ' COLLATE "en-US-x-icu" AS c9;
 c0 | c1 | c2 | c3 | c4 | c5 | c6 | c7 | c8 | c9
----+----+----+----+----+----+----+----+----+----
 t  | f  | t  | t  | f  | t  | t  | t  | f  | t
(1 row)
Obviously, the ILIKE operator is really strict about matching the correct
letter at the end of the word. The regular expression operator works as
you expected.
Happy computing...
Frank
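The final-sigma behaviour above can be reproduced outside the database: Unicode case folding maps both capital 'Σ' and the final form 'ς' to 'σ', which is why a case-insensitive ICU comparison can treat the two strings as equal. A quick Python check, independent of PostgreSQL, just illustrating the folding:

```python
# Unicode case folding maps capital sigma 'Σ' and final sigma 'ς'
# to the same letter 'σ', so the strings compare equal case-insensitively.
a = "ΣΣ Μ"
b = "σς Μ"
print(a.casefold())                   # σσ μ
print(a.casefold() == b.casefold())   # True
```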
PLEASE UNSUBSCRIBE ME FROM ALL pgsql* mailing lists.
Thanks.
From: rammohan ganapavarapu
To: gp...@free.fr, pgsql-ad...@postgresql.org
Date: 11/20/2017 01:25 PM
Subject:Re: [ADMIN] Can master and slave on different PG versions?
Gilles,
Thank you, if we set
Hello,
Have you checked with something like lsof to see the open file descriptors?
Cheers,
frank
On 14.01.25 at 13:58, Sri Mrudula Attili wrote:
Hello Team,
We have a PostgreSQL VDB (virtual database - Delphix) that keeps terminating
due to "too many open files".
Below are a few
You can fix the problem with this query:
SELECT setval('chinese_price_infos_id_seq', sq.val) from ( SELECT
MAX(id) as val FROM chinese_price_infos ) sq;
But you should also check your application, because at some point the
app is inserting the id column itself instead of leaving that task to the DB.
If y
Hello. I'm not sure which replication issues you have, and I have never used
WAL-E before, but I had some issues with PostgreSQL 10 and Barman.
Try starting the primary server first; when it finishes recovery it
should start as primary. If not, go to the postgresql data
directory and rename th
If you have the database modeled, the quickest approach I can think of
is the Python framework Django. Configure the connection to the DB and
reverse-engineer it with Django; this creates the entity classes.
Then activate the administration forms and configure a form for each of the
entities (few line
You could use FDW to replicate what you need to an external server, from
the provider/primary/master to the subscriber/secondary/slave. Using
triggers on the master tables that you want to replicate, you can
execute the insert/update/delete actions on the secondary tables
through the FDW. With this app
The FDW is a PostgreSQL extension for connecting to another server from
inside a PostgreSQL server. With this solution you only need connections
from P to S and no third (external) server; just use
triggers to push the INSERT/UPDATE/DELETE information you want to
replicate from P to S through
You could try OmniDB; it is a web app but has a version that feels just like a
desktop application. It is supported by 2ndQuadrant.
This is the official website https://omnidb.org/en/
Greetings
On Fri, 22 Mar 2019 at 4:56, Christian Henz ()
wrote:
> I know I'm late to the party, but we're onl