logical/realistic reason for the database to
switch between INDEX ONLY SCAN and SEQ SCAN for the EXISTS part of the query in
these two cases.
I am not sure that it's a bug, so I'd better post in -general first.
--
Maxim Boguk
Senior Postgresql DBA
http://dataegret.com/
This could be useful as well: "To match a literal underscore or percent
sign without matching other characters, the respective character in pattern
must be preceded by the escape character."
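A minimal example of that rule (table and column names here are invented):

-- match values that literally start with foo_bar; backslash is the
-- default LIKE escape character, written explicitly here
SELECT name
FROM items
WHERE name LIKE 'foo\_bar%' ESCAPE '\';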
--
Maxim Boguk
Senior Postgresql DBA
http://www.postgresql-consulting.com/
Phone RU: +7 910 405 4718
On Wed, May 4, 2016 at 3:45 AM, Jeff Janes wrote:
> On Tue, May 3, 2016 at 3:09 AM, Maxim Boguk wrote:
> > Hi,
> >
> > I started with an empty table with an index over the
> > custom_fields | jsonb
> > field
> > defined as:
> > "idx_learne
On Tue, May 3, 2016 at 8:09 PM, Maxim Boguk wrote:
> Hi,
>
> I started with an empty table with an index over the
> custom_fields | jsonb
> field
> defined as:
> "idx_learners_custom_fields" gin (custom_fields)
> Globally, gin_pending_list_limit is set to 2MB.
n then the final
size of the GIN index is:
4265 MB
but after I performed REINDEX INDEX idx_learners_custom_fields;
the index size was reduced 15x, to 295 MB.
Is this behavior expected?
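A minimal sketch of the checks involved (the index name is from this thread;
gin_clean_pending_list() only exists in 9.6 and later):

-- current on-disk size of the index
SELECT pg_size_pretty(pg_relation_size('idx_learners_custom_fields'));

-- full rebuild; takes a lock that blocks writes to the table
REINDEX INDEX idx_learners_custom_fields;

-- on 9.6+ the fastupdate pending list can be flushed without a rebuild
SELECT gin_clean_pending_list('idx_learners_custom_fields'::regclass);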
--
Maxim Boguk
Senior Postgresql DBA
http://www.postgresql-consulting.com/
Phone RU: +7 910 405 4718
Phone AU:
oks like something strange is going on inside Linux kernel 3.16.0
memory management (it's a vanilla kernel on bare hardware, no
virtualization, swap off).
The question is: is this working as expected (in which case it's probably a
good idea to use pgbouncer even for one-shot analytical queries), or
for (less invasive than gdb).
PS: there is no DDL routinely run on the master (so there is no need for an
AccessExclusiveLock on the standby to apply DDL).
Kind Regards,
Maksym
--
Maxim Boguk
Senior Postgresql DBA
http://www.postgresql-consulting.com/
Phone RU: +7 910 405 4718
Phone AU: +61 45 218 5678
k until the final nested loop plan estimate of
700,000,000.
PS: the person2obj table contains ~1.4 billion tuples; p2o_id is the primary key.
--
Maxim Boguk
Senior Postgresql DBA
http://www.postgresql-consulting.ru/
Phone RU: +7 910 405 4718
Phone AU: +61
server doesn't start because your config file is not valid
anymore.
--
Maxim Boguk
Senior Postgresql DBA
http://www.postgresql-consulting.ru/
Phone RU: +7 910 405 4718
Phone AU: +61 45 218 5678
LinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99
website and limit their
> access to the bare minimum.
>
> Thanks in advance for any advise on this
>
> Alex
>
Hi,
For functions it's possible (read about SECURITY DEFINER); for views, no, it
isn't possible (a view is just query text stored in the database for future use
and nothing
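A minimal sketch of the SECURITY DEFINER approach (table, column, and role
names here are invented):

-- runs with the privileges of the function owner, not the caller
CREATE FUNCTION get_public_rows() RETURNS SETOF sensitive_table
LANGUAGE sql SECURITY DEFINER AS $$
    SELECT * FROM sensitive_table WHERE is_public;
$$;

-- the web role may call the function but cannot read the table directly
REVOKE ALL ON sensitive_table FROM webuser;
GRANT EXECUTE ON FUNCTION get_public_rows() TO webuser;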
On Mon, Jun 1, 2015 at 3:18 AM, Tom Lane wrote:
> Maxim Boguk writes:
> > On one of the databases under my support I found a very curious case of
> > almost endless index bloat (the index size stabilises around 100x of the
> > original size).
>
> > The table have 5
on_uduid_localid_idx | index | phoenix | clientsession | 254 MB |
public | clientsession_ukey | index | phoenix | clientsession | 254 MB |
I have never seen such behaviour on other databases, and all my attempts to get
this index bloat under control have had no effect.
If anyone h
nsion installed:
http://www.postgresql.org/docs/9.4/interactive/intarray.html
The int[] - int operator is documented in the extension documentation, as it is
part of the extension and not part of the PostgreSQL core.
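A minimal example, assuming the intarray extension is installed:

CREATE EXTENSION IF NOT EXISTS intarray;

-- intarray's int[] - int operator removes every entry equal to the
-- right-hand value
SELECT ARRAY[1,2,3,2] - 2;  -- {1,3}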
--
Maxim Boguk
Senior Postgresql DBA
http://www.postgresql-consulting.ru/
ing PostgreSQL for terabyte-scale and/or
mission-critical databases is definitely possible, but it requires very careful
design and planning (and good hardware).
Maxim Boguk
Senior Postgresql DBA
http://www.postgresql-consulting.ru/
Melbourne, Australia
stgreSQL support in-memory store (similar to Oracle 12c in-memory
> and SQL Server 2014 in-memory OLTP) ?
>
No.
8. Does PostgreSQL have temporary table support?
>
Yes, full temporary table support since the beginning.
Maxim Boguk
Senior Postgresql DBA
http://www.postgresql-consulting.
c
process), and some background process reads
these data from the queue table and merges them into the main table (again
lockless, because it's a single thread so there are no concurrent writes), and
then deletes the merged data from the queue table.
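A minimal sketch of that background merge step (all table and column names here
are invented; ON CONFLICT needs 9.5+, and each id is assumed to appear at most
once per batch):

BEGIN;
-- drain the queue and merge the drained rows in a single statement
WITH moved AS (
    DELETE FROM queue_table RETURNING id, payload
)
INSERT INTO main_table (id, payload)
SELECT id, payload FROM moved
ON CONFLICT (id) DO UPDATE SET payload = EXCLUDED.payload;
COMMIT;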
--
Maxim Boguk
Senior Postgresql DBA
http://www.postgresql-consulting.ru/
PPS: and a last suggestion: after you have finished with the "write all the
data into its own tables" stage, the application should perform ANALYZE on
these tables (or you could get weird/inefficient plans during the last stage).
--
Maxim Boguk
Senior Postgresql DBA
http://www.postgresql-consulti
ersonally, I don't expect serious progress in json/jsonb selectivity
estimators in the near future, so it is better to avoid running low-selectivity
queries against indexed json/jsonb fields.
--
Maxim Boguk
Senior Postgresql DBA
http://www.postgresql-consulting.ru/
onf and after that
use EXPLAIN (ANALYZE, BUFFERS, TIMING) to check how much time the database
spent doing IO operations.
Also try performing VACUUM ANALYZE myevents; before testing, because it seems
that you have no up-to-date visibility map on the table.
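A minimal example of both steps (myevents is the table from this thread; the
query is just a placeholder):

-- refresh statistics and the visibility map
VACUUM ANALYZE myevents;

-- "Buffers: shared hit=... read=..." in the output separates cache hits
-- from real disk reads
EXPLAIN (ANALYZE, BUFFERS, TIMING)
SELECT count(*) FROM myevents;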
However, even in the fully cached case, selecting 40% on
reate/drop database syntax to control
checkpoint behaviour sounds reasonable?
Kind Regards,
Maksym
--
Maxim Boguk
Senior Postgresql DBA
http://www.postgresql-consulting.ru/
Phone RU: +7 910 405 4718
Phone AU: +61 45 218 5678
LinkedIn: http://www.linke
>> I want to log all 'mod' statements with their execution times, and all
>> statements longer than 10ms (also with their execution times).
>
> You cannot combine things as you want. However, it seems a fairly
> minor loss - why would you care about how fast sub-10ms mods
> ran?
Trouble if I try that as
(field1>value1) or (field1=value1 and field2<value2); the row-wise form is
(field1, -field2) > (value1, -value2).
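A minimal sketch of that row-wise comparison trick for keyset pagination with a
mixed sort order (the table, columns, and starting values here are invented):

CREATE TABLE t (field1 int, field2 int);
-- expression index whose order matches (field1 ASC, field2 DESC)
CREATE INDEX t_keyset_idx ON t (field1, (-field2));

-- a single range condition instead of the OR form, so one index scan
-- can (given the matching index) satisfy it
SELECT *
FROM t
WHERE (field1, -field2) > (10, -20)
ORDER BY field1, -field2
LIMIT 100;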
--
Maxim Boguk
Senior Postgresql DBA
http://www.postgresql-consulting.ru/
Phone RU: +7 910 405 4718
Phone AU: +61 45 218 5678
Skype: maxim.boguk
Jabber: maxim.bo...@gmail
On Wed, May 2, 2012 at 2:50 PM, Tom Lane wrote:
> Maxim Boguk writes:
> > I got very inefficient plan for a simple query.
>
> It looks like the problem is with the estimate of the antijoin size:
>
> > -> Nested Loop Anti Join (cost=0.00..24576.82 rows=1
ssible case an index scan over the related index
(sb_messages_special4_key) will read exactly the same number of rows from
the table as a scan over sb_messages_special3_key.
And very likely the scan over the related index will win.
What can I do to fight that issue? (I want to keep both indexes on that
table for fas
ng pk_person2obj_counters on person2obj_counters
(cost=0.00..47110.95 rows=1212591 width=8) (actual time=0.008..997.948
rows=1212765 loops=1)
...
-> Sort (cost=6546.42..6546.98 rows=221 width=24) (actual
time=85.877..88.373 rows=7870 loops=1)
)... how could that be?
--
Maxim Boguk
Senior P
meout=high value
and (what's even better) checkpoint_timeout=0, in which case a checkpoint
happens only when all checkpoint_segments have been used.
Are there any serious drawbacks to that idea?
Is it safe to increase that limit in the source and rebuild the database? (9.0
and 9.1 case)
--
Maxim Boguk
Senior Postgresq
nly connections
2012-03-23 03:10:08.239 MSK 56317 @ from [vxid: txid:0] []LOG: streaming
replication successfully connected to primary
Is that warning harmless on 9.0.4, or should I start to worry about it?
--
Maxim Boguk
Senior Postgresql DBA.
Phone RU: +7 910 405 4718
Phone AU: +61 45 218 5678
Sk
On Tue, Feb 21, 2012 at 5:32 PM, Tom Lane wrote:
> Maxim Boguk writes:
> > While waiting for gdb/gcc on that server, I built pg_filedump on the
> > development server using the same postgresql version and created a
> > pg_filedump of the index file.
> > It can
On Tue, Feb 21, 2012 at 4:03 PM, Tom Lane wrote:
> Maxim Boguk writes:
> > There are some funny results:
>
> > hh=# VACUUM verbose agency_statistics_old;
> > INFO: vacuuming "public.agency_statistics_old"
> > INFO: index "agency_statistics_pkey"
On Tue, Feb 21, 2012 at 3:47 PM, Maxim Boguk wrote:
>
>
> On Tue, Feb 21, 2012 at 3:32 PM, Tom Lane wrote:
>
>> I wrote:
>> > OK, so that pretty much explains where the visible symptoms are coming
>> > from: somehow, the table got truncated but its pkey ind
try to populate that table with production data, I get an error:
hh=# insert into agency_statistics_old select * from agency_statistics;
ERROR: could not read block 228 in file "base/16404/118881486": read only
0 of 8192 bytes
I.e. the database sees that the index has zero rows, but an insert still fails.
Maybe I should use the pageinspect addon to see the actual index pages' content?
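A minimal sketch of that inspection, assuming the pageinspect extension (the
index name is from this thread; block 1 is just an example):

CREATE EXTENSION IF NOT EXISTS pageinspect;

-- btree metapage: root block number, tree level, and so on
SELECT * FROM bt_metap('agency_statistics_pkey');

-- statistics for one page, and the individual items stored on it
SELECT * FROM bt_page_stats('agency_statistics_pkey', 1);
SELECT * FROM bt_page_items('agency_statistics_pkey', 1);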
--
Maxim Boguk
Senior Postgresql DBA.
>
> OK, so that pretty much explains where the visible symptoms are coming
> from: somehow, the table got truncated but its pkey index did not get
> cleared out. So an insert creates an empty page zero, inserts a heap
> tuple there, tries to insert an index entry. The btree code sees there
> is a
On Tue, Feb 21, 2012 at 1:46 PM, Maxim Boguk wrote:
>
>
> On Tue, Feb 21, 2012 at 1:37 PM, Tom Lane wrote:
>
>> Maxim Boguk writes:
>> >> Do you know why the mod date on the file is 2012-02-20 12:04?
>>
>> > Cron attempted to populate the table o
On Tue, Feb 21, 2012 at 1:37 PM, Tom Lane wrote:
> Maxim Boguk writes:
> >> Do you know why the mod date on the file is 2012-02-20 12:04?
>
> > Cron attempted to populate the table once per hour after that problem
> > happened.
> > And each time it was pr
>
> > So the table file size is zero bytes (it seems autovacuum truncated that
> > table to 0 bytes).
>
> Hmmm something did, but I see no clear evidence that it was
> autovacuum.
>
> Do you know why the mod date on the file is 2012-02-20 12:04? That's
> more than two days after the error in your logs,
version            | 2
tree_level         | 2
index_size         | 4947968
root_block_no      | 295
internal_pages     | 1
leaf_pages         | 601
empty_pages        | 0
deleted_pages      | 1
avg_leaf_density   | 0.45
leaf_fragmentation | 13.14
I am out of ideas now.
Any suggestions on where and what I should look at next?
On Mon, Jan 30, 2012 at 12:02 PM, Tom Lane wrote:
> Maxim Boguk writes:
>> Seems the previous test case did not clearly demonstrate the problem I have
>> been stuck with.
>> Now a much better and closer-to-reality test case:
>
> AFAICT, these behaviors all boil down to the
Limit (cost=0.00..5392.87 rows=1000 width=37) (actual time=0.047..4.123
rows=1000 loops=1)
-> Index Scan using test_value_in2section_key on test
(cost=0.00..5392.87 rows=1000 width=37) (actual time=0.044..3.552 rows=1000
loops=1)
Total runtime: 4.460 ms
I hope that test case will make
On Mon, Jan 23, 2012 at 11:28 AM, Tom Lane wrote:
> Maxim Boguk writes:
> > But it seems that the index scan cost for very narrow/selective conditional
> > indexes is greatly overestimated, at least in some cases.
>
> I realized in connection with
> http://archives.postgresq
ctual time=98.954..98.954 rows=262754
loops=1)
Index Cond: (sections && '{208}'::integer[])
Total runtime: 632.049 ms
(9 rows)
Kind regards,
Maksym
--
Maxim Boguk
Senior Postgresql DBA.
Phone RU: +7 910 405 4718
Phone AU: +61 45 218 5678
Skype: ma
, however in practice it seems to use
the full-update path.
--
Maxim Boguk
Senior Postgresql DBA.
Phone RU: +7 910 405 4718
Phone AU: +61 45 218 5678
Skype: maxim.boguk
Jabber: maxim.bo...@gmail.com
LinkedIn profile: http://nz.linkedin.com/in/maximboguk
If they can send one man to the moon... why can
ray[1];
--RETURN NEXT _row;
--also work
--RETURN QUERY SELECT (_array[1]).*;
--error
--RETURN NEXT _array[1];
--error
--RETURN NEXT (_array[1]);
--error
--RETURN NEXT (_array[1]).*;
RETURN;
END;
$$;
>
> 2011/12/5 Maxim Boguk :
> > Some
On Mon, Dec 5, 2011 at 3:15 PM, David Johnston wrote:
> On Dec 4, 2011, at 22:58, Maxim Boguk wrote:
>
>
>
> On Mon, Dec 5, 2011 at 2:45 PM, David Johnston <
> pol...@yahoo.com> wrote:
>
>> On Dec 4, 2011, at 22:28, Maxim Boguk <
>> maxim.bo...@gmail
Some questions about pl/pgsql and arrays.
Should constructions such as:
RETURN NEXT array[1];
OR
SELECT val INTO array[1] FROM ...;
not work?
At least the documentation about RETURN NEXT says:
"RETURN NEXT expression;"
and I think array[1] is a valid expression.
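For what it's worth, a minimal sketch of the workaround: copy the array element
into a plain variable first, since RETURN NEXT accepts a simple variable (the
sometable table and the function here are invented):

CREATE OR REPLACE FUNCTION iter_rows() RETURNS SETOF sometable
LANGUAGE plpgsql AS $$
DECLARE
    _array sometable[];
    _row   sometable;
BEGIN
    SELECT array_agg(t) INTO _array FROM sometable t;
    FOR i IN 1 .. coalesce(array_length(_array, 1), 0) LOOP
        _row := _array[i];  -- plain assignment works
        RETURN NEXT _row;   -- RETURN NEXT with a simple variable works
    END LOOP;
    RETURN;
END;
$$;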
--
Maxim Bo
On Mon, Dec 5, 2011 at 2:45 PM, David Johnston wrote:
> On Dec 4, 2011, at 22:28, Maxim Boguk wrote:
>
> > Hi.
> >
> > Is there any way to combine WITH and WITH RECURSIVE into a single query?
> >
> > Something like:
> >
> > WITH t AS (so
complicated WITH RECURSIVE iterator in a
real task.
--
Maxim Boguk
Senior Postgresql DBA.
Hi.
Is there any way to combine WITH and WITH RECURSIVE into a single query?
Something like:
WITH t AS (some complicated select to speed up recursive part),
RECURSIVE r AS
(
...
UNION ALL
...
)
?
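A minimal sketch of how this is usually written: RECURSIVE is a property of the
whole WITH clause, so plain CTEs can sit alongside the recursive one:

WITH RECURSIVE
t AS (
    SELECT 1 AS seed          -- stands in for the complicated select
),
r AS (
    SELECT seed AS n FROM t   -- non-recursive term
    UNION ALL
    SELECT n + 1 FROM r WHERE n < 10   -- recursive term
)
SELECT * FROM r;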
--
Maxim Boguk
Senior Postgresql DBA.
On Fri, Dec 2, 2011 at 3:19 AM, Tom Lane wrote:
> Maxim Boguk writes:
> > I created a special custom aggregate function to append arrays, defined as:
> > CREATE AGGREGATE array_accum (anyarray)
> > (
> > sfunc = array_cat,
> > stype = anyarray,
> >
{"(1,2)","(2,3)","(1,2)","(2,3)"}
What am I doing wrong? And how do I create a correct version of such an
aggregate function?
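For reference, a minimal sketch of array_accum as the PostgreSQL documentation
of that era defines it; it aggregates elements with array_append rather than
concatenating arrays, and the explicit initcond gives it a well-defined empty
starting state:

CREATE AGGREGATE array_accum (anyelement)
(
    sfunc = array_append,
    stype = anyarray,
    initcond = '{}'
);

-- SELECT array_accum(x) FROM generate_series(1, 3) AS g(x);  -- {1,2,3}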
--
Maxim Boguk
Senior Postgresql DBA.
Phone RU: +7 910 405 4718
Phone AU: +61 45 218 5678
Skype: maxim.boguk
Jabber: maxim.bo...@gmail.com
Link
-pages-per-vacuum=1
Table size (pages): 113574 -> 23594
Index size (pages): 26364 -> 32242 (index bloat: 22%)
real    12m10.300s
This run was performed with default options. Only 3.5 times slower than VF
(VACUUM FULL).
Thank you for your time. I hope my tool can help someone.
And of course I woul
where relname='area_pk';
oid
---
18933
(1 row)
Now I am out of ideas.
The main thing that made me worry is that the second slave shows the same
issue, so it is not a single random error.
Because one of the slaves is a pure backup, I can easily experiment with it.
But I need hints on what to look at next.
PS:
adding 'on update' trigger on entity_log table.
--
SY, Maxim Boguk
Tom Lane wrote:
Maxim Boguk writes:
So I have two theories (just waving hands of course):
1) integer overflow somewhere in cost calculation
Costs are floats, and in any case you're not showing costs anywhere near
the integer overflow limit...
2) floating-point rounding errors (because cost very
Maxim Boguk writes:
Tom Lane wrote:
It does know better than that. I'm wondering if the single-column index
has become very bloated or something. Have you compared the physical
index sizes?
The table was freshly loaded from a dump on a test server... so no index bloat
for sure...
As for comp
Tom Lane wrote:
Maxim Boguk writes:
Somehow postgres thinks an index scan on a single-column index is slower than a
scan on the 4th field of a 4-column index.
It does know better than that. I'm wondering if the single-column index
has become very bloated or something. Have you compared the phy
Aggregate (cost=72239.92..72239.93 rows=1 width=0) (actual
time=1386.038..1386.038 rows=1 loops=1)
-> Index Scan using test_table_last_change_time on test_table (cost=0.00..68193.49 rows=1618568 width=0) (actual time=0.057..1228.817
rows=1627437 loops=1)
Index Cond: (last_ch
Alvaro Herrera wrote:
Maxim Boguk wrote:
Sorry, with all my respect to you, it looks like you are wrong.
The difference is that the plan with -1 does not need to sort the
output, because it comes sorted out of the index; so the execution can
be stopped as soon as 5 tuples have come out. With NULL
)
-> Index Scan using cluster_weight_2 on cluster_weight (cost=0.00..1334.41
rows=26435 width=28) (actual time=0.053..0.065 rows=5 loops=1)
Index Cond: (rubric_id = (-1))
Total runtime: 0.133 ms
(4 rows)
And the plan becomes normal. So the issue is not with too many NULLs in my
dataset
see that plan here... and the performance difference is over 1000x.
It looks like when indexed search over NULLs was added, the planner was not
taught the right way to use such an index for "where something is NULL order by
... limit ..." queries.
Thanks for any responses and sorry for my not-so-good En
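A minimal sketch of the workaround shape, using the thread's cluster_weight
table (the weight column and the values here are assumed for illustration):

-- a partial index whose order matches the query lets the scan stop
-- after LIMIT rows
CREATE INDEX cluster_weight_null_rubric_idx
    ON cluster_weight (weight)
    WHERE rubric_id IS NULL;

SELECT *
FROM cluster_weight
WHERE rubric_id IS NULL
ORDER BY weight
LIMIT 5;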
view where total=990 or total=991 or total=992 or total=993 or total=994 or total=995 or total=996 or total=997 or total=998 or total=999 or total=1000 or
total=1001 or total=1002 or total=1003 or total=1004 or total=1005 or total=1006 or total=1007 or total=1008 or total=1009 or total=1010;
-> Seq Scan on table2 (cost=0.00..1097.00 rows=333
width=0) (actual time=0.025..14.725 rows=898 loops=50)
Filter: ((fk = $0) AND (random() > 0.1::double
precision))
Total runtime: 801.021 ms
(9 rows)
--
Maxim Boguk
algorithm?
Or is there any way to remove the limits on default_statistics_target?
Thanks for the help.
PS: sorry for my bad English.
--
Maxim Boguk
ts in DB.
In all other respects the DB works well 24x7 without any other issues
(and because of the 24x7 requirements I cannot stop the DB and drop the user
from single-user mode).
Any ideas, or what else should I check?
--
Maxim Boguk
situation (after a postgres restart the situation
becomes bad again in a few days).
--
Maxim Boguk