Hello,
I got a disk array appliance with 8 disks, 1 TB each (UltraStor RS8IP4). It
will be used solely by a PostgreSQL database, and I am trying to choose the
best RAID level for it.
The top priority is read performance, since we operate on large data sets
(tables, indexes) and we do lots of searches/scans.
ce behavior was similar.
Thank you in advance,
Sergey
need and thus I know what
to do next.
Just for the record, the hardware that was used for the test has the
following parameters:
AMD Opteron 2 GHz
2 GB RAM
LSI Logic SCSI
Thanks everyone for your assistance!
Sergey
Hi
I'd like to know how to get information about which PG entities are in
kernel cache, if possible.
--
Regards,
Sergey Konoplev
You may find this query useful:
SELECT
    granted,
    count(1) AS locks,
    pid,
    now() - xact_start AS xact_age,
    now() - query_start AS query_age,
    current_query
FROM pg_locks AS l
LEFT JOIN pg_stat_activity AS a ON pid = procpid
GROUP BY 1, 3, 4, 5, 6
ORDER BY 1 DESC, 2 DESC
-- ORDER BY 4 DESC
Hello,
On 17 July 2010 12:50, Srikanth wrote:
> I am sending you the query along with the execution plan. Please help.
>
It would be better if you start with these:
http://www.postgresql.org/docs/8.4/interactive/indexes.html
http://www.mohawksoft.org/?q=node/56
--
Sergey Konoplev
Blog: http:
> ------Sequential Output------ --Sequential Input- --Random-
> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
> dbtest 64240M 78829 99 266172 42 47904 6 58410 72 116247 9 767.9 1
> --Sequentia
n a string, like
http://search.cpan.org/~avif/Tree-Trie-1.1/Trie.pm
for example.
Is there any way to improve performance?
Maybe implement indexes using a Trie algorithm?
(If so, can you please show me some URLs to start with...)
Thanks, Sergey
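One built-in option for fast prefix matching, as a rough sketch (the table
and column names here are assumptions): a btree index with the
text_pattern_ops operator class lets left-anchored LIKE patterns use the
index, which covers the common trie-style prefix lookup.

CREATE TABLE words (word text);  -- hypothetical table
CREATE INDEX words_word_prefix_idx ON words (word text_pattern_ops);
-- left-anchored LIKE can now use the index instead of a seq scan:
SELECT * FROM words WHERE word LIKE 'abc%';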
rpm -qa | grep postgres
postgresql-8.2.9-1.fc7
postgresql-libs-8.2.9-1.fc7
postgresql-server-8.2.9-1.fc7
postgresql-contrib-8.2.9-1.fc7
postgresql-devel-8.2.9-1.fc7
From: Sergey Hripchenko
Sent: Wednesday, August 20, 2008 1:17 PM
To: 'pgsql-performance@postgresql.o
Hi all,
Looks like I found a bug in view optimization.
For example, create a test view:
CREATE OR REPLACE VIEW bar AS
SELECT *
FROM (
(
SELECT calldate, duration, billsec,
get_asterisk_cdr_caller_id(accountcode) AS caller_id
FROM asterisk_cdr
) UNION ALL (
SELE
Thanks, it helps.
Shame on me %) I forgot that NULL itself has no type, and thought that each
constant in the view is cast to the resulting type at creation time.
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Wednesday, August 20, 2008 4:54 PM
To: Sergey
pg_size_pretty(pg_total_relation_size('a')) as total,
pg_size_pretty(pg_relation_size('a')) as table;
  total  |  table
---------+---------
 9792 kB | 5096 kB
203068.314 ms vs 2405.481 ms: is this behaviour normal?
Thanks !
--
Sergey Burladyan
.12.50 rows=1 width=4) (actual
time=0.000..0.000 rows=0 loops=1)
One-Time Filter: false
-> Function Scan on generate_series n (cost=0.00..12.50 rows=1
width=4) (never executed)
Total runtime: 0.053 ms
(9 rows)
Is it right?
--
Sergey Burladyan
means that, for every one of the 10669 output rows, the DB scanned the whole
item_price table, spending 20.4 of 20.8 secs there. Do you have any
indexes there? Especially on the item_id column.
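If it is missing, a sketch of the suggested index (the table and column
names are taken from the discussion above):

CREATE INDEX item_price_item_id_idx ON item_price (item_id);
-- re-run the query with EXPLAIN ANALYZE afterwards to confirm the inner
-- scan of item_price switches from a seq scan to an index scan.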
Best regards,
Sergey Aleynikov
s, I set it
running much more aggressively than in the default install.
Best regards,
Sergey Aleynikov
ze the table every 100k changed
(inserted/updated/deleted) rows. Is this enough for you? The defaults on
large tables are definitely too low. If you now get consistent times,
then you've been hit by wrong statistics.
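For reference, a per-table setting along those lines, as a hedged sketch
(the table name is an assumption):

ALTER TABLE big_table SET (
    autovacuum_analyze_scale_factor = 0,   -- disable the fraction-based trigger
    autovacuum_analyze_threshold = 100000  -- analyze after ~100k changed rows
);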
Best regards,
Sergey Aleynikov
Hello,
2009/11/25 Richard Neill :
Also, if you find odd statistics on a freshly analyzed table, try
increasing the statistics target, using
ALTER TABLE .. ALTER COLUMN .. SET STATISTICS ...
If you're using the defaults, it's again too low for large tables. Start
with 200, for example.
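A minimal sketch, assuming a table "orders" with a skewed "status" column:

ALTER TABLE orders ALTER COLUMN status SET STATISTICS 200;
ANALYZE orders;  -- the new target takes effect at the next analyze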
Best regar
Hi,
On 12 November 2011 00:18, Stephen Frost wrote:
> In a crash, unlogged tables are automatically truncated.
BTW, I wonder why they are truncated in a crash case?
--
Sergey Konoplev
Blog: http://gray-hemp.blogspot.com
LinkedIn: http://ru.linkedin.com/in/grayhemp
JID/GTalk: g
ct. Does it
truncate all the records of the table, or only several recent records?
>
> --
> Richard Huxton
> Archonet Ltd
>
--
Sergey Konoplev
Blog: http://gray-hemp.blogspot.com
LinkedIn: http://ru.linkedin.com/in/grayhemp
JID/GTalk: gray...@gmail.com Skype: gray-hemp
On 14 November 2011 14:17, Richard Huxton wrote:
> On 14/11/11 10:08, Sergey Konoplev wrote:
>>
>> On 14 November 2011 12:58, Richard Huxton wrote:
> Let's say you were doing something like "UPDATE unlogged_table SET x=1 WHERE
> y=2". If a crash occurs durin
--
Sergey Konoplev
a database architect, software developer at PostgreSQL-Consulting.com
http://www.postgresql-consulting.com
Jabber: gray...@gmail.com Skype: gray-hemp Phone: +7916068620
s
article http://www.westnet.com/~gsmith/content/postgresql/pgbench-scaling.htm
where you will find some hints for your case.
Also look at the playback tools
http://wiki.postgresql.org/wiki/Statement_Playback.
--
Sergey Konoplev
a database architect, software developer at PostgreSQL-Consulting.
> [quoted mpstat output, truncated: 08:11:53, all CPUs: 17.53 %usr,
> 3.13 %sys, 0.68 %iowait, 78.65 %idle]
with triggers if you need to get
counts fast.
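A sketch of the trigger-based approach (all names are assumptions; an AFTER
row trigger maintains a one-row counter so counting becomes an O(1) lookup):

CREATE TABLE row_counts (table_name text PRIMARY KEY, n bigint NOT NULL);
INSERT INTO row_counts SELECT 'items', count(*) FROM items;  -- seed once

CREATE OR REPLACE FUNCTION count_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE row_counts SET n = n + 1 WHERE table_name = TG_TABLE_NAME;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE row_counts SET n = n - 1 WHERE table_name = TG_TABLE_NAME;
    END IF;
    RETURN NULL;  -- result is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_count
    AFTER INSERT OR DELETE ON items
    FOR EACH ROW EXECUTE PROCEDURE count_trig();

SELECT n FROM row_counts WHERE table_name = 'items';  -- fast count

Note the counter row becomes a contention point under heavy concurrent
writes, and TRUNCATE is not handled by this sketch.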
--
Sergey Konoplev
a database and software architect
http://www.linkedin.com/in/grayhemp
Jabber: gray...@gmail.com Skype: gray-hemp Phone: +14158679984
d.aspx?tid=974484
>
> 11.10.2012, 01:30, "Sergey Konoplev" :
>> On Wed, Oct 10, 2012 at 9:09 AM, Korisk wrote:
>>
>>> Hello! Is it possible to speed up the plan?
>>> Sort (cost=573977.88..573978.38 rows=200 width=32) (actual
>>> tim
s.
> (9 rows)
>
> Postgresql 9.2.1 was configured and built with default settings.
>
> Thank you.
--
Sergey Konoplev
a database and software architect
http://www.linkedin.com/in/grayhemp
Jabber: gray...@gmail.com Skype: gray-hemp Phone: +14158679984
;0.5'::text)
-> Bitmap Index Scan on h_idx1 (cost=0.00..1616.10 rows=102367
width=0) (actual time=19.027..19.027 rows=100271 loops=1)
(5 rows)
--
Sergey Konoplev
a database and software architect
http://www.linkedin.com/in/grayhemp
Jabber: gray...@gmail.com Skype: gray-hemp Pho
040.96 rows=25990002 width=32) (actual time=0.121..3624.624 rows=25990002 loops=1)
> Output: name
> Heap Fetches: 0
> Total runtime: 7272.735 ms
> (6 rows)
>
>
>
>
>
>
> 11.10.2012, 21:55, "Sergey Konoplev" :
>> On Thu, Oct
2) (actual time=21731.551..21733.277 rows=4001 loops=1)
> Output: name, count(name)
> -> Seq Scan on public.hashcheck (cost=0.00..435452.02 rows=25990002
> width=32) (actual time=29.431..13383.812 rows=25990002 loops=1)
>Output: id, na
--
Sergey Konoplev
Database and Software Architect
http://www.linkedin.com/in/grayhemp
Phones:
USA +1 415 867 9984
Russia, Moscow
A common case is when a backup (pg_dump*) is running: TRUNCATE has
to wait for it because TRUNCATE acquires an access exclusive lock on the
table, and then all other queries, including INSERTs, have to wait for the
TRUNCATE.
Check the backup case first.
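A rough sketch of a query to check who holds or awaits locks on the table
(assumes 9.2+, where pg_stat_activity has pid/query; the table name is an
assumption):

SELECT l.pid, l.mode, l.granted,
       now() - a.query_start AS query_age, a.query
FROM pg_locks l
JOIN pg_stat_activity a USING (pid)
WHERE l.relation = 'your_table'::regclass
ORDER BY l.granted DESC, query_age DESC;

The granted access share lock from pg_dump should show up above the waiting
TRUNCATE.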
> Our previous Postgresql 8.2 instance did not have this probl
One of these links will help you:
- http://www.postgresql.org/docs/9.2/static/file-fdw.html
- http://pgxn.org/dist/odbc_fdw/.
>
> thanks
--
Sergey Konoplev
Database and Software Architect
http://www.linkedin.com/in/grayhemp
Phones:
USA +1 415 867 9984
Russia, Moscow +7 901 903 0499
Russia, K
ed. Have you had a lot of
updates/deletes on rows with exit_state is null?
Try to reindex tbl_tracker_performance_1_idx.
To reindex it without locks, create a new index with a temporary name using
CREATE INDEX CONCURRENTLY, drop the old one, and rename the new one to the
old name.
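A sketch of those steps (the exact index definition is an assumption, based
on the exit_state condition mentioned above):

CREATE INDEX CONCURRENTLY tbl_tracker_performance_1_idx_new
    ON tbl_tracker (id) WHERE exit_state IS NULL;
DROP INDEX tbl_tracker_performance_1_idx;
ALTER INDEX tbl_tracker_performance_1_idx_new
    RENAME TO tbl_tracker_performance_1_idx;

DROP INDEX and ALTER INDEX take brief exclusive locks, but the expensive
rebuild itself happens concurrently.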
--
Sergey Konoplev
Database
fter with it
will be slow. Also it depends on the index column values.
--
Sergey Konoplev
Database and Software Architect
http://www.linkedin.com/in/grayhemp
Phones:
USA +1 415 867 9984
Russia, Moscow +7 901 903 0499
Russia, Krasnodar +7 988 888 1979
Skype: gray-hemp
Jabber: gray...@gmail.com
always worked ok, perhaps it makes us particularly vulnerable to
> kernel/scheduler changes.
>
> I would be very grateful for any suggestions as to the best way to diagnose
> the source of this problem and/or general recommendations?
--
Sergey Konoplev
Database and Software Ar
a number of kernel (and not only kernel) tuning issues
with short explanations, to prevent them from affecting database behavior
badly. Try to follow them:
https://code.google.com/p/pgcookbook/wiki/Database_Server_Configuration
--
Sergey Konoplev
Database and Software Architect
http://www.linkedin.com/in/grayh
o_id,
>ag3.extra_coins
> FROM (aggregation2 ag2
> left join aggregation3 ag3
>ON (( ag2.missionid = ag3.missionidtemp )));
>
>
as it
has a lot of performance improvements.
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
Profile: http://www.linkedin.com/in/grayhemp
Phone: USA +1 (415) 867-9984, Russia +7 (901) 903-0499, +7 (988) 888-1979
Skype: gray-hemp
Jabber: gray...@gmail.com
, a
comparison against a non-immutable function such as CURRENT_TIMESTAMP
cannot be optimized, since the planner cannot know which partition the
function value might fall into at run time."
http://www.postgresql.org/docs/9.2/static/ddl-partitioning.html#DDL-PARTITIONING-CAVEATS
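A quick illustration of that caveat, using the measurement/logdate example
from the same documentation page (assumed schema):

-- the constant comparison can be pruned at plan time:
EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01';
-- CURRENT_TIMESTAMP is only known at run time, so every partition is scanned:
EXPLAIN SELECT count(*) FROM measurement
WHERE logdate >= CURRENT_TIMESTAMP - interval '1 month';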
--
Kind re
Index Cond: (k = 1942)
>> Total runtime: 481.600 ms
These are plans of two different queries. Please show the second one
(the one with d2, g2, etc.) with seqscans off.
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
Profile: http://www.linkedin.com/in/grayhemp
So my question is:
>
>
>
> Is there any performance tips for creating index on Postgres?
>
> how to monitor the progress of the creation process?
>
>
>
> Thanks and best regards,
>
> Suya Huang
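One common tip, as a hedged sketch (names are assumptions): give the build
more sort memory, and use CONCURRENTLY if the table must stay writable:

SET maintenance_work_mem = '1GB';  -- session-local; speeds up the index sort
CREATE INDEX CONCURRENTLY idx_foo_bar ON foo (bar);

There is no direct progress view in this version; one rough proxy is to
watch the on-disk size of the new index grow.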
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
Profile:
can using items_item_ux on items i (cost=0.00..358.84 rows=1
width=16)
Index Cond: (item_id = 169946840)
-> Index Only Scan using users_user_id_pkey on users u
(cost=0.00..38.30 rows=1 width=8)
Index Cond: (user_id = i.user_id)
time: 28.874 ms
--
Sergey Burladyan
On 01.08.2013 at 14:05, "Thomas Reiss" wrote:
>
> If you leave enable_mergejoin to on, what happens if you run the explain
> two time in a row ? Do you get the same planning time ?
Yes, I get the same planning time.
On Thu, Aug 1, 2013 at 2:04 PM, Thomas Reiss wrote:
> Le 01/08/2013 11:55, Sergey Burladyan a écrit :
> At first look, this reminds me some catalog bloat issue. Can you provide
> the result of these queries :
> SELECT pg_size_pretty(pg_table_size('pg_class')) AS s
Setting enable_mergejoin to off does not help with this query.
--
Sergey Burladyan
Sergey Burladyan writes:
> # explain
> # select i.item_id, u.user_id from items i
> # left join users u on u.user_id = i.user_id
> # where item_id = 169946840;
> QUERY PLAN
> --
>
Jeff Janes writes:
> I'd use strace to find what file handle is being read and written, and
> lsof to figure out what file that is.
I used strace; there are more reads than writes:
$ cut -d '(' -f 1 /var/tmp/pg.trace | sort | uniq -c | sort -n
49 select
708 close
1021 open
7356 write
21
Jeff Janes writes:
> I think the next step would be to run gdb -p (but don't start
> gdb until backend is in the middle of a slow explain), then:
Sorry, I lack debug symbols, so the call trace is incomplete:
explain select i.item_id, u.user_id from items i left join users u on u.user_id
= i.us
I also find this trace for other query:
explain select * from xview.user_items_v v where ( v.item_id = 132358330 );
#0 0x7ff766967620 in read () from /lib/libc.so.6
#1 0x7ff7689cfc25 in FileRead ()
#2 0x7ff7689ea2f6 in mdread ()
#3 0x7ff7689cc473 in ?? ()
#4 0x7ff7689ccf54
Tom Lane writes:
> Jeff Janes writes:
> > On Thu, Aug 1, 2013 at 5:16 PM, Sergey Burladyan
> > wrote:
> >> If I'm not mistaken, there may be two code paths like this here:
> >> (1) mergejoinscansel -> scalarineqsel-> ineq_histogram_selectivity ->
>
Jeff Janes writes:
> On Fri, Aug 2, 2013 at 2:58 AM, Sergey Burladyan wrote:
> >
> > PS: I think my main problem is here:
> > select min(user_id) from items;
> > min
> > -----
> >   1
> > (1 row)
> >
> > Time: 504.520 ms
>
> T
Sergey Burladyan writes:
> Hot standby:
...
> ' -> Index Only Scan using items_user_id_idx on public.items
> (cost=0.00..24165743.48 rows=200673143 width=8) (actual
> time=56064.499..56064.499 rows=1 loops=1)'
> 'Output: public.items.
As a solution for the pagination (OFFSET) problem you might
also use the "prev/next" technique, like
SELECT * FROM table
WHERE id > :current_last_id
ORDER BY id LIMIT 10
for "next", and
SELECT * FROM (
    SELECT * FROM table
    WHERE id < :current_first_id
    ORDER BY id DESC LIMIT 10
) AS prev_page ORDER BY id
for "prev".
one page.
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
Profile: http://www.linkedin.com/in/grayhemp
Phone: USA +1 (415) 867-9984, Russia +7 (901) 903-0499, +7 (988) 888-1979
Skype: gray-hemp
Jabber: gray...@gmail.com
--
Sent via pgsql-performance mailing list (pgsql-perform
This parameter gives the planner a hint of how much it would cost to perform
a random page read, as used by index scans. It looks like you need to
decrease random_page_cost.
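A session-level experiment along those lines (the query is a hypothetical
placeholder):

SET random_page_cost = 1.5;  -- default is 4.0; lower favors index scans
EXPLAIN ANALYZE SELECT * FROM items WHERE user_id = 42;
RESET random_page_cost;

If plans and timings improve consistently, persist the value in
postgresql.conf.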
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
Profile: http://www.linkedin.com/in/grayhemp
Phone: USA +1 (415) 867-9984
, however, my assumption is
that it was IDLE in transaction. You mentioned the "incomplete message
from client" error, so it might somehow be a network problem that led
to a hanging connection to pgbouncer, which made pgbouncer keep a
connection to postgres open after the transaction had started.
-
Set log_min_duration_statement to
a value less than the age of the hanging inserts, and debug_print_parse,
debug_print_rewritten, debug_print_plan and debug_pretty_print to
'on'. It will allow you to log what is happening with these inserts
and what takes so much time.
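The settings in question, as a postgresql.conf sketch (the threshold value
is an assumption; pick one below the age of the hanging inserts):

log_min_duration_statement = '5s'
debug_print_parse = on
debug_print_rewritten = on
debug_print_plan = on
debug_pretty_print = on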
--
Kind regards,
Sergey Konop
the symptoms look similar.
Another thing that might cause it is the network. Try to monitor it at the
time of these stalls.
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com
--
On Sun, Nov 10, 2013 at 11:48 PM, Евгений Селявка
wrote:
> Sergey, yes this is all of my kernel setting. I don't use THP intentionally.
> I think that i need a special library to use THP with postgresql like this
> http://code.google.com/p/pgcookbook/wiki/Database_Server_Confi
-c "select count(1) from pg_stat_activity"
sleep 1
done > activity.log
and its correlation with slowdowns.
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.co
atabase level one.
[1] http://www.postgresql.org/docs/9.3/static/sql-prepare.html
[2] http://search.cpan.org/dist/DBD-Pg/Pg.pm#prepare
[3] https://github.com/dimitri/preprepare
[4] https://github.com/ohmu/pgmemcache/
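A minimal server-side example of [1] (table and column names are
assumptions):

PREPARE items_by_user (bigint) AS
    SELECT * FROM items WHERE user_id = $1;
EXECUTE items_by_user(42);  -- parsed and planned once per session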
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
http://www.linke
ning of the table with DELETEs, or maybe
you use UPDATEs for some other reason?
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com
On Thu, Dec 19, 2013 at 12:54 PM, Sev Zaslavsky wrote:
> On 12/19/2013 3:34 PM, Sergey Konoplev wrote:
>> On Thu, Dec 12, 2013 at 9:30 AM, Sev Zaslavsky wrote:
>>> Table rt_h_nbbo contains several hundred million rows. All rows for a
>>> given
>>> entry_d
Dave, in case if you need to archive old partitions to compressed
files out of your database you can use this tool [1]. Consult with the
configuration example [2], look at the ARCHIVE_* parameters.
[1] https://github.com/grayhemp/pgcookbook/blob/master/bin/archive_tables.sh
[2] https://github.com/grayhem
Debian.
Could you please show EXPLAIN ANALYZE for both cases, the current one
and with feed_user_id_active_id_added_idx dropped?
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gr
aven't had time yet to verify whether it goes back to 65% after
> vacuum full (that will take time, maybe a month).
Try pgcompact, it was designed particularly for cases like yours:
https://github.com/grayhemp/pgtoolkit.
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and D
On Sun, Feb 9, 2014 at 2:58 PM, Claudio Freire wrote:
> On Sun, Feb 9, 2014 at 7:32 PM, Sergey Konoplev wrote:
>> Try pgcompact, it was designed particularly for cases like yours:
>> https://github.com/grayhemp/pgtoolkit.
>
> It's a pity that that requires sever
. It should do the trick. If it doesn't, please show the
plans.
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com
//github.com/grayhemp/pgcookbook/blob/master/statement_statistics_collecting_and_reporting.md
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com
The problem appears when hot_standby is set to on, so you need to
turn it off. Also, take a look at the link below:
http://www.databasesoup.com/2013/12/why-you-need-to-apply-todays-update.html
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
http://www.linkedin.com/in/grayhemp
+1 (415)
ified one as the
resulting one. Another risk is the case when you need to update two
tables on different servers and keep their modified_timestamp fields
in sync. Here you need to determine the new value of the column in the
application.
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
r, in your index
only, by using expressions in it and in the query, e.g.
...USING gin (strip(fts_data))
and
... WHERE strip(fts_data) @@ q
[1] http://www.postgresql.org/docs/9.3/static/textsearch-features.html
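Putting those two fragments together, a minimal sketch (the table, column
and query names are assumptions):

CREATE INDEX docs_fts_stripped_idx ON docs USING gin (strip(fts_data));
SELECT * FROM docs
WHERE strip(fts_data) @@ to_tsquery('english', 'example & query');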
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
http://www.linkedin.com/in/
On Thu, Apr 24, 2014 at 4:34 AM, Heikki Linnakangas
wrote:
> On 04/24/2014 01:56 AM, Sergey Konoplev wrote:
>> My guess is that you could use strip() function [1] to get rid of
>> weights in your table or, that would probably be better, in your index
>> only by using express
"GIN improvements part 3: ordering in index"
patch, was it committed?
http://www.postgresql.org/message-id/flat/capphfduwvqv5b0xz1dzuqaw29erdculzp2wotfjzdbs7bhp...@mail.gmail.com
Ivan, there is hope that we could get a more effective FTS solution
than any others I have heard about with
dom_bytes() you set a
search_path that allows get_byte() to be found, while the search_path that
was set before the gen_random() call doesn't allow it.
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (499) 346-7196, +7 (988) 8
On 17.03.2015 13:22, Sergey Shchukin wrote:
On 05.03.2015 11:25, Jim Nasby wrote:
On 2/27/15 5:11 AM, Sergey Shchukin wrote:
show max_standby_streaming_delay;
 max_standby_streaming_delay
-----------------------------
 30s
We both need to be more clear about which server we're talking