On Sun, Aug 27, 2017 at 1:34 PM, Mariel Cherkassky
wrote:
> Hi, yes indeed I'm using Laurenz's oracle_fdw extension. I tried to run it
> but I'm getting an error
>
> dbch=# ALTER FOREIGN TABLE tc_sub_rate_ver_prod OPTIONS ( SET prefetch 10240
> );
> ERROR: syntax error at or near "10240"
> LINE 1: .
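For reference: option values in ALTER FOREIGN TABLE must be quoted string
literals, which is what the parser is rejecting here. A minimal sketch with the
same table name; SET assumes the option was set before, otherwise use ADD:

    ALTER FOREIGN TABLE tc_sub_rate_ver_prod
        OPTIONS (SET prefetch '10240');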
On Thu, Aug 24, 2017 at 4:51 AM, Mariel Cherkassky
wrote:
> Hi Claudio, how can I do that? Can you explain what this option does?
>
> 2017-08-24 2:15 GMT+03:00 Claudio Freire :
>>
>> On Mon, Aug 21, 2017 at 5:00 AM, Mariel Cherkassky
>> wrote:
>> > To
On Mon, Aug 21, 2017 at 5:00 AM, Mariel Cherkassky
wrote:
> To summarize, I still have performance problems. My current situation:
>
> I'm trying to copy the data of many tables in the oracle database into my
> postgresql tables. I'm doing so by running insert into local_postgresql_temp
> select
On Thu, Aug 17, 2017 at 6:00 AM, Mariel Cherkassky
wrote:
> I checked with the storage team in the company and they saw that I have a lot
> of I/O on the server. How should I reduce the I/O that PostgreSQL uses?
Do you have concurrent activity on that server?
What filesystem are you using wher
On Fri, Jul 7, 2017 at 10:12 AM, Shaun Thomas
wrote:
>> I don't think the downstream dependencies will let that work without
>> rebuilding them as well. The drop fails (without a cascade), and the other
>> views and matviews that are built off of this all simply point to x_old.
>
> Wow, ouch. Ye
On Tue, Jul 18, 2017 at 1:01 PM, Claudio Freire wrote:
> On Tue, Jul 18, 2017 at 6:20 AM, Charles Nadeau
> wrote:
>> Claudio,
>>
>> At one moment
>> during the query, there is a write storm to the swap drive (a bit like this
>> case:
>> htt
On Tue, Jul 18, 2017 at 6:20 AM, Charles Nadeau
wrote:
> Claudio,
>
> At one moment
> during the query, there is a write storm to the swap drive (a bit like this
> case:
> https://www.postgresql.org/message-id/AANLkTi%3Diw4fC2RgTxhw0aGpyXANhOT%3DXBnjLU1_v6PdA%40mail.gmail.com).
> I can hardly expl
On Fri, Jul 14, 2017 at 12:34 PM, Charles Nadeau
wrote:
> Workers Planned: 12
> Workers Launched: 12
> Buffers: shared hit=728798037 read=82974833
> -> Hash Semi Joi
On Tue, Apr 25, 2017 at 1:17 AM, Andres Freund wrote:
> Questions (answer as many you can comfortably answer):
> - How many MB/s, segments/s do you see on busier servers?
~20MB/s with FPW compression, with peaks of ~35MB/s. Writes become the
bottleneck without compression and it tops at about 40-
On Tue, Apr 18, 2017 at 2:45 AM, Daniel Blanch Bataller
wrote:
>
> But if you are going to move a record at a time you are going to be limited
> by the fastest transaction rate you can achieve, which is going to be a few
> hundred per second, and limited in the end by the disk hardware you have.
On Tue, Mar 28, 2017 at 9:41 AM, Riaan Stander wrote:
> CREATE TABLE [dbo].[usrUserRights] (
> [UserId] [dbo].[dm_Id] NOT NULL,
> [SiteId] [dbo].[dm_Id] NOT NULL,
> [RightId] [dbo].[dm_Id] NOT NULL,
> CONSTRAINT [pk_usrUserRights_UserId_RightId_SiteId] PRIMARY KEY
> CLUST
> From: Claudio Freire [mailto:klaussfre...@gmail.com]
>
> How did you query the table's size? You're probably failing to account for
> TOAST tables.
>
> I'd suggest using pg_total_relation_size.
...
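A minimal sketch of the check being suggested ('some_table' is a placeholder):

    SELECT pg_size_pretty(pg_relation_size('some_table'));        -- heap only
    SELECT pg_size_pretty(pg_total_relation_size('some_table'));  -- heap + TOAST + indexes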
On Mon, Mar 27, 2017 at 10:17 PM, Riaan Stander wrote:
>
On Mon, Mar 27, 2017 at 8:43 PM, Riaan Stander wrote:
> In Postgres I was thinking of going with a design like this
>
> CREATE TABLE security.user_right_site
> (
> user_id bigint NOT NULL,
> right_id bigint NOT NULL,
> sites bigint[]
> );
> create index on security.user_right_site(user
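The index statement above is cut off; a hypothetical completion of the design,
adding a GIN index so membership tests on the sites array are indexable:

    CREATE INDEX ON security.user_right_site (user_id, right_id);
    CREATE INDEX ON security.user_right_site USING gin (sites);

    -- does user 1 hold right 2 at site 3?
    SELECT 1 FROM security.user_right_site
     WHERE user_id = 1 AND right_id = 2 AND sites @> ARRAY[3::bigint];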
On Tue, Mar 21, 2017 at 10:56 PM, Claudio Freire wrote:
> On Tue, Mar 21, 2017 at 4:24 PM, James Parks wrote:
>> ... and here's how long it takes to read all of the rows:
>> database=> select max(an unindexed bigint column) from a;
>> -[ RECORD 1 ]
On Tue, Mar 21, 2017 at 4:24 PM, James Parks wrote:
> ... and here's how long it takes to read all of the rows:
> database=> select max(an unindexed bigint column) from a;
> -[ RECORD 1 ]
> max |
> Time: 10624.368 ms
>
> Running this another time immediately afterward (to show the cached
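To separate caching effects from plan cost in cases like this, EXPLAIN
(ANALYZE, BUFFERS) is the usual tool: 'read' blocks came from outside
shared_buffers, 'hit' blocks from inside it. A sketch against the table above
(the column name is a placeholder, since the original elides it):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT max(some_unindexed_bigint) FROM a;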
On Fri, Nov 4, 2016 at 1:19 PM, Jeff Janes wrote:
> If setting up the ssh tunnel is the problem, you could assess whether you
> really need that security, or compile a custom postgresql with larger WAL
> file sizes, or write a fancy archive_command which first archives the files
> to a local direc
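A minimal sketch of the archive-locally-first variant (the directory is a
placeholder; %p expands to the WAL segment's path and %f to its file name):

    # postgresql.conf
    archive_mode = on
    archive_command = 'cp %p /var/lib/postgresql/wal_archive/%f'

A separate job can then ship the local archive over ssh at its own pace.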
On Thu, Sep 1, 2016 at 11:30 PM, Mike Sofen wrote:
> PASS 2:
> Process: Transform/Load (all work local to the server - read,
> transform, write as a single batch)
> Num Source Rows: 10,554,800 (one batch from just a single source table
> going to a single target table)
> Avg Rowcount Com
On Sat, Aug 13, 2016 at 3:54 PM, Justin Pryzby wrote:
> On Sun, Jun 05, 2016 at 12:28:47PM -0700, Jeff Janes wrote:
>> On Sun, Jun 5, 2016 at 9:03 AM, Tom Lane wrote:
>> > Claudio Freire writes:
>> >> So correlated index scans look extra favourable vs bitmap in
On Tue, Aug 9, 2016 at 10:46 PM, Suya Huang wrote:
> Hi Claudio,
>
> Here is the index definition
> "idx_data_3" gin (name gin_trgm_ops), tablespace "tbs_data"
> "idx_data_4" gin (displayname gin_trgm_ops), tablespace "tbs_data"
GIN indexes are quite big, they can be bigger than the data
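To see just how big, the indexes can be compared against the table's heap
(names taken from the quoted definitions; 'data' is the table they belong to):

    SELECT pg_size_pretty(pg_relation_size('idx_data_3'));
    SELECT pg_size_pretty(pg_relation_size('idx_data_4'));
    SELECT pg_size_pretty(pg_relation_size('data'));  -- heap, for comparison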
On Tue, Aug 9, 2016 at 9:46 PM, Claudio Freire wrote:
> On Tue, Aug 9, 2016 at 9:34 PM, Suya Huang wrote:
>> dev=# explain analyze
>> SELECT COALESCE(w.displayname, o.name) FROM order o INNER JOIN data w
>> ON w.name = o.name WHERE (w.name LIKE '%dog%'
On Tue, Aug 9, 2016 at 9:34 PM, Suya Huang wrote:
> dev=# explain analyze
> SELECT COALESCE(w.displayname, o.name) FROM order o INNER JOIN data w
> ON w.name = o.name WHERE (w.name LIKE '%dog%' OR w.displayname LIKE
> '%dog%') AND (NOT w.categories && ARRAY[1, 6, 10, 1337])
> ORDER BY
On Tue, Aug 9, 2016 at 9:12 PM, Suya Huang wrote:
> Hi Claudio,
>
> The plan for dog is exactly the same as the one for cat, thus I didn’t paste
> it here.
Are you sure?
The plan itself may be the same, but the numbers may be different, and
in fact be key to understanding the problem.
>
> Ric
On Tue, Aug 9, 2016 at 8:27 PM, Suya Huang wrote:
> I’ve got a SQL query that runs for about 4 seconds the first time it’s executed,
> but is very fast (20ms) on subsequent runs. I thought it’s because the
> first time the table is loaded into memory. However, if you change the where
> clause value fro
On Thu, Jul 21, 2016 at 3:29 PM, David G. Johnston
wrote:
> On Thu, Jul 21, 2016 at 2:24 PM, Claudio Freire
> wrote:
>>
>> That cross join doesn't look right. It has no join condition.
>
>
> That is the definition of a "CROSS JOIN"...
>
> Davi
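Both are right: a CROSS JOIN is by definition an unconditioned cartesian
product. These three spellings are equivalent:

    SELECT * FROM t1 CROSS JOIN t2;
    SELECT * FROM t1, t2;
    SELECT * FROM t1 JOIN t2 ON true;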
On Thu, Jul 21, 2016 at 11:48 AM, Johan Fredriksson wrote:
> EXPLAIN ANALYZE VERBOSE SELECT DISTINCT main.* FROM Users main CROSS
> JOIN ACL ACL_3 JOIN Principals Principals_1 ON ( Principals_1.id =
> main.id ) JOIN CachedGroupMembers CachedGroupMembers_2 ON
> ( CachedGroupMembers_2.MemberId = P
On Fri, Jun 17, 2016 at 1:09 AM, julyanto SUTANDANG
wrote:
> This way is faster using crc32(data) than hashtext, since crc32 is
> hardware accelerated on Intel (and perhaps others).
> crc32 is in no way the same as a hash, but it is much faster than the others...
>
> Regards,
Sure, but I've had
On Wed, Jun 15, 2016 at 6:34 AM, Ivan Voras wrote:
>
> I have an application which stores a large amount of hex-encoded hash
> strings (nearly 100 GB of them), which means:
>
> The number of distinct characters (alphabet) is limited to 16
> Each string is of the same length, 64 characters
> The s
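With a fixed 16-character alphabet, storing the strings as bytea halves them:
32 bytes instead of 64 characters. A sketch of the conversion in both
directions:

    SELECT decode('deadbeef', 'hex');           -- text -> bytea
    SELECT encode('\xdeadbeef'::bytea, 'hex');  -- bytea -> text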
On Fri, Jun 3, 2016 at 8:54 PM, Justin Pryzby wrote:
> As a test, I did SET effective_cache_size='1MB', before running explain, and
> still does:
>
> |-> Index Scan using
> cdrs_huawei_pgwrecord_2016_05_29_recordopeningtime_idx on
> cdrs_huawei_pgwrecord_2016_05_29 (cost=0.44..1526689.
On Wed, May 25, 2016 at 11:00 AM, Justin Pryzby wrote:
>> > First, I found I was able to get 30-50min query results on full week's
>> > table by
>> > prefering a seq scan to an index scan. The row estimates seemed fine, and
>> > the
>> > only condition is the timestamp, so the planner's use of
On Wed, Sep 2, 2015 at 4:29 PM, Jeff Janes wrote:
> On Wed, Sep 2, 2015 at 7:00 AM, Volker Böhm wrote:
>>
>>
>>
>> CREATE INDEX trgm_adresse ON adressen.adresse USING gist
>> (normalize_string(btrim((COALESCE((vorname)::text,
>> ''::text) || ' '::text) || (name1)::tex
On Thu, Jun 11, 2015 at 5:56 AM, ben.play wrote:
> A question: is it possible with Postgres to change the temp_tablespace only
> for a session (or page)?
> I have a cron job which uses a lot of memory. I would like to tell PostgreSQL
> to use this temp_tablespace only on this command and not aff
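Yes: temp_tablespaces is an ordinary setting, so it can be changed for one
session, or for a single transaction with SET LOCAL; the tablespace name below
is a placeholder:

    SET temp_tablespaces = 'temp_on_other_disk';    -- rest of this session

    BEGIN;
    SET LOCAL temp_tablespaces = 'temp_on_other_disk';
    -- the memory-hungry cron query here
    COMMIT;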
On Wed, Jun 10, 2015 at 9:39 AM, Johann Spies wrote:
> COPY
> (SELECT A.ut,
> B.go AS funding_org,
> B.gn AS grant_no,
> C.gt AS thanks,
> D.au
>FROM isi.funding_text C,
> isi.rauthor D,
> isi.africa_uts A
>LEFT JOIN isi.funding_org
On Tue, Jun 9, 2015 at 12:58 PM, ben.play wrote:
> Hi,
>
> Thank you a lot for your answer.
> I've done that (create a tablespace in another HD with POSTGRES role + put
> it as the main temp_tablespace in the conf).
>
> But ... my command ~# df shows me that all queries use the default tablespace
>
On Wed, Jun 3, 2015 at 6:18 PM, Scott Marlowe wrote:
> On Wed, Jun 3, 2015 at 1:24 PM, Tomas Vondra
> wrote:
>>
>>
>> On 06/03/15 17:09, Scott Marlowe wrote:
>>>
>>> On Wed, Jun 3, 2015 at 8:56 AM, Tomas Vondra
I don't see why you think you have less than 3GB used. The output
On Wed, Jun 3, 2015 at 11:56 AM, Tomas Vondra
wrote:
> On 06/03/15 16:06, ben.play wrote:
>>
>> The query is (unfortunately) generated by Doctrine 2 (Symfony 2).
>> We can’t change the query easily.
>
>
> Well, then you'll probably have to buy more RAM, apparently.
There's an easy way to add disk
On Thu, Feb 12, 2015 at 7:38 PM, Kevin Grittner wrote:
> Nico Sabbi wrote:
>
>> Queries get executed very very slowly, say 20 minutes.
>
>> I'd like to know if someone has already succeeded in running
>> postgres with 200-300M records with queries running much faster
>> than this.
>
> If you go t
On Fri, Jan 9, 2015 at 4:26 PM, Steve Crawford
wrote:
> New hardware is quite different. 2x10-core E5-2660v3 @2.6GHz, 128GB
> DDR4-2133 RAM and 800GB Intel DC P3700 NVMe PCIe SSD. In essence, the
> dataset will fit in RAM and will be backed by exceedingly fast storage.
>
> This new machine is very
On Sat, Dec 13, 2014 at 12:05 AM, Andrew Dunstan wrote:
> On 12/12/2014 08:20 PM, Tom Lane wrote:
>>
>> We can't just add the operator and worry about usability later;
>> if we're thinking we might want to introduce such an automatic
>> transformation, we have to be sure the new operator is define
On Fri, Dec 12, 2014 at 6:44 PM, Tom Lane wrote:
> The usability issue could be fixed by teaching the planner to fold a
> construct like (jsonb ->> 'foo')::numeric into (jsonb ->># 'foo').
> But I'm not sure how we do that except in a really ugly and ad-hoc
> fashion.
It would be doable if you co
On Tue, Dec 9, 2014 at 3:42 PM, Vivekanand Joshi wrote:
> What I actually want to achieve right now is to run the same load
> on Netezza and PostgreSQL [300 GB data, 20 concurrent queries, and 30k-40K
> queries in an hour].
You will need to provide far more information about the type
On Fri, Oct 3, 2014 at 5:55 AM, Roberto Grandi
wrote:
> Dear Pg people,
>
> I would ask for your help considering this scaling issue. We are planning to
> move from a 3 million events/day instance of Postgres (8 CPUs, 65 GB RAM) to
> 5 million items/day.
The most important hardware part the
On Tue, Sep 30, 2014 at 8:34 AM, Graeme B. Bell wrote:
>
>>> The existing cost estimation
>>> code effectively assumes that they're perfectly uniformly distributed;
>>> which is a good average-case assumption but can be horribly wrong in
>>> the worst case.
>
>
> Sorry, just an outsider jumping in
On Sat, Sep 20, 2014 at 3:38 AM, Greg Stark wrote:
>> > Is there a canonical case of where 'abort early' plans help? (I'm new
>> > to that term -- is it a recent planner innovation...got any handy
>> > links?)
>>
>> Yeah, here's an example of the canonical case:
>>
>> Table t1 ( a, b, c )
>>
>> -
On Wed, Jul 23, 2014 at 4:40 PM, Marc Mamin wrote:
>>On Thu, Jul 17, 2014 at 7:47 PM, Chris Ruprecht wrote:
>>> Is there any way that I can build multiple indexes on one table without
>>> having to scan the table multiple times? For small tables, that's probably
>>> not an issue, but if I have
On Thu, Jul 17, 2014 at 7:47 PM, Chris Ruprecht wrote:
> Is there any way that I can build multiple indexes on one table without
> having to scan the table multiple times? For small tables, that's probably
> not an issue, but if I have a 500 GB table that I need to create 6 indexes
> on, I don'
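One common workaround, assuming concurrent sequential scans on the same table
synchronize (synchronize_seqscans is on by default), is to build the indexes
from parallel sessions so the 500 GB heap is read roughly once; names below are
placeholders:

    psql -d mydb -c "CREATE INDEX big_table_col1_idx ON big_table (col1);" &
    psql -d mydb -c "CREATE INDEX big_table_col2_idx ON big_table (col2);" &
    wait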
On Thu, Jun 26, 2014 at 12:48 PM, AJ Weber wrote:
> FWIW: I tested removing the quotes around each value, and it did not change
> the plan (I am a little surprised too, but I guess PG is "smarter than
> that").
>
> Thanks for the idea.
Ok, second round.
Try changing node_id in (...) into node.i
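The suggestion is cut off above; it looks like the usual IN-list-to-array
rewrite, along these lines (table and values are placeholders):

    -- instead of: WHERE node_id IN ('123', '456', '789')
    SELECT * FROM node WHERE node_id = ANY ('{123,456,789}'::bigint[]);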
On Thu, Jun 26, 2014 at 12:38 PM, AJ Weber wrote:
> I noticed this too. I am trying to find where the actual SQL is generated,
> and I am seeing if this is an artifact of Hibernate.
>
> Will test the same query without the quotes as you recommend. (But I don't
> know where to fix that, if it is
On Thu, Jun 26, 2014 at 10:26 AM, AJ Weber wrote:
> OK, the sample query is attached (hopefully attachments are allowed) as
> "query.sql".
> The "master table" definition is attached as "table1.sql".
> The "detail table" definition is attached as "table2.sql".
> The EXPLAIN (ANALYZE, BUFFERS) outp
On Tue, May 13, 2014 at 5:08 PM, Tim Kane wrote:
> Hi all,
>
> So I was thinking about the following, after experimenting with constraint
> exclusion.
>
> I thought I would see what happens when I do this:
>
> SELECT * FROM ONLY table_a UNION SELECT * FROM table_b;
>
>
> I noticed that despite t
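Two details worth keeping in mind with inheritance here: the parent without
ONLY already includes every child, and plain UNION adds a deduplication step
that UNION ALL avoids:

    SELECT * FROM table_a;         -- parent rows plus all children
    SELECT * FROM ONLY table_a;    -- the parent's own rows only
    -- concatenate without the dedup sort/hash:
    SELECT * FROM ONLY table_a UNION ALL SELECT * FROM table_b;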
I have this table that is quite big (several gig).
I was looking for a row manually (because a query would take too long)
- I know there is correlation between id and date, so I was doing
manual binary search for the id range that holds certain date, and I
found an interesting case where the plann
On Tue, Mar 25, 2014 at 12:22 PM, Joshua D. Drake
wrote:
> On 03/25/2014 08:18 AM, Ilya Kosmodemiansky wrote:
>>
>>
>> Joshua,
>>
>> that is really good point: an alternative is to use pg_basebackup
>> through ssh tunnel with compression, but rsync is much simpler.
>
>
> Or rsync over ssh. The ad
On Tue, Mar 25, 2014 at 4:39 AM, David Johnston wrote:
>> Hi,
>>
>> Can anyone tell me the difference and relative performance between pg_dump
>> and pg_basebackup if I want to back up a large database.
>>
>> Thanks
>
> Yes. And many of their words have been written down in the documentation in
> a chapter
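Roughly, the two look like this (host and paths are placeholders): pg_dump
takes a logical dump of one database, while pg_basebackup makes a binary copy
of the whole cluster:

    pg_dump -h dbhost -Fc mydb -f mydb.dump
    pg_basebackup -h dbhost -D /backups/base -F tar -z -P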
On Fri, Mar 21, 2014 at 3:36 PM, Guillaume Smet
wrote:
> On Fri, Mar 21, 2014 at 5:17 PM, Tom Lane wrote:
>> It will cost you, in ProcArray scans for example. But lots-of-idle-
>> connections is exactly what a pooler is supposed to prevent. If you have
>> a server that can handle say 10 active
On Mon, Mar 3, 2014 at 10:12 PM, Damon Snyder wrote:
>
>> Um... I think your problem is a misuse of CTE. Your CTE is building an
> intermediate result of several thousand rows only to select a dozen
> afterwards. You may want to consider a view or subquery, though I'm
> not sure pg will be able to op
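The fence being alluded to: up to PostgreSQL 11, a WITH query was effectively
materialized in full before the outer query ran, so filters and limits were not
pushed into it. A sketch with placeholder names:

    -- materializes every row of the CTE, then keeps a dozen
    WITH scored AS (SELECT * FROM items ORDER BY score DESC)
    SELECT * FROM scored LIMIT 12;

    -- a subquery lets the planner run a top-N sort instead
    SELECT * FROM (SELECT * FROM items ORDER BY score DESC) s LIMIT 12;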
> shared_buffers | 1920MB | configuration file
> TimeZone       | UTC    | configuration file
> wal_buffers    | 16MB   | configuration file
> work_mem       | 8MB    | configuration file
On Fri, Feb 28, 2014 at 5:01 PM, Damon Snyder wrote:
> The primary query that I'm trying to optimize executes in about 1600ms on my
> laptop and about 800ms on production-like hardware (more for the score
> version). My target is to get the data fetch down below 100ms if possible.
Could you post
On Sun, Feb 9, 2014 at 12:50 PM, Tomas Vondra wrote:
> Also, when you're running the weekly VACUUM, do VACUUM (VERBOSE) and
> post it here. That might at least help us eliminate some of the usual
> suspects.
I have the full dump if relevant. The relevant extract for:
Report:
1. public.full_bid
On Tue, Feb 11, 2014 at 8:29 PM, Jeff Janes wrote:
> On Fri, Feb 7, 2014 at 10:47 AM, Claudio Freire
> wrote:
>>
>> Hello list.
>>
>> I know all the theory about vacuuming. I've got log tables that get
>> periodically pruned. The pruning is... quirky,
On Tue, Feb 11, 2014 at 5:54 PM, Eildert Groeneveld
wrote:
> Dear All
>
> this probably not the best list to post this question:
>
> I use cascading deletes but would like to first inform the user what she
> is about to do.
> Something like : explain delete from PANEL where panel_id=21;
> -- you a
On Sun, Feb 9, 2014 at 7:32 PM, Sergey Konoplev wrote:
> On Fri, Feb 7, 2014 at 10:47 AM, Claudio Freire
> wrote:
>> What I'm seeing, though, is not that, but bloat proportional to table
>> size (always stuck at about 65% bloat). What's weird, is that vacuum
>&g
On Sun, Feb 9, 2014 at 4:40 PM, Tom Lane wrote:
> Claudio Freire writes:
>>>> I also do routine reindexing to stop index bloat on its tracks, yet
>>>> freshly-reindexed indexes get considerably reduced in size with vacuum
>>>> full.
>
> AFAIK t
On Sun, Feb 9, 2014 at 12:50 PM, Tomas Vondra wrote:
> On 7.2.2014 19:47, Claudio Freire wrote:
>>
>> Question is... why isn't all that free space being used? The table
>> grows in size even though there's plenty (65%) of free space.
>>
>> I've got
Hello list.
I know all the theory about vacuuming. I've got log tables that get
periodically pruned. The pruning is... quirky, though. It's not so
much deleting data, as summarizing many (thousands) of rows into one
single row. For that, a combination of deletes and updates are used.
In essence,
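For tables like these, the pgstattuple extension reports exact dead-tuple and
free-space figures (at the cost of a full scan), which helps tell reusable
space from real bloat; 'log_table' is a placeholder name:

    CREATE EXTENSION IF NOT EXISTS pgstattuple;
    SELECT * FROM pgstattuple('log_table');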
On Wed, Feb 5, 2014 at 4:47 PM, Pweaver (Paul Weaver)
wrote:
>> That is quite extreme. If a temporary load spike (like from the deletes
>> and the hinting needed after them) slows down the select queries and you
>> start more and more of them, soon you could tip the system over into kernel
>> sch
On Wed, Feb 5, 2014 at 8:50 AM, Katharina Koobs
wrote:
> Hi,
>
>
>
> We have a PostgreSQL DB, version 9.3 on a Suse Linux system.
>
> We ran the update from postgresql 8.4 to 9.3.
>
> After importing the database the query time of one sql query is about 30
> sec.
>
> After ANALYZE the DB the query
On Thu, Jan 30, 2014 at 2:43 PM, Dattaram Porob
wrote:
>
> Looks like this memory is being used to cache the query plan. Any ideas, why
> it is occupying such a huge heap in 9.2.6 as compared to 9.1.3 ? I know that
> the same SQL function occupies around 25MB heap in 9.1.3.
>
> Any thoughts/commen
On Thu, Dec 12, 2013 at 6:49 PM, Bryce Covert <
br...@brycecovertoperations.com> wrote:
> It's strange that it isn't sequential, at least for the electric usage, as
> I've clustered using the index that it's using...
>
electricusage is the inner part of the nested loop, which means it will do
~3000
On Thu, Dec 12, 2013 at 6:16 PM, Bryce Covert <
br...@brycecovertoperations.com> wrote:
>
> Thanks a lot for the help. I'm not familiar with explain buffers, but
> here's the results:
>
> Limit (cost=0.00..648.71 rows=50 width=8) (actual
> time=653.681..52328.707 rows=50 loops=1)
>Buffers: s
On Thu, Dec 12, 2013 at 5:20 PM, Bryce Covert <
br...@brycecovertoperations.com> wrote:
> I don't have much info on disks, since this is a virtual server on linode.
> It is running ubuntu 12.04, 8cpus, 4GB ram, 95GB ext3 volume (noatime).
> Hopefully that's helpful.
>
> Bryce
>
Well, did you run
On Thu, Dec 12, 2013 at 5:10 PM, Bryce Covert
wrote:
> Hi,
>
> I'm seeing a slow running query. After some optimization with indexes, it
> appears that the query plan is correct, it's just slow. Running the query
> twice, not surprisingly, is very fast, due to OS caching or shared_buffers
> cachin
On Tue, Dec 10, 2013 at 5:03 PM, Merlin Moncure wrote:
> Also, can I see a typical 'top' during poor scaling count(*) activity?
> In particular, what's sys cpu%. I'm guessing it's non trivial.
There was another thread, this seems like a mistaken double post or
something like that.
In that othe
On Thu, Dec 5, 2013 at 1:03 PM, Metin Doslu wrote:
>> From what I've seen so far the bigger problem than contention in the
>> lwlocks itself, is the spinlock protecting the lwlocks...
>
> Postgres 9.3.1 also reports spindelay, it seems that there is no contention
> on spinlocks.
Did you check hu
On Wed, Dec 4, 2013 at 1:54 PM, Andres Freund wrote:
> On 2013-12-04 18:43:35 +0200, Metin Doslu wrote:
>> > I'd strongly suggest doing a "perf record -g -a ;
>> > perf report" run to check what's eating up the time.
>>
>> Here is one example:
>>
>> + 38.87% swapper [kernel.kallsyms] [k] hyp
On Wed, Dec 4, 2013 at 9:19 AM, Metin Doslu wrote:
>
> Here are the results of "vmstat 1" while running 8 parallel TPC-H Simple
> (#6) queries: Although there is no need for I/O, "wa" fluctuates between 0
> and 1.
>
> procs ---memory-- ---swap-- -io --system-- -cpu--
On Tue, Dec 3, 2013 at 1:24 PM, Metin Doslu wrote:
> Looking into syncscan.c, it says in comments:
>
> "When multiple backends run a sequential scan on the same table, we try to
> keep them synchronized to reduce the overall I/O needed."
>
> But in my workload, every process was running on a diffe
On Tue, Dec 3, 2013 at 10:49 AM, Metin Doslu wrote:
> We have several independent tables on a multi-core machine serving SELECT
> queries. These tables fit into memory, and each SELECT query goes over one
> table's pages sequentially. In this experiment, there are no indexes or
> table joins.
>
On Wed, Oct 2, 2013 at 10:47 PM, Samuel Stearns
wrote:
> Thanks, Claudio:
>
> http://explain.depesz.com/s/WJQx
If you have a test database, and if it doesn't hurt other queries of
course, try clustering on the ip index.
I believe your problem is that the index isn't helping much, it's
probably h
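Clustering rewrites the table in index order and takes an exclusive lock while
it runs, so it is an off-hours operation; a sketch with placeholder names:

    CLUSTER samples USING samples_ip_idx;
    ANALYZE samples;   -- refresh the correlation statistics afterwards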
On Wed, Oct 2, 2013 at 10:17 PM, Samuel Stearns
wrote:
> The last part, the EXPLAIN, is too big to send. Is there an alternative way
> I can get it to you, other than chopping it up and sending it in multiple
> parts?
Try explain.depesz.com
On Wed, Oct 2, 2013 at 10:30 PM, Samuel Stearns
wrote
On Wed, Sep 25, 2013 at 12:54 PM, Merlin Moncure wrote:
>> I was thinking an index over:
>>
>> (event, date_trunc('day', insert_time), log_id)
>>
>> And the query like
>>
>> SELECT min(log_id) FROM event_log
>> WHERE event='S-Create' AND
>> date_trunc('day',insert_time) = '2013-09-15'
>>
>>
>> Tha
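Spelled out, the index being discussed would look like the following; note the
expression is only indexable when insert_time is timestamp without time zone,
since date_trunc is immutable only for that type:

    CREATE INDEX event_log_day_idx
        ON event_log (event, date_trunc('day', insert_time), log_id);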
On Wed, Sep 25, 2013 at 10:29 AM, Merlin Moncure wrote:
> On Tue, Sep 24, 2013 at 4:56 PM, Claudio Freire
> wrote:
>> On Tue, Sep 24, 2013 at 6:24 AM, Sam Wong wrote:
>>> This event_log table has 4 million rows.
>>>
>>> “log_id” is the primary key (bigi
On Tue, Sep 24, 2013 at 6:24 AM, Sam Wong wrote:
> This event_log table has 4 million rows.
>
> “log_id” is the primary key (bigint),
>
> there is a composite index “event_data_search” over (event::text,
> insert_time::datetime).
I think you need to add log_id to that composite index to get pg t
On Tue, Sep 3, 2013 at 8:11 PM, Tom Lane wrote:
> Claudio Freire writes:
>> So, I've got this query with this very wrong plan:
>
>> explain SELECT min(created) < ((date_trunc('day',now()) - '90
>> days'::interval)) FROM "aggregated_tra
This is postgres 9.1.9.
I'm getting a very weird case in which a simple range query over a PK
picks the wrong... the very very wrong index.
The interesting thing, is that I've got no idea why PG is so grossly
mis-estimating the number of rows scanned by the wrong plan.
I've got this table that's
On Thu, Aug 8, 2013 at 5:01 PM, Kevin Grittner wrote:
> Claudio Freire wrote:
>> On Wed, Aug 7, 2013 at 4:04 PM, Tom Lane wrote:
>>>> Yeah, but it's faster if it's in the same direction, because the
>>>> kernel read-ahead code detects sequential
On Wed, Aug 7, 2013 at 4:04 PM, Tom Lane wrote:
>> Yeah, but it's faster if it's in the same direction, because the
>> kernel read-ahead code detects sequential reads, whereas it doesn't
>> when it goes backwards. The difference can be up to a factor of 10 for
>> long index scans.
>
> Color me ske
On Wed, Aug 7, 2013 at 3:34 PM, Tom Lane wrote:
> Claudio Freire writes:
>> On Wed, Aug 7, 2013 at 3:12 PM, Robert DiFalco
>> wrote:
>>> I want to get the last call_activity record for a single user.
>
>> Create an index over (user_id, called desc), and do
>
On Wed, Aug 7, 2013 at 3:12 PM, Robert DiFalco wrote:
> Let's say I have a table something like this:
>
>create table call_activity (
> id int8 not null,
> called timestamp,
> user_id int8 not null,
> primary key (id)
> foreign key (user_id) references m
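The suggestion is truncated above; presumably it continues along these lines,
so the latest call per user comes straight off the index (the user id is a
placeholder):

    CREATE INDEX call_activity_user_called_idx
        ON call_activity (user_id, called DESC);

    SELECT * FROM call_activity
     WHERE user_id = 42
     ORDER BY called DESC
     LIMIT 1;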
On Tue, Aug 6, 2013 at 8:03 PM, Claudio Freire wrote:
> On Tue, Aug 6, 2013 at 7:56 PM, Mark Kirkwood
> wrote:
>> Hmm - I wonder if the lack of ORDER BY is part of the problem here. Consider
>> a similar query on pgbench_accounts:
>>
>> bench=# explain analyze se
On Tue, Aug 6, 2013 at 7:56 PM, Mark Kirkwood
wrote:
> Hmm - I wonder if the lack of ORDER BY is part of the problem here. Consider
> a similar query on pgbench_accounts:
>
> bench=# explain analyze select aid from pgbench_accounts where aid > 10
> limit 20;
> QUERY PLAN
>
On Tue, Aug 6, 2013 at 7:46 AM, Ivan Voras wrote:
> ivoras=# explain analyze select * from lt where id > 90 limit 10;
>QUERY PLAN
> --
On Mon, Aug 5, 2013 at 8:04 PM, Ivan Voras wrote:
> SELECT * FROM table ORDER BY id DESC LIMIT 10 OFFSET 10
>
> SELECT * FROM table WHERE active ORDER BY id DESC LIMIT 10 OFFSET 10
Did you try explain?
On Thu, May 2, 2013 at 9:48 AM, Simon Riggs wrote:
>>> SELECT count(k0.id)
>>> FROM k0
>>> WHERE 1 = 2
>>> OR k0.id IN (
>>> SELECT k1.k0_id
>>> FROM k1
>>> WHERE k1.k1k2_id IN (
>>> SELECT k2.k1k2_id
>>> FROM k2
>>> WHERE
On Fri, Apr 12, 2013 at 6:15 PM, Scott Marlowe wrote:
> The really messed up bit is that the problems with the kswapd won't show up
> for weeks, months, or sometimes even longer. The symptoms of a kswapd
> problem are that swap is mostly full, but there's LOTS of free memory /
> kernel cache, and t
On Mon, Mar 18, 2013 at 3:47 PM, Kevin Grittner wrote:
> Rodrigo Barboza wrote:
>
>> So setting this as half of ram, as suggested in postgres tuning
>> webpage should be safe?
>
> Half of RAM is likely to be a very bad setting for any work load.
> It will tend to result in the highest possible nu
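A common rule-of-thumb starting point instead of half of RAM, sketched for a
hypothetical 32 GB machine (tune from there by measurement):

    # postgresql.conf
    shared_buffers = 8GB            # roughly 25% of RAM
    effective_cache_size = 24GB     # planner hint: what OS cache + PG hold together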
On Mon, Mar 4, 2013 at 6:31 PM, Joseph Pravato
wrote:
> Good Afternoon,
>
> We are having a performance issue with our views in PostgreSQL, and based on
> the requirements for assistance you recommend providing the full table and
> index schema along with additional information from this site.
> https
On Tue, Feb 12, 2013 at 12:05 PM, Grant Johnson wrote:
> My experience has been that the performance advantage for numeric keys is
> primarily an Oracle thing. However, Oracle is popular enough for people to
> assume that it applies to databases in general.
The advantage in PG also exists, only t
On Sat, Jan 12, 2013 at 7:41 PM, Horst Dehmer wrote:
> Since the complete schema (of about 300 tables) is generated, I will just try
> char(20) instead of numeric(20) in the next few days to see if it makes any
> difference. Which I somehow doubt.
I think that might just make it worse.
Well, maybe
On Sat, Jan 12, 2013 at 5:16 PM, Horst Dehmer wrote:
> Yes, the ids are something I don't like either.
> They carry additional semantics, which I cannot make go away.
> What are the chances char(20) is more time-efficient than numeric(20)?
> Disk space is no problem here.
What are the other tables like
On Fri, Jan 11, 2013 at 8:55 PM, Horst Dehmer wrote:
> Except - and that's the wall I'm hitting - for one table which yielded just
> 75 records/second.
> The main 'problem' seems to be the FK constraints. Dropping just them
> restored insert performance for this table to 6k records/s. The table in
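The usual bulk-load pattern around that is to drop the constraint, load, and
re-add it afterwards; constraint and column names below are placeholders since
the thread does not show them:

    ALTER TABLE child DROP CONSTRAINT child_parent_fk;
    -- ... bulk load ...
    ALTER TABLE child ADD CONSTRAINT child_parent_fk
        FOREIGN KEY (parent_id) REFERENCES parent (id) NOT VALID;
    ALTER TABLE child VALIDATE CONSTRAINT child_parent_fk;  -- re-checks existing rows in one pass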