share how
you measure that and the measurements?
>
> Since we don't see any other way to find out what is slowing it down, we
> gathered data using the perf tool. Can somebody please help with how to go
> about reading the perf report.
Well, that's hard to do when you haven't
ny built-in operators are however exempt from that, as we consider
them leak-proof. This allows us to use non-RLS conditions for index
scans etc. which might be impossible otherwise)
Otherwise yes - it's pretty much the same as if you combine the
conditions using AND. It's "just"
is holding a SHARE lock on the "lineitem" table,
but we don't really know what the session is doing.
There's a PID in the pg_locks table, you can use it to look up the
session in pg_stat_activity, which includes the query (and also a "state"
column that will tell you if it's
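For illustration, a rough sketch of that lookup (the "lineitem" table from
the example above is reused purely as a placeholder):

SELECT a.pid, a.state, a.query
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.pid = l.pid     -- one row per backend
 WHERE l.relation = 'lineitem'::regclass;      -- only locks on that table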
that queries may use multiple work_mem buffers, we
don't know how much memory the other queries are consuming, etc. We also
don't have any control over page cache, for example.
If you really need to do that, you'll have to do it at the OS level,
e.g. by specifying "mem=X"
we
collect for expression indexes, or the CREATE STATISTICS stuff? I assume
the former, because if you don't want the latter then just don't create
the statistics. Or am I missing something?
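For example, extended statistics are strictly opt-in - nothing is collected
unless you create them. A sketch (table "t" and columns "a", "b" are
placeholders, not from this thread):

CREATE STATISTICS s_t_ab (ndistinct, dependencies) ON a, b FROM t;
ANALYZE t;   -- the statistics are built during the next ANALYZE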
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 2
S), that should
tell us more about how many blocks are found in shared buffers, etc.
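That is, something along these lines (the query itself is just a placeholder):

EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM lineitem;
-- each node then reports "Buffers: shared hit=... read=...", i.e. how many
-- blocks came from shared buffers vs. the OS / disk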
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
n't told us anything about what's happening on the
machine. Is it reading a lot of data from the disks? Random or
sequential? Is it writing a lot of data into temporary files? Is it
consuming a lot of CPU? And so on.
regards
--
Tomas Vondra http://www.2ndQuadrant.co
(I'm not sure why there'd be a difference between Windows and Linux
though).
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
ext to PostgreSQL. Look at top
and iotop while running the queries, and other system tools.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
ns:
evidence_to_do.project_id = tool_performance.project_id
evidence_to_do.project_id = project.project_id
But the plan only seems to enforce the equality between 'project' and
'tool_performance'. So when joining the evidence_to_do, it performs a
cartesian product, pr
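A sketch of what spelling out both equalities might look like (column list
and any other conditions are omitted, so this is illustrative only):

SELECT count(*)
  FROM project p
  JOIN tool_performance tp ON tp.project_id = p.project_id
  JOIN evidence_to_do e   ON e.project_id = p.project_id
                         AND e.project_id = tp.project_id;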
YZE of the query
(d) configuration of the database (work_mem, shared_buffers etc.)
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 04/28/2017 01:34 AM, Andres Freund wrote:
On 2017-04-28 01:29:14 +0200, Tomas Vondra wrote:
I can confirm this observation. I bought the Intel 750 NVMe SSD last year,
the device has 1GB DDR3 cache on it (power-loss protected), can do ~1GB/s of
sustained O_DIRECT sequential writes. But when running pgbench, I can't push
more than ~300MB/s of WAL to it, no matter what I do, because of WALWriteLock.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7
you'll have such a busy system, you probably do some research and
testing first, before choosing the database. If we don't perform well
enough, you pick something else. Which removes the data point.
Obviously, there are systems that start small and get busier and busier
over time. And
for timed checkpoints, say 30+
minutes apart (or more). wal_compression is typically 'off' (i.e. the
default value).
- Could you quickly describe your workload?
Pretty much a little bit of everything, depending on the customer.
regards
--
Tomas Vondra http://www
On 04/25/2017 08:28 AM, Johann Spies wrote:
On 24 April 2017 at 15:17, Tomas Vondra wrote:
On 04/24/2017 08:48 AM, Johann Spies wrote:
Why would the planner prefer to use the gin index and not the btree
index in this case?
You'll need to show what queries you are running - tha
Seeing explain plans would also be helpful.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
om the heap and
save the index lookup step.
I don't follow - the queries are exactly the same in both cases, except
the parameter value. So both cases are eligible for index only scan.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support
y tables.
FWIW it's impossible to draw conclusions based on two EXPLAIN ANALYZE
executions. The timing instrumentation from EXPLAIN ANALYZE may have a
significant impact (different for each plan!). You also need to test
with more values and longer runs, not just a single execution
(
index. But imagine a driver with lots of data from a long time
ago. That breaks the LIMIT fairly quickly.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
isk issue (at least the chart shows minimum usage). But
you're doing ~400tps, returning ~5M rows per second.
Also, if it turns out to be a database issue, more info about config and
data set would be useful.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Develop
heckpoint_segments enough to hold 5 minutes worth of WAL.
That means 300/30 * 64, i.e. roughly 640 segments (it's likely an
overestimate, due to full page writes, but well).
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA,
differently, or something like
that. How did you do the upgrade?
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 09/27/2016 06:00 PM, Torsten Zuehlsdorff wrote:
On 29.07.2016 08:30, Tomas Vondra wrote:
On 07/29/2016 08:04 AM, trafdev wrote:
Hi.
I have an OLAP-oriented DB (light occasional bulk writes and heavy
aggregated selects over large periods of data) based on Postgres 9.5.3.
Server is a
cache for no apparent reasons.
>
This is probably a consequence of the primarycache misconfiguration.
Do I miss something important in my configs? Are there any double
writes\reads somewhere because of OS\ZFS\Postgres caches? How to avoid
them?
Please share your experience\tips. Thanks.
stem is overloaded, for example. The
simplest solution is to move the statistics file to a RAM disk (e.g. a tmpfs
mount on Linux) using stats_temp_directory in postgresql.conf.
The space needed depends on the number of objects (databases, tables,
indexes), and usually it's a megabyte in to
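One way to apply that without editing postgresql.conf by hand (a sketch; the
tmpfs mount point below is a made-up example and must already exist, and
ALTER SYSTEM needs PostgreSQL 9.4+):

ALTER SYSTEM SET stats_temp_directory = '/run/postgresql/stats_tmp';
SELECT pg_reload_conf();   -- this setting only needs a reload, not a restart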
On 02/01/2016 10:38 AM, Albe Laurenz wrote:
Tomas Vondra wrote:
...
I didn't post the whole plan since it is awfully long, I'll include hyperlinks
for the whole plan.
work_mem = '100MB' (http://explain.depesz.com/s/7b6a):
-> Hash Join (cost=46738.74..285400.61 ro
ns of tuples that
need to be searched sequentially. Smaller work_mem values usually limit
the length of those chains in favor of batching.
Please, post the whole explain plan - especially the info about number
of buckets/batches and the Hash node details.
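For reference, that information appears under the Hash node in the EXPLAIN
ANALYZE output as a line shaped roughly like this (the numbers are invented,
purely to show the format):

Buckets: 65536  Batches: 16  Memory Usage: 3200kB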
regards
--
Tomas Vondra
On 12/10/2015 11:45 PM, Alvaro Herrera wrote:
Tomas Vondra wrote:
Also, I don't think it makes much sense to set
(checkpoint_warning > checkpoint_timeout)
as it kinda defeats the whole purpose of the warning.
I agree, but actually, what is the sense of checkpoint_warning? I
es much sense to set
(checkpoint_warning > checkpoint_timeout)
as it kinda defeats the whole purpose of the warning.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
iff;h=97f3014647a5bd570032abd2b809d3233003f13f
(I had to previously abandon pg_trgm for a previous project and go
with solr; had this patch been in place that would not have happened)
Except that pg_trgm-1.2 is not included in 9.5, because it was committed
in July (i.e. long after 9.5 was branched).
--
Tomas V
has dedicated resources, and the only other
VM on the host is the applicative server (which runs idle while
waiting for the database). There is nothing else running on the
server except postgresql (well, there were other things, but we
stopped everything to no avail).
PostgreSQL 9.3.5, comp
r both the slow and the fast query. That
makes it easier to spot the difference, and possibly identify the cause.
Also, what PostgreSQL version is this, and what are "basic" config
parameters (shared buffers, work mem)?
regards
--
Tomas Vondra http://www.2ndQuadran
using the memory through page cache (i.e. don't increase
maintenance_work_mem too much, you don't want to force the data to disk
needlessly).
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
ould not find a
way to see how it's progressing so there was no way for me to gauge when
it would be done.
Had it been waiting on a lock, it wouldn't consume 100% of CPU.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote
ry planner’s decision
or the index itself is larger than the table, therefore it would decide
to do a table scan?
What large number of tuples? The indexes are supposed to be more
efficient the larger the table is.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Develo
tuples may take a lot more space.
regards
--
Tomas Vondra http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
t practices - separate the values
into multiple columns, and most of this will go away.
regards
--
Tomas Vondra http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 06/04/15 02:58, Scott Marlowe wrote:
On Wed, Jun 3, 2015 at 6:53 PM, Scott Marlowe wrote:
On Wed, Jun 3, 2015 at 4:29 PM, Joshua D. Drake wrote:
On 06/03/2015 03:16 PM, Tomas Vondra wrote:
Cache is not free memory - it's there for a purpose and usually
plays a significant ro
On 06/04/15 01:54, Yves Dorfsman wrote:
On 2015-06-03 16:29, Joshua D. Drake wrote:
On 06/03/2015 03:16 PM, Tomas Vondra wrote:
What is more important, though, is the amount of memory. OP reported the
query writes ~95GB of temp files (and dies because of full disk, so
there may be more
On 06/03/15 23:18, Scott Marlowe wrote:
On Wed, Jun 3, 2015 at 1:24 PM, Tomas Vondra
wrote:
On 06/03/15 17:09, Scott Marlowe wrote:
On Wed, Jun 3, 2015 at 8:56 AM, Tomas Vondra
Well, except that 15GB of that is shared_buffers, and I wouldn't call that
'free'. Also, I
On 06/03/15 17:09, Scott Marlowe wrote:
On Wed, Jun 3, 2015 at 8:56 AM, Tomas Vondra
I don't see why you think you have less than 3GB used. The output you
posted clearly shows there's only ~300MB memory free - there's 15GB
shared buffers and ~45GB of page cache (file system cache).
But you still haven't shown us the query (the EXPLAIN ANALYZE of it), so
we can't really give you advice.
--
Tomas Vondr
worst case, the
OOM killer will intervene, possibly crashing the whole database.
3) Check with developer to tune the query.
That's a better possibility. Sadly, we don't know what the query is
doing, so we can't judge how much it can be optimized.
--
Tomas Vondra
can do with the roundtrip time, usually, but you
can keep the connections open in a pool. That'll amortize the costs.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 05/31/15 18:22, Tom Lane wrote:
Tomas Vondra writes:
On 05/31/15 13:00, Peter J. Holzer wrote:
(There was no analyze on facttable_stat_fta4 (automatic or manual) on
facttable_stat_fta4 between those two tests, so the statistics on
facttable_stat_fta4 shouldn't have changed - only
On 05/31/15 13:00, Peter J. Holzer wrote:
[I've seen in -hackers that you already seem to have a fix]
On 2015-05-30 15:04:34 -0400, Tom Lane wrote:
Tomas Vondra writes:
Why exactly the second query uses a much slower plan, I'm not sure. I
believe I've found an issue in plan
ot of things. The first step should be looking at pg_stat_activity,
what is the process eating the CPU doing.
We also need much more information about your system - what PostgreSQL
version are you using, what kind of OS, configuration etc.
regards
--
Tomas Vondra http:
different
amount of data.
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
43M rows), but the optimizer
only propagates a fraction of the cost estimate (depending on how much of
the relation it expects to scan). In this case it expects to scan a tiny
part of the index, so the impact on the total cost is small.
A bit confusing, yeah.
regards
--
Tomas Vond
one when tuning it?
Have you seen this?
https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 04/19/15 22:10, Jon Dufresne wrote:
On Sun, Apr 19, 2015 at 10:42 AM, Tomas Vondra
wrote:
Or you might try creating an expression index ...
CREATE INDEX date_year_idx ON dates((extract(year from d)));
Certainly, but won't this add additional overhead in the form of two
indexes
might try creating an expression index ...
CREATE INDEX date_year_idx ON dates((extract(year from d)));
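A query written against the same expression can then use that index, e.g.
(a sketch, with an arbitrary year):

SELECT * FROM dates WHERE extract(year from d) = 2014;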
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 26.3.2015 17:35, Jeff Janes wrote:
> On Thu, Mar 26, 2015 at 5:44 AM, Tomas Vondra
> <tomas.von...@2ndquadrant.com> wrote:
>
>> That might work IMO, but maybe we should increase the coefficient a
>> bit (say, from 1.25 to 2), not to produce needlessly l
m 1.25 to 2), not to produce needlessly long MCV lists.
> It is also grossly inconsistent with the other behavior. If they are
> "29900; 98; 2" then all three go into the MCV.
Isn't the mincount still 12500? How could all three get into the MCV?
--
Tomas Vondra
tch to
sequential scan happens later (when more rows are estimated). Try
decreasing random_page_cost from 4 (default) to 1.5 or something like that.
It may hurt other queries, though, depending on the dataset size etc.
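For a quick, session-only experiment (so other queries are not affected
while testing):

SET random_page_cost = 1.5;   -- applies to the current session only
-- re-run EXPLAIN ANALYZE on the query; persist it via postgresql.conf
-- (or ALTER SYSTEM) only if it actually helps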
--
Tomas Vondra http://www.2ndQuadrant.com/
PostgreSQL Developm
On 22.3.2015 22:50, Vivekanand Joshi wrote:
> Any documentation regarding how to configure postgresql.conf file as per
> individual user?
That can't be done in postgresql.conf, but by ALTER ROLE commands.
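For example (the role name and the values are just placeholders):

ALTER ROLE reporting_user SET work_mem = '256MB';
ALTER ROLE reporting_user SET statement_timeout = '5min';

The settings then apply automatically to every new session of that role.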
--
Tomas Vondra http://www.2ndQuadrant.com/
PostgreSQL Develo
/books which can tell us about
> the hardware requirements?
I'd say these two books would be helpful:
(1)
https://www.packtpub.com/big-data-and-business-intelligence/postgresql-9-high-availability-cookbook
- explains capacity planning etc.
(2)
https://www.packtpub.com/big-data-and-business-intell
materialized views (or tables maintained by triggers - this
might work for the 'latest record' subquery), etc.
(c) Try to tweak the cost parameters, to make the nested loops more
expensive (and thus less likely to be selected), but in a more
gradual way than enable_nestloops=false
On 17.3.2015 16:24, Thomas Kellerer wrote:
> Tomas Vondra schrieb am 17.03.2015 um 15:43:
>> On 17.3.2015 15:19, Thomas Kellerer wrote:
>>> Tomas Vondra schrieb am 17.03.2015 um 14:55:
>>>> (2) using window functions, e.g. like this:
>>>>
ty, the plan is reorganized (e.g. different join order), but the
misestimates are still lurking there.
> Is there a way, we can do anything about it?
Rephrasing the query so that the planner can estimate it more accurately.
--
Tomas Vondra http://www.2ndQuadrant.com/
PostgreSQL De
On 17.3.2015 15:19, Thomas Kellerer wrote:
> Tomas Vondra schrieb am 17.03.2015 um 14:55:
>> (2) using window functions, e.g. like this:
>>
>> SELECT * FROM (
>>SELECT *,
>> ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id
>>
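The general shape of that pattern, for completeness - the source table and
the ordering column below are assumptions borrowed from elsewhere in the
thread, not part of the quoted text:

SELECT *
  FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id
                              ORDER BY send_date DESC) AS rn
      FROM s_f_promotion_history
  ) sub
 WHERE rn = 1;   -- keeps only the latest row per touchpoint_execution_id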
multi-column conditions is
difficult, no idea how to fix that at the moment.
--
Tomas Vondra http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
ficant differences between estimated and actual
row counts, and very expensive parts).
We can't do that, because we don't have your data or queries, and
without the explain analyze it's difficult to give advice.
--
Tomas Vondra http://www.2ndQuadrant.com/
PostgreSQL
this is transformed into a nested loop join. If there's a
misestimate, this may be quite expensive - try to create index on
s_f_promotion_history (touchpoint_execution_id, send_date)
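I.e. something along the lines of:

CREATE INDEX ON s_f_promotion_history (touchpoint_execution_id, send_date);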
regards
--
Tomas Vondra http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support,
On 16.3.2015 20:43, Jim Nasby wrote:
> On 3/13/15 7:12 PM, Tomas Vondra wrote:
>> (4) I suspect many of the relations referenced in the views are not
>> actually needed in the query, i.e. the join is performed but
>> then it's just discarded because
main advantage is
that you can create indexes, collect statistics. Disadvantage is
you have to refresh the MV, fill temporary table etc.
I expect (1) to improve the performance significantly, and (2) might
improve it even further by fixing the misestimates.
regards
--
Tomas Vondra
uch longer too. That only
increases the chance of OOM issues.
It may work fine when most of the connections are idle, but it makes
storms like this possible.
--
Tomas Vondra http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Serv
was
mostly idle connections because of separate connection pools on each
application server. So mostly idle (90% of the time), but at peak time
all the application servers want to do stuff at the same time. And it
all goes KABOOOM! just like here.
--
Tomas Vondra http://www.2ndQuadr
changes the limits for
write-heavy databases - the main limit is still the drives.
regards
Tomas
--
Tomas Vondra http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
that's not the best kernel version
for PostgreSQL - see [1] or [2] for example.
[1]
https://medium.com/postgresql-talk/benchmarking-postgresql-with-different-linux-kernel-versions-on-ubuntu-lts-e61d57b70dd4
[2]
http://www.databasesoup.com/2014/09/why-you-need-to-avoid-linux-kernel-32.ht
difficult for us.
That's difficult to say, because we really don't know where the problem
is and how much the queries can be optimized.
--
Tomas Vondra http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
ider taking experts advice on how to tune queries
> and server, but if postgres is going to behave like this, I am not
> sure we would be able to continue with it.
That's probably a good idea.
--
Tomas Vondra http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 S
might
want to check this:
https://wiki.postgresql.org/wiki/Slow_Query_Questions
You have not provided the full query, just a query apparently
referencing views, so the actual query is way more complicated.
Also, plain EXPLAIN is not really sufficient, we need EXPLAIN ANALYZE.
On 11.3.2015 18:30, Jeff Janes wrote:
> On Sat, Mar 7, 2015 at 7:44 AM, Tomas Vondra
> <tomas.von...@2ndquadrant.com> wrote:
>
> On 7.3.2015 03:26, Jeff Janes wrote:
> > On Fri, Mar 6, 2015 at 5:38 PM, Tom Lane <t...@sss.pgh.pa.us>
>
ing to
> know if this query is bad every time it is planned, or just
> sometimes.
Yeah, this might be the reason. Another possibility is that this is part
of some large batch, and autovacuum simply did not have a chance to do the
work.
regards
--
Tomas Vondra http://www.2ndQ
On 6.3.2015 01:44, Tom Lane wrote:
> Tomas Vondra writes:
>> On 5.3.2015 16:01, Gunnlaugur Thor Briem wrote:
>>> - postgres version is 9.1.13
>
>> The only thing I can think of is some sort of memory exhaustion,
>> resulting in swapping out large amounts of mem
I.e. a script that
demonstrates the issue? I tried to reproduce the issue using the
information provided so far, but unsuccessfully :-(
Even if you could reproduce the problem on another machine (because of
keeping the data internal) on a server with debug symbols and see where
most of the
On 29.1.2015 00:03, Josh Berkus wrote:
> On 01/28/2015 11:48 AM, Tomas Vondra wrote:
>> On 27.1.2015 08:06, Josh Berkus wrote:
>>> Folks,
>>>
>> ...
>>>
>>> On a normal column, I'd raise n_distinct to reflect the higher
>>>
any ideas of how that might work?
We're already collecting stats about contents of arrays, and maybe we
could do something similar for JSONB? The nested nature of JSON makes
that rather incompatible with the flat MCV/histogram stats, though.
regards
--
Tomas Vondra http://www
cular plan.
But as the condition is on the join column, you may try moving it back:
select *
from mixpanel_events_201409 mp
inner join mixpanel_event_list ev on ( ev.id = mp.event_id )
where mp.event_id in (3, 4, 5, 6, 7, 8, 9, 10, 11, 373, 375, 376, 318);
Of course, this only works on t
e_count
FROM events
GROUP BY 1;
and then using intarray with GIN indexes to query this table?
Something like this:
CREATE INDEX products_agg_idx ON aggregated
USING GIN (product_ids gin__int_ops);
SELECT * FROM events_aggregated WHERE product_ids @> ARRAY['82503']
cut the process in little part...
We're talking about a 600-800 ms query - even if you cut it to 1 ms, I
don't see how this would make a difference in a batch-style job.
If you're doing many such queries (with different id_article values),
you may do something like this
select caracteris
hat might go wrong.
OK. My recommendation is not to mess with default_statistics_target unless
you actually have to (e.g. increasing the value on all tables, without a
query where the current value causes trouble). It increases time to plan
the queries, collect statistics (ANALYZE / autovacuum) etc.
a. So you may not get more detailed stats, even when using
a larger sample.
That being said, I really doubt increasing the statistics target above
1 (or even sampling the whole table) will help you in practice.
Might be worth showing an example of a bad estimate with your data, or
maybe a te
On 21.11.2014 19:38, Jeff Janes wrote:
>
> When I run this patch on the regression database, I get a case where
> the current method is exact but the adaptive one is off:
>
> WARNING: ndistinct estimate current=676.00 adaptive=906.00
>
> select count(distinct stringu1) from onek;
> 676
>
> It s
Hi,
On 14.11.2014 00:16, CS DBA wrote:
> This is on a CentOS 6.5 box running PostgreSQL 9.2
>
>
> On 11/13/14 4:09 PM, CS DBA wrote:
>> All;
>>
>> We have a large db server with 128GB of ram running complex
>> functions.
>>
>> with the server set to have the following we were seeing a
>> somewha
On 5.11.2014 20:16, Tory M Blue wrote:
>
> Thanks Thomas,
>
>
> On 4.11.2014 21:07, Tory M Blue wrote:
> > Well after fighting this all day and dealing with a really sluggish db
> > where even my slon processes were taking several seconds, I reduced my
> > shared_buffers back to
Hi Tory,
On 4.11.2014 21:07, Tory M Blue wrote:
> Well after fighting this all day and dealing with a really sluggish db
> where even my slon processes were taking several seconds, I reduced my
> shared_buffers back to 2GB from 10GB and my work_mem from 7.5GB to 2GB.
> i actually undid all my chan
On 29.10.2014 16:12, jmcdonagh wrote:
> Hi Tomas- thank you for your thoughtful response!
>
>
> Tomas Vondra wrote
>> On 28.10.2014 21:55, jmcdonagh wrote:
>>> Hi, we have a nightly job that restores current production data to
>>> the development databases i
On 28.10.2014 22:15, Jeff Chen wrote:
> Hi friends!
>
> I'd love to get a sanity check on whether a fat select query I'm doing
> makes sense given the infrastructure that we have.
>
> We have 3 big tables that we typically join together for certain
> queries: a ~40 million row photos table, a ~20
On 28.10.2014 21:55, jmcdonagh wrote:
> Hi, we have a nightly job that restores current production data to
> the development databases in a 'warm spare' database so that if the
> developers need fresh data, it's ready during the day. When we moved
> from 9.0 to 9.2 suddenly the restores began to ta
On 22 October 2014, 0:25, Montana Low wrote:
> I'm running postgres-9.3 on a 30GB ec2 xen instance w/ linux kernel
> 3.16.3.
> I receive numerous Error: out of memory messages in the log, which are
> aborting client requests, even though there appears to be 23GB available
> in
> the OS cache.
>
On 17.10.2014 19:25, Greg Stark wrote:
> On Wed, Oct 15, 2014 at 7:02 PM, Tomas Vondra wrote:
>> If you know the title of the article, it's usually available
>> elsewhere on the web - either at the university site, or elsewhere.
>> I found these two articles
On 15.10.2014 19:20, Josh Berkus wrote:
> On 10/10/2014 04:16 AM, Greg Stark wrote:
>> On Thu, Oct 2, 2014 at 8:56 PM, Josh Berkus wrote:
>>> Yes, it's only intractable if you're wedded to the idea of a tiny,
>>> fixed-size sample. If we're allowed to sample, say, 1% of the table, we
>>> can get
On 10.10.2014 19:59, Craig James wrote:
> On Fri, Oct 10, 2014 at 9:53 AM, Tomas Vondra <t...@fuzzy.cz> wrote:
>
>
> On 10.10.2014 16:21, Craig James wrote:
> > Our index is for chemical structures. Chemicals are indexed on
> >
On 10.10.2014 14:10, Tomas Vondra wrote:
> On 10 October 2014, 13:16, Greg Stark wrote:
>> On Thu, Oct 2, 2014 at 8:56 PM, Josh Berkus wrote:
>>> Yes, it's only intractable if you're wedded to the idea of a tiny,
>>> fixed-size sample. If we're a
On 10.10.2014 16:21, Craig James wrote:
> On Fri, Oct 10, 2014 at 5:10 AM, Tomas Vondra <t...@fuzzy.cz> wrote:
>
> > I've gone looking for papers on this topic but from what I read this
> > isn't so. To get any noticeable improvement you nee
On 10 October 2014, 13:16, Greg Stark wrote:
> On Thu, Oct 2, 2014 at 8:56 PM, Josh Berkus wrote:
>> Yes, it's only intractable if you're wedded to the idea of a tiny,
>> fixed-size sample. If we're allowed to sample, say, 1% of the table, we
>> can get a MUCH more accurate n_distinct estimate
On 3.10.2014 02:54, Peter Geoghegan wrote:
> On Thu, Oct 2, 2014 at 12:56 PM, Josh Berkus wrote:
>> Yes, it's only intractable if you're wedded to the idea of a tiny,
>> fixed-size sample. If we're allowed to sample, say, 1% of the
>> table, we can get a MUCH more accurate n_distinct estimate usi