> Mike Nolan wrote:
> > Is there a way to copy a table INCLUDING the check constraints? If not,
> > then that information is lost, unlike varchar(n).
>
> "pg_dump -t" should work fine, unless I'm misunderstanding you.
I was specifically referring to doing
ostgres expert, btw).
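For the archives, a SQL-only sketch that copies a table together with its CHECK constraints on reasonably modern versions (hypothetical table name t; INCLUDING ALL also brings along defaults and indexes):

    CREATE TABLE t_copy (LIKE t INCLUDING ALL);
    INSERT INTO t_copy SELECT * FROM t;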
My question is: if I changed both fields to be text or varchar(1000)
then would the index be used?
Ta,
--
Mike
duration.
Let me know if there is anything else I can supply that will help the
review, thanks!
One additional question - is there any way to check how long postgres is
estimating an operation will take to complete while it is running?
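(There's no built-in remaining-time estimate for a running statement, but a sketch of watching elapsed time per backend via pg_stat_activity — column names assume 9.2+; older releases use procpid and current_query:)

    SELECT pid, now() - query_start AS runtime, query
    FROM pg_stat_activity
    WHERE state = 'active'
    ORDER BY runtime DESC;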
Thanks again,
Mike
This is a good start; if there are any other suggestions please let me know
- is there any query to check estimated time remaining on long running
transactions?
On Mon, Jun 6, 2011 at 3:37 PM, Tom Lane wrote:
> Mike Broers writes:
> > I am in the process of implementing cascade
n archived?
Mike
Did you check your read ahead settings (blockdev --getra)?
Mike DelNegro
Sent from my iPhone
On Apr 3, 2012, at 8:20 AM, Cesar Martin wrote:
> Hello there,
>
> I am having a performance problem with a new DELL server. Actually I have
> these two servers
>
> Server A
f not, where should I look next?
______
*Mike Blackwell | Technical Analyst, Distribution Services/Rollout
Management | RR Donnelley*
1750 Wallace Ave | St Charles, IL 60174-3401
Office: 630.313.7818
mike.blackw...@rrd.com
nable?
Thanks for your help, BTW!
here.
Any suggestions are welcome.
Mike
ger. As that apparently is not an issue I went ahead and tried the
DELETE and COPY in a transaction. The load time is quite reasonable this
way.
Thanks!
Mike
heard it either.
>
Ah. That must have been what I'd half-remembered. Thanks for the
clarification.
Mike
DROP INDEX outside the transaction?
The access pattern for the table is such that I can afford the occasional
stray hit without an index during the reload time. It's been pretty quick
using the above.
Mike
Ah. So it does. Testing with two psql sessions locks as you said, and
moving the DROP INDEX to a separate transaction gives the results I was
looking for.
Thanks,
Mike
On Thu, Nov 29, 2012 at 12:09 PM, Jeff Janes wrote:
>
> But If you do keep the drop index inside the transaction, then you
> would probably be better off using truncate rather than delete, and
> rebuild the index non-concurrently and move that inside the
> transaction as well.
>
>
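A sketch of that combined shape (hypothetical table, index, and path names; both TRUNCATE and a non-concurrent CREATE INDEX are transaction-safe):

    BEGIN;
    DROP INDEX items_by_key;
    TRUNCATE items;
    COPY items FROM '/path/to/reload.csv' WITH (FORMAT csv);   -- bulk reload
    CREATE INDEX items_by_key ON items (key_col);              -- rebuilt before commit
    COMMIT;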
Hmm From
2.4GHz 4-core E5-2609 CPUs
64GB RAM
2x146GB 15K SAS hard drives
3x200GB SATA SLC SSDs
+ the usual accessories (optical drive, rail kit, dual power supplies)
Opinions?
Thanks in advance for any suggestions you have.
-Mike
--
Mike McCann
Software Engineer
Monterey Bay Aquarium Research Institute
On May 7, 2013, at 4:21 PM, Jeff Janes wrote:
> On Thu, May 2, 2013 at 6:35 PM, Scott Marlowe wrote:
> On Thu, May 2, 2013 at 5:11 PM, Mike McCann wrote:
> > Hello,
> >
> > We are in the fortunate situation of having more money than time to help
> > solve our Post
On May 13, 2013, at 4:24 PM, Jeff Janes wrote:
> On Mon, May 13, 2013 at 3:36 PM, Mike McCann wrote:
>
> Increasing work_mem to 355 MB improves the performance by a factor of 2:
>
> stoqs_march2013_s=# set work_mem='355MB';
> SET
> stoqs_march2013_s=
to set up a DB cluster to handle 1000 concurrent users.
>
Ok. That's a start. Can you tell us more about what these users are
doing? What kind of queries are being issued to the database? How often
(per user or total per time)?
be missing?
Thanks for your reply. Adding '-c fast' does seem to improve the initial
delay. I'm still seeing delays of several minutes between write bursts.
The server has light OLTP loading.
heavily hit databases and maintain small lag times? I would like to believe
that the fault is something we have done, though, and that there is some
parameter we could tune to reduce this lag.
Any recommendations would be very helpful.
Mike Wilson
Predicate Logic Consulting
and when the purchase traffic increases that is when we
start to see extreme lag develop on the slave.
CPU utilization on the slave during extreme lag is similar to normal operation
even if the slave is lagging more than usual.
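(For what it's worth, a sketch of watching the lag directly, using 9.x-era names — pg_xlog_* and the *_location columns became pg_wal_* and *_lsn in PostgreSQL 10:)

    -- on the master:
    SELECT client_addr,
           pg_xlog_location_diff(sent_location, replay_location) AS lag_bytes
    FROM pg_stat_replication;

    -- on the standby:
    SELECT now() - pg_last_xact_replay_timestamp() AS replay_delay;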
Thanks for the info on max_wal_senders. That’s good to know.
Mike
Mike Wilson
> On Nov 2, 2014, at 12:33 PM, Greg Spiegelberg wrote:
>
> Hi Mike,
>
> Sounds very familiar. Our master fans out to 16 slaves (cascading) and we
> had great success with segregating database queries to different slaves and
> some based on network late
I had a query that was filtering with a wildcard search of a text field for
%SUCCESS%. The query took about 5 seconds and was running often, so I wanted
to improve it. I suggested that the engineers include a new boolean column
for successful status. They implemented the requested field, but the q
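(One shape that suggestion could take — a partial index so the common filter touches only flagged rows; the table and column names here are hypothetical, not from the thread:)

    CREATE INDEX CONCURRENTLY event_success_idx
        ON event (created_at)
        WHERE successful;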
Thanks, very informative! I'll experiment with work_mem settings and report
back.
On Tue, Jul 7, 2015 at 11:02 AM, Tom Lane wrote:
> Mike Broers writes:
> > I had a query that was filtering with a wildcard search of a text field
> for
> > %SUCCESS%. The query took
After bumping up work_mem from 12MB to 25MB that last materialize is indeed
hashing and this cut the query time by about 60%. Thanks, this was very
helpful and gives me something else to look for when troubleshooting
explains.
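(For the archives, the session-local form of that experiment, so nothing server-wide has to change — the stand-in query below is hypothetical:)

    SET work_mem = '25MB';   -- lasts for this session only
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM events WHERE status_text LIKE '%SUCCESS%';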
On Tue, Jul 7, 2015 at 11:10 AM, Mike Broers wrote:
> Tha
increase IOPs/distribute them) or switch to SSDs and
forget about almost everything…
Mike
--
From: Dave Stibrany [mailto:dstibr...@gmail.com]
Sent: Friday, March 18, 2016 7:48 AM
Hey Mike,
Thanks for the response. I think where I'm confused is that I thought v
question, feel free to clarify for me.
Mike
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Dave Stibrany
Sent: Thursday, March 17, 2016 1:45 PM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Disk Benchmarking Question
o haven't needed to pull
on that lever.
My latest model is placing large genomic data onto the AWS S3 file system,
keeping all of
the metadata inside the database. It's working very well so far, but we're
still in development.
Mike
With 20 cores, it can
only support 40 active users?
I come from the SQL Server world where a single 20 core server could support
hundreds/thousands of active users and/or many dozens of background/foreground
data processes. Is there something fundamentally different between the two
platforms rela
loops=38831)
>              Index Cond: (feature_id = f.id)
>              Buffers: shared hit=181429 read=426
>        ->  Index Scan using warning_feature_id_index on "Warning" warning
>            (cost=0.00..20.88 rows=1 width=16) (actual time=0.007..0.007 rows=0 loops=26265)
>              Index Cond: (f.id = feature_id)
>              Buffers: shared hit=81151 read=42
> Total runtime: 6273.312 ms
>
>
> ---Version---
> PostgreSQL 9.2.1, compiled by Visual C++ build 1600, 64-bit
>
>
> ---Table sizes---
> Extra_info    1223400 rows
> Feature     185436000 rows
> Measurement        50 rows
> Point       124681000 rows
> Warning      11766800 rows
>
> ---Hardware---
> Intel Core i5-2320 CPU 3.00GHz (4 CPUs)
> 6GB Memory
> 64-bit Operating System (Windows 7 Professional)
> WD Blue 500GB HDD - 7200 RPM SATA 6 Gb/s 16MB Cache
>
> ---History---
> Query gets slower as more data is added to the database
>
> ---Maintenance---
> Autovacuum is used with default settings
>
> Tommi Kaksonen wrote:
I don’t see a reason to partition such small data. What I do see is you
attempting to run a big query on what looks like a small desktop PC. 6GB of
RAM, especially under Windows 7, isn’t enough for a database server. Run the
query on a normal small server of, say, 16GB and it should perform fine. IMO.
Mike
comparison between Pass 1 and Pass 2: average row lengths were within 7% of
each other (1121 vs 1203) using identical table structures and processing
code; the only difference was the target server.
I'm happy to answer questions about these results.
Mike Sofen (USA)
From: Claudio Freire Sent: Friday, September 02, 2016 1:27 PM
On Thu, Sep 1, 2016 at 11:30 PM, Mike Sofen < <mailto:mso...@runbox.com>
mso...@runbox.com> wrote:
> It's obvious the size of the batch exceeded the AWS server memory,
> resulting in a profoundly slower pro
From: Jim Nasby [mailto:jim.na...@bluetreble.com] Sent: Wednesday, September
07, 2016 12:22 PM
On 9/4/16 7:34 AM, Mike Sofen wrote:
> You raise a good point. However, other disk activities involving
> large data (like backup/restore and pure large table copying), on both
> plat
Hello, I am curious about the performance of queries against a master table
that seem to do seq scans on each child table. When the same query is
issued at a partition directly it uses the partition index and is very
fast.
The partition constraint is in the query criteria. We have non overlappin
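(A first sanity check, sketched with hypothetical names — with inheritance partitioning the planner only skips child tables when constraint exclusion is enabled and the partition key appears as a plain predicate:)

    SHOW constraint_exclusion;      -- should be 'partition' (the default) or 'on'
    EXPLAIN
    SELECT count(*) FROM event_master
    WHERE landing_id = 12345;       -- partition-key predicate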
would help, but it's just my wish.
>
>
>
> Regards,
> Ganesh Kannan
>
>
>
> ------
> *From:* pgsql-performance-ow...@postgresql.org on behalf of Mike Broers
> *Sent:* Wednesday, September 21, 2016 12:53 PM
> *To:* pgsql-performance@postgresql.org
plan by using a subquery on the indexed
partition and using those results to scan for the unindexed value.
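(Roughly the shape of that rewrite, with hypothetical names — narrow by the indexed column first, then apply the unindexed predicate only to the survivors:)

    SELECT e.*
    FROM event_master e
    WHERE e.landing_id IN (SELECT landing_id
                           FROM event_partition
                           WHERE indexed_col = 'needle')
      AND e.unindexed_col = 'target';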
On Wed, Sep 21, 2016 at 12:37 PM, Mike Broers wrote:
> Thanks for your response - Is 'selectively choosing what partition'
> different than utilizing each partitions index wh
it may become a
more frequent ad-hoc need, so if there is something else I can do it would
be appreciated.
On Wed, Sep 21, 2016 at 9:11 PM, Tom Lane wrote:
> Mike Broers writes:
> > Hello, I am curious about the performance of queries against a master
> table
> > that seem to do
ther issues/requirements that are creating other performance
concerns that aren’t obvious in your initial post?
Mike Sofen (Synthetic Genomics)
scale, auto-shard, fault tolerant,
etc…and I’m not a Hadoopie.
I am looking forward to hearing how this all plays out, it will be quite an
adventure! All the best,
Mike Sofen (Synthetic Genomics…on Postgres 9.5x)
(with guaranteed IOPS), performance against even 100m row tables should still
stay within your requirements.
So Rick’s point about not needing millions of tables is right on. If there’s a
way to create table “clumps”, at least you’ll have a more modest table count.
Mike Sofen (Synthetic Genomics)
can easily do 500
million rows per bucket before approaching anything close to the 30ms max query
time.
Mike Sofen (Synthetic Genomics)
From: Mike Sofen Sent: Tuesday, September 27, 2016 8:10 AM
From: Greg Spiegelberg Sent: Monday, September 26, 2016 7:25 AM
I've gotten more responses than anticipated and have answered some questions
and gotten some insight but my challenge again is what should I capture along
the w
looks like it’s having to do a table scan for all the rows above
the id cutoff to see if any meet the filter requirement. “not in” can be very
expensive. An index might help on this column. Have you tried that?
Your rowcounts aren’t high enough to require partitioning or any other changes
to your table that I can see right now.
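(If it helps, the usual rewrite sketched with hypothetical names — NOT EXISTS gives the planner an anti-join, which NOT IN often blocks when the subquery column is nullable:)

    CREATE INDEX ON excluded (id);   -- the index suggested above

    SELECT t.*
    FROM big_table t
    WHERE t.id > 1000000
      AND NOT EXISTS (SELECT 1 FROM excluded e WHERE e.id = t.id);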
Mike Sofen (Synthetic Genomics)
**Short version of my question:**
If I hold a cursor reference to an astronomically huge result set in my
client code, would it be ridiculous (i.e. completely defeats the point of
cursors) to issue "FETCH ALL FROM cursorname" as my next command? Or would
this slowly stream the data back to me as I
I asked the same question at the same time on Stack Overflow (sincere
apologies if this is a breach of etiquette - I really needed an answer, and
I thought the two communities might not overlap).
Stackoverflow now has an answer, by me:
http://stackoverflow.com/q/42292341/#42297234 - which is based
clearly not as simple as that! I don't know if you can offer any more
helpful insight on this last aspect?
Many thanks,
Mike
I meant to say: "the `FETCH 1 FROM cursor` until exhausted pattern will
always be safe". Nasty typo, sorry!
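(Spelled out in plain SQL, with a batch size rather than single rows — big_table stands in for the real query:)

    BEGIN;
    DECLARE big_cur CURSOR FOR SELECT * FROM big_table;
    FETCH 1000 FROM big_cur;   -- repeat until it returns zero rows
    CLOSE big_cur;
    COMMIT;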
> Seems odd. Is your cursor just on "SELECT * FROM table", or is there
> some processing in there you're not mentioning? Maybe it's a cursor
> WITH HOLD and you're exiting the source transaction?
Hi Tom,
I've deleted my own Stack Overflow answer in favour of Laurenz Albe's one.
New TL;DR (I'm
The generated buffer is 140MB, not 14MB. At 14 bytes per row, that makes
sense.
I have done another test.
If I execute `FETCH ALL FROM cursor` I get a 140MB disk buffer file, on the
PostgreSQL server, reported in its log.
If I execute `FETCH 500 FROM cursor` (exactly half the rows), I see a
Thanks, Tom.
Wouldn't this mean that cursors are noticeably non-optimal even for normal
data sizes, since the entire data to be streamed from the table is always
duplicated into another buffer and then streamed?
> if you want the whole query result at once, why are you bothering with a
cursor?
T
                           Index Cond: (landing_id = t_sap.landing_id)
                           Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar))
                     ->  Index Scan using ix_event__00076729_landing_id on event__00076729 e_18  (cost=0.56..4593.36 rows=11599 width=796)
                           Index Cond: (landing_id = t_sap.landing_id)
                           Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar))
                     ->  Index Scan using ix_event__00078600_landing_id on event__00078600 e_19  (cost=0.56..4940.39 rows=13528 width=804)
                           Index Cond: (landing_id = t_sap.landing_id)
                           Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar))
                     ->  Index Scan using ix_event__00080741_landing_id on event__00080741 e_20  (cost=0.56..4105.25 rows=6846 width=760)
                           Index Cond: (landing_id = t_sap.landing_id)
                           Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar))
Any ideas for how to convince postgres to choose the faster plan in qa?
Thanks!
Mike
2017 at 08:28, Mike Broers wrote:
> > I have a query of a partitioned table that uses the partition index in
> > production but uses sequence scans in qa. The only major difference I
> can
> > tell is the partitions are much smaller in qa. In production the
> partitions
                     Buckets: 1024  Batches: 1  Memory Usage: 10kB
         ->  HashAggregate  (cost=41.88..43.88 rows=200 width=4) (actual time=0.054..0.067 rows=45 loops=1)
               Group Key: t_sap.landing_id
               ->  Seq Scan
exists (select 1 from t_sap where e.landing_id = t_sap.landing_id)) as
rankings;
Based on the difference in row estimates I am attempting an analyze with a
higher default_statistics_target (currently 100) to see if that helps.
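(The per-column form, sketched here, raises the sample for just the skewed column instead of the whole cluster; t_sap and landing_id are taken from the plans above:)

    ALTER TABLE t_sap ALTER COLUMN landing_id SET STATISTICS 1000;
    ANALYZE t_sap;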
On Fri, Sep 15, 2017 at 3:42 PM, Tom Lane wrote:
> Mike Broers wri
random_page_cost lower (we are on an SSD SAN anyway..), but I don't
think I have a satisfactory reason why the row estimates are so bad in the
QA planner and why it doesn't use that partition index there.
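(For reference, the kind of change under discussion; values near 1.1 are common for SSD-backed storage:)

    ALTER SYSTEM SET random_page_cost = 1.1;   -- 9.4+; otherwise edit postgresql.conf
    SELECT pg_reload_conf();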
On Fri, Sep 15, 2017 at 3:59 PM, Mike Broers wrote:
> That makes a lot of sense, thanks for
On Wed, Sep 20, 2017 at 6:05 PM, David Rowley
wrote:
> On 21 September 2017 at 04:15, Mike Broers wrote:
> > Ultimately I think this is just highlighting the need in my environment
> to
> > set random_page_cost lower (we are on an SSD SAN anyway..), but I dont
> think
> &
date shows sequential search, while with
'int'::bigint it goes to index search. Using PyPgSQL as the interface to
7.3.4 and 7.3.3.
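(For anyone searching the archives: on pre-8.0 servers a bare integer literal is typed int4 and won't match an int8 index, so quote or cast the value. A sketch against a hypothetical table t with a bigint primary key id:)

    SELECT * FROM t WHERE id = 42;          -- 7.x: sequential scan
    SELECT * FROM t WHERE id = 42::int8;    -- index scan
    SELECT * FROM t WHERE id = '42';        -- also an index scan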
Enjoy,
Mike
Ivar Zarans wrote:
On Fri, Dec 05, 2003 at 10:08:20AM +, Richard Huxton wrote:
...
I am using PyPgSQL for PostgreSQL access and making upda
def __str__( self ):
    return "%s::int8" % (self.value,)
Enjoy,
Mike
___
Mike C. Fletcher
Designer, VR Plumber, Coder
http://members.rogers.com/mcfletch/