On Monday, September 26, 2016 9:44 AM, Tom Lane wrote:
> Paul Jones writes:
>> For a freshly pg_restore'd 9.2 database, would VACUUM ANALYZE update
>> statistics any better than just an ANALYZE?
>
> VACUUM would have caused the page-all-visible flags to get set for all
> pages of unchanged
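A minimal sketch of the two options after a restore (not from the thread;
run in the restored database):

-- ANALYZE alone rebuilds the planner statistics:
ANALYZE;

-- VACUUM ANALYZE additionally sets the all-visible flags in the
-- visibility map, so index-only scans can skip heap fetches on
-- pages that have not changed since the VACUUM:
VACUUM ANALYZE;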
On Tuesday, July 19, 2016 6:19 AM, Teodor Sigaev wrote:
> CREATE INDEX json_tables_idx ON json_tables USING GIN (data jsonb_path_ops);
> Bitmap Heap Scan on json_tables (cost=113.50..37914.64 rows=1 width=1261)
> (actual time=2157.118..1259550.327 rows=909091 loops=1)
> Recheck
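For context, a jsonb_path_ops GIN index like the one above serves only the
containment operator @>; a sketch of that query shape (the key and value
here are illustrative, not from the thread):

SELECT count(*)
FROM json_tables
WHERE data @> '{"name": "AC3 Case Red"}';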
On Monday, July 18, 2016 10:14 PM, Kisung Kim wrote:
Hi, I recently tested the YCSB benchmark too. But contrary to my expectation,
PG (9.5) is slower than MongoDB 3.2. Paul said that making the table with the
no-logging option improved the performance, and it might be equal to
MongoDB's behavior. But in
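A sketch of the no-logging option mentioned above, assuming a YCSB-style
schema (table and column names are illustrative):

-- UNLOGGED skips WAL writes, roughly matching MongoDB's relaxed
-- durability; the table's contents are truncated after a crash.
CREATE UNLOGGED TABLE usertable (
    ycsb_key VARCHAR(255) PRIMARY KEY,
    data     JSONB
);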
On Friday, March 18, 2016 4:54 PM, Andreas Kretschmer wrote:
>
>
>> Paul Jones wrote on 18 March 2016 at 21:24:
>>
>>
>> In Postgres 9.5.1 with a shared_buffer cache of 7Gb, a SELECT from
>
> the first query reads only the tuples from the heap that match the
> where-condition
On Tuesday, March 15, 2016 7:39 PM, "p...@cmicdo.com" wrote:
> Your results are close enough to mine, I think, to prove the point.
> And, I agree that the EDB benchmark is not necessarily reflective of a
> real-world scenario.
>
> However, the cache I'm referring to is PG's shared_buffer cache. You can see
Your results are close enough to mine, I think, to prove the point. And, I
agree that the EDB benchmark is not necessarily reflective of a real-world
scenario.
However, the cache I'm referring to is PG's shared_buffer cache. You can see
the first run of the select causing a lot of disk reads.
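One way to watch that cache effect, sketched with an illustrative query:
EXPLAIN (ANALYZE, BUFFERS) reports "shared read" for blocks fetched from
outside shared_buffers and "shared hit" for blocks already cached, so
running the same SELECT twice shows the reads turning into hits.

EXPLAIN (ANALYZE, BUFFERS)
SELECT data FROM json_tables WHERE data @> '{"name": "AC3 Case Red"}';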
Very helpful!! Thanks!!
On Tuesday, March 1, 2016 9:32 AM, Peter Devoy wrote:
> MongoDB has released 3.2 with its WiredTiger storage engine. Has anyone
> benchmarked 9.5 against it, particularly with JSONB elements several MB in size?
>
> PJ
Hi Paul
I do not have an answer for you but there is a g
> On 12/23/2015 04:17 PM, Paul Jones wrote:
> >
> > I have been having disk errors that have corrupted something in
> > my postgres database. Other databases work ok:
>
> This isn't the best characterization...the "postgres" database is not a "system"
> database but rather a convenient default
On Wednesday, December 23, 2015 6:45 PM, Tom Lane wrote:
> Paul Jones writes:
> > I have been having disk errors that have corrupted something in
> > my postgres database. Other databases work ok:
>
> > postgres=# SELECT pg_catalog.pg_is_in_recovery();
> > ERROR: could not read block 3
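Tom's reply is truncated here; for the record, one standard last-resort
setting for dumping past a damaged block (not necessarily what was advised
in the thread) is:

-- Replaces unreadable pages with zeros so a pg_dump can proceed.
-- This permanently discards the data on those pages; take a
-- file-level backup of the cluster first.
SET zero_damaged_pages = on;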
That worked, thank you.
The Tip in 43.1 did not explain it in that much detail. I searched but could
not find it explained anywhere in the docs. Your paragraph would be
a nice enhancement to the Tip.
PJ
On Sun, 7/19/15, Tom Lane wrote:
Subject: Re
Has anyone successfully built Python 2 and 3 into the same installation
of Postgres 9.4.4? I tried it today on Ubuntu 10.04, Python 2.6.5,
Python 3.1.2 and got an error about undefined symbol: PyString_AsString.
The Python docs say that PyString_* have been renamed to PyBytes_*
and I find references
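For what it's worth, the documented constraint is that plpython2u and
plpython3u can be installed in the same installation (each built against its
own Python in a separate build pass) but cannot be used in the same session;
a sketch:

CREATE EXTENSION plpython2u;   -- in one database/session
CREATE EXTENSION plpython3u;   -- in a different session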
On Thu, Nov 06, 2014 at 02:55:20PM +, Shaun Thomas wrote:
>
> These updates aren't equivalent. It's very important you know this, because
> you're also inflating your table with a lot of extra updated rows.
>
> Take the first UPDATE:
>
> > UPDATE second SET time1 = orig.time1
> > FROM orig
>
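Shaun's point above is that the unfiltered join form rewrites every matched
row; a common guard (sketched using the key column from the schema later in
this thread) is to skip the no-op updates:

UPDATE second
SET    time1 = orig.time1
FROM   orig
WHERE  second.key1 = orig.key1
  AND  second.time1 IS DISTINCT FROM orig.time1;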
> On Mon, 11/3/14, Igor Neyman wrote:
>
> -----Original Message-----
> From: pgsql-general-ow...@postgresql.org
> [mailto:pgsql-general-ow...@postgresql.org]
> On Behalf Of p...@cmicdo.com
> Sent: Monday, November 03, 2014 11:34 AM
> To: pgsql-general@postgresql.org
> Subjec
Why does the UPDATE SET = FROM choose a more poorly performing plan than
the UPDATE SET = (SELECT ...)? It seems to me that it is the same join.
I'm using 9.3.5.
CREATE TABLE orig
(
    key1  VARCHAR(11) PRIMARY KEY,
    time1 TIME
);
INSERT INTO orig (key1, time1)
SELECT
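The excerpt cuts off, but the two formulations being compared presumably
look like this (a sketch; the "second" table is assumed to mirror orig's
schema):

-- Correlated-subquery form:
UPDATE second
SET    time1 = (SELECT o.time1 FROM orig o WHERE o.key1 = second.key1);

-- Join (UPDATE ... FROM) form:
UPDATE second
SET    time1 = orig.time1
FROM   orig
WHERE  second.key1 = orig.key1;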
Hi Andres,
> Hi,
>
> On 2014-09-29 13:52:52 -0700, p...@cmicdo.com wrote:
>> I have a question about BDR Global Sequences.
>>
[deleted]
>> Is there a way to increase a global sequence's reservation block for each
>> node so that I can tell the nodes, "I'm going to load 100M rows now so
>> yo
I have a question about BDR Global Sequences.
I've been playing with BDR on PG 9.4beta2, built from source from the
2nd Quadrant GIT page (git://git.postgresql.org/git/2ndquadrant_bdr.git).
When trying a 100 row \copy-in, letting PG choose the global sequence
values, I get "ERROR: could not