-> Parallel Seq Scan on big (cost=0.00..222744.43 rows=3 width=4)
   (actual time=0.143..992.890 rows=294118 loops=6)
Filter: ((id % 17) = 0)
Rows Removed by Filter: 4705883
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On 1 October 2016 at 10:10, Tom Lane wrote:
> David Rowley writes:
>> On 1 October 2016 at 05:47, Tom Lane wrote:
>>> Somebody will need to trace through this on Windows and see where it's
>>> going off the rails.
>
>> I tried the test case on 9.6
I'm just wondering - does anyone have any thoughts or suggestions about
performance?
Any would be much appreciated.
(I'm wondering if I just need faster instances/disks or if I'm doing
something wrong.)
Does anyone have similar DB sizes? What CPU/RAM/disks do you have?
Thanks,
David
Scala Academy
idea or explanation of reasons if it's normal behavior.
David
(See attached file: checkpoints)
--
-
Ing. David TUROŇ
LinuxBox.cz, s.r.o.
28. rijna 168, 709 01 Ostrava
tel.:+420 591 166 224
fax:+420 596 621 273
mobil: +420 732 589 152
www.linuxbox.cz
completely normal. WAL files are recycled so a file with a
later name can have an earlier timestamp. What this means is it is
available but has not been used to record transactions yet.
So, 000100C6 is the end of your current WAL stream.
--
-David
da...@pgmasters.net
--
Sent via
On 11/3/16 1:16 PM, Tom DalPozzo wrote:
so if I understand correctly, the ...DE file's previous name was less than
...C6, and it was renamed well in advance for later use. I was missing
this advance renaming.
That is correct.
--
-David
da...@pgmasters.net
always drop the public schema as the first step of any build and have
never seen any ill effects.
Nothing is exposed by default in the public schema unless you install
extensions into it.
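For readers who want to try the same thing, a minimal sketch of that drop-and-replace build step (the schema and database names here are illustrative, not from the thread):

```sql
-- Drop the default public schema as the first step of the build.
DROP SCHEMA IF EXISTS public CASCADE;

-- Create a dedicated application schema instead (name is hypothetical).
CREATE SCHEMA app;

-- Make it the default lookup path for this database.
ALTER DATABASE mydb SET search_path = app;
```

CASCADE also removes any objects already created in public, so this is safest at the very start of a fresh build, as described above.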
--
-David
da...@pgmasters.net
..., so it now does not produce extra WAL records.
David
--
dows.
> We need to know the equivalent library in Linux to build the same in Linux to
> get the symbols resolve correctly.
>
> Any help is appreciated.
>
> -Kiran G
--
David Guyot
Administrateur système, réseau et télécom / Sysadmin
Europe Camions Interactive / Stockway
Moulin
Hi guys,
I want to check my production server for the free space map issue.
https://wiki.postgresql.org/wiki/Free_Space_Map_Problems
I am on Centos 6 currently running 9.3.14 using pgdg repository.
I need to install postgresql93-contrib to get the pg_freespacemap extension.
Only postgresql93-co
Yes, I tried; it's no longer available in the pgdg repository. I will have a
look to see if I can find the rpm package somewhere.
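Once the contrib package is installed, checking the free space map is a short query per relation; a hedged sketch (the table name is a placeholder):

```sql
-- pg_freespacemap ships in the postgresql93-contrib package.
CREATE EXTENSION pg_freespacemap;

-- Inspect per-page free space for a table (replace with a real table name).
SELECT * FROM pg_freespace('my_table');
```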
From: Vladimir Rusinov
Date: Thursday, November 24, 2016 at 11:19 AM
To: David Richer , "pgsql-general@postgresql.org"
Subject: Re: [GENERAL] Extension compatibili
I am not an expert but I am assuming that the native part of the extension
(pg_freespacemap.so) is compiled with postgresql headers and libs.
On 2016-11-24, 11:55 AM, "Adrian Klaver" wrote:
On 11/24/2016 07:40 AM, David Richer wrote:
> Hi guys,
>
>
>
Yes, you are right - I meant a 9.3.15 extension with a 9.3.14 server.
Thanks for the help guys!
On 2016-11-24, 12:10 PM, "Tom Lane" wrote:
Adrian Klaver writes:
> On 11/24/2016 09:01 AM, David Richer wrote:
>> I am not an expert but I am assuming that the native par
d you look at doing backup/archive with a professional tool such
as pgBackRest (http://www.pgbackrest.org) or Barman
(http://www.pgbarman.org/).
--
-David
da...@pgmasters.net
You should ask on the PostGIS list - they will be closer to this:
https://lists.osgeo.org/mailman/listinfo/postgis-users
--
David Rader
dav...@openscg.com
On Mon, Dec 5, 2016 at 9:05 PM, sunpeng wrote:
> Does PostgreSQL support BIM (Building Information Modeling) storage? Or how?
> I ca
on so you will
not have any issues keeping the DB and filesystem in sync.
David
control
or people will be able to download any/all PDFs in a given folder. In
the DB, you surely will have access control as I presume you don't allow
browser access to the DB.
Either way, you may want to see if your PDFs compress well or not as
that may save some storage space at th
request:
https://github.com/pgaudit/pgaudit_analyze/pull/1
I haven't had time to assess the performance impact of this patch yet,
which is why it hasn't been merged. You are welcome to give it a try.
--
-David
da...@pgmasters.net
it into
the community Debian/RHEL packages yet, but should be available soon.
--
-David
da...@pgmasters.net
ing.
Perhaps each of the 2513 found rows, plus the 1068 filtered out rows
were spread over the table. Perhaps each on their own heap page, and
all those pages had to be read from disk. The BUFFERS option might
help show if this is the case.
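To check that, BUFFERS can be added to the EXPLAIN invocation; a sketch (the query itself is a stand-in, not the one from the thread):

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM my_table WHERE some_column = 42;
```

The "Buffers:" lines in the output distinguish shared hits (pages found in cache) from reads (pages fetched from disk), which shows whether the scattered-heap-pages theory holds.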
Does it execute as slowly when you run it for a 2n
s, in the same order as the columns
defined on the table.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
'url','industry','status','comment')
> VALUES
> (1,'AG Spray Inc.',' ','PO Box
> 12129','Salem','OR','97309-0129','USA','503-371-7907','888-273-0937',
can use pg_stat_database to see blocks read and
blocks read from buffer cache as well as block read time and block write
time per database.
See: https://www.postgresql.org/docs/9.6/static/monitoring-stats.html
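A hedged example of such a query against pg_stat_database (note that blk_read_time and blk_write_time are only populated when track_io_timing is on):

```sql
SELECT datname,
       blks_read,                         -- blocks read from disk
       blks_hit,                          -- blocks found in the buffer cache
       round(100.0 * blks_hit
             / nullif(blks_hit + blks_read, 0), 2) AS cache_hit_pct,
       blk_read_time,                     -- ms spent reading (track_io_timing)
       blk_write_time                     -- ms spent writing (track_io_timing)
FROM pg_stat_database
WHERE datname = current_database();
```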
--
David Rader
dav...@openscg.com
On Thu, Jan 19, 2017 at 8:56 AM, Kenneth Marshall wrote:
> On Thu, Jan 19, 2017 at 03:29:34PM +1100, Venkata B Nagothi wrote:
> > On Thu, Jan 19, 2017 at 6:17 AM, PAWAN SHARMA >
> > wrote:
> >
> > > Hello All,
> > >
er with a common table expression
such as:
with cte as (select owner_id from settings where setting_id = 1
and setting_value = 'common_1')
select id1 from multi_id where id1 in (select owner_id from cte)
union
select id2 from multi_id where id2 in (select owner_id from cte)
union
select id3 from multi_id where id3 in (select owner_id from cte);
but you still have the union overhead.
--
David Rowley http://www.2ndQuadrant.com/
<http://www.2ndquadrant.com/>
PostgreSQL Development, 24x7 Support, Training & Services
e lines of:
copy from '' delimiter ' '; -- <- use a tab
character between the quotes.
Of course, you'll need to create the table first with CREATE TABLE.
> Has anybody done anything like this before?
>
>
I'd imagine that it's fairly common.
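A fuller sketch of that COPY approach (the table and file names are made up for illustration; spelling the tab as E'\t' avoids hiding an invisible character between the quotes):

```sql
-- The target table must exist first.
CREATE TABLE imported_data (
    id   integer,
    name text
);

-- Server-side load of a tab-delimited file.
COPY imported_data FROM '/path/to/data.tsv' DELIMITER E'\t';
```

From psql, \copy runs the same command but reads the file on the client side, which avoids needing access to the server's filesystem.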
--
Dav
Hi,
One of my co-workers came out of a NIST cyber-security type meeting today and
asked me to delve into postgres and zeroization.
I am casually aware of mvcc issues and vacuuming
I believe the concern, based on my current understanding of postgres inner
workings, is that when a dead tup
-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: Wednesday, November 18, 2015 3:47 PM
To: Day, David; pgsql-general@postgresql.org
Subject: Re: [GENERAL] postgres zeroization of dead tuples ? i.e scrubbing dead
tuples with sensitive data.
On 11/18/2015 11
-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: Wednesday, November 18, 2015 4:05 PM
To: Day, David; pgsql-general@postgresql.org
Subject: Re: [GENERAL] postgres zeroization of dead tuples ? i.e scrubbing dead
tuples with sensitive data.
On 11/18/2015
-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: Thursday, November 19, 2015 10:32 AM
To: Day, David; pgsql-general@postgresql.org
Subject: Re: [GENERAL] postgres zeroization of dead tuples ? i.e scrubbing dead
tuples with sensitive data.
On 11/19/2015
-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: Thursday, November 19, 2015 11:06 AM
To: Day, David; pgsql-general@postgresql.org
Subject: Re: [GENERAL] postgres zeroization of dead tuples ? i.e scrubbing dead
tuples with sensitive data.
On 11/19/2015
I am working with a client who has a 9.1 database rapidly approaching XID
wraparound. They also have an exceedingly large pg_largeobject table (4217
GB) that has never been vacuumed. An attempt to vacuum this on a replica
has run for days and never succeeded. (Or more accurately, never been
allo
OFFSET.
You can view the output from the planner by prefixing your SELECT statement
with EXPLAIN: See
http://www.postgresql.org/docs/current/static/sql-explain.html
This will give you the information you need to see how the query planner
has decided on how your query will be executed.
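For instance (a placeholder query, assuming a table with an indexed ordering column):

```sql
-- Show the plan the optimizer chose, without executing the query.
EXPLAIN
SELECT * FROM events
ORDER BY created
LIMIT 20 OFFSET 1000;
```

OFFSET still forces the first 1000 rows to be produced and discarded, which the estimated costs in the plan make visible.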
--
David Rowley
On Tue, Dec 1, 2015 at 9:12 AM, Jeff Janes wrote:
> On Mon, Nov 30, 2015 at 9:58 AM, David Kensiski
> wrote:
> > I am working with a client who has a 9.1 database rapidly approaching XID
> > wraparound.
>
> The hard limit at 2 billion, or the soft limit at
> autovacu
On Tue, Dec 1, 2015 at 1:48 PM, Roxanne Reid-Bennett
wrote:
> On 11/30/2015 9:58 AM, David Kensiski wrote:
>
> I am working with a client who has a 9.1 database rapidly approaching XID
> wraparound. They also have an exceedingly large pg_largeobject table (4217
> GB) that
org/user-guide.html#pitr
This method requires archive_mode to be enabled, which I believe is the
correct way to achieve the desired result.
--
-David
da...@pgmasters.net
On 12/2/15 6:25 PM, Jim Nasby wrote:
On 12/2/15 1:56 PM, David Steele wrote:
>Also, I don't want to enable archive_mode = on, as it needs to maintain
>archive files.
As it turns out, archiving would be the solution to your problem. If
you were archiving you could restore a*previous* back
orward to T3.
That should be explained pretty clearly in the user guide - if there's
something you don't understand then it would be helpful to know so I can
improve the guide.
--
-David
da...@pgmasters.net
_club or away_club columns.
>
> How would I do that?
>
Use UNION:
select home_club from fixtures UNION select away_club from fixtures;
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
;
> Would anyone here point me in the right direction?
>
I think you're just missing the capture group to grab the filename. What
you capture in the group is then available to use as you please in \1
(which needs be escaped as \\1) so something like regexp_replace('
http://test.com/test/testfile.php','/([^/]*$)', E'&file=\\1');
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
ty work for you:
http://www.pgbackrest.org/user-guide.html
And allows you to do Point-in-Time Recovery with a single command:
http://www.pgbackrest.org/user-guide.html#pitr
This a detailed guide that shows you exactly how PITR works and how to
verify your result. Even if you don't use pgBackRe
y to 148?
Rsync is possibly your issue here - maybe pg_control is not being copied
because the timestamp is the same on both systems (rsync only has a 1
second time resolution so this is very possible between a master and a
streaming replica). Try rsync with checksums (--checksum) and see if
that makes a difference.
--
-David
da...@pgmasters.net
ng replication.
I have written a tutorial that covers setting up a hot standby with or
without streaming replication using pgBackRest:
http://www.pgbackrest.org/user-guide.html#replication
You can replace backup/restore/archive with other methods but the
principle remains the same. The tutorial is f
nce that state lives longer than an individual
process, it's possible some information leakage could occur that way,
but "object reuse", it seems doubtful.
David
iver,recipient from
(select row_number() over (order by random()) rn, giver from secretsanta) g
inner join
(select row_number() over (order by random()) rn, giver recipient from
secretsanta) r on g.rn = r.rn
)
update secretsanta set recipient = cte.recipient from cte WHERE cte.giver =
secretsanta.giver;
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
has been enabled.
David
t, then index that
function, and then just include a call to that function in the join
condition matching with the equality operator. That'll allow hash and merge
joins to be possible again.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On 30 December 2015 at 13:56, Hiroyuki Sato wrote:
> On Wednesday, 30 December 2015 at 6:04, David Rowley wrote:
>
>> On 30 December 2015 at 04:21, Hiroyuki Sato wrote:
>>
>>> On Tuesday, 29 December 2015 at 4:35, Jeff Janes wrote:
>>>
>>>>
>>>>
>>> But, the planner refus
route where id <
1350) as prev, (select min(id) from route where id > 1350) as next from
route where id=2; ?
That should be much more efficient for a larger table as it should avoid
the seqscan and allow the index to be used for all 3 numbers.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
did, then I think that would be a
bonus. Perhaps someone may mumble something in disagreement about that
though.
It's hard for me to imagine that I've been the only person to do this.
--
David Rowley http://www.2ndQuadrant.com/
<http://www.2ndquadrant.com/>
P
Uhm, you mean this one?
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=b0bc65729070b9cbdbb53ff042984a3c545a0e34
+If however, anyone feels personally abused, threatened, or otherwise
+uncomfortable due to this process, that is not acceptable. If so,
+please contact the
eparate cluster.
Functionality-wise it's pretty similar since you can't join across
databases in Postgres.
There are some advantages to this design since you can separate the
buffer caches, tune optimizer variables, wal settings, etc.
--
-David
da...@pgmasters.net
nsion that I don't know about?
There are a number of logical replication tools (Slony, Bucardo, BDR,
etc.) but I don't see how they would help in this case (because of your
restore requirements).
--
-David
da...@pgmasters.net
d like to find a "simple" long-term solution to this
under-estimation cost problem, which generates hazardous performance
regressions in our production environments.
We would like to hear critiques or other solutions from PostgreSQL experts.
We would like to help developing and testing the solution.
Thank you very much!
Regards,
---
David Grelaud,
Ideolys.
loop).
We thought at the beginning we were alone but it seems to be a problem of
most database systems.
What do you think about the paragraph 4.1 of this paper
http://www.vldb.org/pvldb/vol9/p204-leis.pdf ?
Regards,
---
David Grelaud,
Ideolys.
2016-01-13 16:02 GMT+01:00 Tom Lane :
d to
> be raised.
>
> http://www.postgresql.org/docs/9.1/interactive/sql-droptrigger.html
>
> http://www.postgresql.org/docs/9.4/interactive/sql-droptrigger.html
Seems to have been changed in
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=b152c6cd0de1827
On 14 January 2016 at 12:08, Adrian Klaver
wrote:
> On 01/13/2016 02:51 PM, David Rowley wrote:
>
>> On 14 January 2016 at 11:32, Adrian Klaver > <mailto:adrian.kla...@aklaver.com>> wrote:
>>
>> On 01/13/2016 02:24 PM, Tom Lane wrote
On 14 January 2016 at 03:48, David Grelaud wrote:
> 3) Always avoid nested-loop join when no indexes are available?
>
> Tom Lane said "There might be some cases where this would help, but there
> would be many more where it would be useless or counterproductive."
> Who
On 15 January 2016 at 04:00, Tom Lane wrote:
> David Rowley writes:
> > Perhaps separating out enable_nestloop so that it only disables
> > non-parameterised nested loops, and add another GUC for parameterised
> > nested loops would be a good thing to do. Likely setting
to complete. The performance difference is huge.
I mean, even if the plan is not the best one 100% of the time, it should at
least choose a "risk-free" plan, without these "bad" nested-loops. It is
maybe easier said than done but we want to try.
Regards,
*David Grelaud*
20
nimum locks?
Is this correct behavior or not?
Thanks
David Turoň
--
ing execution time. So I
agree with the "automatic partitioning" description.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
maintains sequential read speeds which I don't think would work
quite as efficiently with btree index performing heap lookups.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
up in the regular expression, then the value matching the .
gets stored in the variable \1, so your regex basically says; "match a
single character which has the same single character to its immediate
right hand side". The extra \ is just an escape character.
--
David Rowley
ill offer much faster aggregation.
There is also https://github.com/2ndQuadrant/fixeddecimal which may be
of some use if you need fixed precision up to a predefined scale. We
found that using fixeddecimal instead of numeric for the TPC-H
benchmark improved performance of query 1 significantly.
--
Da
eddecimal type that you won't have
already gotten from float8. My tests showed that it's very slightly
slower than float8, which is possibly due to float8 addition not
having overflow checks.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
TEs might create
new tuples in some free space elsewhere in the relation, but it's not
hard to imagine other cases where there's no updates and "natural
correlation" is persisted.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Hello. Sorry if I have sent this to too many mailboxes. I hope one of them is
correct. Please forward this on if not.
This was working until recently. In Services I right clicked on properties and
I think I changed the METHOD in pg_hba.conf from md5 to trust.
I think after making this chang
r me.
Well that's a pretty big topic, and you've not hinted much as to which
aspects you'd like to compare.
Perhaps somewhere like [1] might be a good start.
[1]
https://en.wikipedia.org/wiki/Comparison_of_relational_database_management_systems
--
David Rowley
-dave
--
David Rader - VP Product Development
e: dav...@openscg.com | m: 732-887-8140
On Wed, Feb 3, 2016 at 2:40 PM, Adrian Klaver
wrote:
> Ccing list
> On 02/03/2016 11:27 AM, Weisenberg, Sofy Hefets wrote:
>
>> Hi Adrian,
>>
>> I have already uninstalled and install
f the pg_xlog ramdisk is lost on the standby then Postgres will
start throwing errors but there will be no corruption of the heap. It's
possible that Postgres will terminate at this point, but after
remounting the ramdisk you can restart Postgres on the standby and
everything will be fi
inserts, updates and deletes every day as you.
Luckily, we've never experienced data corruption so far ("crossed
fingers").
*David Grelaud*
2016-02-10 8:06 GMT+01:00 Josh berkus :
> On 02/10/2016 05:10 AM, ioan ghip wrote:
>
>> I have a Firebird SQL database r
On 18/02/2016 9:34 am, "Tom Smith" wrote:
>
> Hi:
>
> I feel it is a stupid question.
>
> Can BRIN index enforce uniqueness?
> My issue is
> the column I'd like to apply BRIN index also needs to be unique
> (think of timestamp as primary key).
Only btree supports unique.
Is there a special reas
ns if possible)?
Is there a better solution?
Thank you,
*David Grelaud*
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of John McKown
Sent: Wednesday, 2 March 2016 1:03 PM
To: da...@andl.org
Cc: pgsql-general-owner+M220260=david=andl@postgresql.org; Postgres General
Subject: Re: [GENERAL] Looking for pure C
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of John R Pierce
Sent: Wednesday, 2 March 2016 1:30 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Looking for pure C function APIs for server extension:
language handler and SPI
On 2/29
s as the go
between, so I don't have to write too much C code.
Regards
David M Bennett FACS
Andl - A New Database Language - andl.org
Wow -- you have an old db version there! When doing a little research, I
found that back in 2005 you actually had the same basic issue - that the
way you were using the "lo" contrib module in 7.x and 8.0 was not supported
in 8.1 anymore:
http://www.postgresql.org/message-id/439ffa3f.c5f7.004...@nsa
?
I’m not finding that easy to understand by reading source code.
Regards
David M Bennett FACS
_
MD Powerflex Corporation, creators of PFXplus
To contact us, please call +61-3-9548-9114 or go to
<http://www.pfxcorp.com/contact.htm> www.pfxcorp.com/contact.htm
From: pgsql-g
ing (after
naming things)?
[dmb>] Seems like DLL static memory with allocation from process memory (or
even malloc()) is "the simplest thing that could possibly work".
Regards
David M Bennett FACS
Andl - A New Database Language - andl.org
out for extra VERPs and delete them (as
I have on this message).
Regards
David M Bennett FACS
Andl - A New Database Language - andl.org
On 3/17/16 5:07 PM, David G. Johnston wrote:
> Figured out it had to be added to 2016-09...done
Hmm ... this patch is currently marked "needs review" in CF 2016-03. Am
I missing something, should this have been closed?
--
-David
da...@pgmasters.net
On 3/17/16 7:00 PM, Tom Lane wrote:
> David Steele writes:
>> On 3/17/16 5:07 PM, David G. Johnston wrote:
>>> Figured out it had to be added to 2016-09...done
>
>> Hmm ... this patch is currently marked "needs review" in CF 2016-03. Am
>> I miss
Hi Howard,
So long as you haven't touched anything else, simply reinstalling the
package should restore your cluster. Debian packages only do
initialization if the data directories are missing.
David
On Thu, Mar 24, 2016 at 05:29:23PM +, Howard News wrote:
> Hi,
>
> I u
On Thu, Mar 24, 2016 at 05:44:27PM +, Howard News wrote:
> Thanks David,
>
> Unfortunately my cluster won't start - I am not entirely sure of the state of
> postgresql-9.0, this is the output from dpkg --list
>
>
>
> rc postgresql-9.0 9.0.4-1~lucid1 object-relation
odifying your query to become:
select max(rh) into tmp from items where vslwhid=vvslid and
itoar(defid) ~ (select get_machdef_sister_defids(vdefid));
Viewing the EXPLAIN of this, you'll notice the InitPlan, which will
evaluate the function and allow the use the output value as a
parameter in t
laced the disk on the original server and we're
now trying to make it a streaming replication slave. This is the part
that's failing. If I do rough estimates of how fast the Exx number is
incrementing and compute backwards, E22 seems like about the time of the
original disk failure, give
On 4/4/16 10:57 PM, Michael Paquier wrote:
> On Sun, Apr 3, 2016 at 2:50 PM, David Caldwell wrote:
>> Hello,
>>
>> We're using streaming replication. Our technique for spinning up a db
>> slave is this:
>>
>> rsync from master (gross copy)
>>
Hello,
Understanding postgresql memory usage ?
While traffic loading our application that uses a postgres 9.3.11 database
instance, we observe higher RAM usage then expected. The system bogs down as
free memory decreases to minimal values. The most RAM usage seems to be with
postgres backe
On 18 April 2016 at 13:10, Sergei Agalakov wrote:
> Thank you, I know this place.
> I just wanted to check that my request will have the peoples support.
> So far it doesn't. It looks like that or people never need to compare two PG
> databases to find the differences in the schemas or security,
>
Day, David
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] understanding postgres backend process memory usage
Hi David,
How many different tables/objects do you read from, and do you use bind variables?
Each postmaster process keeps an internal dictionary of objects and queries run -
If you h
OK, got it.
I really wasn't expecting to have to deal with TOASTs in what looks like a
rather ordinary query -- perhaps there might be a note in the documentation?
But thanks, that works just fine. Problem solved.
Regards
David M Bennett FACS
Andl - A New Database Language - and
andler, since every call to a language
handler is a call to a function.
Did you mean 'inside a nested function'? Or something else?
Regards
David M Bennett FACS
Andl - A New Database Language - andl.org
ere a series of PL/1
function calls we used rather than encoding the request as a string
as SQL systems require.
The IBM centre in Peterlee was closed, and the lab moved to Winchester
where I think it still resides.
David
y of any significance.
If I were you I'd start with building a robust proxy server first,
serving up your custom protocol and rewriting it to a PG client
connection internally, and only then look at how that might be merged
in-proess if indeed there was a real need for it.
David
ds... probably not better. You can test this yourself
with; SET enable_nestloop = 0; you might need to SET enable_mergejoin
= 0; too. I imagine it'll be slower.
Likely the only way to speed this up would be to create indexes;
create index on reports (id, age, gender, created);
the above mi
f it ... or
> could further tweaking to PostgreSQL's configuration be of any help here?
EXPLAIN ANALYZE also has quite a bit of timing overhead, so it might
not be taking quite as long as you think.
How long does it take with EXPLAIN (ANALYZE, TIMING OFF) ... ?
Or perhaps just run th
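A sketch of the TIMING OFF form (the query is a stand-in); psql's \timing is one way to time the bare query instead:

```sql
-- Per-node row counts without the per-row clock calls:
EXPLAIN (ANALYZE, TIMING OFF)
SELECT count(*) FROM my_table WHERE flag;

-- Or, in psql, time the bare query with no instrumentation at all:
-- \timing on
SELECT count(*) FROM my_table WHERE flag;
```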
an verify it works right. It's not that important -- Andl can
emulate it quite easily.
Regards
David M Bennett FACS
Andl - A New Database Language - andl.org
ed into that effort.
See http://www.thethirdmanifesto.com/.
Hugh worked for some years for IBM on the SQL Committee, but eventually left
over a major disagreement in direction. TTM is based on the work he's done
since (with Chris Date). Andl derives from that.
I would say that very littl
On 9 May 2016 at 18:46, David G. Johnston wrote:
> On Sunday, May 8, 2016, Sterpu Victor wrote:
>>
>> Yes but it is very big.
>> I don't understand why the select list is influencing the CPU usage.
>> I was expecting that only the join and where clauses would inf