Greetings,
We will be migrating these lists to pglister in the next few minutes.
This final email on the old list system is intended to let you know
that future emails will have different headers and you will need to
adjust your filters.
The changes which we expect to be most significant to user
David, all,
* David G. Johnston (david.g.johns...@gmail.com) wrote:
> On Wed, Aug 9, 2017 at 3:39 AM, Michael Paquier
> wrote:
>
> > This triggers a table rewrite and makes sure that all the data gets
> > WAL-logged. The cost to pay for durability.
That's not entirely accurate- there are certai
Greetings, Lutz!
Please don't top-post on the PG mailing lists, our style is to reply
in-line.
* Lutz Fischer (l.fisc...@ed.ac.uk) wrote:
> I often need some data from [s] where I don't care about [sp]. So in
> how far does having these arrays a part of [s] would make these
> queries slower. Or wo
Greetings,
* Lutz Fischer (l.fisc...@ed.ac.uk) wrote:
> Data in [sp] are never changed. I can probably reduce the size by
> changing datatypes from numeric to float but I was wondering if it
> would be more efficient - primarily in terms of storage - to change
> the structure to have two arrays i
Greetings,
* Rick Otten (rottenwindf...@gmail.com) wrote:
> Actually, I think this may be the way Oracle Hot Backups work. It was my
> impression that feature temporarily suspends writes into a specific
> tablespace so you can take a snapshot of it. It has been a few years since
> I've had to do
* Simon Riggs (si...@2ndquadrant.com) wrote:
> On 23 January 2017 at 17:12, Jeff Janes wrote:
> >> Just to make sure anyone reading the mailing list archives isn't
> >> confused, running pg_start_backup does *not* make PG stop writing to
> >> BASEDIR (or DATADIR, or anything, really). PG *will* c
* Jeff Janes (jeff.ja...@gmail.com) wrote:
> On Mon, Jan 23, 2017 at 7:28 AM, Jim Nasby wrote:
> > On 1/22/17 11:32 AM, Stephen Frost wrote:
> >> The 1-second window concern is regarding the validity of a subsequent
> >> incremental backup.
> >
> &g
* Jeff Janes (jeff.ja...@gmail.com) wrote:
> On Sun, Jan 22, 2017 at 6:57 AM, Stephen Frost wrote:
> > Just to make sure anyone reading the mailing list archives isn't
> > confused, running pg_start_backup does *not* make PG stop writing to
> > BASEDIR (or DATADIR, or an
Greetings,
* Torsten Zuehlsdorff (mailingli...@toco-domains.de) wrote:
> I just have around 11 TB but switched to ZFS based backups only. I'm
> using snapshots therefore which gives some flexibility. I can
> roll them back, I can just clone it and work with a full copy as a
> different cluster (and
* Jim Nasby (jim.na...@bluetreble.com) wrote:
> On 1/20/17 9:06 AM, Stephen Frost wrote:
> >All the pages are the same size, so I'm surprised you didn't consider
> >just having a format along the lines of: magic+offset+page,
> >magic+offset+page, magic+offset+page, e
Vladimir,
* Vladimir Borodin (r...@simply.name) wrote:
> > On 20 Jan 2017, at 19:59, Stephen Frost wrote:
> >>> How are you testing your backups..? Do you have page-level checksums
> >>> enabled on your database?
> >>
> >> Yep,
Greetings,
* julyanto SUTANDANG (julya...@equnix.co.id) wrote:
> Thanks for elaborating this Information, this is new, so whatever it is the
> procedure is *Correct and Workable*.
Backups are extremely important, so I get quite concerned when people
provide incorrect information regarding them.
Greetings,
* julyanto SUTANDANG (julya...@equnix.co.id) wrote:
> Please elaborate more of what you are saying. What i am saying is based on
> the Official Docs, Forum and our own test. This is what we had to do to
> save time, both backing up and restoring.
>
> https://www.postgresql.org/docs/9.
Greetings,
* julyanto SUTANDANG (julya...@equnix.co.id) wrote:
> CORRECTION:
>
> "you might you pg_start_backup to tell the server not to write into the
> DATADIR"
>
> become
>
> "you might *use* pg_start_backup to tell the server not to write into the
> *BASEDIR*, actually server still writes
Greetings,
* julyanto SUTANDANG (julya...@equnix.co.id) wrote:
> Best practice in doing full backup is using RSYNC, but before you can copy
> the DATADIR, you might you pg_start_backup to tell the server not to write
> into the DATADIR, because you are copying that data. After finished copy
> all
Vladimir,
* Vladimir Borodin (r...@simply.name) wrote:
> > On 20 Jan 2017, at 18:06, Stephen Frost wrote:
> >
> > Right, without incremental or compressed backups, you'd have to have
> > room for 7 full copies of your database. Have you looked at what your
&
Vladimir,
* Vladimir Borodin (r...@simply.name) wrote:
> > On 20 Jan 2017, at 16:40, Stephen Frost wrote:
> >> Increments in pgbackrest are done on file level which is not really
> >> efficient. We have done parallelism, compression and page-level increments
>
Vladimir,
* Vladimir Borodin (r...@simply.name) wrote:
> > On 20 Jan 2017, at 15:22, Stephen Frost wrote:
> >> This process can be automatized by some applications like barman
> >> http://www.pgbarman.org/
> >
> > Last I checked, barman is still single-t
* Pavel Stehule (pavel.steh...@gmail.com) wrote:
> 2017-01-20 12:53 GMT+01:00 Dinesh Chandra 12108 :
> > Thanks for quick response.
> >
> > May I know how can I use physical full backup with export transaction
> > segments.
> >
>
> https://www.postgresql.org/docs/9.1/static/continuous-archiving.ht
Ivan,
* Ivan Voras (ivo...@gmail.com) wrote:
> I'm investigating options for an environment which has about a dozen
> servers and several dozen databases on each, and they occasionally need to
> run huge reports which slow down other services. This is of course "legacy
> code". After some discussi
Greetings,
* Joao Junior (jcoj2...@gmail.com) wrote:
> I am running 2 Linux machines, kernel 3.13.0-45-generic #74-Ubuntu SMP.
> Postgresql version 9.4 in both machines, in a Hot Standby scenario.
>
> Master-Slave using WAL files, not streaming replication.
>
> The archive_command from master is:
Vaze,
* Swapnil Vaze (swapvaz...@gmail.com) wrote:
> We are using postgresql 9.2 on redhat linux instance over openstack cloud.
>
> Database is around 441 GB.
>
> We are using below command to take backup:
>
> pg_basebackup -v -D /pgbackup/$bkupdir -Ft -z -c fast
>
> Backup size created is aro
Greg,
* Greg Spiegelberg (gspiegelb...@gmail.com) wrote:
> On Wed, Sep 28, 2016 at 11:27 AM, Stephen Frost wrote:
> > * Greg Spiegelberg (gspiegelb...@gmail.com) wrote:
> > > Bigger buckets mean a wider possibility of response times. Some buckets
> > > may contain
Greg,
* Greg Spiegelberg (gspiegelb...@gmail.com) wrote:
> Bigger buckets mean a wider possibility of response times. Some buckets
> may contain 140k records and some 100X more.
Have you analyzed the depth of the btree indexes to see how many more
pages need to be read to handle finding a row in
* Peter Geoghegan (p...@bowt.ie) wrote:
> On Tue, May 24, 2016 at 10:39 AM, Justin Pryzby wrote:
> > I was able to see great improvement without planner parameters by REINDEX
> > the
> > timestamp index. My theory is that the index/planner doesn't handle well
> > the
> > case of many tuples wit
* Emi Lu (em...@encs.concordia.ca) wrote:
> >*
> >>Trying to insert into one table with 1 million records through java
> >>JDBC into psql8.3. May I know (1) or (2) is better please?
> >>
> >>(1) set autocommit(true)
> >>(2) set autocommit(false)
> >> commit every n records (e.g., 100, 500, 100
* Emi Lu (em...@encs.concordia.ca) wrote:
> Hello,
>
> Trying to insert into one table with 1 million records through java
> JDBC into psql8.3. May I know (1) or (2) is better please?
>
> (1) set autocommit(true)
> (2) set autocommit(false)
> commit every n records (e.g., 100, 500, 1000, etc
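In plain SQL terms, option (2) amounts to grouping rows into explicit transactions instead of paying a commit (and its WAL flush) per row. A minimal sketch of the batched approach — the table and values here are invented for illustration, not the original poster's schema:

```sql
-- One transaction per batch of n rows instead of one per row;
-- each COMMIT forces a WAL flush, so fewer commits means a
-- substantially faster bulk load.
BEGIN;
INSERT INTO items (id, name) VALUES (1, 'one');
INSERT INTO items (id, name) VALUES (2, 'two');
-- ... continue up to n rows per batch ...
COMMIT;
```

With JDBC, the same effect comes from setAutoCommit(false) plus an explicit commit() every n executions, which is what option (2) describes.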
* Heikki Linnakangas (hlinnakan...@vmware.com) wrote:
> On 05/27/2014 02:06 PM, Albe Laurenz wrote:
> >I just learned that NFS does not use a file system cache on the client side.
> >
> >On the other hand, PostgreSQL relies on the file system cache for
> >performance,
> >because beyond a certain a
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Tom Coogan writes:
> > I'd like to understand why PostgreSQL is choosing to filter on the most
> > inefficient predicate first in the query below.
>
> It doesn't know that LIKE is any more expensive than the other operators,
> so there's no reason to do th
* Joe Van Dyk (j...@tanga.com) wrote:
> On Mon, Dec 16, 2013 at 4:14 PM, Tom Lane wrote:
> > Yup, that's what I was just about to suggest. You might want to use
> > 10 or 12 in case some of your queries are a bit more complex than
> > this one --- but don't go overboard, or you may find yourself
Zoltan,
* Boszormenyi Zoltan (z...@cybertec.at) wrote:
> When the virtual disk is a disk file on the host machine, we've measured
> 20% or lower. The host used Fedora 19/x86_64 with IIRC a 3.10.x Linux kernel
> with EXT4 filesystem (this latter is sure, not IIRC). The effect was observed
> both un
Marty,
* Marty Frasier (m.fras...@escmatrix.com) wrote:
> We have a particular query that takes about 75 minutes to complete. The
> selected execution plan estimates 1 row from several of the outermost
> results so picks nested loop join resolutions. That turns out to be a bad
> choice since act
Markus,
* Markus Schulz (m...@antzsystem.de) wrote:
> as you can see the query runs fine.
> I can run this query from a bash-psql-while-loop/jdbc-cli-tool
> endless without any problems.
> so far so good.
[...]
> JBoss EAP 5.1.2 with connection pooling and xa-datasource/two-phase-commits
> (tr
* Jeff Janes (jeff.ja...@gmail.com) wrote:
> I had thought that too, but the catch is that the target expressions do not
> need to be constants when the function is created. Indeed, they can even
> be volatile.
Right, any optimization in this regard would only work in certain
instances- eg: when
2012/12/28 Vitalii Tymchyshyn :
> Why so? Basic form "case lvalue when rvalue then out ... end" is much like
> switch.
Sorry, to be honest, I missed that distinction and didn't expect that to
work as-is, yet apparently it does. Does it currently perform the same
as an if/elsif tree or is it imple
Vitalii,
* Vitalii Tymchyshyn (tiv...@gmail.com) wrote:
> There is switch-like sql case:
[...]
> It should work like C switch statement.
It does and it doesn't. It behaves generally like a C switch statement,
but is much more flexible and therefore can't be optimized like a C
switch statement ca
* Jeff Janes (jeff.ja...@gmail.com) wrote:
> If the main goal is to make it faster, I'd rather see all of plpgsql get
> faster, rather than just a special case of partitioning triggers. For
> example, right now a CASE statement with 100 branches is about
> the same speed as an equivalent list of
Charles,
* Charles Gomes (charle...@outlook.com) wrote:
> I’m doing 1.2 Billion inserts into a table partitioned in
> 15.
Do you end up having multiple threads writing to the same, underlying,
tables..? If so, I've seen that problem before. Look at pg_locks while
things are running and see if t
Karl,
* Karl Denninger (k...@denninger.net) wrote:
> That looks EXTREMELY useful and I'm looking forward to checking it out
> in 9.2; I have asked a similar question about profiling actual queries
> in the past and basically it came down to "turn on explain or run a
> separate explain yourself sin
* Rick Otten (rot...@manta.com) wrote:
> It seems like we almost have everything we need to track this in the stats
> tables, but not quite. I was hoping the folks on this list would have some
> tips on how to get query performance trends over time out of each node in my
> cluster.
I'm afraid
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Tatsuo Ishii writes:
> > Shall I commit to master and all supported branches?
>
> I'm not excited by this patch. It dodges the O(N^2) lock behavior for
> the initial phase of acquiring the locks, but it does nothing for the
> lock-related slowdown occurri
Gregg,
* Robert Haas (robertmh...@gmail.com) wrote:
> On Thu, Apr 5, 2012 at 11:39 AM, Gregg Jaskiewicz wrote:
> > I know this is a very general question. But if you guys had to specify
> > system (could be one server or cluster), with sustainable transaction
> > rate of 1.5M tps running postgres
* Rajesh Kumar. Mallah (mal...@tradeindia.com) wrote:
> We are running linux with kernel 3.2.X
> (which has the lseek improvements)
Ah, good.
> Thanks for the reference , even i thought so (LockManager) ,
> but we are actually also running out db max connections (also)
> ( which is currently at
Rajesh,
* Rajesh Kumar. Mallah (mal...@tradeindia.com) wrote:
> We are puzzled why the CPU and DISK I/O system are not being utilized
> fully and would seek lists' wisdom on that.
What OS is this? What kernel version?
> just a thought, will it be a good idea to partition the host hardware
>
* Sebastian Melchior (webmas...@mailz.de) wrote:
> Does anyone have any idea what could cause this issue or how we can further
> debug it?
Are you logging checkpoints? If not, you should; if so, then see if
they correlate to the time of the slowdown..?
Thanks,
Stephen
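For reference, turning on the checkpoint logging mentioned above is a single setting; log_checkpoints is the actual GUC name, and the rest is a sketch of one way to apply it:

```sql
-- In postgresql.conf:  log_checkpoints = on
-- Or, on PostgreSQL 9.4 and later, without editing the file:
ALTER SYSTEM SET log_checkpoints = on;
SELECT pg_reload_conf();  -- takes effect without a restart
```

Each checkpoint then appears in the server log with its start/end time, letting you compare against the timestamps of the reported slowdowns.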
* Stefan Keller (sfkel...@gmail.com) wrote:
> So, are there any developments going on with PostgreSQL as Stephen
> suggested in the former thread?
While the idea has been getting kicked around, I don't know of anyone
actively working on developing code to implement it.
Thanks,
* Anibal David Acosta (a...@devshock.com) wrote:
> Unlogged tables are not memory tables don't?
Unlogged tables are not memory tables.
> If we stop postgres server (normal stop) and start again, all information in
> unlogged tables still remain?
Yes.
> So, can I expect a data loss just in case
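A sketch of the behavior being described (the table definition is invented for illustration): unlogged tables skip WAL entirely, survive a clean shutdown and restart, but are truncated during crash recovery.

```sql
-- Not a memory table: stored on disk like any other, just not WAL-logged.
CREATE UNLOGGED TABLE session_cache (
    key   text PRIMARY KEY,
    value text
);
-- Clean stop/start: the data remains.
-- Crash (e.g. power loss): the table is emptied at recovery.
-- They are also never replicated to standbys.
```

This makes them suitable for rebuildable data such as caches, but not for anything that must survive a crash.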
* Stefan Keller (sfkel...@gmail.com) wrote:
> >> Adding more memory (say to total of 32 GB) would only postpone the problem.
> > Erm, seems like you're jumping to conclusions here...
>
> Sorry. I actually only wanted to report here what's special in my
> postgresql.conf.
My comment was referring
* Stefan Keller (sfkel...@gmail.com) wrote:
> Adding more memory (say to total of 32 GB) would only postpone the problem.
Erm, seems like you're jumping to conclusions here...
> First time the query lasts about 10 time longer (~ 1010 ms) - but I'd
> like to get better results already in the first
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Yeah, it's spending quite a lot of time finding the first matching row
> in each child table. I'm curious why that is though; are the child
> tables not set up with nonoverlapping firstloadtime ranges?
They are set up w/ nonoverlapping firstloadtime ranges
Mike,
* Michael Viscuso (michael.visc...@getcarbonblack.com) wrote:
> I spent the better part of the day implementing an application layer
> nested loop and it seems to be working well. Of course it's a little
> slower than a Postgres only solution because it has to pass data back
> and forth for
* Michael Viscuso (michael.visc...@getcarbonblack.com) wrote:
> Adding the final condition hosts_guid = '2007075705813916178' is what
> ultimately kills it http://explain.depesz.com/s/8zy. By adding the
> host_guid, it spends considerably more time in the older tables than
> without this condition
Tom,
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> really pretty simple: decide whether to use a custom (parameter-aware)
> plan or a generic (not-parameter-aware) plan.
Before I go digging into this, I was wondering, is this going to address
our current problem of not being able to use prepared que
* Royce Ausburn (royce...@inomial.com) wrote:
> > Tom just mentioned that 9.1 will be able to re-plan parameterized prepared
> > statements, so this issue will go away. In the mean time you can only
> > really use the standard workaround of setting the prepare threshold to 0 to
> > disable server
* Jon Nelson (jnelson+pg...@jamponi.net) wrote:
> Isn't the WAL basically COW?
eh..? No.. The WAL is used to record what changes are made to the
various files in the database, it certainly isn't any kind of
"copy-on-write" system, where we wait until a change is made to data
before copying it..
* Igor Chudov (ichu...@gmail.com) wrote:
> Well, my question is, rather, whether the time to do a bulk INSERT of N
> records into a large table, would take substantially longer than a bulk
> insert of N records into a small table. In other words, does the populating
> time grow as the table gets mo
Igor,
* Igor Chudov (ichu...@gmail.com) wrote:
> Would the time that it takes, differ a great deal, depending on whether the
> table has only 100,000 or 5,000,000 records?
Yes, because PostgreSQL is going to copy the data. If you don't need or
want it to be copied, just use a view. I've never h
* Robert Schnabel (schnab...@missouri.edu) wrote:
> And getting back to the to-do list entry and reading the related
> posts, it appears that even if you could set work_mem that high it
> would only use 2GB anyway. I guess that was the second part of my
> question. Is that true?
Errr, and to get
* Robert Schnabel (schnab...@missouri.edu) wrote:
> And getting back to the to-do list entry and reading the related
> posts, it appears that even if you could set work_mem that high it
> would only use 2GB anyway. I guess that was the second part of my
> question. Is that true?
Yes and no. wor
* Claudio Freire (klaussfre...@gmail.com) wrote:
> I don't think you'd want that. Remember, work_mem is the amount of
> memory *per sort*.
> Queries can request several times that much memory, once per sort they
> need to perform.
>
> You can set it really high, but not 60% of your RAM - that woul
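Since work_mem is charged per sort or hash operation — and a single query can run several of them at once, in every backend — the usual practice the quoted reply points at is to keep the global value modest and raise it per session for known-heavy queries. For example:

```sql
-- Global setting stays conservative (postgresql.conf):
--   work_mem = 64MB
-- For a known-heavy reporting query, raise it for this session only:
SET work_mem = '1GB';
-- ... run the big sort/aggregate here ...
RESET work_mem;  -- back to the server default
```

A rough sizing rule: worst case is roughly max_connections times the number of concurrent sort/hash nodes times work_mem, which is why 60% of RAM as a global value is dangerous.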
* Igor Chudov (ichu...@gmail.com) wrote:
> Can I partition data by month (or quarter), without that month being part of
> PRIMARY KEY?
The way partitioning works in PG is by using CHECK constraints. Not
sure if you're familiar with those (not sure if MySQL has them), so
here's a quick example:
C
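The example was cut off above; a minimal sketch of what CHECK-constraint (inheritance) partitioning typically looked like in that era of PostgreSQL — table and column names here are invented for illustration:

```sql
-- Parent table plus children that INHERIT from it; the CHECK
-- constraints let the planner skip children via constraint exclusion
-- (requires constraint_exclusion = partition, the default).
CREATE TABLE sales (
    id     bigint,
    taken  date,
    amount numeric
);

CREATE TABLE sales_2011q1 (
    CHECK (taken >= DATE '2011-01-01' AND taken < DATE '2011-04-01')
) INHERITS (sales);

CREATE TABLE sales_2011q2 (
    CHECK (taken >= DATE '2011-04-01' AND taken < DATE '2011-07-01')
) INHERITS (sales);
```

Note that, as the reply says, the partitioning column (taken) need not be part of any PRIMARY KEY — the CHECK constraints are entirely independent of the keys.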
* Igor Chudov (ichu...@gmail.com) wrote:
> Right now I have a personal (one user) project to create a 5-10
> Terabyte data warehouse. The largest table will consume the most space
> and will take, perhaps, 200,000,000 rows.
I run data-warehouse databases on that order (current largest single
insta
* Anthony Presley (anth...@resolution.com) wrote:
> I was really hoping that with hardware RAID that something would be faster
> (loading times, queries, etc...). What am I doing wrong?
ext3 and ext4 do NOT perform identically out of the box.. You might be
running into the write barriers problem
* Waldo Nell (pwn...@telkomsa.net) wrote:
> The fsync = off was because the production system runs on a uber expensive
> SAN system with multipathing over Fibre Channel, it is on UPS and backup
> generators in a secure datacenter, and we have daily backups we can fall back
> to.
So, two points:
* Tony Capobianco (tcapobia...@prospectiv.com) wrote:
> HashAggregate (cost=4391163.81..4391288.05 rows=9939 width=12)
>-> Hash Join (cost=14.78..4344767.23 rows=9279316 width=12)
> Hash Cond: (o.emailcampaignid = s.emailcampaignid)
> -> Seq Scan on openactivity o (cost=
* Tory M Blue (tmb...@gmail.com) wrote:
> Is there anyone that could help me understand why all of a sudden with
> no noticeable change in data, no change in hardware, no change in OS,
> I'm seeing postmaster getting killed by oom_killer?
You would really be best off just turning off the oom_kille
Uwe,
* Uwe Bartels (uwe.bart...@gmail.com) wrote:
> So I checked this again and raised afterwards maintenance_work_mem step by
> step up 64GB.
> I logged in via psql, run the following statements
> set maintenance_work_mem = '64GB';
I believe maintenance_work_mem suffers from the same problem tha
* Mark Stosberg (m...@summersault.com) wrote:
> Recommendations?
PostGIS, geometry columns, and UTM.. I'm not sure where they are wrt
adding KNN support, but it's something they've been anxious to have for
a while, so I expect support will come quickly.
Thanks,
Stephen
* Jeremy Palmer (jpal...@linz.govt.nz) wrote:
> In normal circumstances does locking a table in access exclusive mode improve
> insert, update and delete operation performance on that table.
>
> Is MVCC disabled or somehow has less work to do?
MVCC certainly isn't disabled. Does it have less wo
* Scott Marlowe (scott.marl...@gmail.com) wrote:
> There's nothing wrong with whole table updates as part of an import
> process, you just have to know to "clean up" after you're done, and
> regular vacuum can't fix this issue, only vacuum full or reindex or
> cluster.
Just to share my experiences
Robert,
* Robert Schnabel (schnab...@missouri.edu) wrote:
>Depends on what you mean by that. The tables that I'm concerned with look
>something like bigint x2, char var x13, int x24, real x8, smallint x4 by
>about 65M rows, each. I only do the updates on one table at a time. The
>
Robert,
* Robert Schnabel (schnab...@missouri.edu) wrote:
> Once the bulk data is inserted into the tables I generally
> do some updates on columns to set values which characterize the
> data.
Please tell me you're not running actual full-table UPDATE statements...
You would be *much* better of
* Michael Kohl (michael.k...@tupalo.com) wrote:
> HDD: 2x 120 GB OCZ Vertex 2 SSD; RAID 1
I'm amazed no one else has mentioned this yet, but you should look into
splitting your data and your WALs. Obviously, having another set of
SSDs to put your WALs on would be ideal.
You should probably also
* Aaron Turner (synfina...@gmail.com) wrote:
> Basically, each connection is taking about 100MB resident
Errr.. Given that your shared buffers are around 100M, I think you're
confusing what you see in top with reality. The shared buffers are
visible in every process, but it's all the same actual
* Aaron Turner (synfina...@gmail.com) wrote:
> The graphing front end CGI is all SELECT. There's 12k tables today,
> and new tables are created each month.
That's a heck of a lot of tables.. Probably more than you really need.
Not sure if reducing that number would help query times though.
> T
* Dan Harris (f...@drivefaster.net) wrote:
> On 10/7/10 11:47 AM, Aaron Turner wrote:
>> Basically, each connection is taking about 100MB resident. As we need
>> to increase the number of threads to be able to query all the devices
>> in the 5 minute window, we're running out of memory.
> I think
* Kevin Grittner (kevin.gritt...@wicourts.gov) wrote:
> Robert Haas wrote:
> > perhaps it would be possible by, say, increasing the number of
> > lock partitions by 8x. It would be nice to segregate these issues
> > though, because using pread/pwrite is probably a lot less work
> > than rewriting
* Robert Haas (robertmh...@gmail.com) wrote:
> It's good to be you.
They're HP BL465 G7's w/ 2x 12-core AMD processors and 48G of RAM.
Unfortunately, they currently only have local storage, but it seems
unlikely that would be an issue for this.
> I don't suppose you could try to replicate the lse
* Robert Haas (robertmh...@gmail.com) wrote:
> Hey, I didn't know about those. That sounds like it might be worth
> investigating, though I confess I lack a 48-core machine on which to
> measure the alleged benefit.
I've got a couple 24-core systems, if it'd be sufficiently useful to
test with..
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> The reason it's sorting by all the columns is the DISTINCT
You might also verify that you actually need/*should* have the DISTINCT,
if it's included today.. Often developers put that in without
understanding why they're getting dups (which can often be due
* Mason Harding (mason.hard...@gmail.com) wrote:
> Hi all. I Have the following query (tested in postgres 8.4 and 9.0rc1)
Can you provide \d output from all the tables involved..?
Also, what does the query plan look like w/o 'enable_seqscan=off' (which
is not a good setting to use...)? Increasi
* Ivan Voras (ivo...@freebsd.org) wrote:
> Total runtime: 0.507 ms
[...]
> Total runtime: 118.689 ms
>
> See in the first query where I have a simple LIMIT, it fetches random 10
> rows quickly, but in the second one, as soon as I give it to execute and
> calculate the entire result set before I
Ranga,
* Ranga Gopalan (ranga_gopa...@hotmail.com) wrote:
> It seems that this is an issue faced by others as well - Please see this
> link:
> http://stackoverflow.com/questions/2236776/efficient-querying-of-multi-partition-postgres-table
>
> Is this a known bug? Is this something that someone
Jens,
* Jens Hoffrichter (jens.hoffrich...@gmail.com) wrote:
> I'm just curious if there is any way to improve the performance of
> those queries. I'm seeing SeqScans in the EXPLAIN ANALYZE, but nothing
> I have done yet has removed those.
SeqScans aren't necessairly bad. Also, providing your po
* Kevin Grittner (kevin.gritt...@wicourts.gov) wrote:
> Tom Wilcox wrote:
> > Is it possible to get postgres to make use of the available 96GB
> > RAM on a Windows 32-bit build?
>
> I would try setting shared_memory to somewhere between 200MB and 1GB
> and set effective_cache_size = 90GB or so.
Mozzi,
* Mozzi (mozzi.g...@gmail.com) wrote:
> Thanx mate Create Index seems to be the culprit.
> Is it normal to just use 1 cpu tho?
Yes, PG can only use 1 CPU for a given query or connection. You'll
start to see the other CPUs going when you have more than one connection
to the database. If y
* Matthew Wakeling (matt...@flymine.org) wrote:
> The major case I found when writing pl/pgsql was when trying to build
> arrays row by row. AFAIK when I tried it, adding a row to an array caused
> the whole array to be copied, which put a bit of a damper on performance.
Using the built-ins now
* Eliot Gable (egable+pgsql-performa...@gmail.com) wrote:
> Since PostgreSQL is written in C, I assume there is no
> such additional overhead. I assume that the PL/PGSQL implementation at its
> heart also uses SPI to perform those executions. Is that a fair statement?
Right, but I also wouldn't ex
* Eliot Gable (egable+pgsql-performa...@gmail.com) wrote:
> Thanks for the quick follow-up. So, you are saying that if I can do SPI in
> _PG_init, then I could prepare all my queries there and they would be
> prepared once for the entire function when it is loaded? That would
> certainly achieve wh
* Eliot Gable (egable+pgsql-performa...@gmail.com) wrote:
> Would a query such as this obtain any performance improvement by being
> re-written using C?
I wouldn't expect the queries called by the pl/pgsql function to be much
faster if called through SPI from C instead. I think the question you
n
* Yeb Havinga (yebhavi...@gmail.com) wrote:
>> Normalizing by date parts was fast. Partitioning the tables by year
>> won't do much good -- users will probably choose 1900 to 2009,
>> predominately.
> Ok, in that case it is a bad idea.
Yeah, now that I understand what the user actually wants,
* David Jarvis (thanga...@gmail.com) wrote:
> I was still referring to the measurement table. You have an index on
> > stationid, but still seem to be getting a sequential scan. Maybe the planner
> > does not realise that you are selecting a small number of stations. Posting
> > an EXPLAIN ANALYSE
* David Jarvis (thanga...@gmail.com) wrote:
> There are 72 child tables, each having a year index and a station index,
> which are defined as follows:
So, my thoughts:
Partition by something that makes sense... Typically, I'd say that you
would do it by the category id and when the measuremen
* David Jarvis (thanga...@gmail.com) wrote:
> I was hoping to eliminate this part of the query:
> (cast(extract( YEAR FROM m.taken ) + greatest( -1 *
> sign(
> (extract( YEAR FROM m.taken )||'-12-31')::date -
> (extract( YEAR FROM m.taken )||'-01-01')::date
Corin,
* Corin (wakath...@gmail.com) wrote:
> I fill this table with around 2.800.000 random rows (values between 1
> and 500.000 for user_id, ref_id).
Using random data really isn't a good test.
> The intention of the query is to find rows with no "partner" row. The
> offset and limit are j
Corin,
* Corin (wakath...@gmail.com) wrote:
> I'm running quite a large social community website (250k users, 16gb
> database). We are currently preparing a complete relaunch and thinking
> about switching from mysql 5.1.37 innodb to postgresql 8.4.2. The
> database server is a dual dualcore
* Kevin Grittner (kevin.gritt...@wicourts.gov) wrote:
> > Could some of you please share some info on such scenarios- where
> > you are supporting/designing/developing databases that run into at
> > least a few hundred GBs of data (I know, that is small by todays'
> > standards)?
Just saw this, so
* Gary Warner (g...@cis.uab.edu) wrote:
> - if you have 8 pentium cores, 12GB of RAM and "infinite" diskspace, what
> sorts of memory settings would you have in your start up tables?
If the PG database is the only thing on the system, I'd probably go with
something like:
shared_buffers = 4GB
te
* Kevin Buckham (kbuck...@applocation.net) wrote:
> I came across links to pg_reorg previously but it seemed that the
> project was a bit "dead". There is active development but not much
> information, and not much in the way of discussions. I will definitely
> be testing both partitioning and pg_
* Heikki Linnakangas (heikki.linnakan...@enterprisedb.com) wrote:
> That's what the OP really should've complained about. If we addressed
> that, so that a generic plan was created that determines which child
> tables can be excluded at run time, there would be no need for the
> persistent plan cac