I see. Thanks for the quick responses!
On Wed, Jul 19, 2017 at 11:32 PM, Tom Lane wrote:
> Matthew Byrne writes:
> > Would a more feasible approach be to introduce new types (say, TEXT2 and
> > JSONB2 - or something better-sounding) which are the same as the old ones
> >
arrays to the implementations of those types and keep backward
compatibility by leaving TEXT and JSONB alone.
Matt
On Wed, Jul 19, 2017 at 7:30 PM, Tom Lane wrote:
> Matthew Byrne writes:
> > Are there any plans to support \u0000 in JSONB and, relatedly, Unicode code
> > point 0
Are there any plans to support \u0000 in JSONB and, relatedly, Unicode code
point 0 in TEXT? To the best of my knowledge \u0000 is valid in JSON and
code point 0 is valid in UTF-8, but Postgres rejects both, which severely
limits their usefulness in many cases.
I am currently working around the issue by
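A quick, Postgres-independent check of that claim (a Python sketch; any RFC 8259-compliant JSON parser behaves the same way): the \u0000 escape is legal JSON and decodes to Unicode code point 0, the very character jsonb refuses to store.

```python
import json

# The \u0000 escape is valid JSON: Python's RFC 8259-compliant parser
# accepts it and yields a one-character string containing Unicode
# code point 0 (NUL), exactly the value Postgres's jsonb rejects.
decoded = json.loads('"\\u0000"')
print(repr(decoded))  # '\x00'
print(ord(decoded))   # 0
```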
On 07/25/2016 03:20 PM, Tom Lane wrote:
Matthew Musgrove
<mmusgr...@emortgagelogic.com> writes:
One of our instances has been behaving -- oddly. Most queries are blazing fast.
It appears to just be some of the stat views that are slow.
It sounds like requests for stats updat
without restarting the instance? (I am not yet sure when
that can happen, but it would have to be scheduled in advance to occur during
an after-hours maintenance window.)
Thanks in advance,
Matt
--
Matthew Musgrove
Senior Software Engineer
Assurant Mortgage Solutions
817.788.4482
mmusgr...@emor
I just ran into this and came up with the following one-liner for
monitoring it through the SQL layer in releases < 9.6. Once you are on 9.6,
Jeff Janes's solution is correct.
It does make the assumption that the queue size is 8GB. It can misjudge the
queue usage by up to one file segment
I was afraid of that. Any idea if it is a technical or resource limitation?
In other words, is such functionality impossible, undesirable, anticipated, or
in the works?
From: David G. Johnston [mailto:david.g.johns...@gmail.com]
Sent: Thursday, March 31, 2016 10:38 AM
To: Matthew Syphus
Cc
I would like a trigger on a materialized view. Is this possible? I have tried
both an INSTEAD OF and BEFORE trigger with no luck. It responds with:
"project_milestone_mv" is not a table or view.
It is absolutely present and spelled correctly. It is the same with or without
the schema qualification
GRESQL!
--
W. Matthew Wilson
m...@tplus1.com
http://tplus1.com
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
You can set log_min_duration_statement = 0 to log every statement.
On Fri, Mar 13, 2015 at 3:38 AM, Sreerama Manoj wrote:
>
> Hi,
> As we know, "pg_stat_statements" will monitor the queries after
> normalizing them (removing the values present in the query). I want to
> know is there
The way I've solved the problem before 9.4 is to use a command called 'pv'
(pipe view). Normally this command is useful for seeing the rate of data flow
in a pipe, but it also has a rate-limiting capability. The trick for me
was running the output of pg_basebackup through pv (emulates havi
Thanks,
How long did it take to complete?
On 15/10/14 06:39, Adrian Klaver wrote:
On 10/14/2014 10:31 AM, Matthew Chambers wrote:
Would something like this work best, or is it better to use pgdump?
CREATE DATABASE newDatabase TEMPLATE oldDatabase ENCODING 'UTF8'
I actually jus
Would something like this work best, or is it better to use pgdump?
CREATE DATABASE newDatabase TEMPLATE oldDatabase ENCODING 'UTF8'
Does using a template do a file system copy or just SQL copy everything
over?
-Matt
On Sun, Aug 24, 2014 at 2:50 PM, Daniele Varrazzo
wrote:
> It should be added to the library (it was first discussed in 2003...),
> but it's one of those things that will stop working when psycopg
> starts using the "extended query protocol" (together with other nifty
> features such as string
I’m in the process of installing mediawiki, and ran into this error while it
was setting up the database:
• Query:
CREATE FUNCTION page_deleted() RETURNS TRIGGER LANGUAGE plpgsql AS $mw$ BEGIN
DELETE FROM recentchanges WHERE rc_namespace = OLD.page_namespace AND rc_title
= OLD.page_tit
I still don't want to break my "no string interpolation"
rule of thumb unless I absolutely have to.
And I know I could switch to some gigantic library like SQLAlchemy,
but I really don't want to.
Any advice is welcome. Thanks in advance!
Matt
--
W. Matthew Wilson
We are currently running with the en_US.UTF-8 collation. It was a decision
made long ago, and seeing as we never actually rely on the sort order of
internationalized strings (other than for stability, apparently), we have never
had any motivation to change this practice.
Some way of versioning
The following is a real critical problem that we ran into here at TripAdvisor,
but have not yet figured out a clear way to mitigate.
TL;DR:
Streaming replicas—and by extension, base backups—can become dangerously broken
when the source and target machines run slightly different versions of glibc.
On 04/04/14 10:22, John R Pierce wrote:
On 4/3/2014 1:31 PM, Matthew Chambers wrote:
This removes all the boilerplate associated with old style JDBC. It
also means you get great reuse of your SQL code since the transaction
starts at the entry point. Most of my SQL related code is just 1
On 04/04/14 05:39, John R Pierce wrote:
On 4/3/2014 9:10 AM, Miller, Michael W wrote:
The issue I'm running into is the interaction between the Spring
Framework JDBC functionality and the PostgreSQL JDBC driver. The
specific issue is that I'm using SimpleJdbcCall.execute() to call the
datab
cess"? I was sure it was called the background
writer before.
-Matt
On 11/03/14 12:03, Venkata Balaji Nagothi wrote:
On Tue, Mar 11, 2014 at 8:30 AM, Matthew Chambers
<mchamb...@wetafx.co.nz> wrote:
Hi, just wondering if this is normal, DB is operating just fine.
Hi, just wondering if this is normal, DB is operating just fine.
I upped bgwriter_lru_maxpages to 200 and issued a reload. Normally, I'd
see the bgwriter constantly churning as one of my main I/O using
processes, but now I have:
postgres: wal writer process
postgres: checkpointer process
Th
On 02/03/14 05:08, bricklen wrote:
On Fri, Feb 28, 2014 at 1:54 PM, Matthew Chambers
<mchamb...@wetafx.co.nz> wrote:
Initially, I had my application servers using the slave for short,
read only queries, but this turned out to be highly unstable. The
slave would
Hi,
I have a postgres server that is fairly high traffic. (about 5MB/second
in writes to SSD backed Netapp NFS storage). The outbound network
traffic for this machine averages 61MB/second, but gets over
250MB/second during peak times. (includes data to slave + user requests)
Initially, I h
Thanks Tom,
That's very enlightening and I really appreciate you taking time to respond.
I've tried cursor_tuple_fraction values as low as 0.01, which by my
reckoning should be low enough, and also 0, but the planner is still selecting
seqscan-and-sort.
Kind regards,
Matthew
e configuration
parameter 'cursor_tuple_fraction' is having no observable effect.
This is being seen on Postgres 9.1 (Ubuntu x64), on a server with fast disks
and large amount of memory. Basic memory tuning has also been performed.
Thanks in advance, I appreciate any insights.
Kind
list(itertools.product(*[['north', 'south'], ['retail',
'manufacturing', 'wholesale']]))
[('north', 'retail'),
('north', 'manufacturing'),
('north', 'wholesale'),
('south', 'retail'),
('south', 'manufacturing'),
('south', 'wholesale')]
All advice is welcome. Thanks in advance!
Matt
--
W. Matthew Wilson
m...@tplus1.com
http://tplus1.com
Hi,
I can get postgres to log unsuccessful queries, including the user who
wrote them, but I'm missing how to get postgres to log the successful
queries too (I don't need the results stored, just the query
itself). How do I do this?
Thanks,
Matthew
--
Matthew Vernon
Quantitative
t...@sss.pgh.pa.us (Tom Lane) writes:
> Matthew Vernon writes:
>> naively, you might try:
>> \set pwd '\'' `pwd` '\''
>> COPY table FROM :pwd || '/relative/path/to/data' ;
>
> Umm ... why don't you just use a relative pat
Matthew Vernon writes:
> naively, you might try:
> \set pwd '\'' `pwd` '\''
> COPY table FROM :pwd || '/relative/path/to/data' ;
so I could do:
\set path '\'' `pwd` '/path/to/data1' '\''
COPY table1 FROM
Matthew Vernon writes:
> naively, you might try:
> \set pwd '\'' `pwd` '\''
> COPY table FROM :pwd || '/relative/path/to/data' ;
I should also note that I want to run a series of these commands, hence
setting pwd once and then wanting to us
\set pwd '\'' `pwd` '\''
COPY table FROM :pwd || '/relative/path/to/data' ;
but that doesn't work because the concatenation operator can't be used
there. How should I be doing this?
Thanks,
Matthew
--
Matthew Vernon
Quantitative Veterinary Epidemiologist
Epidemiology Research Unit,
Hi pgsql-general,
I'm looking for advice on good training courses for PostgreSQL (on- or
off-site, on- or off-line). I'm hoping to find something that can cover basic
administration, performance optimization topics, and clustering tools like
Slony and pgpool for someone. I realize that Postgr
--username
"postgres" --schema-only --binary-upgrade -f pg_upgrade_dump_all.sql
.. both of which worked without any problems.
Hope that helps.
Matthew
-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: 01 October 2012 14:39
To: Carrington, Matthew (P
NULL.
Hope that helps.
Matthew
-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: 01 October 2012 14:39
To: Carrington, Matthew (Produban)
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_upgrade: out of memory
"Carrington, Matthew (Produban)" writes:
00010002fcec in getSchemaData (fout=0x11000bad0,
numTablesPtr=0x8a4)
at common.c:145
#4 0x00011370 in main (argc=11, argv=0x950) at
pg_dump.c:683
(gdb) quit
A debugging session is active.
Inferior 1 [process 483438] will be killed.
Quit anyway? (y or n
exit_horribly ()
#1 0x00010003243c in pg_malloc ()
#2 0x00018f14 in getAggregates ()
#3 0x00010002fcac in getSchemaData ()
#4 0x00011330 in main ()
(gdb) quit
A debugging session is active.
Inferior 1 [process 241858] will be killed.
Quit anyway? (y or n) y
... hope
pg_dump: reading user-defined types
pg_dump: reading procedural languages
pg_dump: reading user-defined aggregate functions
pg_dump: out of memory
Has anyone else seen this problem on AIX or elsewhere ?
Matthew
Emails aren't always secure, and they may be intercepted or changed
after they've been sent. Produban doesn't
I want to run a query like to_tsquery("A | B | C") and then rank the
results so that if a document contained A, B, and C, then it would
rank above a document that just had some subset.
How would I do such a thing?
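One way to sketch the desired ordering outside the database (plain Python, purely illustrative; the function name and sample data are invented): score each document by how many distinct query terms it contains, so a document matching A, B and C outranks one matching only a subset. In Postgres itself, ts_rank generally rewards matching more of the query's terms, so ordering by ts_rank over the | query is worth trying first.

```python
def coverage_score(doc_terms, query_terms):
    # Count how many distinct query terms appear in the document.
    return len(set(query_terms) & set(doc_terms))

# Hypothetical documents, tokenized into term lists.
docs = {
    "doc_abc": ["a", "b", "c", "x"],  # matches all three terms
    "doc_ac": ["a", "c"],             # matches only a subset
}
query = ["a", "b", "c"]
ranked = sorted(docs, key=lambda d: coverage_score(docs[d], query), reverse=True)
print(ranked)  # ['doc_abc', 'doc_ac']
```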
--
W. Matthew Wilson
m...@tplus1.com
http://tplus1.com
search. Is it possible?
I understand that I can do a query and rank the results by how closely
the words are to each other, but I want to exclude any matches where
the words are not within two words of each other.
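The filter being asked for can be pinned down with a toy Python check over a tokenized document (illustrative only; this is not how tsquery evaluates matches internally): the two words qualify when some pair of their positions is at most two apart. Newer Postgres releases (9.6+) added the <N> phrase-distance operator to tsquery for exactly this kind of constraint.

```python
def within_n_words(tokens, w1, w2, n=2):
    # True when some occurrence of w1 and some occurrence of w2
    # are at most n token positions apart.
    p1 = [i for i, t in enumerate(tokens) if t == w1]
    p2 = [i for i, t in enumerate(tokens) if t == w2]
    return any(abs(i - j) <= n for i in p1 for j in p2)

print(within_n_words("the quick brown fox".split(), "quick", "fox"))  # True
print(within_n_words("a b c d e f".split(), "a", "f"))                # False
```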
Thanks in advance!
Matt
--
W. Matthew Wilson
m...@tplus1.com
http://tplus1.
AIX box with 64GB of memory and the upgrade was run with
nothing else running on the machine, so I find it hard to believe that it is
genuinely out of memory. The whole of the first database could fit in real
memory as it's only 28GB.
Matthew
-Original Message-
From: Tom Lane [mailto:t...@sss
name
"postgres" --schema-only --binary-upgrade -f pg_upgrade_dump_all.sql
... works fine.
Has anyone else hit this problem ?
Matthew
replication (via log shipping), or will I have to use
Slony?
Thanks,
Matthew
--
Matthew Vernon
Quantitative Veterinary Epidemiologist
Epidemiology Research Unit, SAC Inverness
or implementing a
custom migration script.
Thanks for all your help Thom.
Regards, Matt
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Thom Brown
Sent: 01 May 2012 12:43
To: Matthew Churcher
Cc: pgsql-general
ostgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Thom Brown
Sent: 01 May 2012 11:58
To: Matthew Churcher
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Having trouble with pg_dumpall -o
On 1 May 2012 11:55, Matthew Churcher wrote:
> Thanks Thom, that's really
--Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Thom Brown
Sent: 01 May 2012 11:23
To: Matthew Churcher
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Having trouble with pg_dumpall -o
On 1 May 2012 11:12, Matthe
Hi PostgreSQL users,
I'm having difficulty migrating a postgres 8.4.11 database to postgres
9.1.2: neither of the included pg_dumpall tools appears to honour the -o or
--oids options; both fail to dump the table OIDs from the old database as we
require.
I've tried various combinations and orders of
Hello Postgres community,
I am running Postgres 9.0.3 on dual node RHEL 5.5 system. I have
postgres setup to run on both nodes as active hot-standby. I am seeing a
potential issue...or hopefully I have a configuration problem.
Every time I fail over the DB I am seeing WAL segment files (or
ch
> From: Stephen Frost [mailto:sfr...@snowman.net]
>
> * Matthew Hawn (matth...@donaanacounty.org) wrote:
> > I have a table with privileged data that is restricted using
> > column-level permissions. I would like to have a single query that
> > returns data from the
Steve, NULL
Permission:
Dave, 456
Bill, 789
Steve, 123
I have tried:
Select name, case when has_column_privilege('people','ssn','select') then
ssn else NULL end as ssn from people;
But I still get a permission denied error.
Any Ideas?
re both able to
make their own temporary tables.
More generally, how to tame this big ol' query?
The temporary tables mean I'm only pulling data from the database one
time. ORMs often pull data from one query and then use that data to
write the next query. This seems slow to me.
Matt
--
I have a large database full of irreplaceable data, and due to a
ridiculous happenstance I accidentally executed this code (as a superuser,
of course):
DELETE FROM pg_catalog.pg_type;
Now the database is *seriously* unhappy - every SQL command returns an
error message. How do I get at my data?
I am running into an odd problem when fetching from multiple cursors in
Perl. However, I do not feel knowledgeable enough to ask an intelligent
question.
Can anyone in the community direct me to some deeper documentation? Perhaps a
developers' guide (for Postgres itself) that might discuss the interna
I have a table like this:
create table event(
destination_id integer not null references destination
(destination_id),
starts timestamp,
ends timestamp
);
I want to make sure that no two rows **with the same destination_id**
overlap in time.
I'm not sure how to write this exclu
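The overlap predicate itself is easy to state (a Python sketch of the condition the constraint must enforce; in Postgres this is usually expressed as an EXCLUDE USING gist constraint over a timestamp range, with the btree_gist extension supplying equality on destination_id): two intervals overlap exactly when each starts before the other ends.

```python
def overlaps(a_start, a_end, b_start, b_end):
    # Half-open intervals [start, end): they overlap iff each one
    # starts strictly before the other one ends.
    return a_start < b_end and b_start < a_end

print(overlaps(1, 5, 4, 8))  # True: they share [4, 5)
print(overlaps(1, 5, 5, 8))  # False: they merely touch
```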
> So the in-place update is a vital part of the program, because a range of
> values of column col should be stored together on the disk. Is there any
> way to do such an in-place update without generating much extra overhead?
>
Although in-place update is not possible, can he not use partitioning
I thought that when I first read about WAL archiving, but the documentation
explains it quite well. Basically, it is to stop a successful result being
returned in the event that the file already exists in the archive
destination (to cause an error if it tries to overwrite a file).
On Thu, No
Deepak,
Does your backup script exclude the pg_xlog directory? There is no point in
backing up the WAL files, but you will want to make sure the archived WAL
files are added to the backup set afterwards.
How many files are in pg_xlog at the point where you stop the backup? It
may actually be that i
Jen,
Regarding the first point, is postgres actually running? You can check this
by typing the following at a terminal -
ps -ef|grep postgres
Reload configuration is used to tell PostgreSQL to read in the configuration
file for any changes since the database started. If you want to start the
d
On 11/19/2010 03:12 PM, Matthew Walden wrote:
>
>> Vangelis,
>>
>> I don't believe you can do file level copying of single databases
>> (especially as they are different versions).
>>
>
> Hi Matthew, thanks for your answer.
>
> If the different versions i
Vangelis,
I don't believe you can do file level copying of single databases
(especially as they are different versions).
Take a look at pg_dump in the documentation. This will do what you need I
think but at a logical level rather than physical.
On Fri, Nov 19, 2010 at 11:52 AM, Vangelis Katsik
On Wed, Nov 17, 2010 at 2:45 PM, Gauthier, Dave wrote:
> How does one set pager off in interactive sql ?
>
> I tried \set pager off, doesn't seem to work.
>
>
>
> Thanks in Advance !
>
\pset pager off
You can create a trigger on the table to store the old/new values in an
audit table if that is what you mean?
Dave,
Does your application use temporary tables? This may explain high
autovacuum activity in the catalog tables.
With regards to your tables, I wrote this very script to give me an
indication of the amount of required activity on the user tables. I
deliberately keep this one simple and it doe
"Apparently (after reading the documentation link provided by Tom) there can
be value to indexes when accessed with leading columns missing".
That is a new one on me too - interesting.
I suppose it comes down to testing at the end of the day - if you "set
enable_seqscan to false" and "EXPLAIN ANA
Dan,
It depends on your application. There is no point in creating an index with
the same 3 columns as the primary key (in the same order).
If you have an index on COL1, COL2 and COL3 (in that order) then if you have
a query such as SELECT COL1, COL2, COL3 from T1 then the index will be
consider
I've posted a short guide on how to get Movable Type 5, Postgres 9, and Tomcat
7.0 working together.
http://www.greenskagitvalley.com/blog/movable-type-5-with-tomcat-70x-and-postgres-90.html
-M@
Even though Movable Type 5 has dropped official support for Postgres it still
comes with the drivers in order for it to work. I've gotten it set up and
working with Postgres, but there is one problem that keeps me from being able
to use it. That issue I've explained over on the MT forums:
http
I've seen mention that it might be possible for MT5 to regain Postgres support
(they dropped it in version 5) through the use of plugins. Does anyone know of
such a thing and/or have an installation working themselves? Would greatly
appreciate any info you can share.
Thanks,
-M@
character:
';' or '\g' gives the usual orientation,
'\G' gives the rotated orientation.
Cheers,
Matthew
--
Dr Matthew J Seaman MA, D.Phil. 7 Priory Courtyard
Flat 3
PGP: http://www
On Mon Aug 16 10:26:36 2010, Tom Lane wrote:
> Matthew Wilson writes:
>> All I can come up with so far is to use a view and then another view on
>> top of that one:
>
> Note that you don't actually need a view, as you can just write the
> subselect in-line:
>
>
I'm converting some procedural code to SQL as an experiment. Here's the
pseudocode:
c = a - b
if c < 0 then d = 'no'
else d = 'yes'
In SQL, I've got this:
select a, b, a - b as c,
case when a - b < 0 then 'no'
else 'yes'
end as d
from foo;
This is a trivial exa
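For a side-by-side check, the same pseudocode runs directly in Python (names as in the snippet; 'classify' is just an illustrative wrapper):

```python
def classify(a, b):
    # Mirrors the SQL CASE: 'no' when a - b < 0, otherwise 'yes'.
    c = a - b
    return 'no' if c < 0 else 'yes'

print(classify(3, 5))  # no
print(classify(7, 2))  # yes
```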
I have a daily job that pushes data from the production database into
the reporting database, which right now, is an exact copy.
I have a webapp that builds lots of reports for users. Most of these
reports involve elaborate joins of lookup tables and lots of summations,
and they take too long to
Just recently I discovered the listen/notify feature in postgresql.
Now I don't have external processes polling tables, watching for newly
inserted rows.
Anyhow, I'm curious if there is some other feature that will help me out
with a new puzzle.
I want to store emails to deliver at a later time in
On Tue, 24 Nov 2009, Denis Lussier wrote:
Bouncing the app will roll back the transactions.
Depends on the application. Some certainly use a shutdown hook to flush
data out to a database cleanly.
Obviously if you kill -9 it, then all bets are off.
Matthew
--
Software suppliers are trying
m in dev/test is the priority, but I
would be very cautious about killing transactions in production. You don't
know what data is uncommitted. The safest thing to do may be to bounce the
application, rather than Postgres.
Matthew
--
All of this sounds mildly turgid and messy and confu
action and thoroughly confuse the client
application, you can send a TERM signal to a backend, but the consequences
to your data are on your own head.
Fix the application, don't tell Postgres to stop being a decent database.
Matthew
--
I would like to think that in this day and age people would k
hat the
data is more in cache, and postgres is able to process it eight times
*faster*. Restarting Postgres kills the cache and puts you back at square
one.
Which of these is it?
Matthew
--
Reality is that which, when you stop believing in it, doesn't go away.
To: Matthew Seaborn
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Search Path vs Synonyms
On Fri, Jul 24, 2009 at 09:38, Matthew
Seaborn wrote:
> Ah, I am actually using EnterpriseDB (a commercial 'fork' of PSQL) which does
> have synonyms.
For support on EnterpriseDB yo
Ah, I am actually using EnterpriseDB (a commercial 'fork' of PSQL) which does
have synonyms.
Is it possible to define the default search_path for a given user?
-Original Message-
From: Albe Laurenz [mailto:laurenz.a...@wien.gv.at]
Sent: 24 July 2009 07:22
To: Matthew Seab
Given the situation where a user connecting to the database needs access to two
separate schemas: the primary schema which contains the data they will be
updating and a second schema which contains read-only reference data, used by
many users, that will be used in joins on queries.
I don't wan
TableAndCriteria
) as allTables
Group by part_number
Matthew Hartman
Programmer/Analyst
Information Management, ICP
Kingston General Hospital
nner to the WHERE clause. And unless you have duplicate rows
to eliminate, use UNION ALL rather than UNION for a speed increase.
Matthew Hartman
Programmer/Analyst
Information Management, ICP
Kingston General Hospital
ype t ON
> t.machine_type_id=h.machine_type_id
> WHERE h.part_id=379 AND h.machine_type_id=1
> WHERE t.machine_type_id=1
> GROUP BY t.name,j.workorder
> ORDER BY avgtime
Matthew Hartman
Programmer/Analyst
Information Management, ICP
Kingston General Hospita
s. If no such lock exists, one is created and returned to the
application. The lock is released at the end of the current task. The
advantage is that if anything crashes, there is a page in the
application that an administrator can delete any lock from, or see who
holds a lock on what from
Try this.
select idn,
       array_to_string(array(select code from tbl t2 where
       t2.idn = t1.idn order by code), ', ') as codes
from tbl t1
group by idn
order by idn
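To make the shape of the result concrete, here is the same per-idn, ordered, comma-separated aggregation sketched in Python (sample data invented for illustration):

```python
from itertools import groupby

rows = [(1, "b"), (1, "a"), (2, "c")]  # (idn, code) pairs

# Sort, group by idn, and join each group's codes in order,
# mirroring array_to_string(array(... order by code), ', ').
result = {
    idn: ", ".join(code for _, code in grp)
    for idn, grp in groupby(sorted(rows), key=lambda r: r[0])
}
print(result)  # {1: 'a, b', 2: 'c'}
```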
Matthew Hartman
Programmer/Analyst
Information Management, ICP
Kingston General Hos
If it's static (i.e. the planets don't move too much, hah), calculate
and store. No sense in re-calculating it each and every time.
Matthew Hartman
Programmer/Analyst
Information Management, ICP
Kingston General Hospital
(613) 549- x4294
> -Original Message-
> Fro
Just create a unique constraint on all of the columns.
Matthew Hartman
Programmer/Analyst
Information Management, ICP
Kingston General Hospital
(613) 549- x4294
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresq
Good morning.
I am itching to upgrade my 8.3 development database to 8.4 before I move
to production. Pg_migrator is listed as beta so I'd like to avoid that.
Has anyone made the leap yet?
Matthew Hartman
Programmer/Analyst
Information Management, ICP
Kingston General Hospital
(613
I do have autovacuum on (as of yesterday). This was discovered when I
ran vacuum on a whim.
Matthew Hartman
Programmer/Analyst
Information Management, ICP
Kingston General Hospital
(613) 549- x4294
-Original Message-
From: Merlin Moncure [mailto:mmonc...@gmail.com]
Sent
But it'll do so immediately after I run a full vacuum on the entire database?
Nothing has changed. This is a development box.
You know, I bet it doesn't refresh the view of the database after having run
the maintenance script.
Matthew Hartman
Programmer/Analyst
Information Manag
Windows XP, PostgreSQL 8.3.5, using pgAdmin III 1.8.4. It's the typical
"Running vacuum on this table is recommended" dialog box.
Matthew Hartman
Programmer/Analyst
Information Management, ICP
Kingston General Hospital
(613) 549- x4294
-Original Message-
From: Grze
Good morning.
On occasion I'll perform a full vacuum on a database but will still
receive a suggestion to vacuum an individual table immediately after.
Does the full database vacuum not handle each individual table?
Thanks,
Matthew Hartman
Programmer/Analyst
Information Management
Thanks! That'll reduce the amount of copy/pasting I have to do to figure
out the differences in times.
Matthew Hartman
Programmer/Analyst
Information Management, ICP
Kingston General Hospital
(613) 549- x4294
-Original Message-
From: Chris Spotts [mailto:rfu...@gmail.com]
Hi, the RAISE NOTICE is a good idea, thanks. I use RAISE NOTICE
already for other uses, so I may as well go with it.
Matthew Hartman
Programmer/Analyst
Information Management, ICP
Kingston General Hospital
(613) 549- x4294
-Original Message-
From: Merlin Moncure
Is there a recommended approach when trying to use EXPLAIN on a
function? Specifically, a function that is more than the typical SELECT
statement or tiny loop. The one in question that I'm hoping to optimize
is around 250 lines.
Thanks,
Matthew Hartman
Programmer/Analyst
Information Manag
t;, and "COLUMN_VALUE". Perform joins as you see fit to build up the
structure at times. Use arrays if you'd like or perform multiple joins, which
ever.
Matthew Hartman
Programmer/Analyst
Information Management
Kingston General Hospital, ICP
(613) 544-2631 x4294
__
You might want to try the CLUSTER command. I had a 40M row table that was
taking ages to access; I tried partitioning it into 12000 sub-tables
and obtained a modest speed up. Using CLUSTER on an
un-partitioned table resulted in an enormous speed up though.
You will need to choose the axis you w
I've solved it.
I just used a subselect. It worked very quickly. I had a lot of trouble with
subqueries when I first started with databases (MySQL), so I have been afraid
of them.
update entities
set customer_status = (select customer_status from entity_dimension_update
where entities.entity_id = entit
Hello,
I am looking for a way to update one table with another. I tried the
following schema to update table2 based on data in table1. The idea is that
I have a slowly changing dimension and I need to update data in the
dimension based on an updated version of the table. I don't want to have to
dr