e indexes before analyzing?
I usually do, just to be safe, but I thought statistics were based
solely on sampling of the heap.
--
- David Steele
da...@pgmasters.net
iend in this case:
http://www.postgresql.org/docs/9.4/static/view-pg-locks.html
--
- David Steele
da...@pgmasters.net
Hi, I'm trying to add a C-Language function but getting the following error:
$ psql -f test_cfunc.sql
psql:test_cfunc.sql:3: ERROR: incompatible library
"/usr/local/pgsql/lib/test_cfunc.so": missing magic block
HINT: Extension libraries are required to use the PG_MODULE_MAGIC macro.
I am callin
Forgot to mention, I am on 9.2.1. "SELECT version();" reports:
PostgreSQL 9.2.1 on x86_64-unknown-linux-gnu, compiled by gcc (Debian
4.4.5-8) 4.4.5, 64-bit
-Dave
On Tue, Jan 8, 2013 at 12:30 PM, David Fuhry wrote:
> Hi, I'm trying to add a C-Language function but get
Thank you Tom, you are correct. Adding the line:
OBJS = test_cfunc.o
to the Makefile resolved the problem.
-Dave
On Tue, Jan 8, 2013 at 12:55 PM, Tom Lane wrote:
> David Fuhry writes:
> > Hi, I'm trying to add a C-Language function but getting the following
> err
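For anyone hitting the same "missing magic block" error: the root cause is that the compiled shared library never included the PG_MODULE_MAGIC macro. A minimal skeleton (function and file names here are illustrative) looks like this:

```c
/* test_cfunc.c -- minimal PostgreSQL C-language function.
 * PG_MODULE_MAGIC must appear exactly once in the shared library;
 * without it, LOAD fails with "missing magic block".  If the PGXS
 * Makefile omits OBJS (e.g. OBJS = test_cfunc.o), the .so can be
 * built without this object file, producing the same error. */
#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(add_one);

Datum
add_one(PG_FUNCTION_ARGS)
{
    int32 arg = PG_GETARG_INT32(0);

    PG_RETURN_INT32(arg + 1);
}
```

It would then be declared in SQL with something like: CREATE FUNCTION add_one(integer) RETURNS integer AS 'test_cfunc', 'add_one' LANGUAGE C STRICT;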
iate design
trade-offs. I would also question why you wouldn't just store the actual
timestamp value as opposed to just storing the date.
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/How-to-store-clickmap-points-tp5739121p5739449.html
Sent from
In regards to 9.1.x, I would like to learn some details of the nature of
"checkpointing"
=== Question 1 ===
- page 123 is dirty
- "checkpointing" starts
- page 123 gets written to disk, as part of this checkpoint
- page 123 gets modified again
? Does it get written to disk again, as part of
I'm not sure the last time I saw this discussion, but I was somewhat
curious: what would be your ideal Linux distribution for a nice solid
PostgreSQL installation? We've kinda bounced back and forth between
RHEL, CentOS, and Ubuntu LTS, so I was wondering what everyone else
thought.
We run
Using Postgres 9.1
(1)
Simulate a "power failure" by hitting "Restart" on my VMware Workstation.
(2)
Turn on the VM (without any script removing lock or socket file).
(3)
Try to start postgres.
$ pgsql/bin/pg_ctl start -D pgdata
server starting
2013-01-14 FATAL: lock file "/tmp/.s.PGSQL.5432.lo
VALUES (1,2,3), (4,5,6), (7,8,9)
)
SELECT a, b, c
FROM values_to_insert
WHERE a = 4
;
See: http://www.postgresql.org/docs/9.2/interactive/sql-values.html for more
detail on "VALUES". Basically it provides a way t
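The quoted snippet appears to come from a query shaped roughly like the following (the CTE header and column names are a guess for illustration; the VALUES rows are from the original):

```sql
-- VALUES can serve as a standalone table expression, e.g. inside a CTE.
WITH values_to_insert (a, b, c) AS (
    VALUES (1, 2, 3), (4, 5, 6), (7, 8, 9)
)
SELECT a, b, c
FROM values_to_insert
WHERE a = 4;   -- returns the single row (4, 5, 6)
```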
On 1/28/13 1:05 PM, François Beausoleil wrote:
I would stay away from MacPorts.
Gotta agree on that one.
The last time I was working with PostgreSQL on Mac OS X, I used
the installer from
http://www.postgresqlformac.com/
There's also a different approach, that I've never tried:
Is there something constructive you might have been asking about that
you forgot to include in this email? If so, what was it?
Cheers,
David.
--
David Fetter http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david.fet...@gmail.com
iCal: web
ne || two)::varchar AS result FROM temptbl);
END;
$$
LANGUAGE plpgsql
STRICT
VOLATILE
;
Thanks!
David J.
PostgreSQL 9.1.x
I try to enable my trigger
"ALTER TABLE cool_tbl ENABLE TRIGGER trg_for_cool_tbl;"
I notice that it gets blocked by another backend doing
"select xxx from cool_tbl;"
(of course, my pain is if this transaction lasts for minutes, instead of
seconds)
I understand that [postgre
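To see which backend is blocking the ALTER TABLE, a query like the following can help (a sketch against the system catalogs; on 9.1 the pg_stat_activity columns are named procpid and current_query):

```sql
-- Show granted and waiting locks on the table in question,
-- together with what each holder is currently running.
SELECT l.pid, l.mode, l.granted, a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE l.relation = 'cool_tbl'::regclass;
```

ALTER TABLE ... ENABLE TRIGGER needs an ACCESS EXCLUSIVE lock, so it waits behind even a plain SELECT's ACCESS SHARE lock for as long as that transaction stays open.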
servers for different sites
within our organization and I’m sort of going nuts trying to find how to do
this.
Of course, mostly being screwed over due to GPO & AD, as opposed to pgadmin….
Thanks for your time
David Wooffindin
***
Cons
uations
of the function you have to treat the output set as a single typed column
(func_result) and then in an outer query layer (in this case outside the
WITH) you can generically expand the typed column into its component parts.
HTH,
David J.
--
View this message in context:
http://postg
gle related to this error seem to point to catalog
corruption, or a postgres bug.
Any pointers/enlightenment would be appreciated.
-davidc
--
*David Clymer*
VistaShare
866-828-4782, ext. 828
www.VistaShare.com
www.facebook.com/vistashar
On Mon, Feb 11, 2013 at 12:20 PM, Tom Lane wrote:
> David Clymer writes:
> > I've been seeing the following error in one database of ours:
> > "cache lookup failed for relation 7640518"
>
> Always the same OID, or does it change?
>
It appears that alm
On Mon, Feb 11, 2013 at 12:47 PM, Pavel Stehule wrote:
> 2013/2/11 Tom Lane :
> > David Clymer writes:
> >> I've been seeing the following error in one database of ours:
> >> "cache lookup failed for relation 7640518"
> >
> > Always
On Mon, Feb 11, 2013 at 1:13 PM, David Clymer
wrote:
> On Mon, Feb 11, 2013 at 12:47 PM, Pavel Stehule
> wrote:
>
>> 2013/2/11 Tom Lane :
>> > David Clymer writes:
>> >> I've been seeing the following error in one database of ours:
>>
Version=9.1.7
INFO: clustering "my_cool_table" using sequential scan and sort
INFO: "my_cool_table": found 1 removable, 1699139 nonremovable row versions in
49762 pages
Detail: 1689396 dead row versions cannot be removed yet.
CPU 9.80s/4.98u sec elapsed 175.92 sec.
INFO: clustering "my_cool_tab
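Dead row versions that "cannot be removed yet" usually point to a long-running transaction holding back the cleanup horizon. A sketch for finding it on 9.1 (column names per 9.1's pg_stat_activity):

```sql
-- List the oldest open transactions; anything with a very old
-- xact_start is a likely culprit pinning dead rows.
SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 5;
```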
Howdy!
This query is coming from PgPool I believe.
SELECT count(*) FROM pg_catalog.pg_class AS c WHERE c.relname = 'import_jobs'
AND c.relpersistence = 'u'
This is a very small database, maybe 10-15 tables; it's basically empty.
If I run this query manually, it comes back immediately.
However
appen with jruby.
I have standard ruby programs where this does not occur.
- On Friday, February 15, 2013 01:58:55 PM David Kerr wrote:
- > Howdy!
- >
- > This query is coming from PgPool I believe.
- >
- > SELECT count(*) FROM pg_catalog.pg_class AS c WHERE c.relname =
-
On Sat, Feb 16, 2013 at 10:30:44AM -0800, Kevin Grittner wrote:
- David Kerr wrote:
-
- > Also, if anyone else stumbles upon this, it only seems to happen with jruby.
-
- > I have standard ruby programs where this does not occur.
-
- It sounds like it is at least possible that it is the
I got the impression that the query still runs even
though the page was closed.
I want to make sure the query is dropped when the customer closes the page.
Any method to kill such query would it still hang out there?
Thanks
David Harel
QIS LTD
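One approach, assuming the application records the backend PID (e.g. via pg_backend_pid()) when it issues the query, is to cancel it from another session once the page is closed:

```sql
-- Ask the backend to cancel its current query (connection survives);
-- 12345 is a hypothetical PID recorded by the application.
SELECT pg_cancel_backend(12345);

-- Or terminate the backend entirely:
SELECT pg_terminate_backend(12345);
```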
We get this FATAL pg_log entry, just after Postgres startup.
(Postgres does start OK)
//9.1.3 on Linux
2013-02-20 10:15:46.637 EST 50eedb22.3602 0
LOG: database system is ready to accept connections
2013-02-20 10:15:46.637 EST 50eedb22.3607 0
LOG: autovacuum launcher started
2013-02-20
put them into a function/script or otherwise use them interactively via
queries...
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/selecting-for-type-cast-failures-tp5747875p5747890.html
you provide inadequate information to provide anything more than a guess...
Are you sure your issue isn't simply a client display consideration -
basically word-wrapping?
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/crosstab-creating-multiple-row
PostgreSQL 8.4.16 on CentOS 5.9.
I've run into a situation where executing a \COPY from psql will hang
and at that point it's impossible to terminate the COPY command.
I've tried pg_cancel_backend and pg_terminate_backend - even sending
the process itself a TERM signal.
Sending the process KILL
On Wed, Mar 20, 2013 at 1:12 PM, Martín Marqués
wrote:
> What happens if you use COPY ... FROM with the same data?
I will try that, but if you look at pg_stat_activity the full command
is a COPY (
> Are you sure the process hangs (strange thing is that you can't
> terminate the backend)? Could it
On Wed, Mar 20, 2013 at 12:37 PM, David Rees wrote:
> PostgreSQL 8.4.16 on CentOS 5.9.
>
> I've run into a situation where executing a \COPY from psql will hang
> and at that point it's impossible to terminate the COPY command.
Some additional notes:
Running psql on the sa
our API after
querying the dimension map table to decide how many output columns you will
need. hstore avoids that by giving you a dynamic table-in-a-column.
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/How-to-join-table-to-itself-N-times-tp5749107p5749125.htm
feel a
more stringent input string is required you will need to use a regular
expression to assert that constraint.
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/Money-casting-too-liberal-tp5749919p5749933.html
kern.ipc.semmns="1024"
kern.ipc.semmni="128"
kern.ipc.shmall="1048576"
kern.ipc.shmseg="2048"
kern.ipc.shmmax="2147483647"
kern.ipc.shmmni="2048"
kern.maxusers="1024"
kern.maxswzone="335544320"
postgresql.conf, all standard/default except for:
max_connections = 256
Any thoughts? What other information can I provide?
Regards,
-David
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
n for up to several hours, then the
utilization issue repeats. So it's a very odd issue I've run into.
On 4/2/13, David Noel wrote:
> I'm running into a strange issue whereby my postgres processes are
> slowly creeping to 100% CPU utilization. I'm running
> postgres
On 4/2/13, Ian Lawrence Barwick wrote:
> 2013/4/3 David Noel :
>> I'm running into a strange issue whereby my postgres processes are
>> slowly creeping to 100% CPU utilization. I'm running
>> postgresql-server-9.2.3, FreeBSD 8.3-RELEASE-p6, and using the
>
On 4/2/13, Kevin Grittner wrote:
> David Noel wrote:
>
>> 'select * from pg_stat_activity' shows that the queries are not
>> waiting, and are in the idle state.
>
> The process is idle or the process is running the query? If the
> latter, what do you mean
On 4/2/13, John R Pierce wrote:
> On 4/2/2013 3:35 PM, David Noel wrote:
>> The hardware is a Dell PowerEdge 1420, dual Xeon Nocona's, 3.2ghz,
>> 16gb ram. The disks are 4 Kingston HyperX SATA3's attached to a
>> HighPoint RocketRAID 2721 controller, ZFS, RAID10.
27;. It may be that on restore, the
large objects are all owned by dbadmin instead of dbuser?
Thanks for any clarifications. I may just find I'll put that script
above in my table grants that we use to set all such permissions for tables.
David
different pg_largeobject_metadata.lomowner
value before and after I ran that DO script to alter each and the
problem with reading a large object in my code went away.
Thanks, David
doubt it solved something back then since TABLE
GRANTS could be run after a restore since before 9.0 the loids had no
permission issue to deal with.
Thanks again, Tom! Best regards, David
On 4/3/13, Kevin Grittner wrote:
> David Noel wrote:
>> On 4/2/13, Kevin Grittner wrote:
>>> David Noel wrote:
>>>
>>>> 'select * from pg_stat_activity' shows that the queries are not
>>>> waiting, and are in the idle state.
>>
On 4/2/13, John R Pierce wrote:
> On 4/2/2013 3:35 PM, David Noel wrote:
>> The hardware is a Dell PowerEdge 1420, dual Xeon Nocona's, 3.2ghz,
>> 16gb ram. The disks are 4 Kingston HyperX SATA3's attached to a
>> HighPoint RocketRAID 2721 controller, ZFS, RAID10.
state of the process.
>
> d) Do you use the standard values for zfs? Specially arc values.
Hmm, your points do make sense. Tuning postgresql.conf seems to have
done the trick for now, but if this issue pops up again I'll
definitely run through the diagnostics you have suggested.
Standa
First I need to say that I'm asking this question on behalf of "a
friend", who asked me what I thought on the subject -- I host all the
databases important to me and my livelihood, on physical machines I own
outright. That said, I'm curious as to the current thinking on a)
whether it is wise,
I thanks very much for your detailed response. A few answers below inline:
On 4/7/2013 9:38 AM, Tomas Vondra wrote:
As for the performance, AFAIK the EBS volumes always had, and probably
will have, a 32 MB/s limit. Thanks to caching, built into the EBS, the
performance may seem much better ini
On 4/8/2013 3:15 AM, Vincent Veyron wrote:
Could someone explain to me the point of using an AWS instance in the
case of the OP, whose site is apparently very busy, versus renting a
bare metal server in a datacenter?
I am the OP, but I can't provide a complete answer, since personally
(e.g. a
Howdy,
I'm having a couple of problems that I believe are related to AWS and I'm
wondering
if anyone's seen them / overcome them.
Brief background, I'm running PG 9.2.4 in a VPC on Amazon Linux.
I'm also (attempting) to use PgPool for load balancing/failover.
The overall problem is that it seem
On Mon, Apr 08, 2013 at 02:14:14PM -0600, Quentin Hartman wrote:
- What version of pgpool are you using?
-
- Are there other commands you have a problem with? I would suspect that the
- restart is causing the postgres server to go away, pgpool decides to
- disconnect, and then it has to be manuall
On Mon, Apr 08, 2013 at 02:09:42PM -0700, David Kerr wrote:
- On Mon, Apr 08, 2013 at 02:14:14PM -0600, Quentin Hartman wrote:
- - What version of pgpool are you using?
- -
- - Are there other commands you have a problem with? I would suspect that the
- - restart is causing the postgres server to
On Mon, Apr 08, 2013 at 02:59:56PM -0700, David Kerr wrote:
- On Mon, Apr 08, 2013 at 02:09:42PM -0700, David Kerr wrote:
- - On Mon, Apr 08, 2013 at 02:14:14PM -0600, Quentin Hartman wrote:
- - - What version of pgpool are you using?
- - -
- - - Are there other commands you have a problem with
On Mon, Apr 08, 2013 at 04:24:45PM -0700, David Kerr wrote:
- On Mon, Apr 08, 2013 at 02:59:56PM -0700, David Kerr wrote:
- - On Mon, Apr 08, 2013 at 02:09:42PM -0700, David Kerr wrote:
- - - On Mon, Apr 08, 2013 at 02:14:14PM -0600, Quentin Hartman wrote:
- - - - What version of pgpool are you
On Apr 8, 2013, at 5:52 PM, Tatsuo Ishii wrote:
>> 2013/4/9 Tatsuo Ishii :
While debugging this with a coworker we figured out that pg_ctl was
attaching to the tty and then it clicked
that we needed to be using '-t' where I was using -T or (neither).
>>>
>>> Are you sure? I chec
even in extraordinary circumstances? Having to
manually fix things up is acceptable in the use case we're planning
for, even if it's clearly preferable to not have to intervene.
Thank you,
--
David N. Welton
http://www.welton.it/davidw/
http://www.dedasys.com/
I've setup streaming replication between two 9.2 servers, and have a few
concerns/questions. I set it up with a very large wal_keep_segments, 17000, and
do NOT ship the logs to the standby.
When I failover to the slave, MUST the process for bringing back up the former
master initially as a slav
"PostgreSQL 9.1.8 on x86_64-unknown-linux-gnu,
compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52), 64-bit"
I see in the postgres log, on a DEV box
. . .
2013-04-18 17:09:44.721 EDT 516f10af.34fb 2/160904 0 ERROR: not enough stack
items
2013-04-18 17:09:44.721 EDT 516f10af.34fb 2/160904 0 S
The users who do care about bridging between the two worlds should
have the search_path default to something invalid and in doing so force
those users to be explicit regarding which schemas they intend to work with.
My $0.02
David J
--
View this message in context:
http://postgresql.1045698.
log_line_prefix = '%m %a %u %c %v %x '
# %m Time stamp with milliseconds
# %a Application name
# %u User name
# %c Quasi-unique session ID, consisting of two 4-byte hexadecimal
#    numbers (without leading zeros) separated by a dot.
# The numbers are the "Process
On 5/10/2013 9:19 AM, Matt Brock wrote:
After googling this for a while, it seems that High Endurance MLC is only
starting to rival SLC for endurance and write performance in the very latest,
cutting-edge hardware. In general, though, it seems it would be fair to say
that SLCs are still a bett
On 5/10/2013 10:21 AM, Merlin Moncure wrote:
As it turns out the list of flash drives are suitable for database use
is surprisingly small. The s3700 I noted upthread seems to be
specifically built with databases in mind and is likely the best
choice for new deployments. The older Intel 320 is a
On 5/10/2013 11:20 AM, Merlin Moncure wrote:
I find the s3700 to be superior to the 710 in just about every way
(although you're right -- it is suitable for database use). merlin
The s3700 series replaces the 710 so it should be superior :)
On 5/10/2013 11:23 AM, Lonni J Friedman wrote:
There's also the 520 series, which has better performance than the 320
series (which is EOL now).
I wouldn't use the 520 series for production database storage -- it has
the Sandforce controller and apparently no power failure protection.
--
On 5/11/2013 3:10 AM, Matt Brock wrote:
On 10 May 2013, at 16:25, David Boreham wrote:
I've never looked at SLC drives in the past few years and don't know anyone who
uses them these days.
Because SLCs are still more expensive? Because MLCs are now almost as good as
SLCs for p
btw we deploy on CentOS6. The only things we change from the default are:
1. add "relatime,discard" options to the mount (check whether the most
recent CentOS6 does this itself -- it didn't back when we first deployed
on 6.0).
2. Disable swap. This isn't strictly an SSD tweak, since we have en
On 5/12/2013 7:20 PM, John R Pierce wrote:
the real SLC drives end up OEM branded in large SAN systems, such as
sold by Netapp, EMC, and are made by companies like STEC that have
zero presence in the 'whitebox' resale markets like Newegg.
Agreed. I don't go near the likes of Simple, HGST,
I see in the Release Notes for 9.3 beta . . .
"Force cached functions to be replanned if the search_path changes (Tom Lane)
Previously functions already run in the current session ignored search_path
changes."
Question:
Do prepared statements also get replanned if the search_path changes?
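A quick way to check the behavior yourself (a sketch; schema and table names are made up):

```sql
-- Two schemas, each with a table of the same name.
CREATE SCHEMA s1;
CREATE SCHEMA s2;
CREATE TABLE s1.t (v text);
CREATE TABLE s2.t (v text);
INSERT INTO s1.t VALUES ('from s1');
INSERT INTO s2.t VALUES ('from s2');

SET search_path = s1;
PREPARE q AS SELECT v FROM t;
EXECUTE q;                 -- 'from s1'

SET search_path = s2;
EXECUTE q;                 -- does the plan follow the new search_path?
```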
VC\BIN\nmake.EXE"' : code retour '0x2'
Stop.
Symbol externe non résolu means "Undefined symbol". Is VS2012 unsupported?
Regards,
--
Demelier David
On Fri, May 10, 2013 at 11:01:15AM -0500, Larry Rosenman wrote:
- On 2013-05-10 10:57, Tom Lane wrote:
- >Larry Rosenman writes:
- >On 2013-05-10 09:14, Tom Lane wrote:
- >... and verify you get a cheap plan for each referencing table.
- >
- >We don't :(
- >
- >Ugh. I bet the problem is that in s
On Thu, May 16, 2013 at 06:01:51PM -0500, Larry Rosenman wrote:
- On 2013-05-16 17:52, David Kerr wrote:
- >On Fri, May 10, 2013 at 11:01:15AM -0500, Larry Rosenman wrote:
- >- On 2013-05-10 10:57, Tom Lane wrote:
- >- >Larry Rosenman writes:
- >- >On 2013-05-10 09:
On 5/19/2013 7:19 PM, Toby Corkindale wrote:
On 13/05/13 11:23, David Boreham wrote:
btw we deploy on CentOS6. The only things we change from the default
are:
1. add "relatime,discard" options to the mount (check whether the most
recent CentOS6 does this itself -- it didn't bac
I would think this would be possible. I'm on 9.0.8
I have a reference between two tables, and want to populate a field in one table
with a value that's in the referenced table ( based on the FK reference of
course ).
with row as ( select my.atmos_site_id, my.stationid from my_stations my,
at
On 5/31/13 4:21 PM, Jeff Janes wrote:
On Fri, May 31, 2013 at 2:37 PM, David Salisbury wrote:
I would think this would be possible. I'm on 9.0.8
I have a reference between two tables, and want to populate a field in one
table
with a valu
On 5/31/13 4:45 PM, Bosco Rama wrote:
On 05/31/13 15:33, David Salisbury wrote:
And without trying too much ;), I'll bet there is no way to do this in SQL
proper. i.e.
I can't correlate an update with a select stmt, as in a correlated sub-query
sort of way.
So for this to wo
et higher level
compatibility tools handle the conversion.
I agree with documenting the "GET DIAGNOSTICS var := item;" syntax and in
fact from a purely OCD standpoint wonder why non-diagnostic assignment is
documented for ":=" while diagnostic assignment is documented for
omething where prevention is worth the cost of breaking forward
compatibility.
A comment was made that "GET DIAGNOSTICS var = item;" is standard defined.
Is the use of ":=" for assignment also standard defined? If so, it's not that
inconsistent standards surprise me but
nt behavior.
The status-quo, from the lack of discussion surrounding this recently,
doesn't seem to be that bad. My sample and exposure to other's code,
though, is quite minimal and not everything makes it to the lists (or is
read if it does) so maybe there is some underlying si
mple looks made-up you are on your own for the "mechanism" but
if time can be used you can possibly get away with either an "serial" column
or a "current timestamp" column to enforce the order.
David J.
--
View this message in context:
http://postgresql.104
actically valid
queries - referencing columns from the select-list - would result in an
error being raised simply by changing the where-clause.
There may be a way I am not aware of, my use of triggers is minimal, but I
really doubt it and question whether it would be a good idea to use said
fu
ke a person
id as an input. Let the user decide the best way to obtain that person id.
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/Passing-a-WHERE-clause-by-trigger-to-a-function-tp5757825p5757877.html
I've setup two 9.2.4 servers to serve as master-slave in a streaming
replication scenario. I started with a fresh database on the master, setup the
replication, then imported using pg_restore about 30GB of data. The master and
slave are geographically separated, so replication of this amount of
otherwise
the parser thinks "func_out_col" is a table and errors out.
This all definitely applies to 9.2 and earlier. 9.3 (with lateral) may
behave differently...
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/Function-use-in-query-tp5758051p5758066.
From: pgsql-general-ow...@postgresql.org [pgsql-general-ow...@postgresql.org]
on behalf of John R Pierce [pie...@hogranch.com]
Sent: Wednesday, June 05, 2013 5:00 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Trouble with replication
On 6/5/2013 1:39 PM, David Greco wrote:
I’ve
From: Michael Paquier [mailto:michael.paqu...@gmail.com]
Sent: Wednesday, June 05, 2013 9:43 PM
To: David Greco
Cc: John R Pierce; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Trouble with replication
On Thu, Jun 6, 2013 at 7:23 AM, David Greco
From: Michael Paquier [mailto:michael.paqu...@gmail.com]
Sent: Wednesday, June 05, 2013 9:43 PM
To: David Greco
Cc: John R Pierce; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Trouble with replication
On Thu, Jun 6, 2013 at 7:23 AM, David Greco
nd the results of
function_call without causing it to execute multiple times - once for each
column being expanded.
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/Function-use-in-query-tp5758051p5758159.html
e-calls.
For <= 9.2 you will have to either live with the slightly verbose syntax or
wrap what you need into user-defined functions.
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/Function-use-in-query-tp5758051p5758201.html
nd-new connection.
The only difference is that the underlying connection handler process is not
killed and the user is not required to login again - both of which are
time-intensive actions.
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/Postgresql-Currval-V
From: Michael Paquier [mailto:michael.paqu...@gmail.com]
Sent: Thursday, June 06, 2013 7:01 PM
To: David Greco
Cc: John R Pierce; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Trouble with replication
On Thu, Jun 6, 2013 at 9:19 PM, David Greco
models a 1-to-1(optional) relationship between A and B; such
that every record in A must exist in B (assuming you disallow NULL) but B
can have records that do not exist in A. Whether multiple records in A can
share the same "source" record in B is undefined but can be made explicit by
Steven Schlansker-3 wrote
> 1) The common value is not known at schema definition time, and may change
> (very slowly) over time.
>
> 2) JDBC uses prepared statements for everything, and the value to be
> selected is not known at statement prepare time, so any partial indices
> are ignored (this i
ot;version_upgraded" to distinguish them from
records originally insert using the newest version. Or have "original
version" as the partition key and a second "current version" field that
varies. Not sure how the planner would be able to use constraint exclusion
to limit
ges and identify which version all the
records are at and select ranges based upon how far behind the current
version they are (or whatever priority algorithm you desire - including
manual tweaks).
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/Index-over-only-un
WHEN amt < 0 THEN amt ELSE 0.00 END AS credit
FROM source_table
)
SELECT ..., SUM(debit) AS total_debit, SUM(credit) AS total_credit
FROM make_debit_credit_columns_for_each_record
GROUP BY ...
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/postgresql-query-tp5
RTITION BY id
ORDER BY idx
ROWS 3 PRECEDING --# attempts to limit size of array by only going back a
limited number of days
) AS possibles
FROM input_src
ORDER BY idx ASC
)
SELECT * --, some_function_to_get_most_recent_nonnull_value(possibles)
FROM construct_possibles
;
HTH
David J.
--
Vie
;m not sure on the necessary syntax.
There may be third-party ETL tools that fulfill this need as well.
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/Exporting-Data-tp5760108p5760118.html
nce the query itself is
possibly not the problem but rather your data model is flawed.
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/Problem-with-left-join-when-moving-a-column-to-another-table-tp5760187p5760192.html
itishree sukla wrote
> Hi All,
>
> I am using coalesce(firstname,lastname), to get the result if first name
> is
> 'NULL' it will give me lastname or either way. I am having data like
> instead of NULL, blank null ( i mean something like '' ) for which
> coalesce is not working, is there any work
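When the "missing" values are empty strings rather than NULL, a common workaround is to normalize them with NULLIF before applying COALESCE ("people" is an illustrative table name):

```sql
-- NULLIF(x, '') turns an empty string into NULL,
-- so COALESCE can fall through to the next argument.
SELECT COALESCE(NULLIF(firstname, ''), lastname) AS display_name
FROM people;
```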
Jason Long-2 wrote
> David,
>
> Thank you very much for your response.
> Below is a script that will reproduce the problem with comments
> included.
>
>
>
> /*
> This is the new query that is not working correctly.
> I am trying to left join the base t
a view name (and possibly, separately, the
ORDER BY clause). Catalog lookups can be used to check for identical view
output types.
No idea of something like this exists and is readily available.
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/Easiest-
th "model_id, size, area" as
columns. A particular house would then link in "model" and "price". You
could possibly further restrict that certain models can only sell for
certain prices if necessary - in which case you would have "model_price" and
possibly &qu