eads disappeared.
Still not sure how it comes to twice a minute per cluster. There are 5
databases and, as far as I know, a single client connection to one of them,
doing nothing, while this was happening.
--
Bryan Henderson San Jose, California
Looking at audit logs, I see that my Postgresql server generates a new thread
precisely every 30 seconds, in two series (so 4 threads every minute). This
is an otherwise idle server.
Does anyone know what these threads are for? Just curious.
--
Bryan Henderson
On 05/17/2016 08:25 AM, Victor Yegorov wrote:
I had a bit of fun with this SQL version and came up with this query:
WITH src(s) AS (
VALUES
('729472967293732174412176b12173b17111752171927491b1744171b17411217181417211718141734172b191721191724173b1714171912175b17221b1912174b1412178b121715122a
ry 2016 at 19:13, Joe Conway wrote:
> On 02/25/2016 03:42 PM, Bryan Ellerbrock wrote:
> > Hi, I'm first time mailing-list user with a problem. I'm working on a
> > UTF8 encoded database using psql (9.5.1, server 9.4.6)
> >
> > I've implemented a very large m
r keeping materialized
views up to date I could explore?
And ideas are welcome, this has been driving me crazy!
--
Bryan Ellerbrock
Research Specialist, Mueller Lab
Boyce Thompson Institute for Plant Research
Office/Lab: 211 | 607-227-9868
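For keeping a materialized view current, one common approach is a statement-level trigger on the base tables that refreshes the view. This is only a sketch: the view and table names are invented, and REFRESH ... CONCURRENTLY (9.4+) requires the view to have a unique index and to be already populated.

```sql
CREATE OR REPLACE FUNCTION refresh_my_matview() RETURNS trigger AS $$
BEGIN
    -- CONCURRENTLY avoids blocking readers, at the cost of a slower refresh.
    REFRESH MATERIALIZED VIEW CONCURRENTLY my_matview;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER refresh_my_matview_trg
    AFTER INSERT OR UPDATE OR DELETE ON base_table
    FOR EACH STATEMENT
    EXECUTE PROCEDURE refresh_my_matview();
```

Note that this refreshes the whole view on every write to the base table, so it only suits views that are cheap to rebuild or tables that change rarely.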
own wide-format tables:
https://cgsrv1.arrc.csiro.au/blog/2010/05/14/unpivotuncrosstab-in-postgresql/
Good luck,
Bryan
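The linked technique aside, a wide table can also be unpivoted with nothing but UNION ALL; a minimal sketch with invented table and column names:

```sql
-- Turn wide columns q1..q3 into (id, quarter, amount) rows.
SELECT id, 'q1' AS quarter, q1 AS amount FROM wide_sales
UNION ALL
SELECT id, 'q2', q2 FROM wide_sales
UNION ALL
SELECT id, 'q3', q3 FROM wide_sales;
```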
e can make for a pretty frightening moment.
Thanks,
Bryan
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
a.id...
My question is, then, how is it that the query embodied in "view_1" below
executes fine, but cannot seem to be restored? Is this telling me my query is
dumb? If so, any advice on how to easily derive "view_1" from "tab_1" and
"tab_2"
ll 5 hours ahead. What gives? Not the end of the
world but a bit annoying.
Bryan.
n and version; it makes things
> much easier.
>
> Reply follows inline.
>
>
> On 11/06/2012 09:04 PM, Bryan Montgomery wrote:
>
> I'm wondering what general ways there are to monitor and improve
> performance? We look at pgadmin's server status but that only sees
rver typically has up to 500 connections with a max of 750
connections.
Below are the non-default values of our configuration file.
Any thoughts on what we should look at?
Thanks,
Bryan
listen_addresses = '*' # what IP address(es) to listen on;
max_connections
I recently upgraded from Postgresql 9.0.10 to 9.2.1. I am now running into
problems with Postgresql running out of memory during large data operations,
more specifically loading the OpenStreetMap data into the database. The load
under 9.0 went fine and there were no memory issues. This is on the
es without any issue?
Thanks,
Bryan.
On Fri, Jun 1, 2012 at 8:07 AM, Bryan Murphy wrote:
> On Thu, May 31, 2012 at 4:28 PM, Jeff Davis wrote:
>
>> On Thu, 2012-05-31 at 15:55 -0500, Bryan Murphy wrote:
>> > I'm having a problem upgrading a cluster from 9.0.7 to 9.1.3. Here's
>>
On Thu, May 31, 2012 at 4:28 PM, Jeff Davis wrote:
> On Thu, 2012-05-31 at 15:55 -0500, Bryan Murphy wrote:
> > I'm having a problem upgrading a cluster from 9.0.7 to 9.1.3. Here's
> > the error:
>
> Please send /srv/pg_upgrade_dump_globals.sql
>
> Als
/bin/psql" --set ON_ERROR_STOP=on --no-psqlrc --port
5432 --username "postgres" -f "/srv/pg_upgrade_dump_globals.sql" --dbname
template1 >> "/dev/null"
psql:/srv/pg_upgrade_dump_globals.sql:54: ERROR: duplicate key value
violates unique constraint "pg_authid_oid_index"
DETAIL: Key (oid)=(10) already exists.
There were problems executing "/opt/postgresql-9.1/bin/psql" --set
ON_ERROR_STOP=on --no-psqlrc --port 5432 --username "postgres" -f
"/srv/pg_upgrade_dump_globals.sql" --dbname template1 >> "/dev/null"
Failure, exiting
"/opt/postgresql-9.1/bin/pg_ctl" -w -l "/dev/null" -D "/srv/postgres-9.1"
-m fast stop >> "/dev/null" 2>&1
Thanks,
Bryan
I now have "libreadline.a" in /usr/local/lib. Assuming that's the goal,
would you be kind enough to walk me through the next step -- linking that
lib to psql?
bryan
On Fri, Apr 13, 2012 at 3:07 PM, Tom Lane wrote:
> Bryan Hughes writes:
> > Prior to updating my Snow
Prior to updating my Snow Leopard Mac to OSX Lion (10.7.3), I was able to
open psql from a terminal and then use "tab complete" to auto-complete
table or field names (i.e., "select * from [TAB -- list of table names]").
Unfortunately, something appears to have changed and tab complete now does
noth
Hello,
It seems that the program thinks I'm passing a table when in fact I'm
passing a query. Now, I could put a hack in place (create a view and pass
that to pgsql2shp), but I thought I'd ask whether anyone else has seen this
behavior and has a way to force / trick the program to treat the para
web service xml using public
/ private keys, or using ssl to pass the md5 hash of the clients password.
The more elegant way seems to be using the encrypted web service, but the
more universal method for clients would probably be ssl.
On Tue, Mar 20, 2012 at 3:16 PM, Bryan Montgomery wrote
both PostgreSQL and PostGIS:
http://www.kyngchaos.com/software/postgres
Bryan
Interesting idea. However, I think this is ssl between the client and
database. Given the client would be the server hosting the web service I
don't think this would work for the web service client.
On Fri, Mar 16, 2012 at 2:54 PM, Raymond O'Donnell wrote:
> On 16/03/2012
nd provide
through the web service.
Hopefully this makes sense :)
Bryan.
the statistics is
not a critical problem, I can't see any way to make the server stop trying in
vain to update it, so I plan just to change the code to make the statistics
collector terminate the server when it can't update the statistics file.
--
Bryan Henderson
Thanks Tom that did it :)
James: I'll add those books to my list
I appreciate everyone's help!
On Fri, Jul 15, 2011 at 2:16 PM, Tom Lane wrote:
> Bryan Nelson writes:
>> Tom, rake is a rails command, also after doing a \d geo_data it does
>> show that it
e, figured it had to be something simple. Shows how
new I am at postgres.
On Fri, Jul 15, 2011 at 1:44 PM, Tom Lane wrote:
> Bryan Nelson writes:
>> Tom, the file was created in Linux and is utf-8. Here is the rake
>> task that created the table:
>
>> class Creat
at8
t.column :city, :text
t.column :state, :text
t.column :county, :text
end
add_index "geo_data", ["zip_code"], :name => "zip_code_optimization"
end
def self.down
drop_table :geo_data
end
end
On Fri, Jul 15, 2011 at 1:10 PM, Tom
thout quotes.
>>
>> Susan
>>
>> -Original Message-
>> From: pgsql-general-ow...@postgresql.org
>> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Bryan Nelson
>> Sent: Friday, July 15, 2011 9:04 AM
>> To: pgsql-gener
be
> around text fields. That is the norm for CSV files.
>
> Susan
>
> -Original Message-
> From: pgsql-general-ow...@postgresql.org
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Bryan Nelson
> Sent: Friday, July 15, 2011 9:04 AM
> To: pgsql-gene
Hi Adrian, yes that is the entire table definition.
On Fri, Jul 15, 2011 at 12:30 PM, Adrian Klaver wrote:
> On 07/15/2011 09:03 AM, Bryan Nelson wrote:
>>
>> I am having problems importing a CSV file of sample data for testing
>> in a web app.
I am having problems importing a CSV file of sample data for testing
in a web app.
Columns & Types
---
zip_code - text
lattitude - float8
longitude - float8
city - text
state - text
county - text
Some Sample Data From CSV File
--
96799,-7.209975,-170.77
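For importing a CSV file like this, a COPY along the following lines is the usual approach; a sketch, with the table name and file path assumed (column names and spelling are taken from the listing above):

```sql
CREATE TABLE geo_data (
    zip_code  text,
    lattitude float8,   -- spelling as given in the original schema
    longitude float8,
    city      text,
    state     text,
    county    text
);

-- Server-side import; use psql's \copy instead if the file
-- lives on the client machine.
COPY geo_data FROM '/path/to/sample.csv' WITH CSV;
```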
ive (the
server just can't take no for an answer)?
--
Bryan Henderson San Jose, California
On Mon, May 2, 2011 at 10:39 PM, Tom Lane wrote:
> alan bryan writes:
>> Checking out postgres.core and we see:
>
>> (gdb) bt
>> #0 0x0008f5f19afd in pthread_mutex_lock () from /lib/libthr.so.3
>> #1 0x000800d22965 in xmlRMutexLock () from /us
Our developers started to use some xpath features and upon deployment
we now have an issue where PostgreSQL is seg faulting periodically.
Any ideas on what to look at next would be much appreciated.
FreeBSD 8.1
PostgreSQL 9.0.3 (also tried upgrading to 9.0.4) built from ports
Libxml2 2.7.6 (also
g said, it does not explain WHY we are seeing
such a memory usage pattern.
Thanks,
Bryan
erate our upgrade to 9.0.2 for these servers, however,
I'm concerned that we have not identified the source of the memory leak and
this upgrade won't necessarily fix the problem.
Any advice? What should I be looking for?
Thanks,
Bryan
Thanks for the comments. Just to clarify, I gave these two values as
examples. The readings could be between a handful for one vehicle type up to
40 or more for another type of vehicle.
On Thu, Dec 16, 2010 at 12:26 PM, Vincent Veyron wrote:
> Le mercredi 15 décembre 2010 à 19:12 +0100, Jan Keste
rd approach would be to just have the detail table with
duplication on the vehicle id and time - for each data type recorded.
Thanks,
Bryan.
On Tue, Sep 21, 2010 at 8:08 PM, Tatsuo Ishii wrote:
> Unfortunately the gdb backtrace does not show enough information
> because of optimization, I guess. Can you take a backtrace with
> optimization disabled binary?
>
> You can obtain this by editing Makefile around line 147.
>
>
I edited conf
On Tue, Sep 21, 2010 at 10:45 AM, Bryan Murphy wrote:
> I'm sorry, when I went back over to double check my steps I realized I ran
> the wrong command. I am *still* having the problem. It appears that the
> MD5 hashes now match, but it's still failing. I have postgres and pg
On Tue, Sep 21, 2010 at 10:26 AM, Bryan Murphy wrote:
> On Mon, Sep 20, 2010 at 6:23 PM, Tatsuo Ishii wrote:
>
>> I have used PostgreSQL 9.0 + pgpool-II 3.0 and they work fine with md5
>> auth. Your log seems to indicate that the password in pool_passwd and
>> the
started working.
Many thanks for your help!
Bryan
On Sun, Sep 19, 2010 at 11:31 PM, Tatsuo Ishii wrote:
> Sorry for delay. I had a trip outside Japan.
>
No problem.
> I found nasty bug with pgpool. Please try attached patches.
>
I tried the patch file and I still cannot connect. The only other
difference is that I've already upgraded our im
On Tue, Sep 14, 2010 at 6:55 PM, Tatsuo Ishii wrote:
> Sorry for not enough description about pool_passwd. It's located under
> the same directory as pgpool.conf. So the default is
> /usr/local/etc/pool_passwd.
>
> You need to create /usr/local/etc/pool_passwd if the uid to run pgpool
> server d
I can't get md5 authentication working with postgres 9rc1 and pgpool-II 3.0.
I see references to "pool_passwd" in the pgpool documentation, but I see
nothing indicating *where* this file should exist and how pgpool finds it.
I've set my accounts up in pcp.conf, however, I do not believe this is w
Under the assumption that you properly modeled the data - achieved a
nice balance of normalization and de-normalization, examined the size of
your relations in that context, and accounted for
whether and how the data will grow over time - then
partitioning, as Joshua mentioned,
ke a connection from a Java
application with JDBC.
Bryan.
On Wed, Jun 16, 2010 at 10:17 AM, wrote:
> OMG!!!
>
> I finally got it working. Problem was that on the windows side on the
> service account within the account options, we needed to check "Use DES
> encryption t
> <http://domain.com/> (DES cbc mode with
> RSA-MD5)
>
> That timestamp seems kinda funky, doesn't it? 12/31/69? That can't be
> right, can it?
>
>
> Thanks again.
>
> Greig
>
> - Original Message -
> From: "Stephen Frost"
>
uld have worked, the only thing that I can think of was that the
stats on the tables were maybe out of date? This is on 8.4.3.
Thanks,
Bryan.
Yeah, the interesting thing is we're supposed to move to AES, but on the
current AD it isn't available :) Will be a bit ironic if it is all down to
using DES!
On Wed, Jun 16, 2010 at 11:05 AM, Stephen Frost wrote:
> Greig,
>
> * greigw...@comcast.net (greigw...@comcast.net) wrote:
> > I finally
both server by hostname?
Thanks - Bryan.
eytab HTTP/poe3b.lab2k.net
kinit(v5): Preauthentication failed while getting initial credentials
I'd be interested to know if you get something different - and the steps you
went through on the AD side.
Bryan.
On Fri, Jun 11, 2010 at 5:51 PM, wrote:
> I'm trying to get my Pos
Hello,
I'm trying to get kerberos working with postgres 8.4 on openSUSE
authenticating against AD. I have the server configured and can do a kinit
against my account on the server. I have a keytab file produced by the
administrators.
$ klist -kt poe3b.keytab
Keytab name: FILE:bob.keytab
KVNO Times
nd a quicker/simpler method. My math
tells me that my current script is going to take 24 days to test every
record. Obviously, there are ways I can speed that up if I have no
choice but I'm hoping for a simpler solution.
I'd prefer to run a COPY TABLE like command and have it skip the
pped spares (assuming the wal shipped spares
would suffer the same problem, which is a big assumption), but this is
a lot of effort to get going.
Help!
Thanks,
Bryan
ppens if I have two slave servers (A and B) and I want to fail
over to slave A and have it start shipping log files to slave B but B
has more queries applied to it than A? I assume in this case I would
instead want to fail over to B and ship to A. How would I know which
server to fail over to?
ogs
I was able to recover after the fact seemed to indicate some kind of
massive memory failure but I'll never know for sure.
Bryan
tream back into memory.
We had to fail over to one of our spares twice in the last 1.5 years.
Not fun. Both times were due to instance failure.
It's possible to run a larger database on EC2, but it takes a lot of
work, careful planning and a thick skin.
Bryan
like to fix this, because this has literally given me nightmares.
:)
Bryan
and asking questions
on the mailing list. The information is in the docs, you just have to read
it a few times for it to sink in.
Bryan
Thanks for the suggestion. I wasn't able to get the whole pg to compile -
but I was able to take one of the contrib packages and use that as a
template for my needs.
Bryan.
On Wed, Feb 24, 2010 at 2:00 AM, Magnus Hagander wrote:
> 2010/2/23 Bryan Montgomery :
> > Hello,
> >
you'd need twice the disk space I guess but the 'downtime'
would be a lot less. I'd imagine you could have databases on different ports
and switch them at the roll over point, or maybe even just different
database names.
Just an alternative idea to throw out there.
Bryan.
On Tue
ers.
The errors below are the ones that I'm struggling with now.
Thanks - Bryan.
warning C4005: '_WIN32_WINNT' : macro redefinition
c:\program files\postgresql\include\server\pg_config_os.h 7
error C2894: templates cannot be declared to have 'C' linkage
c:\program
er, some of the
large tables literally take hours across the network, for maybe a few dozen
changes.
On Wed, Sep 23, 2009 at 4:58 PM, Scott Marlowe wrote:
> On Wed, Sep 23, 2009 at 11:11 AM, Bryan Montgomery
> wrote:
> > Hi,
> > I'm looking for a way to replica
accomplish this.
Thanks,
Bryan.
Standby (C) --> etc.
Master Fails, now becomes:
Old Master (A) x> New Master (B) --> Warm Standby (C)
And, of course, you might have an additional replication chain from Master
(A) just in case you goof something up in the failover process, but that's
the basic idea.
Thanks,
Bryan
m spare
from the new machine.
Which leads me to the one big flaw in all of this, the log files were all
going to the local drives and not the EBS volumes so I've lost them and am
now kicking myself in the ass for it.
Bryan
On Mon, Aug 17, 2009 at 12:41 PM, Tom Lane wrote:
> Bryan Murphy writes:
> > On Mon, Aug 17, 2009 at 12:17 PM, Tom Lane wrote:
> >> Hm, what's your current XID counter? (pg_controldata would give an
> >> approximate answer.) I'm wondering if the xmax
On Mon, Aug 17, 2009 at 12:17 PM, Tom Lane wrote:
> Bryan Murphy writes:
> > Here's the xmin/xmax/ctid for three problematic records:
>
> > prodpublic=# select xmin,xmax,ctid from items_extended where id in
> > ('34537ed90d7546d78f2c172fc8eed687
Could I run pg_resetxlog on a warm spare? Would that give the same result?
Unfortunately, this is our production system and I simply cannot bring it
down at the moment to run pg_resetxlog.
Bryan
On Mon, Aug 17, 2009 at 11:35 AM, Greg Stark wrote:
> On Mon, Aug 17, 2009 at 4:23 PM, Br
On Mon, Aug 17, 2009 at 11:35 AM, Greg Stark wrote:
> On Mon, Aug 17, 2009 at 4:23 PM, Bryan Murphy
> wrote:
> > I've identified 82 bad records. When I try to query for the records,
> > we get the following:
> > ERROR: missing chunk number 0 for toast value 25
st been
deleting the offending records. However, in this particular table, when I
try and delete the records, I get the following error message:
ERROR: attempted to delete invisible tuple
I'm at a loss what to do next.
Thanks,
Bryan
ation as the current server is already overloaded.
Thanks,
Bryan
On Fri, Jun 12, 2009 at 11:08 AM, Bryan Murphy wrote:
> I've read through the PITR documentation many times. I do not see anything
> that sheds light on what I'm doing wrong, and I've restored older backups
> successfully many times in the past few months using this
On Fri, Jun 12, 2009 at 10:48 AM, Alan Hodgson wrote:
> On Friday 12 June 2009, Bryan Murphy wrote:
> > What am I doing wrong? FYI, we're running 8.3.7.
>
> See the documentation on PITR backups for how to do this correctly.
>
I've read through the PITR documentati
base to a point where I was able to run queries against it,
however it was missing data that should have been there. I tried again this
morning with a different snapshot and I've run into the same problem again.
What am I doing wrong? FYI, we're running 8.3.7.
Thanks,
Bryan
or restartpoint to complete, so will be
> significantly faster.
8.4 is already looking like it's going to be a great upgrade for us,
this would be another win.
Thanks,
Bryan
bly
should change anyway, one less thing for the master database to do.
We create file system snapshots of the hot spares, and I periodically
purge the old log files after I've verified that we can bring the most
recent snapshot live.
We've used NFS in the past, but we're currently investigating other
distribution alternatives (primarily londiste and pgpool2). We've
used slony in the past, but find it introduces too much administrative
overhead and is too brittle for our tastes.
Thanks again!
Bryan
nd I'm trying to limp along
as best I can with the legacy database until we can get everything
migrated.
So, to recap, I've raided up the volumes, thrown as much RAM and CPU
at the process as is available and just can't seem to tease any more
performance out.
Thanks,
Bryan
ze = 12GB (15GB total)
checkpoint_segments = 10
checkpoint_completion_target = 0.7
(other checkpoint/bgwriter settings left at default values)
sysctl:
kernel.shmmax = 2684354560
vm.dirty_background_ratio = 1
vm.dirty_ratio = 5
Thanks,
Bryan
as if the apostrophe was not there:
O'Daniel
Oliveira
Oliver
O'Neill
I think the MSSQL output is more correct for listing names
alphabetically. How can I configure or query PGSQL to get the same sort
order?
Thanks,
Bryan
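One way to get punctuation-sensitive ordering (so the O' names group together) is to sort under the C collation, where the apostrophe takes part in byte-wise comparison. A sketch with invented table and column names; the per-expression COLLATE clause is available in PostgreSQL 9.1 and later, while older releases would need the cluster initialized with a C locale:

```sql
SELECT name
FROM people
ORDER BY name COLLATE "C";
```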
ratch which then becomes a log shipped copy of the secondary.
Bryan
hat the problem is.
I think this is a pretty good strategy, but I've been so caught up in
this I may not be seeing the forest through the trees so I thought I'd
ask for a sanity check here.
Thanks,
Bryan
ot looking forward to manually breaking up 20,000 lines
of sql into separate files. :(
Just fishing for ideas.
Thanks,
Bryan
---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
subscribe-nomail comman
m hoping there's a better (simpler) way.
Thanks,
Bryan
I've got a web site (apache/php) with a postgres 8.2.5 database(s).
We're now getting some periods of high load. We have a lot of dynamic
queries so I'm not able to just tune and optimize a few known queries
ahead of time.
Is there a way that I can get a list of all the actual SQL queries
as p
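One way to capture the statements actually being executed is PostgreSQL's statement logging; a sketch of the relevant postgresql.conf settings (the threshold value is only an example):

```
log_min_duration_statement = 250   # log statements taking longer than 250 ms
log_line_prefix = '%t [%p] '       # prepend timestamp and backend pid
```

Setting the threshold to 0 logs every statement, which is useful for a short sampling window but far too noisy to leave on under load.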
nt me to any publication on implementing this basic pattern with postgres
and plpgsql.
Thanks,
-bryan
eachy for this.
I did not know about that option but it sounds like it will get the
job done. This is our last database running 8.1.9, so even if it
doesn't support that, I plan on migrating it to 8.2 soon anyway.
Thanks,
Bryan
e amount of time the table wasn't available, which is why
we didn't use a truncate based strategy.
Thanks,
Bryan
ste link?
http://pgsql.privatepaste.com/5ako244Xe5
Sorry about that. Google tricked me into thinking it would format properly. :)
Bryan
Sorry about the formatting, here's the dump as a text file.
Thanks,
Bryan
On Dec 5, 2007 10:05 AM, Bryan Murphy <[EMAIL PROTECTED]> wrote:
> When we run pg_dump on our database, our web site becomes completely
> unresponsive. I thought pg_dump was runnable while the database
[truncated dstat output: CPU, disk, memory, and network counters]
Thanks,
Bryan
rks just great for me.
>
> Cheers
> joao
That's not my intention at all. My intention is to justify the
validity of each index in our database. Some indexes have snuck in
that I find of questionable value, and I want the data to back up my
intuition.
Anyway, I
Is there a way I can track index usage over a long period of time?
Specifically, I'd like to identify indexes that aren't being regularly
used and drop them.
Bryan
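PostgreSQL's statistics views can answer this directly; a sketch of the kind of query involved (the counters accumulate from the last statistics reset, so they need to be watched over a representative period before dropping anything):

```sql
-- Indexes with the fewest scans are candidates for dropping.
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
ORDER BY idx_scan ASC;
```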
First question... did you create the appropriate indexes on the appropriate
columns for these tables? Foreign keys do not implicitly create indexes in
postgres.
Bryan
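As a sketch of that point, with invented table and column names:

```sql
CREATE TABLE orders (
    id          serial PRIMARY KEY,
    -- the foreign key constraint does NOT create an index on customer_id
    customer_id integer REFERENCES customers (id)
);

-- The referencing column must be indexed explicitly, or joins and
-- cascaded deletes against it will sequential-scan the table.
CREATE INDEX orders_customer_id_idx ON orders (customer_id);
```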
On 7/30/07, Cultural Sublimation <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> I'm fairly new with Postgresql,
I highly recommend you use the Npgsql driver, and if you're feeling really
saucy try NHibernate on top of that.
http://pgfoundry.org/projects/npgsql
http://www.nhibernate.org/
Bryan
On 7/23/07, longlong <[EMAIL PROTECTED]> wrote:
hi,all
i have a local system with windows xp.
i wa
of those areas where
I have yet to find a lot of guidance on the issue.
Bryan
On 3/29/07, Lew <[EMAIL PROTECTED]> wrote:
Bryan Murphy wrote:
> I think the other guys suggestion will work better. ;)
>
> Really, the table was just an example off the top of my head. I believe
> w
ike a financial system where you can't retroactively change the data.
We always want to know who was associated with the original transaction,
even long after their account was deleted.
Thanks for the suggestion though!
Bryan
On 3/29/07, John D. Burger <[EMAIL PROTECTED]> wrote:
On Mar
Thanks! That works great!
Bryan
On 3/29/07, Jonathan Hedstrom <[EMAIL PROTECTED]> wrote:
Bryan Murphy wrote:
> Is it possible to declare a unique constraint in combination with a
> deleted flag?
>
> For example, if I have a table like this:
>
> CREATE TABLE
> (
only checks Key and Value when
Deleted = 0?
Thanks,
Bryan