> test=> delete from cash_journal where ARRAY[click, cash_journal_id] NOT in
> (select max(ARRAY[click, cash_journal_id]) from cash_journal group by
> fairian_id);
> DELETE 7
For what it's worth, we've run into *severe* performance issues using in() if
there are a large number of values in conjunct
On Saturday, December 05, 2015 11:08:05 AM Berend Tober wrote:
> WITH max_click AS (
>     SELECT
>         cash_journal.fairian_id,
>         max(cash_journal.click) AS click
>     FROM cash_journal
>     GROUP BY cash_journal.fairian_id
> )
> delete from cash_journal j
>     using max_click b
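For reference, a complete form of that CTE-based delete might look like the
following (a sketch, assuming the intent is to keep only each fairian_id's
newest click):

    WITH max_click AS (
        SELECT fairian_id, max(click) AS click
        FROM cash_journal
        GROUP BY fairian_id
    )
    DELETE FROM cash_journal j
    USING max_click b
    WHERE j.fairian_id = b.fairian_id
      AND j.click < b.click;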
On Monday, December 14, 2015 05:25:16 PM Adrian Klaver wrote:
> > FOLLOWUP QUESTION: is there a way to ask the query planner what
> > tables/fields were output in a database result?
>
> Just dawned on me, are you asking if EXPLAIN can output more detailed
> information?
Ha ha, in another post, I
On Monday, December 14, 2015 05:20:52 PM Adrian Klaver wrote:
> > FOLLOWUP QUESTION: is there a way to ask the query planner what
> > tables/fields were output in a database result?
>
> I am not following, that would be in the query output would it not? A
> more detailed explanation of what you w
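If the question is how to see which columns each step of a plan emits,
EXPLAIN VERBOSE may be the answer (a sketch; the table names here are
hypothetical):

    EXPLAIN (VERBOSE)
    SELECT c.name, i.total
    FROM customers c
    JOIN invoices i ON i.customers_id = c.id;
    -- every plan node carries an "Output:" line listing the columns
    -- and expressions it produces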
Is there a way to set PG field-level read permissions so that a deny doesn't
cause the query to bomb, but instead returns NULL for the fields for which
permission is denied?
In our web-based app, we have a request to implement granular permissions:
table/field level permissions. EG: userX can't read cu
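One possible approach, sketched with hypothetical names: instead of
column-level GRANTs (which make any SELECT touching the denied column fail
outright), expose a view that nullifies the sensitive column for readers
lacking a given role:

    CREATE VIEW customers_masked AS
    SELECT id,
           name,
           CASE WHEN pg_has_role(current_user, 'sensitive_reader', 'member')
                THEN ssn
                ELSE NULL
           END AS ssn
    FROM customers;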
I have a horribly-performing query similar to the one below, and I'd like to
convert it to use a "WITH mytable AS ( ... )" without having to re-architect my
code. For some reason, using a WITH prefix generally seems to run much faster
than an IN() subquery, even when returning identical results. (runs in 1/4t
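To illustrate the rewrite described (hypothetical names; in this era of
PostgreSQL a CTE is an optimization fence, which is often why the WITH form
plans so differently):

    -- before:
    SELECT * FROM invoices
    WHERE customers_id IN (SELECT id FROM customers WHERE state = 'CA');

    -- after:
    WITH ca_customers AS (
        SELECT id FROM customers WHERE state = 'CA'
    )
    SELECT i.*
    FROM invoices i
    JOIN ca_customers c ON c.id = i.customers_id;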
On Wednesday, September 30, 2015 03:49:44 PM Keith Fiske wrote:
> We've run postgres on ZFS for years with great success (first on
> OpenSolaris, now on OmniOS, and I personally run it on FreeBSD). The
> snapshotting feature makes upgrades on large clusters much less scary
> (snapshot and revert if
On Wednesday, September 30, 2015 09:58:08 PM Tomas Vondra wrote:
> On 09/30/2015 07:33 PM, Benjamin Smith wrote:
> > On Wednesday, September 30, 2015 02:22:31 PM Tomas Vondra wrote:
> >> I think this really depends on the workload - if you have a lot of
> >> random w
On Wednesday, September 30, 2015 02:22:31 PM Tomas Vondra wrote:
> I think this really depends on the workload - if you have a lot of
> random writes, CoW filesystems will perform significantly worse than
> e.g. EXT4 or XFS, even on SSD.
I'd be curious about the information you have that leads you
On Tuesday, September 29, 2015 10:35:28 AM John R Pierce wrote:
> On 9/29/2015 10:01 AM, Benjamin Smith wrote:
> > Does anybody here have any recommendations for using PostgreSQL 9.4
> > (latest) with ZFS?
>
> For databases, I've always used mirrored pools, not raidz*.
Does anybody here have any recommendations for using PostgreSQL 9.4 (latest)
with ZFS?
We've been running both on ZFS/CentOS 6 with excellent results, and are
considering putting the two together. In particular, the CoW nature (and
subsequent fragmentation/thrashing) of ZFS becomes largely irr
On Thursday, November 03, 2011 10:59:37 AM you wrote:
> There's a pretty varied mix of speed, durability, and price with any
> SSD based architecture, but the two that have proven best in our
> testing and production use (for ourselves and our clients) seem to be
> Intel (mostly 320 series iirc), a
On Wednesday, November 02, 2011 01:01:47 PM Yeb Havinga wrote:
> Could you tell a bit more about the sudden death? Does the drive still
> respond to queries for smart attributes?
Just that. It's almost like somebody physically yanked them out of the
machine, after months of 24x7 perfect performa
On Wednesday, November 02, 2011 11:39:25 AM Thomas Strunz wrote:
> I have no idea what you do but just the fact that you bought ssds to
> improve performance means it's rather high load and hence important.
Important enough that we back everything up hourly. Because of this, we
decided to give
Well,
After reading several glowing reviews of the new OCZ Vertex3 SSD last spring,
we did some performance testing in dev on RHEL6. (CentOS)
The results were nothing short of staggering. Complex query results returned
in 1/10th the time as a pessimistic measurement. System loads dropped from
On Sunday, April 17, 2011 01:55:02 AM Henry C. wrote:
> On Thu, April 14, 2011 18:56, Benjamin Smith wrote:
> > After a glowing review at AnandTech (including DB benchmarks!) I decided
> > to spring for an OCZ Vertex 3 Pro 120 for evaluation purposes. It cost
> > about $300
On Friday, April 15, 2011 09:50:57 AM Tom Lane wrote:
> If you simply unpacked the tar archive and started a postmaster on that,
> you'd be pretty much guaranteed to get a corrupt database. The tar
> archive is not a valid snapshot by itself --- you have to replay
> whatever WAL was generated duri
After a glowing review at AnandTech (including DB benchmarks!) I decided to
spring for an OCZ Vertex 3 Pro 120 for evaluation purposes. It cost about $300
with shipping, etc and at this point, won't be putting any
Considering that I sprang for 96 GB of ECC RAM last spring for around $5000,
eve
The speed benefits of SSDs as benchmarked would seem incredible. Can anybody
comment on SSD benefits and problems in real life use?
I maintain some 100 databases on 3 servers, with 32 GB of RAM each and an
extremely rich, complex schema. (300+ normalized tables)
I was wondering if anybody her
Try this:
http://lmgtfy.com/?q=web+hosting+postgresql
On Sunday, March 06, 2011 11:33:01 am Eduardo wrote:
> At 17:24 06/03/2011, you wrote:
> > On 3/5/2011 4:08 PM, matty jones wrote:
> > > I already have a domain name but I am looking for a hosting company
> > > that I can use PG with. The few I
I'd also add: run pg_tune on your server. Made a *dramatic* difference for us.
On Friday, February 25, 2011 05:26:56 am Vick Khera wrote:
> On Thu, Feb 24, 2011 at 6:38 PM, Aleksey Tsalolikhin wrote:
> > In practice, if I pg_dump our 100 GB database, our application, which
> > is half Web f
On Wednesday, August 18, 2010 04:58:08 pm Joshua D. Drake wrote:
> Well if you are just using it for updates to the schema etc... you
> should only need to launch a single connection to each database to make
> those changes.
And that's exactly the problem. On each server, we have at least dozens o
On Wednesday, August 18, 2010 08:40:21 pm Adrian von Bidder wrote:
> Heyho!
>
> On Thursday 19 August 2010 01.32:06 Benjamin Smith wrote:
> > This way we can be sure that either all the databases are in synch, or
> > that we need to rollback the program patch/update.
>
On Thursday, May 13, 2010 11:51:08 pm Brian Modra wrote:
> Maybe the best way to solve this is not to do automatic distribution
> of the data, but rather to provide tools for implementing distributed
> references and joins.
Here's my vote! I'd *LOVE* it if I could do a simple cross-database join
Is there a way to update a number of databases hosted on a single server
without opening a separate psql connection to each database?
We have a cluster of servers hosting an application on Postgres. Right now, we
have dozens of databases per server, enough that we're starting to have
problems
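For what it's worth, one way to script this from a single psql session is the
\connect meta-command (a sketch; the database and file names are hypothetical,
and each \connect still opens a fresh backend connection under the hood):

    \connect customer_db1
    \i schema_patch.sql
    \connect customer_db2
    \i schema_patch.sql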
"A deep unwavering belief is a sure sign that you're missing something."
-- Unknown
I had no intention of sparking an ideological discussion.
I read the article Joe referenced previously - a simple case for using a
normalized database. I
I've used this same concept in subqueries for a very long time. Doing this
allows me to "dive in" and get other values from the joined table, rather than
just the thing that we're getting the most of.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
"I kept looking fo
I have some questions about the best way to best use foreign keys in complex
schemas. It's becoming cumbersome to manage a large set of foreign keys - is
there a better way?
// FOUNDATIONAL //
Let's say that you want to keep addresses, and one of the values that you need
to keep is the state
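To make the foundational example concrete, a minimal sketch of the lookup
table plus foreign key (all names are assumptions):

    CREATE TABLE states (
        code char(2) PRIMARY KEY,
        name varchar NOT NULL
    );
    CREATE TABLE addresses (
        id serial PRIMARY KEY,
        street varchar NOT NULL,
        state char(2) NOT NULL REFERENCES states(code)
    );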
On Wednesday 19 September 2007, Bjørn T Johansen wrote:
> It's a Dell server with the following spec:
>
> PE2950 Quad-Core Xeon E5335 2.0GHz, dual
> 4GB 667MHz memory
> 3 x 73GB SAS 15000 rpm disk
> PERC 5/I Integrated controller card (8 ports, 256MB cache, battery backup)
> x 6 backplane
Asking
On Wednesday 19 September 2007, Chester wrote:
> Hi
>
> I have a question regarding foreign keys, I just cannot get it to create
> them for me. I must be doing something wrong but I have no idea what
> that might be :)
>
> I have a table "clients"
>
> clientID (primary)
> ticode
> Firstname
I'm using 8.1 RPMs for CentOS and so far, it's been great.
Now, I'm going to enable SSL. I had no trouble with the instructions in the
documentation for server-only certificates, and verified that psql (Linux)
acknowledges the SSL connection.
But I am stumped as to how to create a client cert
On Saturday 10 March 2007, Christian Schröder wrote:
> Let's assume that the values in this table are some limits that are
> given for different data (identified by the id). Some of the limits are
> only valid after a given date, whereas other limits are valid all the
> time. How would you put this
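One way the quoted requirement is often modeled (a sketch; names are
assumptions): a nullable valid_from where NULL means "valid all the time", and
the newest applicable row wins:

    SELECT *
    FROM limits
    WHERE id = 42
      AND (valid_from IS NULL OR valid_from <= current_date)
    ORDER BY valid_from IS NULL, valid_from DESC
    LIMIT 1;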
I've purchased a number of systems (a dozen or so) from avadirect.
http://www.avadirect.com
Their prices are excellent, hardware is solid quality, their service is
median. This is a discount shop, so don't expect lightning support. But you
can buy three fast AVA systems of top-notch quality f
On Tuesday 23 January 2007 13:55, Carlos wrote:
> What would be the faster way to convert a 7.4.x database into an 8.x
> database? A dump of the database takes over 20 hours so we want to convert
> the database without having to do a dump and restore.
You've probably already accounted for this,
On Thursday 25 January 2007 09:53, Douglas McNaught wrote:
> Nature of the beast. Sequence increments aren't rolled back on
> transaction abort (for performance and concurrency reasons), so you
> should expect gaps.
Behavior long ago noted and accounted for. But I've always wondered why this
was
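A quick demonstration of the quoted behavior (a sketch):

    CREATE TABLE t (id serial PRIMARY KEY, v text);
    BEGIN;
    INSERT INTO t (v) VALUES ('rolled back');  -- consumes id 1
    ROLLBACK;
    INSERT INTO t (v) VALUES ('kept');  -- gets id 2, leaving a gap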
Andreas,
Would you mind explaining what you mean by "localized object names" and why it
might be bad? Or where I might go to learn more?
Thanks,
-Ben
On Tuesday 23 January 2007 07:38, Tino Wildenhain wrote:
> A. Kretschmer wrote:
> > On Tue, 23.01.2007, at 10:12:13 -0500, Brand
On Thursday 21 December 2006 14:41, Joshua D. Drake wrote:
> You should read up on schemas and how they work. Plus the
> addition of schemas and table spaces means you can have infinite scaling
> within the confines of your hardware itself.
Ok, so I'd like you to correct me if I'm wrong:
1) Schemas op
On Thursday 21 December 2006 11:47, Ron Johnson wrote:
> This gives you linear growth potential, since if your current box
> gets over-utilized, buy a 2nd box and move some of the databases to it.
So far, I'm inclined to go this way, due to the option for linear scaling.
> >> 2) Copy out the dat
I'm breaking up a database into several sets of data with similar layout. (we
currently have multiple customers using a single database and tableset, we're
splitting it out to give us more "wiggle room")
It seems that there are basically two ways to proceed:
1) Copy out the data specific to a
I have a situation that can be summarized to the following:
-- day in 20061215 format
Create table calendar (
    day integer unique not null
);
Create table customers (
    id serial unique not null,
    name varchar,
    address varchar
);
Create table d
On Wednesday 20 September 2006 18:59, Brian Maguire wrote:
> I just wanted to let you know how impressed and pleased I have been with
> postgres over the past 5 years. The timeliness and quality of the releases
> are always robust and stable. Every release has a very nice mix of admin,
> perf
On Monday 11 September 2006 11:30, stig erikson wrote:
> Hi.
> We are looking to open a small web shop. I looked around to see if there are
> any open source web shops.
> Can anyone recommend any web shop system (free or non-free)?
I'd guess you're looking for OSCommerce. (Sucks, but less so than
We have a growing ASP-hosted application built on PHP/Postgres 8.1, and are
getting requests from clients to manipulate the databases more directly.
However, the structure of our databases prevents this from happening readily.
Assume I have two tables configured thusly:
create table customers
I have a customer table (very important) and have numerous fields in other
tables FK'd to the serial id of the customer table.
There's an option to delete a customer record, but it has to fail if any
records are linked to it (eg: invoices) in order to prevent the books from
getting scrambled.
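For reference, the default foreign-key action (NO ACTION) produces exactly
this fail-on-delete behavior (a sketch; names are assumptions):

    CREATE TABLE customers (id serial PRIMARY KEY, name varchar NOT NULL);
    CREATE TABLE invoices (
        id serial PRIMARY KEY,
        customers_id integer NOT NULL REFERENCES customers(id)
    );
    -- DELETE FROM customers WHERE id = 1;
    -- fails with a foreign-key violation if any invoice references it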
I'm stumped on this one...
I have a table defined thusly:
create table items (
    id serial,
    category integer not null references category(id),
    name varchar not null,
    price real,
    unique(category, name)
);
It has a LARGE number of entries. I'd like to grab the 10 most expensive items
from ea
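On PostgreSQL 8.4 and later, a window function answers "top 10 per category"
directly (a sketch against the table above):

    SELECT id, category, name, price
    FROM (SELECT items.*,
                 row_number() OVER (PARTITION BY category
                                    ORDER BY price DESC) AS rn
          FROM items) ranked
    WHERE rn <= 10;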
On Wednesday 22 March 2006 03:06, Jimbo1 wrote:
> Hello there,
>
> I'm a freelance Oracle Developer by trade (can almost hear the boos now
> ;o)), and am looking into developing my own Snowboarding-related
> website over the next few years. Anyway, I'm making some decisions now
> about the site ar
How can I update a range of constrained values in order, without having to
resubmit a query for every single possibility?
I'm trying to create a customer-specific sequence number, so that, for each
customer, the number starts at one, and continues, 1, 2, 3, 4, 5... etc. with
no values skipped.
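One common gapless-counter approach (a sketch; the last_seq column is an
assumption): keep a per-customer counter and bump it in the same transaction
as the insert, letting the row lock serialize concurrent writers:

    UPDATE customers
    SET last_seq = last_seq + 1
    WHERE id = 42;
    -- then read back customers.last_seq (on 8.2+, append RETURNING last_seq)
    -- and use it as the per-customer sequence number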
// FIXED //
Tom, thank you so much for your help! Now running 8.1.2, the query works
quickly and properly.
-Ben
On Wednesday 25 January 2006 13:17, Benjamin Smith wrote:
> Version: postgresql-8.1.0-4.c4
>
> I'll have to see about getting an update...
>
> Than
Version: postgresql-8.1.0-4.c4
I'll have to see about getting an update...
Thanks a TON,
-Ben
On Wednesday 25 January 2006 13:11, you wrote:
> Benjamin Smith <[EMAIL PROTECTED]> writes:
> > Aha, yep. Sorry:
> > Program received signal SIGSEGV, Segmentation fault.
# locale for monetary formatting
lc_numeric = 'en_US.UTF-8' # locale for number formatting
lc_time = 'en_US.UTF-8' # locale for time formatting
add_missing_from = on
-Ben
On Wednesday 25 January 2006 11:
to get this to fail on another system that's not in
production use, though it's a uniprocessor P4.
-Ben
On Wednesday 25 January 2006 07:52, you wrote:
> Benjamin Smith <[EMAIL PROTECTED]> writes:
> > What's the best way to do this? Take PG down (normally starte
Thanks,
What's the best way to do this? Take PG down (normally started as a service)
and run directly in a single-user mode?
I've never really worked with gdb...
-Ben
On Tuesday 24 January 2006 17:27, you wrote:
> > What information do you need to help figure this out?
>
> Reproduce it
I'm running PostgreSQL 8.1 on CentOS 4.2, Dual proc Athlon 64 w/4 GB RAM.
I'm trying to get a PHP app to work, but the failure happens when the command
is copy/pasted into psql. Trying to run a large insert statement, and I get:
server closed the connection unexpectedly
This probably
that, and then using a regex to
rename the table in the output... (eg
s/TABLE\s+TABLEaBcDeFgH_U/TABLE customers/
Ugh. I was hoping there was a cleaner way...
-Ben
On Wednesday 04 January 2006 23:35, you wrote:
> On Wed, 4 Jan 2006 21:00:25 -0800, Benjamin Smith <[EMAIL PROTECTED]>
Is there a way to put a limit on pg_dump, so that it doesn't dump ALL data,
but that matching a particular query?
Something like:
pg_dump -da --attribute-inserts -t "customers" \
--matching-query="select * from customers where id=11";
I'd like to selectively dump information from a query, bu
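pg_dump has no such flag, but on PostgreSQL 8.2 and later COPY accepts an
arbitrary query, which gets close (a sketch):

    COPY (SELECT * FROM customers WHERE id = 11) TO STDOUT;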
I'm using CentOS 4.2 on a dual-opteron, but I'd guess it'd probably work for
IA64...
Try putting this somewhere in your /etc/yum.repos.d/...
#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
baseurl=http://mirror.centos.org/centos/$
ought more or less unimportant!
// tries to put jaw back into mouth //
-Ben
On Monday 12 December 2005 16:11, you wrote:
> Benjamin Smith <[EMAIL PROTECTED]> writes:
> > The example that I gave was a small one to illustrate my understanding of
> > multiple foreign keys,
Dual proc Opteron 2.0 Ghz,
4 GB ECC RAM.
10k SCSI drives, software RAID 1
Centos 4.2 (Redhat ES clone)
PostgreSQL 8.1, 64 bit, loaded with RPMs from the PG website for Redhat ES.
max_connections 64
shared_buffers 25
temp_buffers 1
max_prepared_transactions = 0
work_mem 1024
maintenan
I have a small company growing fast, selling a product based largely on
Postgres. We have a rapidly growing database with (currently) 117 tables.
Our primary server, an aging system based around an IDE, 2.4 Ghz P4, is being
replaced by a new, 10k SCSI, dual-proc AMD opteron/64, with 4 GB of ECC
Currently running Postgres 7.3.9 32bit on Fedora Core 1, and upgrading to PG
8.1 64bit on Centos 4.
When I load the file,
psql -U dbname < dbname.sql
I get this error:
ERROR: invalid UTF-8 byte sequence detected near byte 0x96
when inserting fields that seem to contain HTML. What could be c
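For what it's worth, byte 0x96 is the WIN1252 en dash, which suggests the 7.3
dump isn't really UTF-8. A common remedy is to declare the dump's actual
encoding before loading it (a sketch; WIN1252 is an assumption about the
source data):

    SET client_encoding = 'WIN1252';
    -- or convert the file beforehand (e.g. with iconv) and reload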
Prepared statements are the way to go.
I developed my own prepared statements methodology (I called it "SafeQuery")
some time back before discovering that others had done it. It's so nice,
since I've not worried about SQL injection for YEARS.
Sample of my API:
$_REQUEST['username'],
'pass
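Server-side prepared statements express the same idea natively (a sketch; the
table and column names are hypothetical):

    PREPARE find_user (varchar) AS
        SELECT * FROM users WHERE username = $1;
    EXECUTE find_user('benjamin');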
I'm running 7.3, and considering the upgrade to 8.1 to make use of multiple
indexes. Where are the upgrade notes from 7.3->7.4 and from 7.4->8.x, so that
I can see what impact this would have on my app? I can't seem to find them...
Thanks
-Ben
--
"The best way to predict the future is to inv
Wow.
Does it really produce the expected (probably dramatic, in my case!)
improvement in performance? (I'll be trying it out anyway..., but I'd love
your feedback)
How stable is it? Looks like I have a PG upgrade in my near future...
-Ben
On Friday 28 October 2005 14:51, you wrote:
> > 3)
I have a rapidly growing database with a very complex schema, and I'm looking
to improve performance. It's typical to have 2-4 foreign keys in each table,
and there are currently 113 tables, and queries over 5-10 tables with
combined inner/outer joins are pretty typical. (I avoid subqueries anyw
I ran into something like this once, where a complex update occurred inside a
transaction. When the update happened, I saw what you describe - the DB hung,
and the load average shot out thru the roof until I restarted the PG daemon.
The query otherwise worked fine, but only failed with this spec
I'm sure I've done this before, but for some reason, my main noodle is drawing
a blank. Assume we have three tables defined thusly:
create table classrooms (
    id serial unique not null,
    name varchar
);
create table seats (
    classrooms_id integer not null references classrooms(id),
    position var
I wrote a rule a while back that, due to the software being extended, now
needs to be deleted.
How do I drop a rule?
DELETE FROM pg_rules WHERE rulename='foo';
doesn't seem to cut it...
-Ben
--
"The best way to predict the future is to invent it."
- XEROX PARC slogan, circa 1978
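For reference, rules are dropped with DDL rather than by deleting from the
catalogs (the rule and table names below are placeholders):

    DROP RULE foo ON mytable;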
Based on the extensive feedback here, as well as other information from other
websites found since asking here, I've decided that I'm still, very happily,
a PG user.
No significant issues to date - PG has "just worked" for me for 5 years now,
and the frustrating limitations (EG: alter table dr
As a long-time user of Postgres, (First started using it at 7.0) I'm reading
recently that Firebird has been taking off as a database.
Perhaps this is not the best place to ask this, but is there any compelling
advantage to using Firebird over Postgres? We have a large database (almost
100 tab
Ok, I have a stupid-simple table:
create table files (
    id serial unique not null,
    mime varchar not null,
    name varchar not null
);
Idea being that I can save values from a file upload into a table, and use them
throughout my application. This gives me a central repository to look for
files of
Is it possible to have the equivalent of a serial data type in a table,
sub-categorized?
Assume the following:
create table categories (id serial, title varchar);
Now, I want to create an entries table, and by default, count serially by
category, so that category 1 has entries.sequence of 1
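One sketch of such a per-category counter (names beyond the quoted DDL are
assumptions): compute max+1 at insert time, serialized by locking the parent
category row:

    BEGIN;
    SELECT 1 FROM categories WHERE id = 7 FOR UPDATE;
    INSERT INTO entries (category_id, sequence, title)
    SELECT 7, coalesce(max(sequence), 0) + 1, 'next entry'
    FROM entries
    WHERE category_id = 7;
    COMMIT;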
Given the tables defined below, what's the easiest way to check for schedule
conflicts?
So far, the only way I've come up with is to create a huge, multi-dimensional
array in PHP, with a data element for every minute of all time taken up by
all events, and then check for any of these minutes t
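SQL can detect conflicts directly, without materializing every minute (a
sketch; the events table and its columns are assumptions): two bookings
conflict exactly when each starts before the other ends.

    SELECT a.id, b.id
    FROM events a
    JOIN events b ON a.id < b.id
                 AND a.starts < b.ends
                 AND b.starts < a.ends;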
Has anybody had any experience using PG with OpenSSI (Single System Image) for
a high-availability cluster?
http://openssi.org/cgi-bin/view?page=openssi.html
Is this feasible? Possible? Easier/harder than other PG clustering solutions?
-Ben
--
"The best way to predict the future is to inven
Is there a way in PG 7.3, given a field, to find out what other tables &
records are linked to it via a foreign key? EG:
create table cities (id serial primary key,
title varchar not null);
insert into cities(title) values ('San Fransisco');
insert into cities(title) values ('Los Angeles');
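The system catalogs can answer this (a sketch; pg_constraint first appeared in
7.3, so this should work there):

    SELECT conrelid::regclass AS referencing_table, conname
    FROM pg_constraint
    WHERE contype = 'f'
      AND confrelid = 'cities'::regclass;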
In one of my apps, I have an "images manager" that's become unmanageable. I'd
like to be able to easily track key dependencies.
Let's say I have a set of tables like this:
create table Customers (
id serial unique not null primary key,
name varchar not null,
address varchar not null,
image i
Thanks much for your help!
It took a few tries to get what it was all about, but I got it.
On Thursday 16 December 2004 14:09, Michael Fuhr wrote:
> > Dates are kept as 'YYYYMMDD', eg 20041114 for Nov 14, 2004.
>
> Why not use a DATE type? You can reformat it with to_char() if
> need be.
Not
I have a list of students, and a list of enrollment records, and I'm trying to
get a list of students and their most recent enrollment/disenrollment dates.
create table students (id serial primary key, name varchar);
create table enrollments (
students_id integer not null references stud
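DISTINCT ON is the idiomatic answer here (a sketch; the enrollment-date
columns are assumptions, since the quoted DDL is cut off):

    SELECT DISTINCT ON (s.id) s.id, s.name, e.enrolled, e.disenrolled
    FROM students s
    JOIN enrollments e ON e.students_id = s.id
    ORDER BY s.id, e.enrolled DESC;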