> if inputmonth1 = 12 then
> inputmonth2 = 1;
> else
> inputmonth2 = inputmonth1 + 1;
> end if;
>
> resultdate = (inputyear2)::text || '-' || (inputmonth2)::text || '-' ||
> '01';
> resultdate = to_date(resultdate::text,'yyyy-MM-DD');
RETURNS date AS
> $BODY$
> BEGIN
> RETURN date_trunc('month', inputdate + interval '1 month');
> END;
> $BODY$
> LANGUAGE 'plpgsql' IMMUTABLE;
>
> And with that I wonder why you'd even need a function :)
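For example, with an arbitrary input date (a quick illustration):

SELECT date_trunc('month', DATE '2008-12-15' + interval '1 month');
-- 2009-01-01 00:00:00 (a timestamp; the RETURNS date above casts it down)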
Because it's clear w
ware.
If the business need is to store X gigabytes with no regard for how
old the data is, then you need to adjust your data storage methods
to work with that. Create a table to store the size of each LO, and
run a regular maintenance job that purges old data when the used
size gets too big.
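A minimal sketch of that idea (table and column names here are invented):

CREATE TABLE lo_sizes (
    loid    oid PRIMARY KEY,                   -- OID of the large object
    bytes   bigint NOT NULL,                   -- size recorded when stored
    created timestamptz NOT NULL DEFAULT now()
);

-- how much is stored right now:
SELECT sum(bytes) FROM lo_sizes;

-- maintenance job, when over the cap: unlink the 100 oldest objects,
-- then drop their bookkeeping rows
SELECT lo_unlink(loid) FROM lo_sizes ORDER BY created LIMIT 100;
DELETE FROM lo_sizes WHERE loid IN
    (SELECT loid FROM lo_sizes ORDER BY created LIMIT 100);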
--
ou need to trigger the process. It's probably going to take some
experimentation and babysitting on your part to get it right.
Were it me, I'd just add some hard drives to get the system up to about
1T of disk space. If you can't get that budget, you'll have to be a
er than install uuid-ossp.
Anything else is going to be a hack, and uuid-ossp was created specifically
to address this requirement.
Unless, of course, I've misunderstood your question.
--
Bill Moran
http://www.potentialtech.com
$ SELECT CONVERT('6 hours 17 minutes'::INTERVAL AS hour);
hour
--
6.2833
Am I approaching this problem wrong? Or is there something out there
and my Google skills are lacking?
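For the archives: the standard workaround is to go through epoch seconds,
e.g.:

SELECT EXTRACT(EPOCH FROM '6 hours 17 minutes'::interval) / 3600 AS hour;
-- 6.2833...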
--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~w
In response to Tom Lane <[EMAIL PROTECTED]>:
> Bill Moran <[EMAIL PROTECTED]> writes:
> > There seems to be a lack of useful functions for converting intervals
> > to useful representations. For example, I want to display an interval
> > in hours and fract
8.3/interactive/high-availability.html
Cheers, Bill
On Tue, Mar 25, 2008 at 2:24 PM, Thomas Kellerer <[EMAIL PROTECTED]> wrote:
> Bill Wordsworth wrote on 25.03.2008 19:16:
> > When traffic goes up, my webserver creates multiple instances of
> > postgresql.exe. At some basic level, aren't they similar to Oracle's
Given:
CREATE ROLE joe WITH LOGIN;
CREATE ROLE dumpable;
ALTER GROUP dumpable ADD USER joe;
If I have a database called db1 to which the role dumpable has enough
permissions to do a full pg_dump, but the user joe does not, how can
joe do a pg_dump? Is it possible?
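The underlying mechanism is SET ROLE; a member of a group role can take on
its privileges (a sketch below, and note that pg_dump on 8.4 and later can
do this itself via its --role option):

-- connected as joe:
SET ROLE dumpable;  -- joe now operates with dumpable's permissions
-- 8.4+: pg_dump --role=dumpable -U joe db1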
--
Bill Moran
Collaborative
In response to Bill Moran <[EMAIL PROTECTED]>:
>
> Given:
>
> CREATE ROLE joe WITH LOGIN;
> CREATE ROLE dumpable;
> ALTER GROUP dumpable ADD USER joe;
>
> If I have a database called db1 to which the role dumpable has enough
> permissions to do a full pg_dump,
t unless you have an explicit ordering
clause, there's no guarantee what order rows will be accessed in.
Try putting an explicit ORDER BY in the queries and see if the problem
goes away.
--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
[EMAIL PROTECTED
Please don't top-post. I've attempted to reconstruct the conversation
flow.
In response to "antiochus antiochus" <[EMAIL PROTECTED]>:
>
> On Thu, May 22, 2008 at 2:57 PM, Bill Moran <[EMAIL PROTECTED]>
> wrote:
>
> > In response to "antio
In response to "antiochus antiochus" <[EMAIL PROTECTED]>:
> On Thu, May 22, 2008 at 4:20 PM, Bill Moran <[EMAIL PROTECTED]>
> wrote:
> >
> > In response to "antiochus antiochus" <[EMAIL PROTECTED]>:
> > >
> > > On Thu
A resource is a special kind of PHP variable: it is not a string, it is
a resource, with a resource ID. It points to internal PHP data structures
that do special things (in this case, point to a PG connection).
You're using the correct commands, but the data you're passing them doesn't
appear to be
> various places (everywhere) on all platforms (even MS[TM])? You know. a
> UNIVERSAL id?
Just give each separate system its own unique identifier and a sequence
to append to it.
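A sketch of that scheme (the 'A' tag and the names are invented for
illustration):

CREATE SEQUENCE global_id_seq;
-- this system hard-codes tag 'A'; other systems use 'B', 'C', ...
SELECT 'A-' || nextval('global_id_seq') AS universal_id;
-- yields 'A-1', 'A-2', ...; system B yields 'B-1', 'B-2', so no collisions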
--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
[EMAIL PROTECTED]
Phon
ble that's bloating, a VACUUM FULL or CLUSTER
of that table alone on a regular schedule might take care of things.
If your data is of a FIFO nature, you could benefit from the old trick
of having two tables and switching between them on a schedule in order
to truncate the one with stale data in i
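The trick looks roughly like this (names invented; how writes get
redirected, via application logic or rules, is up to you):

-- week 1: log_a is live, log_b holds last week's stale rows
TRUNCATE log_b;                        -- instant, and the space comes back at once
CREATE OR REPLACE VIEW log_current AS  -- readers see only live data
    SELECT * FROM log_b;
-- new writes now target log_b; next week, do the same with log_a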
hod to generate unique
table names and store the names in the HTTP session. Create some sort
of garbage collection routines that removes tables when they're no longer
needed.
The details of exactly how you pull this off are going to depend heavily
on the rest of your application architecture.
--
Bill
In response to Tim Tassonis <[EMAIL PROTECTED]>:
>
> Bill Moran wrote:
> > In response to Tim Tassonis <[EMAIL PROTECTED]>:
> >
> >>
> >> Now, with apache/php in a mpm environment, I have no guarantee that a
> >> user will get the same p
by design.
Is the problem just that it's overwhelming the logs? If so, your
best bet is probably to reduce the amount of logging that occurs.
--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
[EMAIL PROTECTED]
Phone: 412-422-3463x4023
ate?
It seems to me that it's your coworker who needs the disambiguation.
Based on the argument you describe, he doesn't seem to understand the
difference between UPDATE and INSERT.
--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
[EMAIL PROTECTED]
Phone:
is pretty slim.
> Oh, and the problem has been intermittent. Another
> thing that happened this morning is that Postgres had today as 18/06/2008
> when in fact it was 19/06/2008 and the OS reported this correctly. Restarting
> postgres sorted it, could this be the problem?
Sounds t
I've got to load some large fixed-length ASCII records into PG and I was
wondering how this is done. The Copy command looks like it works only
with delimited files, and I would hate to have to convert these files to
INSERT-type SQL to run them through psql.. Is there a way one can
specify a tab
can then use roles to set permissions, use search_path to determine
what users see by default, and schema-qualify when needed.
If you can't migrate your setup to use schemas, then I expect anything
else you do will feel sub-optimal, as PostgreSQL is designed to use
schemas for this sort of th
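A bare-bones version of that setup (names invented):

CREATE SCHEMA customer_a;
CREATE ROLE customer_a_role;
GRANT USAGE ON SCHEMA customer_a TO customer_a_role;
-- members of the role see their own schema first:
ALTER ROLE customer_a_role SET search_path = customer_a, public;
-- shared objects can still be reached as public.whatever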
erage_yield / 100 ;') and it also failed
in the same manner in about the same time (~10 minutes).
Does anyone know what happened and how I can fix it?
- Bill Thoen
Patrick TJ McPhee wrote:
In article <[EMAIL PROTECTED]>, Bill Thoen <[EMAIL PROTECTED]> wrote:
% I've got to load some large fixed-length ASCII records into PG and I was
% wondering how this is done. The Copy command looks like it works only
% with delimited files, and I woul
he new postgreSQL installation to take the dump. In simpler words, use
> 8.3.x pg_dump to make a dump of a running instance of postgreSQL 7.x for
> later restore on postgreSQL 8.3.x.
Also, you can probably use CLUSTER to get the DB size down to something
manageable. If you CLUSTER one table
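For example, assuming a table t clustered on its primary key index:

CLUSTER t USING t_pkey;   -- 8.3+ syntax; rewrites t in index order
-- older releases: CLUSTER t_pkey ON t;
-- like VACUUM FULL it takes an exclusive lock, but the result is compact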
n the
event occurred. If it was not enabled, then you may want to look at
whether the GUI you're using logs actions like that, but that will
depend on what GUI you're using.
Since your question isn't a bug, I've redirected the thread to the
pgsql-general@ mailing list.
If you haven't installed anything else recently or changed any other drivers
(and you've tried the same w/your AV turned off), I'd strongly suspect a
hardware error. Run a CHKDSK to check the system drive volume and a RAM test to
rule out bad RAM (bad RAM would be the f
forum to ask this sort of question, I'd
appreciate being pointed to a more appropriate one.
TIA,
- Bill Thoen
Thanks for the tip on OFFSET. That's just what I needed. It's so easy when
you know the command you're looking for, and so hard when you know what
you want to do but don't know what the command is called!
Thanks,
- Bill Thoen
ut this DB (lots of tables, for instance)?
> Can you try strace and/or gdb to figure out what the collector is doing?
Just in case you're not a FreeBSD expert, it's ktrace on FreeBSD. strace
is the Linux equivalent.
--
Bill Moran <[EMAIL PROTECTED]>
use the -f option to specify another file)
Use the kdump utility to convert the ktrace.out file to something usable.
Something like "kdump > ktrace.txt" will probably get you what you want,
assuming your ktrace file is ktrace.out.
--
Bill Moran <[EMAIL PROTECTED]>
"Marcelo Giovane" wrote:
>
> Please, remove me from the list!
Look in the message headers:
List-Unsubscribe: <mailto:majord...@postgresql.org?body=unsub%20pgsql-general>
--
Bill Moran
http://www.potentialtech.com
what you're describing and it will work well. I
am curious as to why you'd want to, though. What problem are you trying
to solve by doing this? I don't see it being worth the extra complexity
and size you've added to the schema.
--
Bill Moran
http://www.potentialtec
g
PostgreSQL to look for the data directory in the new location, or create
a symlink.
--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/
I
can't directly advise you there.
And I laughed when you asserted "I have enough RAM" ... If I had a dollar
for everyone who said something like that and was wrong, I'd buy an island
in the Pacific ...
--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefu
e there
are occasionally exceptions.
If you're updating to a major release (8.2.x -> 8.3.x), then yes.
--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/
.
Basically, you can set up the new database server (using a different port
or whatever), and install/configure Slony. Slony will then keep your
two databases in sync. Then you can switch over to the new database
whenever suits you:
http://www.slony.info
--
Bill Moran
http:
In response to Grzegorz Jaśkiewicz :
> On Wed, Jun 3, 2009 at 8:14 PM, Bill Moran wrote:
> > In response to "Carlos Oliva" :
> >
> >> Woudl it be possible to keep the current postgresql version running in a
> >> different port, install a new version of
do this. Doing a select count(*) on a table
with 750,000 rows produces no write activity.
--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/
That's exactly what Slony is for.
--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/
In response to "Tim Bruce - Postgres" :
> >
> > Tim, yes, I am using the tool "ProcessExplorer" from the windows site. It
> shows all the activity but I can't see where those writes are being done
> > with that tool. Any ideas?
FileMon
In response to Jennifer Trey :
> Bill, did you see my last message on the mailing list? I have tracked down
> the file. Is this some statistics file? Could this be a bug caused by auto
> vacuum being on?
I didn't see any message saying which file was getting all the activity.
Sorry
if this sounds offensive, but this thread has shown a pattern with you
of chasing things without doing proper research first, and making
assumptions about what's causing the problem, without even knowing what
the problem is.
--
Bill Moran
http://www.potentialtech.com
http://people.coll
e database on a regular schedule. Figuring out how
often to vacuum is an art in itself (which is why autovac was written).
You do _need_ to run vacuums periodically, so don't do one without the
other.
--
Bill Moran
http://www.potentialtech.com
backup
without having to stop the server and a few other cool perks) you back up
the entire database or nothing.
Hope this helps.
--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/
from that table, then it's going to be the same table every
time you pg_dump it.
If you're talking about doing a filesystem-level backup, then I wouldn't
assume anything. Depending on various maintenance schedules, a vacuum
or reindex could change the files around (although the dat
In response to Jennifer Trey :
> Bill, you wrote earlier :
>
> "
> Additionally, this convinces me further that you're chasing the wrong
> problem. The stats collector writes tiny bits of information to disk
> every time you execute a command. If your system is slo
date_trunc( 'week', now() - interval '1 week' )
AND
date_trunc( 'week', now() - interval '1 week' )
+ interval '1 week' - interval '1 second'
Is there a better approach?
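One refinement: a half-open range avoids the "- interval '1 second'" trick,
so boundary timestamps with sub-second precision can't slip through
(assuming the column is called ts):

WHERE ts >= date_trunc('week', now() - interval '1 week')
  AND ts <  date_trunc('week', now())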
--
Bill Moseley
mose...@hank.org
Sent from my i
}
> results.close();
>
> connection.close();
> }
> catch (Exception cnfe)
> {
> cnfe.printStackTrace();
> }
> }
>
In response to Vyacheslav Kalinin :
> On Mon, Jun 8, 2009 at 8:33 PM, Bill Moran wrote:
>
> >
> > Perhaps you want to take an exclusive lock on the table? The operation
> > you describe seems to suggest that you'd want to guarantee exclusive
> > write access
ly don't have enough. We do some huge transactions over Slony
(although not into the millions per transaction) but we have enough
free RAM, free disk space, and free CPU cycles to clean up after it so
it's not hurting us.
--
Bill Moran
http://www.potentialtech.com
http://people.co
wouldn't be a need to have other settings, now would there)
And I'll reiterate something that was said on this thread earlier ... it's
likely that autovacuum isn't going to be enough for your usage pattern.
Have you posted the output of VACUUM VERBOSE ye
insights.
The pg_stat_activity table holds 1 row for each connection with information
on what that connection is doing.
It wouldn't be very difficult to write a passthrough script for something
like MRTG to graph this data.
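For example, a polling script could run something as simple as:

SELECT count(*) FROM pg_stat_activity;   -- total connections
SELECT count(*) FROM pg_stat_activity
 WHERE datname = 'mydb';                 -- per-database ('mydb' is a placeholder)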
--
Bill Moran
http://www.potentialtech.com
http://people.collaborat
In response to Pedro Doria Meunier :
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Thank you Bill for your tip.
>
> As far as the table's name is concerned the only one I can find is
> 'pg_statistic' (under pg_catalog). I'm using PGSQL 8.2.9
try to simulate a vacuum full for testing, or are you
complaining about the side effects of vacuum full?
Quite honestly, I can't figure out what your question is or what you're
trying to do.
--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/
--
everything before that)
http://www.postgresql.org/docs/8.3/static/largeobjects.html
--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/
imary key 11432
> (END)
I'm not sure that this problem is related to the other problems. It looks
like your database has become corrupt or your configuration for liferay
is corrupt. You'll probably get better assistance if you take this
particular question to the
# of processes, you may
want to look at other parameters in postgresql.conf related to memory
usage. I'm not familiar with the use of PostgreSQL on Windows, so I can't
offer much advice there.
--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran
fine. It was a 5G database and half of the servers
were replicating across the country.
We're looking at the upgrade to Slony 2 as a separate step.
--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/
ngs must
be different.
Without knowing what problem you're trying to solve, I can't recommend
one or the other, but hopefully the previous paragraphs will help.
--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/
In response to Jack W :
> On Thu, Jun 25, 2009 at 11:37 AM, Bill Moran wrote:
>
> > In response to Jack W :
> >
> > > I will create several databases on PostGreSQL. All the databases have the
> > > same structure: same number of table/index.
> > > I
"help" for help.
>
> postgres=#
>
> You can also do :
>
> b...@ben-desktop:~$ psql -hlocalhost -Upostgres
> psql (8.4.0)
> Type "help" for help.
>
> postgres=#
>
> Note no password prompt either time!
Does user ben have a .pgpass file?
--
B
Your primary key can span multiple columns, e.g.
PRIMARY KEY(jobclock_id, employee_id, machine_id)
Could be more columns.
Keep in mind that this ensures that the combination of all those
columns is unique, which may or may not be what you want.
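Spelled out (a made-up table; punched_at is just an example payload column):

CREATE TABLE jobclock (
    jobclock_id integer,
    employee_id integer,
    machine_id  integer,
    punched_at  timestamptz,
    PRIMARY KEY (jobclock_id, employee_id, machine_id)
);
-- duplicate (jobclock_id, employee_id, machine_id) triples are rejected,
-- but duplicates within any single column alone are still allowed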
--
Bill Moran
http://www.potentialtech.com
http://pe
t items.
What's the output of EXPLAIN ANALYZE SELECT DISTINCT field FROM table;?
Does a VACUUM ANALYZE of the table help? Is the query significantly
faster the second time you run it?
> Is this a well known issue?
Not that I'm aware of.
--
Bill Moran
http://www.potentialtech.com
ht
ur usage pattern will dictate that.
> A connection told me it would be better to enable the autovacuum, because it
> does more than the above script. Can anyone verify that?
Autovacuum is smarter -- it won't vacuum tables that don't need it, whereas
the above script vacuums everythi
ache. Using said cache, you can
configure the controller to lie about fsyncs, which makes them essentially
free from PostgreSQL's standpoint. Since the cache is backed by a
battery, your concerns about data loss in the event of power failure are
much less. The cache doesn't usually incre
> The client program that receives this result reports that there are
> no rows returned. So where did they go?"
What happens between the INSERT and the SELECT? Are there DELETE,
TRUNCATE, or ROLLBACK statements?
Also, look for a BEGIN statement that is never COMMITed. If the c
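The classic shape of the problem, sketched against a throwaway table t:

BEGIN;
INSERT INTO t (id) VALUES (1);
-- no COMMIT yet: SELECT * FROM t in any other session returns no rows,
-- and if this connection drops here, the INSERT is rolled back silently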
Close the connection and reopen it. There's no equivalent.
--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/
You're going to have to show us your code,
or at least a pared-down version that exhibits the problem.
[I'm stripping off the performance list, as this doesn't seem like a
performance question.]
--
Bill Moran
http://www.potentialtech.com
Yet another arbitrary query filtering technique. I mean, logging only
the most time-consuming queries is already arbitrary enough (as you
already stated).
--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/
COMMIT when the work is complete. Simply leaving a
connection open won't cause this.
If you're not explicitly issuing a BEGIN, then it may be a bug in the
client driver, or a misunderstanding on your part as to how to use the
driver. If you tell the list what client library you're us
ng on
the infrastructure to manage the amount of data so we can do it all the
time (we don't currently have enough disk space).
Overall, it seems like you've decided that you want this feature and nothing
else will do. If that's the case, then just go ahead and write it.
--
B
In response to "Joshua D. Drake" :
> On Mon, 2009-07-20 at 13:24 -0400, Bill Moran wrote:
> > In response to "Greg Sabino Mullane" :
> > >
> > > -BEGIN PGP SIGNED MESSAGE-
emory (upgraded
> yesterday). Would it help to increase the following:
>
> shared_buffers = 512MB
> effective_cache_size = 3GB
>
> Both of these are conservative I think? My data size is about 30 GB
> right now. Vacuum is all autovacuum as you see from set
gets the contract. Those that complain about
"it's not security, it's obscurity" do not get the contract.
I mean, didn't Apple just kill someone for letting their new iPhone
design leak?
--
Bill Moran
http://www.potentialtech.com
Scott Marlowe wrote:
>
> On Sat, Jul 25, 2009 at 5:23 AM, Bill Moran wrote:
> > Scott Marlowe wrote:
> >>
> >> On Fri, Jul 24, 2009 at 5:02 PM, Brian A.
> >> Seklecki wrote:
> >> > All:
> >> >
> >> > Any suggestions o
are you really in danger of hitting the wraparound? If you run
the query "SELECT datname, age(datfrozenxid) FROM pg_database;" (as suggested
in the docs) once a day for a few days, does it seems like you're using
up XIDs fast enough to be a danger? If you've got new hardwar
ing that can
play flv files on my FreeBSD desktop machine. I'm pretty sure mplayer
can play mov files ... I guess I'll find out this evening when I take
time to watch them.
In any event, thanks for making the caps. I'm looking forward to watching
them.
--
Bill Moran
http://www
OK, I'm a bit stumped on getting my GROUP BY query to work; it
iterates through a number of months that a generate_series
provides for me.
Here is what I am using in the FROM clause (along with other tables) to
generate the series of numbers for the number of months. This seems to
wor
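For reference, a common shape for such a month series (the range here is
arbitrary):

SELECT date_trunc('month', now() - interval '1 month' * s.m) AS month_start
  FROM generate_series(0, 11) AS s(m)
 ORDER BY 1;
-- one row per month: the first instant of each of the last twelve months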
Ok, it is Monday -:) Thanks Tom!
-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Monday, August 03, 2009 11:44 AM
To: Bill Reynolds
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] using generate_series to iterate through months
"Bill Reynolds&quo
> queried a database, requiring a full database to be available for unit
> tests is not really an environment I want to have.
Has it occurred to you that testing a DB client when there's no
DB isn't really a very accurate or realistic test?
--
Bill Moran
http://www.potentialtech.com
In response to Paul Taylor :
> Bill Moran wrote:
> > In response to Paul Taylor :
> >
> >> Sam Mason wrote:
> >>
> >>> On Tue, Aug 04, 2009 at 01:37:34PM +0100, Paul Taylor wrote:
> >>>
> >>>
> >>>>
In response to Paul Taylor :
> Bill Moran wrote:
> >
> > Then replace the DB client class with a class that returns fabricated
> > data for the purpose of your test.
> >
> Won't work because I am writing SQL and I want to test that the SQL is correct
Well, be
In response to Paul Taylor :
> Bill Moran wrote:
> > In response to Paul Taylor :
> >
> >> Bill Moran wrote:
> >>
> >>> Then replace the DB client class with a class that returns fabricated
> >>> data for the purpose of your test.
&
As an example, the Bacula project requires a database to run, and has a
full suite of testing stuff that multiple people run to help find bugs.
The thing that makes it doable is the fact that the setup process is
documented well enough that anyone who can follow instructions can set
up a testing
m go down. Of course, if auditing is critical to your
scenario, then your priorities are different ...
--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/
On Friday, August 07, 2009 12:44 PM, Joshua D. Drake wrote:
> On Fri, 2009-08-07 at 17:36 +0100, Sam Mason wrote:
> > On Fri, Aug 07, 2009 at 11:11:10AM -0500, Wenjian Yang wrote:
> > > We currently installed emacs 23.1 and PostgreSQL 8.4.0 for Windows on a
> > > windows desktop. When issue "sql-po
indows.
As far as tuning, I just went through the config file and tuned everything
logically based on published best practices. Aside from the FSM settings,
I don't think I've had to fine tune anything else, post.
And for those who may want to jump in -- we have investig
bles in my
GIS database to make maps for both of these projects. How do I
reference a table that's in another database? Or should I organize my
PostgreSQL data differently?
Thanks,
- Bill Thoen
On Mon, 10 Aug 2009 13:49:02 -0400
Vick Khera wrote:
> On Mon, Aug 10, 2009 at 9:46 AM, Bill Moran wrote:
> > We have servers using about 200 connections on average ... it climbs up
> > to 300+ during busy use. I've seen it peak as high as 450, and we've seen
ng when multiple threads want a sequence all at the same time.
I'm rather concerned by the third column, as I'm not sure what his
implementation
approach is, and I'm concerned that he's using a home-brewed locking mechanism
instead of using table locks.
--
Bill
omers' data shouldn't
> be used together but we may occasionally compare customer data. I'll also
> add that each customer should have a fairly significant amount of data.
If you're concerned about future-proofing your design, consider the fact
that it w
class and pg_catalog.pg_tables tables,
do the proper rows come back?
- Bill
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Denis BUCHER
> Sent: Sunday, August 23, 2009 8:55 AM
> To: Wojtek
Unfortunately, the Npgsql driver doesn't
really work very well with SSIS...)
- Bill
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Erwin Brandstetter
> Sent: Wednesday, August 26, 2009 12:10 PM
&g
-> Bitmap Index Scan on Blonidx (cost=0.00..1760.38
> rows=84859 width=0)"
> "Index Cond: (getSpecialLon((B.lon)::numeric) = A.lon)"
> " -> Bitmap Index Scan on Blatidx (cost=0.00..1766.81
> rows=84859 width
;
> " -> Bitmap Index Scan on Blatidx (cost=0.00..672.36
> rows=84859 width=0)"
> "Index Cond: (getSpecialLat((b.lat)::numeric) = a.lat)"
>
> However it's still taking ages to execute (over five minutes - I stopped it
> b
* FROM table_name ORDER BY Event_Date, DESC',
> it includes the actual time (HH:MM:SS) so the order comes out B,A,D,C.
>
> So what I am asking is how do I order only by the date? YYYY-MM-DD?
You could do "ORDER BY event_date::DATE"
--
Bill Moran
http://www.potentialt
rating. But (as Vick stated) DB servers
are usually bottlenecked on how fast they can access the disks, not RAM
or CPU.
--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/