While reading about NoSQL,
MongoDB lets you store and search JSON objects. In that case, you don't
need to have the same "columns" in each "row".
The following ensued. Isn't it cute ?
CREATE TABLE mongo ( id SERIAL PRIMARY KEY, obj hstore NOT NULL );
INSERT INTO mongo (obj) SELECT ('a=>'||n|
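The quoted INSERT is cut off in the archive; a plausible completion (purely a
guess at what was being generated) might look like:
  -- hypothetical reconstruction: one hstore key/value pair per generated row
  INSERT INTO mongo (obj)
  SELECT ('a=>' || n)::hstore
  FROM generate_series(1, 100000) AS n;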
I suspect your app is doing lots of tiny single-row queries instead of
efficiently batching things. It'll be wasting huge amounts of time
waiting for results. Even if every query is individually incredibly
fast, with the number of them you seem to be doing you'll lose a LOT of
time if you loop o
why even have multiple rows? just jam it all in there! :-D
LOL
But seriously, when using an ORM to stuff an object hierarchy into a
database, you usually get problems with class inheritance, and all the
solutions suck more or less (i.e., you get a zillion tables, with an assorted
pile of JOINs,
This is a problem I encounter constantly wherever I go. Programmer
selects millions of rows from giant table. Programmer loops through
results one by one doing some magic on them. Programmer submits queries
back to the database. Even in batches, that's going to take ages.
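A sketch of the set-based alternative being argued for (table and column
names here are made up for illustration):
  -- instead of SELECT, loop in the client, UPDATE one row at a time,
  -- push the per-row "magic" into one statement the server runs in a single pass
  UPDATE items
  SET    price = round(price * 1.1, 2)
  WHERE  category = 'widgets';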
Reminds me of a
Hi Merlin
The analyze command gave the following result:
On the KVP table:
Index Scan using kvpidx on bench_kvp (cost=0.00..8.53 rows=1 width=180)
(actual time=0.037..0.038 rows=1 loops=1)
Index Cond: (bench_id = '20_20'::text)
Total runtime: 0.057 ms
And on the Hstore table:
Bitma
You wrote
Try to create a btree index on "(bench_hstore->bench_id) WHERE
(bench_hstore->bench_id) IS NOT NULL".
What do you mean exactly?
=> CREATE INDEX myhstore_kps_gin_idx ON myhstore USING gin(kvps) WHERE
??? IS NOT NULL;
My table's def is:
CREATE TABLE myhstore ( id bigint PRIMARY KEY,
My problem is, that in fact I don't know which tag to index since I'm
running a web admin application where users can enter arbitrary
queries.
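For reference, the suggestion above probably amounts to something like the
following (the key name 'bench_id' is just the benchmark's example; the GIN
variant is the usual answer when users can query arbitrary keys):
  -- btree expression index on one fixed hstore key, as suggested
  CREATE INDEX myhstore_bench_id_idx ON myhstore ((kvps -> 'bench_id'))
      WHERE (kvps -> 'bench_id') IS NOT NULL;
  -- GIN index over the whole hstore column, for arbitrary keys
  CREATE INDEX myhstore_kvps_gin_idx ON myhstore USING gin (kvps);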
For a tag cloud, try this :
- table tags ( tag_id, tag_name )
- table articles ( article_id )
- table articles_to_tags( article_id, tag_id )
now this
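A sketch of that schema with guessed column types, plus the query that
produces the cloud counts:
  CREATE TABLE tags ( tag_id serial PRIMARY KEY, tag_name text NOT NULL );
  CREATE TABLE articles ( article_id serial PRIMARY KEY );
  CREATE TABLE articles_to_tags (
      article_id integer REFERENCES articles,
      tag_id     integer REFERENCES tags,
      PRIMARY KEY (article_id, tag_id)
  );
  -- tag cloud: each tag with the number of articles carrying it
  SELECT t.tag_name, count(*) AS n
  FROM   articles_to_tags att
  JOIN   tags t USING (tag_id)
  GROUP  BY t.tag_name
  ORDER  BY n DESC;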
If I run 30,000 prepared "DELETE FROM xxx WHERE "ID" = ?" commands it
takes close to 10 minutes.
Do you run those in a single transaction or do you use one transaction per
DELETE ?
In the latter case, postgres will ensure each transaction is committed to
disk at each commit. Since this in
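A sketch of the two usual fixes (table and ids are the placeholders from the
question; the integer type of "ID" is an assumption):
  -- one transaction around all the deletes: only one fsync at COMMIT
  BEGIN;
  DELETE FROM xxx WHERE "ID" = 1;
  DELETE FROM xxx WHERE "ID" = 2;
  -- ... the remaining deletes ...
  COMMIT;
  -- or a single set-based statement
  DELETE FROM xxx WHERE "ID" = ANY ('{1,2,3}'::int[]);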
When 1 client is connected postgres does 180 executions per second
This is suspiciously close to 10,000 executions per minute.
You got 10k RPM disks ?
How's your IO system setup ?
Try setting synchronous_commit to OFF in postgresql.conf and see if that
changes the results. That'll give useful inf
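For reference, the setting can also be tried per session without touching
postgresql.conf:
  -- lose at most the last few commits on a crash, never corrupt the database
  SET synchronous_commit = off;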
Load testing of postgresql 8.4 for OLTP application
suitability showed that throughput of the
database significantly degraded over time from thousands of write
transactions per second to almost zero.
A typical postgres benchmarking gotcha is :
- you start with empty tables
- t
fairly high value (rather than it not
being set at all, as it is right now, and thus inheriting the "200" default
value from vacuum_cost_limit). Does that sound right? (If so, what might be a
good value to set?) Or perhaps there is a more foolproof way of doing this
that does not rely upon guesswork?
Any suggestions at all would be most welcome!
Daniel C.
can be purged entirely each day) rather than "delete froms", then
there truly would not be any reason to use "vacuum full". Does that sound
plausible?
Thanks again,
Daniel
On Thu, Jul 7, 2011 at 5:30 PM, Greg Smith wrote:
> On 07/07/2011 04:30 PM, D C wrote:
I'm running a labour-intensive series of queries on a medium-sized dataset
(~100,000 rows) with geometry objects and both gist and btree indices.
The queries are embedded in plpgsql, and have multiple updates, inserts and
deletes to the tables as well as multiple selects which require the indice
My table is a statistics counters table, so I can live with a partial data
loss, but not with a full data loss, because many counters are weekly and
monthly.
An unlogged table can increase speed. This table has about 1.6 million
updates per hour, but unlogged comes with a chance of losing all the information
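A sketch of the trade-off under discussion (the table layout is invented):
  -- skips WAL, so updates are cheaper, but the table is truncated after a crash
  CREATE UNLOGGED TABLE stats_counters (
      counter_name text PRIMARY KEY,
      value        bigint NOT NULL DEFAULT 0
  );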
It's a fairly tricky problem. I have a number of sensors producing
energy data about every 5 minutes, but at random times between 1 and
15 minutes. I can't change that as that's the way the hardware of the
sensors works. These feed into another unit, which accumulates them
and forwards them in b
That's almost identical to my tables.
You explained your problem very well ;)
I certainly will. Many thanks for those great lines of SQL!
You're welcome !
Strangely I didn't receive the mail I posted to the list (received yours
though).
Hi
Requirement:
We need to retrieve the latest health of around 1.5 million objects for a given
time.
Implementation:
We are storing the hourly data of each object in a single row. The schema is
given below:
CREATE TABLE health_timeseries (
  mobid text NOT NULL,
  hour bigint NOT
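The definition is cut off above; a guess at the rest, based on the columns used
in the query further down (the health column type and the key are assumptions):
  CREATE TABLE health_timeseries (
      mobid  text   NOT NULL,
      hour   bigint NOT NULL,
      health text,
      PRIMARY KEY (mobid, hour)
  );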
> On 2017-09-14 13:51, Subramaniam C wrote:
>
>> Hi
>>
>> QUERY :-
>>
>> select distinct on (health_timeseries.mobid) mobid,
>> health_timeseries.health, health_timeseries.hour from
>> health_timeseries where hour >=(1505211054000/(3600*1000
With this query I am trying to get the latest hour for a given timestamp so
that I can get the whole health array of all objects for a given hour. So I am
doing DISTINCT ON mobid and ordering by hour and mobid DESC.
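A sketch of that pattern with the index that lets it run as an ordered scan
(the index name and the hour bounds are made up):
  CREATE INDEX health_ts_mobid_hour_idx ON health_timeseries (mobid, hour DESC);
  SELECT DISTINCT ON (mobid) mobid, health, hour
  FROM   health_timeseries
  WHERE  hour >= 418114 AND hour <= 418137
  ORDER  BY mobid, hour DESC;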
On Thu, Sep 14, 2017 at 6:03 PM, Subramaniam C
wrote:
> I created an index on mobid
Hi
I wanted to query the top 20 rows by joining two tables, one table having
around 1 lakh (100,000) rows and the other having 5 lakh (500,000) rows.
Since I am using ORDER BY in the query, I created a compound index with the
columns being used in ORDER BY. Initially the index size was 939 MB.
Then I ran EXPLAIN(ANALYZE,BU
Hi
When I execute the query from the sql command line, it takes only around
1 sec. But when I execute the same query using JDBC (Java) with a
PreparedStatement, it takes around 10 secs.
Can you please let us know the reason and how to fix this issue?
Thanks and Re
I configured cursor_tuple_fraction to 1 but I am still facing the same
issue.
Please help.
On Thu, Sep 28, 2017 at 2:18 PM, Julien Rouhaud wrote:
> On Thu, Sep 28, 2017 at 10:19 AM, Subramaniam C
> wrote:
> > Hi
> >
> > When I try to execute the query from sql command
Filter: (("timestamp" >=
'150598950'::bigint) AND ("timestamp" <= '150599040'::bigint))
On Thu, Sep 28, 2017 at 2:56 PM, Pavy Philippe
wrote:
> https://www.postgresql.org/docs/current/static/auto-explain.html
not automatically use a cursor, but it does use prepared
> statements which can be slower.
>
>
> Can you provide the query and the jdbc query ?
> On 28 September 2017 at 05:59, Subra
upgrade.
> On 28 September 2017 at 12:32, Subramaniam C
> wrote:
>
>> The JDBC version is 9.4-1201-jdbc41.
>>
>> Query :-
>>
>> select count(*) OVER() AS count
AM, Subramaniam C > wrote:
>
>> The first output shows the query executed from the sql command
>> line. The second output shows it executed from the application. As
>> per the output it is clear that when the query is executed through JDBC
>> it's not using
g, as well as CPU, so it's a more effective
means of limiting the impact of large queries. I don't know how other
OS's handle this.
y more I/O bound than CPU bound. And unlike the
base tables, you generally don't need to read the WAL, so you don't
really need to worry about not being able to quickly scan through the
data without decompressing it.
ho just posted to -admin about a database that's
doing 340M inserts a day in 300M transactions...
this purpose,
> but I'm much less sure that the same is true of
>
> a.x > b.y AND a.x < c.z
Well, this could end up being much trickier, since who knows how b and c
are related. Though thinking about it, although I threw out the
row-by-row analysis idea to be glib, that
SCSI is so much better than IDE (unless you
just give each drive its own dedicated bandwidth).
On Wed, Apr 06, 2005 at 06:35:10PM -0400, Tom Lane wrote:
> "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> > On Wed, Apr 06, 2005 at 06:09:37PM -0400, Tom Lane wrote:
> >> Can anyone suggest a more general rule? Do we need for example to
> >> consider whe
1:43 AM
> To: Joel Fradkin
> Cc: 'PostgreSQL Perform'
> Subject: Re: [PERFORM] Any way to speed this up?
>
> "Joel Fradkin" <[EMAIL PROTECTED]> writes:
> > random_page_cost = 1.2#4# units are one sequential page
> > fetch cost
>
>
ce you'd just
pass that back.
In fact, having such a function (estimate_rows_for_sql(text)) would
probably be very useful to functions that wanted to support returning a
rows estimate.
naive, but it seems like you
could just put a compression routine between the log writer and the
filesystem.
> Is this a TODO?
ISTM it's at least worth hacking something together and doing some
performance testing...
On Thu, Apr 14, 2005 at 10:51:46AM -0500, Matthew Nuzum wrote:
> So if you all were going to choose between two hard drives where:
> drive A has capacity C and spins at 15K rpms, and
> drive B has capacity 2 x C and spins at 10K rpms and
> all other features are the same, the price is
benchmark
different RAID configurations using dbt2. I don't know if this is
something that the lab is setup for or capable of, though.
b with a bottleneck of one or two very large tables,
> the extra spindles won't help unless you break up the tables and
> glue them together with query magic. But it's still a point to
> consider.
Huh? Do you know how RAID10 works?
> under 64KB=stupid, anything much over 128K/258K=wasteful.
>
> I am eager to find out how PG handles all this.
AFAIK PostgreSQL requests data one database page at a time (normally
8k). Of course the OS might do something different.
ared drives. Of
course most people doing Oracle clustering are probably using a SAN and
not raw SCSI...
You should re-run the function test using SQL as the function language
instead of plpgsql. There might be some performance to be had there.
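For reference, a minimal sketch of the two forms (the function body is
invented; simple SQL-language functions can sometimes be inlined by the planner):
  CREATE FUNCTION add_one_plpgsql(i integer) RETURNS integer LANGUAGE plpgsql AS
  $$ BEGIN RETURN i + 1; END; $$;
  CREATE FUNCTION add_one_sql(integer) RETURNS integer LANGUAGE sql AS
  $$ SELECT $1 + 1 $$;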
lable. So
over time, depending on how frequently a table is vacuumed, it will
settle down to a steady-state size that is greater than its size after
a vacuum full.
memory
> (which, with 1500 rows, probably can).
Actually, the planner (at least in 7.4) isn't smart enough to consider
if the sort would fit in memory or not. I'm running a test right now to
see if it's actually faster to use an index in this case.
s say you had a server with 6 separate 15k RPM SCSI disks, what raid
> option would you use for a standalone postgres server?
>
> a) 3xRAID1 - 1 for data, 1 for xlog, 1 for os?
> b) 1xRAID1 for OS/xlog, 1xRAID5 for data
> c) 1xRAID10 for OS/xlog/data
> d) 1xRAID1 for OS, 1xR
version 8.0, btw.
On Tue, Apr 19, 2005 at 11:01:26PM -0400, Tom Lane wrote:
> "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> > Actually, the planner (at least in 7.4) isn't smart enough to consider
> > if the sort would fit in memory or not.
>
> Really? Have you read cost
No, this is a single process. And there are known issues with context
switch storms on Xeons, so that might be what you're seeing.
On Tue, Apr 19, 2005 at 09:37:21PM -0700, Mischa Sandberg wrote:
> Quoting Tom Lane <[EMAIL PROTECTED]>:
>
> > "Jim C. Nasby" <[EMA
No, he's using either COPY or \COPY.
On Wed, Apr 20, 2005 at 12:34:27AM -0400, Greg Stark wrote:
> "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
>
> > What's really odd is that neither the CPU or the disk are being
> > hammered. The box appears to be pre
could use \timing.
In any case, it's not valid to use pgadmin to time things.
making the wrong choice
here. BTW, changing random_page_cost to 3 or 4 doesn't change the plan.
On Tue, Apr 19, 2005 at 10:40:41PM -0500, Jim C. Nasby wrote:
> On Tue, Apr 19, 2005 at 11:01:26PM -0400, Tom Lane wrote:
> > "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
>
're going to only access the database single-user,
it's just not a valid test case (and by the way, this is true no matter
what database you're looking at. Multiuser access is where you uncover
your real bottlenecks.)
ld be thinking
wrong.
BTW, the column I'm indexing is a bigint with a low correlation.
On Fri, Apr 22, 2005 at 10:08:06PM -0400, Tom Lane wrote:
> "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> > I've run some performance tests. The actual test case is at
> > http://stats.distributed.net/~decibel/timing.sql, and the results are at
> > http:/
in that e-mail the new
> code also recycles more pages than before.
>
> Once I've finished it up I'll prepare and post a patch.
>
> --
> Dave Chapeskie
> OpenPGP Key ID: 0x3D2B6B34
On Sat, Apr 23, 2005 at 01:00:40AM -0400, Tom Lane wrote:
> "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> >> Feel free to propose better cost equations.
>
> > Where would I look in code to see what's used now?
>
> All the gold is
say
> I'm surprised. :-)
>
>
> --
> Kevin Brown [EMAIL PROTECTED]
>
ome sysctl's can only be set in /boot/loader.conf.
hw.ata.wc=0 is an example (which you want to set on any box with IDE
drives if you want fsync to actually do what it thinks it's doing).
UES(
(1,2,3),
(4,5,6),
(7,8,9)
);
I'm not sure how standard that is or if other databases support it.
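The snippet above is truncated; the full multi-row form looks like this
(table and column names are placeholders):
  CREATE TABLE t (a integer, b integer, c integer);
  INSERT INTO t (a, b, c) VALUES
      (1, 2, 3),
      (4, 5, 6),
      (7, 8, 9);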
My recollection is that it does, but it's been a few years...
ed up hash joins?), if
there are no plans to fix them they should still be removed. If someone
ever really wanted to do something with it, the code would still be in CVS.
On Tue, May 10, 2005 at 02:38:41AM +1000, Neil Conway wrote:
> Jim C. Nasby wrote:
> >Having indexes that people shouldn't be using does add confusion for
> >users, and presents the opportunity for foot-shooting.
>
> Emitting a warning/notice on hash-index creation is s
On Tue, May 10, 2005 at 10:14:11AM +1000, Neil Conway wrote:
> Jim C. Nasby wrote:
> >> No, hash joins and hash indexes are unrelated.
> >I know they are now, but does that have to be the case?
>
> I mean, the algorithms are fundamentally unrelated. They share a bit of
bly you don't want to
re-write the entire index every time it looks like a different bucket
size would help.)
On Tue, May 10, 2005 at 11:49:50AM -0400, Tom Lane wrote:
> "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> > What's the challenge to making it adaptive, coming up with an algorithm
> > that gives you the optimal bucket size (which I would think there's
>
is absolutely no problem at all.
>
> /* Steinar */
> --
> Homepage: http://www.sesse.net/
>
odes (i.e., synchronous).
ven your experience, you might want to check out Bizgres.
(http://pgfoundry.org/projects/bizgres/) I'm sure your insights would be
most welcome.
, but that
alone isn't enough to explain the difference.
On Wed, May 11, 2005 at 05:59:10PM +0200, Manfred Koizar wrote:
> On Sun, 24 Apr 2005 17:01:46 -0500, "Jim C. Nasby" <[EMAIL PROTECTED]>
> wrote:
> >> >> Feel free to propose better cost equations.
>
solution livejournal have arrived at is quite
> similar in ways to the way google is set up.
Except that unlike LJ, google stays up and it's fast. Though granted, LJ
is quite a bit faster than it was 6 months ago.
On Thu, May 12, 2005 at 08:54:48PM +0200, Manfred Koizar wrote:
> On Wed, 11 May 2005 16:15:16 -0500, "Jim C. Nasby" <[EMAIL PROTECTED]>
> wrote:
> >> This is divided by the number of index columns, so the index correlation
> >> is estimated to be 0.
he should be flushed for each
run or not.
Does this sound like a good way to determine actual costs for index
scans (and hopefully other access methods in the future)? If so, what
would be a good way to implement this?
ecause
I don't see that we need to use explain when running queries. In fact,
it's possibly desirable that we don't, because of the overhead it
incurs. We would want to log an explain (maybe analyze) just to make
sure we knew what the optimizer was doing, but I think we shouldn'
On Fri, May 20, 2005 at 04:47:38PM -0400, Tom Lane wrote:
> "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> > On Thu, May 19, 2005 at 09:31:47AM -0700, Josh Berkus wrote:
> >> can test our formula for accuracy and precision. However, such a formula
> >>
is huge and tends to swamp other factors out. As I
mentioned in my other email, my tests show explain analyze select * from
table is 5x slower than select count(*) from table.
be able to see that index values were
complete, meaning that there was no reason to hit the heap for that
tuple.
I looked on the TODO but didn't see this, maybe it fell through the
cracks?
Hi All,
We are testing PostgreSQL 8.0.3 on MS Windows for porting an OLTP system
from MS SqlServer.
We got a major performance issue which seems to boil down to the following
type of query:
select DISTINCT ON (PlayerID) PlayerID,AtDate from Player where
PlayerID='0' order by PlayerID
At 19:45 05/06/06, PFC wrote:
Previously, we have also tried to use LIMIT 1 instead of DISTINCT, but
the performance was no better:
select PlayerID,AtDate from Player where PlayerID='0' order by
PlayerID desc, AtDate desc LIMIT 1
The DISTINCT query will pull out all the rows and k
/msg00110.php
on this show-stopper problem for which I still have no clue how to get
around. Suggestions are much appreciated.
Thanks and regards, KC.
At 21:34 05/06/08, George Essig wrote:
On 6/2/05, K C Lau <[EMAIL PROTECTED]> wrote:
...
>
> select DISTINCT ON (PlayerID) PlayerID
his list from a few months ago
has more info.
Hi All,
I previously posted the following as a sequel to my SELECT DISTINCT
Performance Issue question. We would most appreciate any clue or
suggestions on how to overcome this show-stopping issue. We are using 8.0.3
on Windows.
Is it a known limitation when using a view with SELECT ... LIMI
rebuilds all the indexes. It's basically the most expensive operation
you can perform on a table. There probably will be some increased
performance from the sort if the table is already mostly in the right
order though.
store what fields in what
tables/indexes each parameter corresponds to. When you go to execute you
look up the stats relevant to each parameter; you can then cache plans
according to the stats each parameter has. Of course caching all that is
a non-trivial amount of work, so you'd onl
scheduler is much worse).
>
> Here's the result, in transactions per second.
>
>               ext3   jfs   xfs
> -------------------------------
> 10 Clients      55    81    68
> 100 Clients     61   100    64
>
BTW, it'd be interesti
00 rows) to
> speed the access and the update of those tables (considering that i will
> have few update but a lot of reading).
2 million rows is nothing unless you're on a 486 or something. As for
your other question, remember the first rule of performance tuning:
don't tune un
nore your desire to
>choose an index scan if your joining column's datatypes do not
>match
>
that every 5 mins in cron.
I would suggest leaving an option to have SA vacuum every n emails,
since some people may not want to mess with cron, etc. I suspect that
pg_autovacuum would be able to keep up with things pretty well, though.
from perl,
> something isn't escaped correctly
I'm not sure who's responsible for DBI::Pg (Josh?), but would it make
sense to add better support for bytea to DBI::Pg? ISTM there should be a
better way of doing this than adding gobs of \'s.
what I've read 4.2BSD actually took priority into account
when scheduling I/O. I don't know if this behavior is still present in
FreeBSD or the like, though. So depending on the OS, priority could play
a role in determining I/O scheduling.
> e.g. A table with books with fields holding a comment, table of content
> or example chapters
> or what ever else.
>
> Does anybody have an idea where I can find a database like this or does
> even have something like this?
Most benchmarks (such as dbt* and pgbench) have data ge
ngle one of these still begs the question of whether the
> changes will have a *material* impact on performance.
How many of these things are currently easy to change with a recompile?
I should be able to start testing some of these ideas in the near
future, if they only require minor cod
index on both 'completion_time'
> and 'start_time', but can't put a temporal lobe on the details.
>
>
> Mark
>
el query execution, not
switching to a completely thread-based model.
In any case, there are other ways to enable parallelism without using
threads, such as handing actual query execution off to a set of
processes.
d
range of OSes is important.
Can anyone recommend hardware as well as vendors? Feel free to reply
off-list.
ing the SQL and doing what the author actually wanted, wouldn't a
bitmap combination of indexes work here?
Or with an index on (start_time, completion_time), start an index scan
at start_time = SOMEDATE and only include rows where completion_time <
SOMEDATE. Of course if SOMEDATE is near
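A sketch of that composite-index idea, mirroring the description above (the
table name and the SOMEDATE value are placeholders):
  CREATE INDEX jobs_start_completion_idx ON jobs (start_time, completion_time);
  SELECT *
  FROM   jobs
  WHERE  start_time >= '2005-04-14'
  AND    completion_time < '2005-04-14';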
ry.org/projects/pgcluster/) is a possible solution should
you need clustering/load balancing, but as I mentioned I suspect you
should be ok without it.