On 29.04.2008, at 12:55, Greg Smith wrote:
This is the best write-up I've seen yet on quantifying what SSDs are
good and bad at in a database context:
http://www.bigdbahead.com/?p=37
They totally missed "mainly write" applications, which most of my
applications are. Reads in an OLTP setup a
On 02.12.2007, at 06:30, Merlin Moncure wrote:
I've been dying to know if anyone has ever done PostgreSQL training at
'the big nerd ranch'.
There are a couple of reviews floating around the web:
http://www.linux.com/articles/48870
http://www.linuxjournal.com/article/7847
I was in the course
On 23.05.2007, at 09:08, Andy wrote:
I have a table with varchar and text columns, and I have to search
through this text across the whole table.
An example would be:
SELECT * FROM table
WHERE name like '%john%' or street like '%srt%'
Anyway, the query planner al
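A non-anchored pattern such as '%john%' cannot use a plain btree index at all.
In later PostgreSQL releases (9.1 and up) the pg_trgm extension can index such
searches. A minimal sketch, with table and column names standing in for the
poster's schema:
CREATE EXTENSION pg_trgm;
CREATE INDEX people_name_trgm_idx ON people USING gin (name gin_trgm_ops);
-- non-anchored patterns can now use the trigram index:
SELECT * FROM people WHERE name LIKE '%john%';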
On 21.05.2007, at 23:51, Greg Smith wrote:
The standard pgbench transaction includes a select, an insert, and
three updates.
I see. Didn't know that, but it makes sense.
Unless you went out of your way to turn it off, your drive is
caching writes; every Seagate SATA drive I've ever seen do
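For reference, the standard pgbench transaction mentioned above is a
TPC-B-style script along these lines (reconstructed from memory; table names
have varied across pgbench versions):
BEGIN;
UPDATE accounts SET abalance = abalance + :delta WHERE aid = :aid;
SELECT abalance FROM accounts WHERE aid = :aid;
UPDATE tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
UPDATE branches SET bbalance = bbalance + :delta WHERE bid = :bid;
INSERT INTO history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
END;
That is the one select, one insert, and three updates Greg refers to.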
On 21.05.2007, at 15:01, Jim C. Nasby wrote:
I'd be willing to bet money that the drive is lying about commits/
fsync.
Each transaction committed essentially requires one revolution of the
drive with pg_xlog on it, so a 15kRPM drive limits you to 250TPS.
Yes, that's right, but if a lot of the t
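The arithmetic behind that figure: a 15,000 RPM drive completes
15000 / 60 = 250 revolutions per second, and if every commit must wait for the
WAL position to pass under the head once per revolution, a single session tops
out at roughly 250 commits per second.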
On 18.05.2007, at 10:21, Kenneth Marshall wrote:
It is arguable that updating the DB software version in an enterprise
environment requires exactly that: checking all production queries on the
new software to identify any issues. In part, this is brought on by the
very tuning that you performed
On 12.04.2007, at 15:58, Jason Lustig wrote:
Wow! That's a lot to respond to. Let me go through some of the
ideas... First, I just turned on autovacuum; I forgot to do that.
I'm not seeing a major impact though. Also, I know that it's not
optimal for a dedicated server.
Hmm, why not? Have
On 12.04.2007, at 08:59, Ron wrote:
1= Unless I missed something, the OP described pg being used as a
backend DB for a webserver.
Yep.
I know the typical IO demands of that scenario better than I
sometimes want to.
:-(
Yep. Same here. ;-)
2= 1GB of RAM + effectively 1 160GB HD = p*ss
On 12.04.2007, at 07:26, Ron wrote:
You need to buy RAM and HD.
Before he does that, wouldn't it be more useful to find out WHY he
has so much IO?
Have I missed it, or has nobody suggested finding the slow queries
(when they cause that much IO, they might be slow at least with a
hig
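One way to find them is statement logging; a minimal postgresql.conf sketch
(the threshold is an arbitrary example):
log_min_duration_statement = 200   # log every statement that runs longer than 200 ms
The statements that show up can then be examined with EXPLAIN ANALYZE.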
On 04.04.2007, at 08:03, Alexandre Vasconcelos wrote:
We have an application supposed to sign documents and store them
somewhere. The file sizes may vary from KB to MB. Developers are
arguing about the reasons to store files directly on the operating
system file system or in the database, as large o
On 30.03.2007, at 19:18, Christopher Browne wrote:
2. There are known issues with the combination of Xeon processors and
PAE memory addressing; that sort of hardware tends to be *way* less
speedy than the specs would suggest.
That is not true, as the current series of processors (Woodcrest and
On 22.03.2007, at 11:53, Steve Atkins wrote:
As long as you're ordering by some column in the table then you can do
that in straight SQL.
select a, b, ts from foo where (stuff) and foo > X order by foo limit 10
Then, record the last value of foo you read, and plug it in as X the
next time.
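Spelled out, this keyset-pagination pattern might look like the following
sketch (table and column names are illustrative; the ordering column should
be indexed):
SELECT a, b, ts FROM foo
WHERE (stuff) AND a > :last_seen_a
ORDER BY a
LIMIT 10;
-- remember the largest a from this batch and bind it as :last_seen_a next time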
On 05.03.2007, at 19:56, Alex Deucher wrote:
Yes, I started setting that up this afternoon. I'm going to test that
tomorrow and post the results.
Good - that may or may not give some insight into the actual
bottleneck. You never know, but it seems to be one of the easiest
things to find out ...
c
On 01.03.2007, at 13:40, Alex Deucher wrote:
I read several places that the SAN might be to blame, but
testing with bonnie and dd indicates that the SAN is actually almost
twice as fast as the SCSI disks in the old Sun server. I've tried
adjusting just about every option in the postgres config
On 02.03.2007, at 14:20, Alex Deucher wrote:
Ah OK. I see what you are saying; thank you for clarifying. Yes,
the SAN is configured for maximum capacity; it has large RAID 5
groups. As I said earlier, we never intended to run a DB on the SAN,
it just happened to come up, hence the configurat
On 27.01.2007, at 00:35, Russell Smith wrote:
Guess 1 would be that your primary key is int8, but can't be
certain that is what's causing the problem.
Why could that be a problem?
cug
On 13.12.2006, at 19:03, Ron wrote:
What I find interesting is that so far Guido's C2D Mac laptop has
gotten the highest values by far in this set of experiments, and no
one else is even close.
This might be the case because I tested with fsync=off, as my
internal hard drive would be a
On 12.12.2006, at 02:37, Michael Stone wrote:
Can anyone else reproduce these results? I'm on similar hardware
(2.5GHz P4, 1.5G RAM) and my test results are more like this:
I'm on totally different hardware / software (MacBook Pro 2.33GHz
C2D) and I can't reproduce the tests.
I have playe
On 27.11.2006, at 17:05, AgentM wrote:
There is a known unfortunate limitation on Darwin for SysV shared
memory which, incidentally, does not afflict POSIX or mmap'd shared
memory.
Hmmm. The article from Chris that you linked does not mention the
size of the memory segment you can allocate.
Hi.
After I had my hands on an Intel MacBook Pro (2 GHz Core Duo, 1GB
RAM), I made some comparisons between the machines I have here at the
company.
For simplicity and easy reproducibility, I used pgbench for the tests.
Configurations:
1. PowerMac G5 (G5 Mac OS X)
On 27.11.2006, at 08:04, Guido Neitzer wrote:
But, be aware of another thing here: As far as I have read about 64-bit
applications on the G5, these apps are definitely slower than their
32-bit counterparts (I'm currently on the train so I can't be more
precise here without Google ..
On 27.11.2006, at 00:25, Jim C. Nasby wrote:
Got any data about that you can share? People have been wondering
about
cases where drastically increasing shared_buffers makes a difference.
I have tried to compile PostgreSQL as a 64-bit application on my G5
but wasn't successful. But I must ad
On 27.11.2006, at 04:20, Brendan Duddridge wrote:
I think the main issue is that we can't seem to get PostgreSQL
compiled for 64-bit on OS X on an Xserve G5. Has anyone done that?
We have 8 GB of RAM on that server, but we can't seem to utilize it
all. At least not for the shared_buffers se
On 23.11.2006, at 23:37, Gopal wrote:
hared_buffers = 2  # min 16 or max_connections*2, 8KB each
If this is not a copy & paste error, you should add the "s" at the
beginning of the line.
Also you might want to set this to a higher number. You are setting
about
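For scale: with 8 kB pages, shared_buffers = 2 is a mere 16 kB. An 8.1-era
setting might look more like this sketch (the right number depends on the
machine's RAM):
shared_buffers = 20000   # 20000 x 8 kB buffers = ~160 MB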
On 18.11.2006, at 19:44, Guido Neitzer wrote:
It might be, that you hit an upper limit in Mac OS X:
[galadriel: memtext ] cug $ ./test
test(291) malloc: *** vm_allocate(size=2363490304) failed (error
code=3)
test(291) malloc: *** error: can't allocate region
test(291) malloc: ***
On 19.11.2006, at 04:13, Brian Wipf wrote:
It certainly is unfortunate if Guido's right and this is an upper
limit for OS X. The performance benefit of having high
shared_buffers on our mostly read database is remarkable.
I hate to say it, but if you want the best performance out of
Postgre
Hi.
I've sent this out once, but I think it didn't make it through the
mail server ... I don't know why. If it is a duplicate post, sorry about that.
Brian Wipf <[EMAIL PROTECTED]> wrote:
> I'm trying to optimize a PostgreSQL 8.1.5 database running on an
> Apple G5 Xserve (dual G5 2.3 GHz w/ 8GB of
Brian Wipf <[EMAIL PROTECTED]> wrote:
> I'm trying to optimize a PostgreSQL 8.1.5 database running on an
> Apple G5 Xserve (dual G5 2.3 GHz w/ 8GB of RAM), running Mac OS X
> 10.4.8 Server.
>
> The queries on the database are mostly reads, and I know a larger
> shared memory allocation will
On 9/23/06, Dave Cramer <[EMAIL PROTECTED]> wrote:
1) The database fits entirely in memory, so this is really only
testing CPU, not I/O which should be taken into account IMO
I don't think this is really the reason MySQL broke down at ten or
more concurrent connections. The RAM might be, bu
I find the benchmark much more interesting for comparing PostgreSQL to
MySQL than Intel to AMD. It might be as biased as other "benchmarks",
but it clearly shows something that a lot of PostgreSQL users have always
thought: MySQL gives up on concurrency ... it just doesn't scale well.
cug
On 9/23/06, [
Hi.
Are you comparing apples to apples? InnoDB tables to PostgreSQL? Are all
needed indexes available? Are you sure about that? What about fsync?
Does the benchmark insert a lot of rows? Have you tested placing the
WAL on a separate disk? Is PostgreSQL logging more stuff?
Another thing: have you an
Because there is no MVCC information in the index.
cug
2006/9/12, Piotr Kołaczkowski <[EMAIL PROTECTED]>:
On Tuesday 12 September 2006 12:47, Heikki Linnakangas wrote:
> Laszlo Nagy wrote:
> > I made another test. I created a file with the identifiers and names of
> > the products:
> >
> > psql#
On 13.06.2006, at 12:33, Ruben Rubio Rey wrote:
It seems autovacuum is working fine. The logs report that it is
being useful.
But the server load is high. Is there any way to stop autovacuum
if the server load is very high?
Look at the cost settings for vacuum and autovacuum. From the manu
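The cost-based settings referred to live in postgresql.conf; a sketch of the
relevant knobs (values are examples, not recommendations):
vacuum_cost_delay = 20              # ms to sleep each time the cost limit is reached
autovacuum_vacuum_cost_delay = 20   # the same, for autovacuum workers
autovacuum_vacuum_cost_limit = 200  # cost accumulated before a sleep is forced
Longer delays and lower limits make (auto)vacuum gentler on a loaded server.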
On 13.06.2006, at 8:44, Ruben Rubio Rey wrote:
Tonight the database was vacuumed (FULL) and reindexed (the
database does this every night).
Now it's working fine. Speed is as expected. I'll be watching that
SQL ...
Maybe the problem appears when the database is busy, or maybe it's
solved ...
Depending on
On 18.05.2006, at 12:42, Olivier Andreotti wrote:
I use prepared statements for all requests. Each transaction is about
5-45 requests.
This may lead to bad plans (at least with 8.0.3 this was the
case) ... I had the same problem a couple of months ago and I
switched from prepared state
On 20.04.2006, at 18:10, Radovan Antloga wrote:
Once or twice a month I have updates on many records (~6000), but
not that many. I did not expect PG to have problems with
updating 15800 records.
It has no problems with that. We have a database where we often
update/insert rows with about
On 18.04.2006, at 17:16, Tarabas (Manuel Rorarius) wrote:
Is there any way to speed up the LIKEs with a locale other than C,
or to get an ORDER BY in a different locale while using the
default C locale?
Sure. Just create the index with
create index __index on (
varchar_pattern_o
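Completed, the truncated command would look something like this sketch
(index, table, and column names are placeholders):
create index street_pattern_idx on addresses (street varchar_pattern_ops);
Such an index supports left-anchored patterns (LIKE 'abc%') even when the
database uses a non-C locale; for ordinary ORDER BY in that locale you keep
a separate regular index.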
On 30.03.2006, at 23:31, PFC wrote:
(why do you think I don't like Java ?)
Because you haven't used a good framework/toolkit yet? Come on, the
language doesn't really matter these days, it's all about frameworks,
toolkits, libraries, interfaces and so on.
But, nevertheless, t
On 27.03.2006, at 21:20, Brendan Duddridge wrote:
Does that mean that even though autovacuum is turned on, you still
should do a regular vacuum analyze periodically?
It seems that there are situations where autovacuum does not do a
really good job.
However, in our application I have made
On 24.03.2006, at 23:54, PFC wrote:
bookmark_delta contains very few rows but is inserted into/deleted from
very often... the effect is spectacular!
I guess I'll have to vacuum analyze this table every minute...
What about using autovacuum?
cug
--
PharmaLine, Essen, GERMANY
Software an
On 10.03.2006, at 10:11, NbForYou wrote:
So the only solution is to ask my webhost to upgrade its postgresql?
It seems so.
The question is: will he do that?
You are the customer. If they don't, go to another provider.
After all a license fee is required for
commercial use. And running
On 06.03.2006, at 21:10, Jignesh K. Shah wrote:
Like migrate all your postgresql databases to one T2000. You might
see that your average response time may not be faster, but it can
probably handle all your databases migrated to one T2000.
In essence, your single thread performance will n
On 23.12.2005, at 15:35, Carlos Benkendorf wrote:
I appreciate your suggestion, but I think I'm misunderstanding
something: the select statement should return about 150,000
rows, so why 5 rows?
I have looked at the wrong lines of the explain ... statement. Sorry,
my fault. With that ma
On 23.12.2005, at 13:34, Carlos Benkendorf wrote:
For some implementation reason in 8.0.3 the query returns the
rows in the correct order even without the ORDER BY, but in 8.1.1
the implementation probably changed and the rows are no longer
returned in the correct order.
You will never
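The general point: SQL guarantees a result order only when you ask for one, e.g.
SELECT * FROM orders ORDER BY id;
(table and column are illustrative). Without an ORDER BY, whatever order the
executor happens to produce is legal and may change between releases or plans.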
On 01.12.2005, at 17:04, Michael Riess wrote:
No. Our database contains tables for web content management systems.
The server hosts approx. 500 CMS applications, and each of them has
approx. 30 tables.
Just for my curiosity: do the "about 30 tables" have similar schemas
or do they dif
On 19.11.2005, at 13:05, Alex Wang wrote:
Yes, it's a "queue" table. But I did not perform many inserts/deletes
before it became slow. After inserting 10 records, I just do get/
update continuously.
When PostgreSQL updates a row, it creates a new row with the updated
values. So you should
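The advice presumably continues with vacuuming: for a small, constantly
updated queue table, the usual cure is to vacuum it very frequently so dead
row versions are reclaimed. A minimal sketch (table name is a placeholder):
VACUUM ANALYZE message_queue;   -- e.g. run from cron every few minutes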
On 12.09.2005, at 14:38, Dave Cramer wrote:
The difference between the 7.4 driver and the 8.0.3 driver is that the
8.0.3 driver uses server-side prepared statements and binds
the parameter to the type in setXXX(n,val).
It would be a good idea if this were configurable.
I found my solut
On 11.09.2005, at 11:03, Andreas Seltenreich wrote:
I'm not perfectly sure, but since the index could only be used with a
subset of all possible parameters (the pattern for like has to be
left-anchored), I could imagine the planner has to avoid the index in
order to produce a universal plan
Hi.
I have a performance problem with prepared statements (JDBC prepared
statements).
This query:
PreparedStatement st = conn.prepareStatement("SELECT id FROM dga_dienstleister WHERE plz like '45257'");
does use an index.
This query:
String plz = "45257";
PreparedStateme
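The effect can be reproduced in plain SQL. With a bound parameter the planner
must build one plan that works for any pattern, so it cannot assume the
pattern is left-anchored; a sketch using the poster's table:
PREPARE q(varchar) AS SELECT id FROM dga_dienstleister WHERE plz LIKE $1;
EXPLAIN EXECUTE q('45257');   -- generic plan, typically a sequential scan
EXPLAIN SELECT id FROM dga_dienstleister WHERE plz LIKE '45257';   -- literal pattern can use the index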
Hi.
I have an interesting problem with the JDBC drivers. When I use a
select like this:
"SELECT t0.aktiv, t0.id, t0.ist_teilnehmer, t0.nachname, t0.plz,
t0.vorname FROM public.dga_dienstleister t0 WHERE t0.plz
like ?::varchar(256) ESCAPE '|'" withBindings: 1:"53111"(plz)>
the existing i