On Mon, Mar 24, 2014 at 12:08 AM, Heikki Linnakangas wrote:
> On 03/22/2014 02:59 AM, Erik van Zijst wrote:
>>
>> Is there any way I can get postgres to perform the hash calculations
>> on the *result* of the other parts of the where clause, instead of the
>> other way
snowflakes), but how does that affect
our situation? It means you simply would have found another password
for that user that is just as valid. The system will accept it.
> success of the second AND arm
> means that we are on user detkin's row of auth_user.
My password could be '
On Sat, Mar 22, 2014 at 3:56 PM, bricklen wrote:
> On Sat, Mar 22, 2014 at 3:27 PM, Erik van Zijst
>> I could nonetheless take a stab at it, but life would certainly be
>> easier if I could translate each component independently and leave
>> optimization to the query p
might be present at arbitrary depths.
For example:
(
active eq true
AND
(
password eq "foo"
OR
password eq "bar"
)
)
AND
(
username eq "erik"
OR
email contains "bar"
)
perform the hash calculations
on the *result* of the other parts of the where clause, instead of the
other way around? Or else rewrite the query?
Cheers,
Erik
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
ode" to "on".
See contrib/pg_archivecleanup:
http://www.postgresql.org/docs/9.0/static/pgarchivecleanup.html
hth,
Erik Rijkers
on.)"
postgres=# select provolatile from pg_proc where proname = 'current_user';
 provolatile
-------------
 s
So, I think the OP's question is still valid.
Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
866.518.9273 x 260
Location: US/
Technically you can then use pgbench on that set of statements, but I
usually just use perl's "Benchmark" module (I'm sure ruby or java
or whatever has a similar tool)
(First, I log statements by loading the application or web server with
statement logging turned on so I'm not "guessing" w
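In Python the standard-library `timeit` module fills the same role as Perl's Benchmark. A minimal sketch, where the placeholder callable stands in for executing one captured statement (in practice it would be something like `cur.execute(sql)` against a live connection; those names are assumptions):

```python
import timeit

# Stand-in for executing one logged statement; replace the body with a
# real database call when benchmarking actual captured SQL.
def run_statement():
    sum(range(1000))  # placeholder workload

# Run the statement a fixed number of times and report the totals,
# much like Benchmark's timethese() in Perl.
n = 100
total = timeit.timeit(run_statement, number=n)
print("%d runs in %.4f s (%.6f s/run)" % (n, total, total / n))
```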
> See ALTER TABLE and CREATE TABLE (and the Index variants).
>
> ALTER TABLE foo SET (fillfactor=90);
I'll try that.
> This will leave, on average, 10% of every 8k block empty and allow updates to
> columns to more likely live within the same block.
Good for the items table.
Probably bad for th
", and then I won't have to worry about the system being down on a
table lock.
Seems like "VACUUM FULL" could figure out to do that too depending on
the bloat-to-table-size ratio ...
- copy all rows to new table
- lock for a millisecond while renaming tables
- drop old t
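The steps above can be sketched as plain SQL, generated here so the sequence is explicit. The table name is a placeholder, and a real version would also have to recreate indexes, constraints, and grants and handle writes arriving during the copy; this is only the shape of the idea:

```python
def rewrite_table_sql(table):
    """Return SQL statements for a manual, mostly-online table rewrite:
    copy live rows to a new table, swap names under a brief exclusive
    lock, then drop the bloated original."""
    new, old = table + "_new", table + "_old"
    return [
        "CREATE TABLE %s (LIKE %s INCLUDING ALL);" % (new, table),
        "INSERT INTO %s SELECT * FROM %s;" % (new, table),
        "BEGIN;",  # the rename pair holds the exclusive lock only briefly
        "ALTER TABLE %s RENAME TO %s;" % (table, old),
        "ALTER TABLE %s RENAME TO %s;" % (new, table),
        "COMMIT;",
        "DROP TABLE %s;" % old,
    ]

for stmt in rewrite_table_sql("items"):
    print(stmt)
```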
estination = 'syslog'
client_min_messages = notice
should be enough to get it going and for me to see it, right? Not
sure which setting controls logging of autovac, nor am I sure of a way
to *ask* the server if autovac is running.
On Mon, Jun 1, 2009 at 10:06 AM, Tom Lane wrote:
it was all vacuum full...thanks
the other 2 servers truncate and reload that table from time to time
... IE: they are always vacuumed
as the "master" ... that server never does it... hence the bloat
but why wasn't autovac enough to reclaim at least *most* of the space?
that table *does* get up
I have 3 servers, all with identical databases, and each performing
very differently for the same queries.
www3 is my fastest, www2 is the worst, and www1 is in the middle...
even though www2 has more ram, faster CPU and faster drives (by far),
and is running a newer version of postgres. I have be
On Tue, May 26, 2009 at 6:28 PM, Craig James wrote:
> Greg Smith wrote:
>>
>> What I'd love to have is a way to rent a fairly serious piece of dedicated
>> hardware, ideally with multiple (at least 4) hard drives in a RAID
>> configuration and a battery-backed write cache. The cache is negotiable
to
200 connections (although we do have the max_connections set to 500).
Erik Jones
DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)
Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com
for long times
(using the default 300s) causing our free memory to gradually decrease
over the day and resetting our pools would clear it out so there was a
direct cause and effect relationship there. When I dropped the
connection_life_time to 30s the problem went away.
that down to 20 or 30 seconds.
.
or the query won't
necessarily clear the table's data from memory as that will be in the
shared_buffers. If you really want to flush the data from memory you
need to read in data from other tables of a size total size greater
than your shared_buffers setting.
g with the data in the paritions?
loads easier to wrap your head around and it's
extremely easy to add/remove tables from replication as it doesn't
deal with "table sets" like Slony does.
of the db cluster? Changes to shared memory
settings require that.
On Feb 20, 2008, at 10:54 AM, Tom Lane wrote:
Erik Jones <[EMAIL PROTECTED]> writes:
On Feb 20, 2008, at 8:14 AM, Gregory Stark wrote:
I would suggest leaving out the && which only obfuscate what's
going on here.
PGOPTIONS=... pg_restore ...
would work just as well a
On Feb 20, 2008, at 8:14 AM, Gregory Stark wrote:
"Douglas J Hunley" <[EMAIL PROTECTED]> writes:
On Tuesday 19 February 2008 16:32:02 Erik Jones wrote:
pg_restore is a postgres client app that uses libpq to connect and,
thus, will pick up anything in your $PGOPTIONS e
On Feb 19, 2008, at 3:58 PM, Dan Langille wrote:
Erik Jones wrote:
On Feb 15, 2008, at 3:55 PM, Dan Langille wrote:
We're using PostgreSQL 8.1.11 on AIX 5.3 and we've been doing
some playing around
with various settings. So far, we've (I say we, but it's another
guy d
minor) understanding of what's going on, I'd
love to do
something like:
pg_restore -WM $large_value
pg_restore is a postgres client app that uses libpq to connect and,
thus, will pick up anything in your $PGOPTIONS env variable. So,
PGOPTIONS="-c maintenance_work_mem=512MB
lity offered by Postgres's use of transaction
logs you'll probably be much better served with a different database
or even a completely different storage scheme than trying to make
Postgres fit that bill.
or us in the short term.
What have you been using on AIX and why?
I really don't know anything about AIX, but are you sure that these
long running commits are directly correlated with using open_datasync?
On Feb 15, 2008, at 12:42 PM, Joshua D. Drake wrote:
On Fri, 15 Feb 2008 12:37:10 -0600
Erik Jones <[EMAIL PROTECTED]> wrote:
(welll, forced
to) migrate to a new system with a sane drive configuration. The
old set up was done horribl
't consider the numbers I was seeing then a reliable expectation as that
system was nowhere close to ideal. We've seen much more sane and
consistent numbers on a more normal setup, i.e. without forcedirectio
and with <= 25% system memory.
't feasible, possible, or recommended to rewrite all of the
possible generated SQL that could be designed at runtime by the tool.
No, but it is feasible to expect the tool to generate well-formed
queries without redundant clauses. There are plenty that do.
you, you could put the SELECT under
EXECUTE in the function. But most likely this is just a one-time
problem.
Is there any way to clear the cached plan manually other than
disconnecting (With the help of some commands/Configuration
settings) ?
Only as of 8.3.
t becomes 2G). What
kind of data are in these files? What gain do you foresee in storing
the files directly in the db (as opposed, say, to storing the paths
to the files in the filesystem)?
of lines of this message after
gruelingly reading your last message and I wouldn't be surprised if
others with more experience and better answers at the ready simply
ignored both, as that much text is extremely difficult to follow in
the absence of those aforementioned virtues.
On Dec 7, 2007, at 10:44 AM, Guillaume Cottenceau wrote:
Erik Jones writes:
vacuum_cost_delay/vacuum_cost_limit: (deactivated)  20/200  40/200  100/1000  150/1000  200/1000  300/1000
VACUUM ANALYZE time:                 54 s  112 s  188 s  109 s  152 s  190 s
it as
practically usable as possible, albeit with low dev priority)
Ok, I won't say what you said not to say. But, I will say that I
don't agree with your conjecture that VACUUM FULL should be made
more lightweight; it's like using dynamite to knock a hole in a
herent with the
same information like it was at 12:30AM. it does not matter if the
process needs 4 hours to run.
Does the pg_dump create this kind of "consistent backups"? Or do I
need
to do the backups using another program?
Yes, that is exactly what pg_dump does.
n an ANALYZE
in isolation. Perhaps, if the ANALYZE runs fast enough he can just
lock the table for the run.
going to run.
Most people don't run 8core machines and they especially don't run
32thread Niagra boxes.
Wait! So, what do you check your email with? :)
any scripts out there that I could grab to make a quick job, no
brains required of it?
PostgreSQL on various platforms, with
PostgreSQL winning!
Which is typical for those who aren't in on the FUD :)
anyone who can
run the 8.3 beta to consider adding some tests in this area while
there's still time to correct any issues encountered before the
official release.
Greg, thanks a lot for this. I'd say this should definitely be linked
to from the main site's techdocs s
with wal_buffers and checkpoint_segments?
You're reading data from a file and generating inserts? Can you not
use COPY? That would be the most performant.
Note Markus's point
that both queries must be initiated by concurrent connections. Since
Postgres doesn't have any kind of shared transaction mechanism across
connections, this is inherent.
arellel based on the indentation of
the output. Items at the same indentation level under the same
"parent" line will run in parallel
blems.
Wow. That's inordinately high. I'd recommend dropping that to 32-43MB.
Btw, is there some way to determine up front how many sort or hash
operations will be running in parallel for a given query?
Explain is your friend in that respect.
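Building on that advice, a rough way to tally such operations from EXPLAIN's text output is to count the plan nodes. This is a sketch: it counts Sort and Hash-prefixed nodes (including Hash Join lines), which bounds how many may claim work_mem within one query, not what runs concurrently across connections:

```python
def count_sort_hash(explain_text):
    # Count Sort and Hash plan nodes in EXPLAIN text output. Rough:
    # any node whose name starts with "Sort" or "Hash" is counted,
    # so a Hash Join and its inner Hash node both count.
    sorts = hashes = 0
    for line in explain_text.splitlines():
        node = line.split("->")[-1].strip()
        if node.startswith("Sort"):
            sorts += 1
        elif node.startswith("Hash"):
            hashes += 1
    return sorts, hashes

plan = """Sort  (cost=1.0..2.0 rows=10 width=4)
  ->  Hash Join  (cost=0.5..1.0 rows=10 width=4)
        ->  Seq Scan on a
        ->  Hash
              ->  Seq Scan on b"""
print(count_sort_hash(plan))
```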
regards, tom lane
Tom, are there any docs anywhere that explain how to interpret those
per-context memory dumps? For example, when I see an autovacuum
context listed is it safe to assume that the error came from an
autovac operation, etc.?
p requests) to not sufficiently explain a given topic, shouldn't
it be revised? One thing that might help is a hyperlinked glossary
so that people reading through the documentation can go straight to
the postgres definition of dead tuple, index bloat, etc.
ed. I believe that's been fixed for 8.3.
just out of curiosity, is this the work that will
allow standby servers to have selects run on them without stopping
WAL replay?
terval? That would at
least help to reduce the amount of data that needs to be flushed when
Postgres fsyncs.
On Aug 25, 2007, at 2:58 PM, Erik Jones wrote:
On Aug 24, 2007, at 7:41 PM, Benjamin Arai wrote:
Hi,
I have an application which loads millions of NEW documents each
month
into a PostgreSQL tsearch2 table. I have the initial version
completed
and searching performance is great but my
reate a table from which all of your partitioned tables
inherit. Partition in such a way that you can use constraint
exclusion and then you can treat the parent table like the view you
were suggesting.
On Aug 8, 2007, at 3:00 AM, Heikki Linnakangas wrote:
Erik Jones wrote:
Decibel! wrote:
I should mention that if you can handle splitting the
update into multiple transactions, that will help a
lot since it means you won't be doubling the size of
the table.
As I mentioned above, when y
write out one million
new rows. But, if you do the update a quarter million at a time, the
last three updates would be able to re-use many of the rows deleted
in earlier updates.
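Splitting such an update into quarter-million-row batches can be sketched as id-range chunks, each run in its own transaction so later batches can re-use space freed by earlier ones. The helper name, table, and statement text here are illustrative:

```python
def batch_ranges(max_id, batch_size):
    """Yield inclusive (lo, hi) id ranges covering 1..max_id, so one
    large UPDATE can run as several smaller committed transactions."""
    lo = 1
    while lo <= max_id:
        hi = min(lo + batch_size - 1, max_id)
        yield (lo, hi)
        lo = hi + 1

# One UPDATE per range; committing between batches is what lets later
# batches re-use rows deleted by earlier ones.
for lo, hi in batch_ranges(1000000, 250000):
    print("UPDATE items SET flag = true WHERE id BETWEEN %d AND %d;" % (lo, hi))
```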
ation... maybe we can have a knowledge exchange.
Cheers
Mike
Mike,
If all you need is for your trigger to make a simple query on another
db then you can use dblink or an untrusted version of one of the
available procedural languages such as plperlu or plpythonu.
(well,
strictly speaking, your inserts into the interim table would be
blocked by any exclusive locks on it but you shouldn't need to ever
do that anyway).
---(end of broadcast)---
t the cache decide when it needs to write and read from disk. If
there are other caveats to direct i/o on Linux I'd love to hear them.
e as before they were
being done in terms of the filesystem's cache page size (as it's
in virtual memory). Basically, we do so many write transactions that
the fs cache was constantly paging.
On Apr 5, 2007, at 1:27 PM, Mark Lewis wrote:
On Thu, 2007-04-05 at 13:09 -0500, Erik Jones wrote:
On Apr 5, 2007, at 12:09 PM, Xiaoning Ding wrote:
Hi,
A page may be double buffered in PG's buffer pool and in OS's buffer
cache.
Other DBMS like DB2 and Oracle have provided Direct
On Apr 5, 2007, at 1:22 PM, Xiaoning Ding wrote:
Erik Jones wrote:
On Apr 5, 2007, at 12:09 PM, Xiaoning Ding wrote:
Hi,
A page may be double buffered in PG's buffer pool and in OS's
buffer cache.
Other DBMS like DB2 and Oracle have provided Direct I/O option to
elimina
od addition). So, in order to
use direct IO with postgres you'll need to consult your filesystem
docs for how to set the forcedirectio mount option. I believe it can
be set dynamically, but if you want it to be permanent you'll need to add
it to your fstab/vfstab file.
't reduce service time too much... As well, even this
gain is quickly decreasing with growing workload! So, yes 8K is good
enough and probably the most optimal choice for LOG (as well data)
block size.
Rgds,
-Dimitri
Hey, man, thanks for taking the time to profile that!
On Mar 30, 2007, at 4:46 PM, Josh Berkus wrote:
Erik,
You're welcome! However, I believe our situation is very different
from what you're testing, if I understand you correctly. Are you
saying that your entire database will fit in memory? If so, then
these are very differen
s logical block size to 4K and fragment size to 512 bytes and
then set postgres's BLCKSZ to 512 bytes. However, as Ken has just
pointed out, what you gain in space efficiency you lose in
performance, so if you're working with a high-traffic database this
wouldn't be a good id
block size
and you want the two to match.
On Mar 29, 2007, at 5:15 PM, Dimitri wrote:
> Erik,
>
> using 'forcedirectio' simply brings your write operations to the
> *real* volume - means while you need to write 10 bytes you'll
write 10
> bytes (instead of UFS block size (8K)). So it explai
On Mar 29, 2007, at 7:23 PM, Josh Berkus wrote:
Erik,
Wow, thanks for the post.
We've just started testing the option of sizing shared_buffers
bigger than
the database, and using forcedirectio in benchmarks at Sun. So
far, our
experience has been *equal* performance in
On Mar 29, 2007, at 2:19 PM, Tom Lane wrote:
Erik Jones <[EMAIL PROTECTED]> writes:
One very important thing to note here is that the number, or rather
rate, of disk writes has not changed. It's the volume of data in
those writes that has dropped, along with those transaction
On Mar 29, 2007, at 12:41 PM, dimitri k wrote:
On 3/29/07, Erik Jones <[EMAIL PROTECTED]> wrote:
On Mar 29, 2007, at 11:16 AM, Tom Lane wrote:
> Erik Jones <[EMAIL PROTECTED]> writes:
>> We've recently made a couple changes to our system that have
resulted
On Mar 29, 2007, at 11:16 AM, Tom Lane wrote:
Erik Jones <[EMAIL PROTECTED]> writes:
We've recently made a couple changes to our system that have resulted
in a drastic increase in performance as well as some very confusing
changes to the database statistics, s
lume of data being written)? Given that we
have the bgwriter on, could it have been the culprit and one of the
changes allowed it to now operate more efficiently and/or correctly?
On Mar 21, 2007, at 4:13 PM, Tobias Brox wrote:
[Erik Jones - Wed at 09:31:48AM -0500]
I use cacti (http://cacti.net) which does the same thing that munin
does but in php instead. Here's what I use to send db stats to it (again,
php):
I haven't tried cacti, but our sysadm has done a
exit();
}
$db_stat_sql = "SELECT {$argv[2]}
FROM pg_stat_database
WHERE datname='$db_name';";
if(DB::isError($db_stat = $db->getOne($db_stat_sql))) {
exit();
}
echo "$db_stat\n";
inspiration) together with pgAgent
(http://www.pgadmin.org/docs/1.4/pgagent.html)
/Mikael
Those are some great functions.
Erik Jones wrote:
Tom Lane wrote:
Erik Jones <[EMAIL PROTECTED]> writes:
That's a good idea, but first I'll still need to run it by my
sysadmin wrt space -- our dump files are around 22GB when we can let
them finish these days.
Given that we're now speculating
Tom Lane wrote:
Erik Jones <[EMAIL PROTECTED]> writes:
That's a good idea, but first I'll still need to run it by my sysadmin
wrt space -- our dump files are around 22GB when we can let them finish
these days.
Given that we're now speculating about regex problems
Tom Lane wrote:
Erik Jones <[EMAIL PROTECTED]> writes:
Guillaume Smet wrote:
Could you set log_min_duration_statement=0 on your server and enable
Heh, unfortunately, setting log_min_duration_statement=0 would be a
total last resort as the last we counted (2 months a
Guillaume Smet wrote:
Erik,
Could you set log_min_duration_statement=0 on your server and enable
logging (tutorial here if you don't know how to do that:
http://pgfouine.projects.postgresql.org/tutorial.html).
You should see which queries are executed in both cases and find the
slow one e
Tom Lane wrote:
Erik Jones <[EMAIL PROTECTED]> writes:
Tom Lane wrote:
I could see this taking an unreasonable amount of time if you had a huge
number of pg_class rows or a very long search_path --- is your database
at all out of the ordinary in those ways?
Well, r
Tom Lane wrote:
Erik Jones <[EMAIL PROTECTED]> writes:
...
sigaction(SIGPIPE, 0x08046E20, 0x08046E70) = 0
send(4, " Q\0\0\0E5 S E L E C T ".., 230, 0) = 230
<--- Hang is
right here!
sigaction(
Tom Lane wrote:
Erik Jones <[EMAIL PROTECTED]> writes:
Hmm... This gets stranger and stranger. When connecting to the
database with the psql client in 8.2's bin directory and using commands
such as \d the client hangs, or takes an extremely long time.
Hangs at what poi
Richard Huxton wrote:
Erik Jones wrote:
Hmm... This gets stranger and stranger. When connecting to the
database with the psql client in 8.2's bin directory and using
commands such as \d the client hangs, or takes an extremely long
time. If we connect to the same 8.2 database with a
ginal Message
Subject:Re: [PERFORM] Slow dump?
Date: Tue, 02 Jan 2007 11:40:18 -0600
From: Erik Jones <[EMAIL PROTECTED]>
To: Tom Lane <[EMAIL PROTECTED]>
CC: pgsql-performance@postgresql.org
References: <[EMAIL PROTECTED]> <[EMAIL PROTECTED]
Tom Lane wrote:
Erik Jones <[EMAIL PROTECTED]> writes:
Hello, we recently migrated our system from 8.1.x to 8.2 and when
running dumps have noticed an extreme decrease in speed where the dump
is concerned (by more than a factor of 2).
That's odd. pg_dump is normally pret
tgres settings:
shared_buffers: 5
work_mem: 8192
maintenance_work_mem: 262144
max_stack_depth: 3048 (default)
There doesn't seem to be any other performance degradation while the
dump is running (which I suppose is good). Any ideas?
Rafael Martinez wrote:
On Wed, 2006-12-06 at 14:19 -0600, Erik Jones wrote:
Rafael Martinez wrote:
On Wed, 2006-12-06 at 14:55 -0500, Ted Allen wrote:
Stephan Szabo wrote:
On Wed, 6 Dec 2006, Rafael Martinez wrote:
mailstats=# EXPLAIN update
((queue_id)::text = '1GrxLs-0004N9-I1'::text)
Filter: (mailhost = '129.240.10.47'::inet)
(6 rows)
---
Look again at the estimated costs of those two query plans. You haven't
gained
ne
update.
power cable, or an accidental power cycle from exercising the wrong
switch. :) Both are probably more common causes of failure than a total
power outage.
Erik Myllymaki wrote:
I have been in discussion with 3ware support and after adjusting some
settings, the 3ware card in RAID 1 gets better
good points, thanks.
Tom Arthurs wrote:
UPS does not protect against the tech behind the rack unplugging the
power cable, or an accidental power cycle from exercising the wrong
switch. :) Both are probably more common causes of failure than a total
power outage.
Erik Myllymaki wrote:
I
a choice of losing all of your data the next time the
system shuts down uncleanly but being fast, or of running slow, or of
fixing the applications to use chunkier transactions.
-- Mark
On Fri, 2006-04-28 at 13:36 -0400, Vivek Khera wrote:
On Apr 28, 2006, at 11:37 AM, Erik Myllymaki wrote:
When
This is a question that I also posted on Dell hardware forums, and I realize it
probably belongs there more than here. But I am thinking someone might have
some anecdotal information that could help me and this post may help someone
else down the road.
My PowerEdge 1800 (dual 3ghz Xeon, 3GB ra
There's got to be a better way than a
sequence scan on 10,000,000 records, but I'll be darned if I can find
any way to improve things here.
Thanks for any help you all can offer!!
Erik Norvelle
croseconds
#commit_siblings = 5          # range 1-1000
#
fsync = false
#wal_sync_method = fsync      # the default varies across platforms:
#                             # fsync, fdatasync, open_sync, or open_datasync
#wal_debug = 0                # range 0-16
=== End Snip ===
Sal
b, refere3a, refere3b),
indethom_sectio_ndx btree (nomeoper, refere1a, refere1b, refere2a, refere2b, refere3a, refere3b, refere4a, refere4b),
verbum_ndx btree (verbum)
Thanks for your assistance!
-Erik Norvelle
The ANALYSE did the trick... Thanks! Will also read through the
archives...
-Erik
On Monday, Dec 1, 2003, at 15:04 Europe/Madrid, Andrew Sullivan wrote:
On Mon, Dec 01, 2003 at 02:40:30PM +0100, Erik Norvelle wrote:
it=> explain select * from indethom where clavis < 25;
What's the
to point me to any FAQ or previous message that already answers this question. Thanks in advance!
-Erik Norvelle