Is anyone able to give me some insight or comments on my assessment - is it accurate?
Any input would be helpful, and I'll try to make necessary architectural
changes to keep this from happening again.
Do you have WAL archiving enabled? (If so, let's see your archive_command.)
Cheers
Mark
--
let's
have that then!
I've certainly observed a 'fear of package installation' on the part of
some folk, which is often a hangover from the 'Big IT shop' mentality
where it requires blood signatures and child sacrifice to get anything
new installed.
regards
Mark
P.s: Is ext4 stable enough now? Also xfs has seen quite a bit of
development in these later kernels, any thoughts on that?
Cheers
Mark
P.s: We are quite keen to move away from ext3, as we have encountered
its tendency to hit a wall under heavy load and leave us waiting for
kjournald and pdflush to catch up.
We are running 8.3.10 64bit.
This message is a request for information about the "initplan" operation in
explain plan.
I want to know if I can take advantage of it, and use it to initialize
query-bounds for the purpose of enforcing constraint exclusion on a table which
has been range-partitioned.
Sent: Sunday, August 01, 2010 7:08 AM
To: Mark Rostron
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] what does "initplan" operation in explain output mean?
Mark Rostron writes:
> This message is a request for information about the "initplan" operation
We are running 8.3.10 64bit.
Compare the plans below.
They all do the same thing and delete from a table named work_active (about
500 rows), which is a subset of work_unit (about 50m rows).
I want to introduce range-partitions on work_unit.id column (serial pk), and I
want constraint exclusion to work.
e.g. suggesting that there was no benefit in having the
latter > 10MB). I wonder about setting shared_buffers higher - how large
is the database?
Cheers
Mark
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
On 06/08/10 11:58, Alan Hodgson wrote:
On Thursday, August 05, 2010, Mark Kirkwood
wrote:
Normally I'd agree with the others and recommend RAID10 - but you say
you have an OLAP workload - if it is *heavily* read biased you may get
better performance with RAID5 (more effective disks to read from).
On 06/08/10 12:31, Mark Kirkwood wrote:
On 06/08/10 11:58, Alan Hodgson wrote:
On Thursday, August 05, 2010, Mark
Kirkwood
wrote:
Normally I'd agree with the others and recommend RAID10 - but you say
you have an OLAP workload - if it is *heavily* read biased you may get
better performance with RAID5 (more effective disks to read from).
This is weird - is there a particular combination of memberid/answered in
answerselectindex that has a very high rowcount?
The first change I would suggest looking into is changing the sub-query
logic to check existence, limiting the result set of the sub-query to a
single row:
Select dist
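The suggested query above is snipped after "Select dist..."; an existence-check rewrite along those lines might look like this (the column names follow the thread, but the outer table name is a guess):

```sql
-- Hypothetical shape of the rewrite: EXISTS lets the planner stop
-- scanning answerselectindex after the first matching row, instead of
-- materializing every (memberid, answered) match for an IN list.
SELECT m.*
FROM members m
WHERE EXISTS (
    SELECT 1
    FROM answerselectindex a
    WHERE a.memberid = m.id
      AND a.answered = true
);
```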
[mailto:aburn...@bzzagent.com]
Sent: Monday, August 16, 2010 7:20 PM
To: Mark Rostron; pgsql-performance@postgresql.org
Subject: RE: Very poor performance
Thanks Mark,
Yeah, I apologize, I forgot to mention a couple of things.
m.id is the primary key but the biggest problem is that the query loops 626410 times.
suggest setting effective_cache_size
to 15GB (not 15MB)
Cheers
Mark
of the 1st point happen here a while ago, symptoms looked
very like what you are describing.
Re index size, you could try indexes like:
some_table(a)
some_table(b)
which may occupy less space, and the optimizer can bitmap and/or them to
work like the compound index some_table(a,b).
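For example (using the placeholder names above):

```sql
CREATE INDEX some_table_a_idx ON some_table (a);
CREATE INDEX some_table_b_idx ON some_table (b);

-- For a query such as
--   SELECT * FROM some_table WHERE a = 1 AND b = 2;
-- the planner can scan both indexes and combine the results with a
-- BitmapAnd node, approximating the compound index some_table(a, b).
```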
regards
Mark
but could possibly be simpler and more flexible.
regards
Mark
On 30/09/10 01:09, Tobias Brox wrote:
With the most popular trans type it chose another plan and it took
more than 3s (totally unacceptable):
Try tweaking effective_cache_size up a bit and see what happens - I've
found these bitmap plans to be sensitive to it sometimes.
regards
non-overwriting storage manager -
Mysql will update in place and you will not see this.
Try VACUUM FULL on the table and retest.
regards
Mark
example table (estimated at 2GB) should be able to
be counted by Postgres in about 3-4 seconds...
This assumes a more capable machine than you are testing on I suspect.
Cheers
Mark
On 13/10/10 21:44, Mladen Gogala wrote:
On 10/13/2010 3:19 AM, Mark Kirkwood wrote:
I think that major effect you are seeing here is that the UPDATE has
made the table twice as big on disk (even after VACUUM etc), and it has
gone from fitting in ram to not fitting in ram - so cannot be
'Visibility Map').
regards
Mark
Hey
Turned on log_min_duration_statement today and started getting timings on sql
statements (version 8.3.10).
Can anyone please tell me how to interpret the (S_nn/C_nn) information in the
log line.
LOG: duration: 19817.211 ms execute S_73/C_74: (statement text) .
Thanks for your time.
has been implemented in the kernel for ages. I guess you were
wanting to stress that *open_datasync* is the new kid, so watch out to
see if he bites...
Cheers
Mark
Question regarding the operation of the shared_buffers cache and implications
of the pg_X_stat_tables|pg_X_stat_indexes stats.
( I am also aware that this is all complicated by the kernel cache behavior,
however, if, for the purpose of these questions, you wouldn't mind assuming
that we don't ha
> >
> > What is the procedure that postgres uses to decide whether or not a
> > table/index block will be left in the shared_buffers cache at the end
> > of the operation?
> >
>
> The only special cases are for sequential scans and VACUUM, which
> continuously re-use a small section of the buffer cache.
On 10/11/10 22:10, Mark Kirkwood wrote:
What might also be interesting is doing each INSERT with an array-load
of bind variables appended to the VALUES clause - as this will only do
1 insert call per "array" of values.
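A sketch of the idea in plain SQL (the table and columns are borrowed from this thread; the attached Perl program would bind the values as parameters rather than writing them inline):

```sql
-- One INSERT call carrying an "array" of rows:
INSERT INTO drones_history (drone_id, drone_temperature)
VALUES (1, 20.1),
       (2, 19.7),
       (3, 21.3);
-- ...instead of three separate single-row INSERT statements,
-- saving two client/server round trips.
```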
This is probably more like what you were expecting:
Cheers
Mark
execinsert.pl
Description: Perl program
gs are done.
Also probably worthwhile is telling us the table definitions of the
tables concerned.
For Postgres - did you run ANALYZE on the database concerned before
running the queries? (optimizer stats are usually updated automatically,
but if you were quick to run the queries after loading the data they
might not have been).
regards
Mark
you have not done so.
Also it would be worthwhile for you to post the output of:
EXPLAIN ANALYZE INSERT INTO drones_history (sample_id, drone_id,
drone_log_notice, drone_temperature, drone_pressure)
SELECT * FROM tmpUpdate;
to the list, so we can see what is taking the time.
Cheers
Mark
On 30/11/10 05:53, Pierre C wrote:
Yes, since (sample_id, drone_id) is primary key, postgres created
composite index on those columns. Are you suggesting I add two more
indexes, one for drone_id and one for sample_id?
(sample_id, drone_id) covers sample_id, but if you make searches on
drone_id alone the composite index will not help.
DESC;
to show up potentially troublesome amounts of bloat.
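The query itself was snipped above; one common shape using the statistics views is (a sketch, not necessarily the original):

```sql
-- Tables with the most dead rows float to the top.
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
```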
regards
Mark
re out of Chicago. If anyone
knows what I'm talking about please share the link. Either way, it seems
that people are actually doing money transactions on FusionIO, so you can
either take that as comforting reassurance or you can start getting really
nervous about the stock market :-)
Regards
On 26/01/11 07:28, Josh Berkus wrote:
One question: in 8.3 and earlier, is the FSM used to track dead_rows for
pg_stat_user_tables?
If I'm understanding you correctly, ANALYZE is the main guy
tracking/updating the dead row count.
regards
Mark
vouch for the effectiveness of the procedure. Did anyone play with that?
Any positive or negative things to say about shake?
Why do you feel the need to defrag your *nix box?
Regards,
Mark
On 01/02/11 07:27, Josh Berkus wrote:
Robert, Mark,
I have not been able to reproduce this issue in a clean test on 9.0. As
a result, I now think that it was related to the FSM being too small on
the user's 8.3 instance, and will consider it resolved.
Right - it might be interesting t
On 01/02/11 10:57, Scott Marlowe wrote:
On Mon, Jan 31, 2011 at 11:27 AM, Josh Berkus wrote:
Robert, Mark,
I have not been able to reproduce this issue in a clean test on 9.0. As
a result, I now think that it was related to the FSM being too small on
the user's 8.3 instance, and
On 31/01/11 17:38, Mladen Gogala wrote:
Mark Felder wrote:
Why do you feel the need to defrag your *nix box?
Let's stick to the original question and leave my motivation for some
other time. Have you used the product? If you have, I'd be happy to
hear about your experience.
area already, what's the best way
to put multiple cores to use when running repeated SELECTs with PostgreSQL?
Thanks!
Mark
On 02/03/2011 10:54 AM, Oleg Bartunov wrote:
> Mark,
>
> you could try gevel module to get structure of GIST index and look if
> items distributed more or less homogenous (see different levels). You
> can visualize index like http://www.sai.msu.su/~megera/wiki/Rtree_Index
> Also
(e.g. work_mem, effective_cache_size) as
the application dataset gets bigger over time. I would note that this is
*more* likely to happen with hints, as they lobotomize the optimizer so
it *cannot* react to dataset size or distribution changes.
regards
Mark
On 04/02/11 11:08, Josh Berkus wrote:
I don't think that's actually accurate. Can you give me a list of
DBMSes which support hints other than Oracle?
DB2 LUW (Linux, Unix, Win32 code base) has hint profiles:
http://justdb2chatter.blogspot.com/2008/06/db2-hints-optimizer-selection.html
On 04/02/11 13:49, Jeremy Harris wrote:
On 2011-02-03 21:51, Mark Kirkwood wrote:
The cases I've seen in production typically involve "outgrowing"
optimizer parameter settings: (e.g work_mem, effective_cache_size) as
the application dataset gets bigger over time.
An argument i
committed, or because they are not yet vacuumed.
Would somebody in the know please confirm the above understanding for my
own peace of mind?
Thanks,
mark
--
Mark Mielke
s).
Part of our case is likely fairly common *today*: many servers are
multi-core now, but people don't necessarily understand how to take
advantage of that if it doesn't happen automatically.
Mark
community steps in to help you solve it (and
I'd bet it will be solved very quickly indeed).
Best wishes
Mark
not to disable autovacuum, making dead
rows insignificant in the grand scheme of things. I haven't specifically
noticed any performance problems here - PostgreSQL is working great for
me as usual. Just curiosity...
Cheers,
mark
--
Mark Mielke
If Oracle can patch their query planner for you in 24 hours, and you
can apply patch with confidence against your test then production
servers in an hour or so, great. Til then I'll stick to a database
that has the absolutely, without a doubt, best coder support of any
project I've ever used.
Hi
My question is: Was there any major optimizer change between 8.3.10 to
8.3.14?
I'm getting a difference in explain plans that I need to account for.
We are running production pg8.3.10, and are considering upgrading to 8.4.x
(maybe 9.0), because we expected to benefit from some of the improvements.
I found the difference.
Random_page_cost is 1 in the production 8.3.10, I guess weighting the decision
to use "index scan".
Thanks for the replies, gentlemen.
> If you diff the postgresql.conf files for both installs, what's different?
In the list below, 8.3.10 parameter value is in the clear, (
> It would be easier to suggest what might be wrong if you included "EXPLAIN
> ANALYZE" output instead of just EXPLAIN.
> It's not obvious whether 8.3 or 8.4 is estimating things better.
Thanks for the reply.
Turns out random_page_cost was set low in the 8.3.10 version - when I reset it
to 4 (the default)
nning.
The strange thing is that this started after my database grew by about 25%
after a large influx of data due to user load. I'm wonder if there is a
tipping
point or a config setting I need to change now that the db is larger that
is
causing all this to happen.
Thanks,
Mark
us searches,
are others planning to eventually apply the KNN work to US zipcode
searches?
Sample EXPLAIN output and query times are below.
Mark
EXPLAIN ANALYZE SELECT zipcode,
lon_lat <-> '(-118.412426,34.096629)' AS radius
FROM zipcodes ;
-
coordinates based on the lat/long pairs
much like the map projections used to present a curved surface on a flat
map? Given that it's OK to be a few miles off, it seems we have some
leeway here.
Recommendations?
Mark
EXPLAIN ANALYZE
SELECT zipcode,
cube_distance(
http://search.cpan.org/dist/PDL/
And a Wikipedia page on various calculation possibilities:
http://en.wikipedia.org/wiki/Geographical_distance#Flat-surface_formulae
Further suggestions welcome.
Thanks,
Mark
I tried again to use KNN for a real-world query, and I was able to get
it to add an approximately 6x speed-up vs the cube search or
earthdistance methods ( from 300 ms to 50ms ).
I had to make some notable changes for the KNN index to be considered.
- Of course, I had to switch to using basic points.
On 02/17/2011 03:17 PM, Oleg Bartunov wrote:
> Mark,
>
> we investigating pgsphere http://pgsphere.projects.postgresql.org/, if
> we could add KNN support.
Great, thanks Oleg.
I'll be happy to test it when something is ready.
Mark
system:
$ make --version
GNU Make 3.81
Copyright (C) 2006 Free Software Foundation, Inc.
regards
Mark
$ ./configure --prefix=your-chosen-install-prefix-here
$ make
$ make install
$ make check
The last step runs the regression test.
regards
Mark
P.s: this discussion really belongs on pg-general rather than
performance, as it is about building and installing postgres rather than
performance, *when* you have it installed.
much faster if it did an index scan on each of the child
tables and merged the results.
I can achieve this manually by rewriting the query as a union between
queries against each of the child tables. Is there a better way? (I'm
using PostgreSQL 8.4 with PostGIS 1.4).
Regards,
Mark Thornton
On 04/03/2011 16:07, Robert Haas wrote:
On Fri, Mar 4, 2011 at 6:40 AM, Mark Thornton wrote:
I can achieve this manually by rewriting the query as a union between
queries against each of the child tables. Is there a better way? (I'm using
PostgreSQL 8.4 with PostGIS 1.4).
Can you pos
short time (at least in my current tests).
Mark
months plus a single
partition for 'ancient history', but then you have to transfer the
content of the oldest month to ancient each month and change the
constraint on 'ancient'.
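A sketch of that monthly rotation (table and constraint names are invented, and the partitioning column is assumed to be a date):

```sql
-- Move the oldest month into 'ancient', then drop the emptied partition.
INSERT INTO log_ancient SELECT * FROM log_2010_01;
DROP TABLE log_2010_01;

-- Widen the CHECK constraint on 'ancient' to cover the moved month,
-- so constraint exclusion still works.
ALTER TABLE log_ancient DROP CONSTRAINT log_ancient_ck;
ALTER TABLE log_ancient ADD CONSTRAINT log_ancient_ck
    CHECK (created_on < DATE '2010-02-01');
```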
Mark
check and make sure RH backported whatever the
fix was to their current RHEL4 kernel.
Thanks,
Mark Lewis
---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED]
If this is a query that will be executed more than once, you can also
avoid incurring the planning overhead multiple times by using PREPARE.
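For example (the statement and parameter are placeholders):

```sql
-- Plan once:
PREPARE fetch_item (int) AS
    SELECT * FROM items WHERE item_id = $1;

-- Then execute repeatedly without re-planning:
EXECUTE fetch_item(42);
EXECUTE fetch_item(43);

-- DEALLOCATE fetch_item;  -- when finished (or at session end)
```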
-- Mark Lewis
On Wed, 2006-01-11 at 18:50 -0500, Jean-Philippe Côté wrote:
> Thanks a lot for this info, I was indeed exceeding the genetic
> optim
just once. Basically, I'm not clear on the
definition of "surrounding query" in the following excerpt from the PostgreSQL
documentation:
A STABLE function cannot modify the database and is guaranteed to return the
same results given the same arguments for all calls within a single
surrounding query.
other users may need to wait for the connection, and another connection
won't do.
3. If this is a busy web site, you might end up with potentially many
thousands of open cursors. I don't know if this introduces an
unacceptable performance penalty or other bottleneck in the server.
nough". In PostgreSQL, historical
rows are kept in the tables themselves and periodically vacuumed, so
there is no such guarantee, which means that you would need to either
implement a lot of complex locking for little material gain, or just
hold the cursors in moderately long-running transactions.
be a little careful about whether to use '>' or '>='
depending on whether 'id' is unique or not (to continue using '>' in the
non unique case, you can just save and use all the members of the
primary key too).
Cheers
Mark
Tom Lane wrote:
Mark Kirkwood <[EMAIL PROTECTED]> writes:
SELECT ... FROM table WHERE ... ORDER BY id LIMIT 20;
Suppose this displays records for id 1 -> 10020.
When the user hits next, and page saves id=10020 in the session state
and executes:
SELECT ... FROM table WHERE id > 10020 ORDER BY id LIMIT 20;
ue for 8.2.
It's called 'pg_freespacemap' and is available for 8.1/8.0 from the
Pgfoundry 'backports' project:
http://pgfoundry.org/projects/backports
Cheers
Mark
want to upgrade as soon as possible, and refer to the
on-line docs about what to do with your FSM settings.
-- Mark Lewis
On Mon, 2006-01-30 at 23:57 +0100, Emmanuel Lacour wrote:
> Hi everybody,
>
> I have the following problem, on a test server, if I do a fresh import
> of pro
I used to build
a 'go for coffee' task into the build and test cycle.
Cheers
Mark
Machine 1: $2000
Machine 2: $2000
Machine 3: $2000
Knowing how to rig them together and maintain them in a fully fault-
tolerant way: priceless.
(Sorry for the off-topic post, I couldn't resist).
-- Mark Lewis
On Wed, 2006-02-15 at 09:19 -0800, Craig A. James wrote:
> Jeremy Hai
could always use f(x)=0 as the
default sortKey function which would degenerate to the exact same sort
behavior in use today.
-- Mark Lewis
could see doing it for char(n)/varchar(n) where n<=4 in SQL_ASCII though.
In SQL_ASCII, just take the first 4 characters (or 8, if using a 64-bit
sortKey as elsewhere suggested). The sorting key doesn't need to be a
one-to-one mapping.
-- Mark Lewis
--
hashcodes; the same value will always have the same hash, but you're not
guaranteed that the hashcodes for two distinct values will be unique.
-- Mark
, the plan is prepared each time. I know there is some minimal overhead of preparing the plan each time, but it seems like it's minor compared to the savings you'll get.
- Mark
n a text variable and then Executing it.
Prior to that, however, you might try just recreating the function. The plan may be re-evaluated at that point.
- Mark
with 2
disk RAID0 does reads @110MB/s).
cheers
Mark
re comparing like to like, and as such terrible results on such a
simple test are indicative of something 'not right'.
regards
Mark
P.s. FWIW - I'm quoting a test from a few years ago - the (same) machine
now has 4 RAID0 ata disks and does 175MB/s on the same test
=50
50 records in
50 records out
4096000000 bytes transferred in 24.067298 secs (170189442 bytes/sec)
Ok - didn't quite get my quoted 175MB/s, (FWIW if bs=32k I get exactly
175MB/s).
Hmmm - a bit humbled by Luke's machinery :-), however, mine is probably
competitive on (MB/s)/$.
Luke Lonergan wrote:
Mark,
Hmmm - a bit humbled by Luke's machinery :-), however, mine is probably
competitive on (MB/s)/$
Not sure - the machines I cite are about $10K each. The machine you tested
was probably about $1500 a few years ago (my guess), and with a 5:1 ratio in
:
CREATE INDEX table_name ON table (name varchar_pattern_ops);
cheers
Mark
Upgrade to 8.1.(3), then the planner can consider paths that
use *both* the indexes on srcobj and dstobj (which would probably be the
business!).
Cheers
Mark
ers yet. It's not for load balancing, just
active/passive fault tolerance.
-- Mark Lewis
You'll get considerably improved concurrent filesystem
access on your dual Xeon.
Regards
Mark
00M for Postgres to cache relation
pages, and informing the planner that it can expect ~200M available from
the disk buffer cache. To give a better recommendation, we need to know
more about your server and workload (e.g server memory configuration and
usage plus how close you g
write operations. However 'Buf' is restricted to a
fairly small size (various sysctls), so really only provides a lower
bound on the file buffer cache activity.
Sorry to not really answer your question Scott - how are Linux kernel
buffers actually defined?
Cheers
Mark
Mark Kirkwood wrote:
I think Freebsd 'Inactive' corresponds pretty closely to Linux's
'Inactive Dirty'|'Inactive Laundered'|'Inactive Free'.
Hmmm - on second thoughts I think I've got that wrong :-(, since in
Linux all the file buffe
800MB.
I was going to recommend higher - but not knowing what else was running,
kept it to quite conservative :-)... and given he's running java, the
JVM could easily eat 512M all by itself!
Cheers
Mark
e they can be operated on - a relatively cheap operation).
So its really all about accounting, in a sense - whether pages end up in
the 'Buf' or 'Inactive' queue, they are still cached!
Cheers
Mark
For a comparison,
I guess you could use hdparm (-t or -T flags do a simple benchmark).
Though iozone or bonnie++ are probably better.
Cheers
Mark
Can you post an explain analyze for the delete query? That will at
least tell you if it is the delete itself which is slow, or a trigger /
referential integrity constraint check. Which version of PG is this?
-- Mark Lewis
On Wed, 2006-03-29 at 12:58 -0500, Eric Lauzon wrote:
> Greeti
a lot more standards compliant)... but
sheesh - what a difference!
Well yes - however, to be fair to the Mysql guys, AFAICS the capture and
display of index stats (and any other optimizer related data) is not
part of any standard.
Cheers
Mark
fsync(2) on a preexisting file
is not changed by softupdates being on or off.
Cheers
Mark
forcedirectio
(which is the default I think).
I suspect that making a *separate* filesystem for the pg_xlog directory
and mounting that logging + forcedirectio would be a nice way to also
get performance while keeping the advantages of logging + file
buffercache for the *rest* of the postgres components.
you only
want it on $PGDATA/pg_xlog. The usual way this is accomplished is by
making a separate filsystem for pg_xlog and symlinking from $PGDATA.
Did you try the other option of remounting the fs for $PGDATA without
logging or forcedirectio?
Cheers
Mark
Chris Mair wrote:
(but note the other mail about wal_sync_method = fsync)
Yeah - looks good! (is the default open_datasync still?). Might be worth
trying out the fdatasync method too (ISTR this being quite good... again
on Solaris 8, so things might have changed)!
Cheers
Mark
Supermicro recently brought out some Opteron systems,
they are hiding them here:
http://www.supermicro.com/Aplus/system/
The 4U's have 8 SATA/SCSI drive bays - maybe still not enough, but
better than 6!
Cheers
Mark
interrogate the os file
buffer cache, that's a different story - tho I've been toying with doing
a utility for Freebsd that would do this).
Cheers
Mark
standard tool is lacking in this
regard...).
Cheers
Mark