-- Start a new shell, and connect again and do exactly the same thing
-- as the above.
-- It should hang until you do an END or ROLLBACK in the first
-- connection.
Thanks for any help,
John
=:->
Alvaro Herrera wrote:
> On Thu, Aug 11, 2005 at 03:36:31PM -0500, John A Meinel wrote:
>
>>I'm having an odd case where my system is locking such that if I insert
>>into a table during a transaction, and then start a new connection and
>>transaction, it blocks while trying
map a really big file, and access every byte.
But if your kernel is smart enough, it could certainly deallocate pages
after you stop accessing them, so I can't say for sure that you can
flush the memory cache. Usually, I believe these methods are sufficient.
John
=:->
Alvaro Herrera wrote:
> On Thu, Aug 11, 2005 at 03:36:31PM -0500, John A Meinel wrote:
>
...
>
> This is a known problem, solved in 8.1. A workaround for previous
> releases is to defer FK checks until commit:
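A sketch of what that workaround looks like (my illustration, not the original
poster's text; it assumes the foreign-key constraint was declared DEFERRABLE,
and the table and values are hypothetical):

BEGIN;
SET CONSTRAINTS ALL DEFERRED;   -- postpone FK checks until COMMIT
INSERT INTO child_table (parent_id, val) VALUES (1, 'x');
-- ... more work in the same transaction ...
COMMIT;                         -- the FK checks run here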
So I don't know exactly what the fix was, but I just teste
n. Even if you create a view per user, if your PHP
layer has the right to look at other tables/views, it doesn't really help.
Good luck,
John
=:->
>
> Thanks for any reply :-)
>
> ---
> Petr Kavan
> Database Development
cod;
(You may or may not want ORDER BY or GROUP BY; try the different
combinations.)
It might be possible to have the planner realize that all you want is
unique rows; just doing a GROUP BY doesn't give you that.
John
=:->
>
>Thanks for help.
>
>Stéphane COEZ
>
>
>
Steinar H. Gunderson wrote:
>On Sun, Aug 14, 2005 at 07:27:38PM -0500, John Arbash Meinel wrote:
>
>
>>My guess is that this is part of a larger query. There isn't really much
>>you can do. If you want all 3.2M rows, then you have to wait for them to
>>be pulled
One thing you are probably hitting is a lot of sequential scans on the
main table.
If you are doing mostly inserting, make sure you are in a transaction,
and think about doing a COPY.
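A minimal sketch of both suggestions together (the table, columns, and file
path here are hypothetical):

BEGIN;
COPY mytable (col_a, col_b) FROM '/tmp/data.csv' WITH CSV;
COMMIT;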
There is a lot more that can be said, we just need to have more
information about what you want.
John
=:->
>
> Ulrich
>
>
>
st restore back to the point from 5 minutes ago, or does it get
more thoroughly messed up?
For some people, a 5-minute-old restore would be okay, as long as you
still have transaction safety, so that you can figure out what needs to
be restored.
John
=:->
t start until the first actually
finished.
Writing large buffers hides all of these seek/spin-based latencies, so
you can get really good throughput. But a lot of DB activity is small
buffers randomly distributed, so you really do need low seek times, and
for that RAID10 is probably better than RAID5.
John
=:->
on.
But small writes followed by an fsync do favor RAID10 over RAID5.
John
=:->
Tom Lane wrote:
> John A Meinel <[EMAIL PROTECTED]> writes:
>
>>Alvaro Herrera wrote:
>>
>>>I've been asked this a couple of times and I don't know the answer: what
>>>happens if you give XLog a single drive (unmirrored single spindle), and
&g
ntial scan, or
something else is going on. 5 sec is a really long time for the type of
query you are doing.
Oh, and can you run the win32 psql client to see if it might be ODBC
which is causing the problem?
John
=:->
>
> Query for the search criteria is
> *select * from tbmstdeb
Couldn't you use "INTERSECT" then? To only get the rows that *both*
queries return?
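For example (a hypothetical illustration of the idea, not the poster's actual
tables):

SELECT id FROM orders WHERE status = 'open'
INTERSECT
SELECT id FROM orders WHERE amount > 100;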
John
=:->
>
> Servus
> Manfred
>
>
> ---(end of broadcast)---
> TIP 5: don't forget to increase your free space map settings
>
= 'C' # locale for monetary formatting
> lc_numeric = 'C' # locale for number formatting
> lc_time = 'C' # locale for time formatting
>
John
=:->
erver may need to follow several
pointers in a list to reach the right version. Version records need to
be kept in the version store only as long as there are
operations that might require them.
John
=:->
Jeremiah Jahn wrote:
>On Wed, 2005-08-17 at 21:21 -0500, John A Meinel wrote:
>
>
>>Jeremiah Jahn wrote:
>>
>>
>>>I just put together a system with 6GB of ram on a 14 disk raid 10 array.
>>>When I run my usual big painful queries, I get very litt
7;::text = (id)::text)
>-> Index Scan using case_speed on case_data
> (cost=0.00..5.29 rows=3 width=53) (actual time=3.049..3.058 rows=1
> loops=4906)
> Index Cond: (('IL081025J'::text =
> (case_data.court_ori)::text) AND ((case_data.case_id)::text =
> ("outer".case_id)::text))
John
=:->
If you post your
query, a lot of times people here can help optimize it. (But
make sure to explain what you are trying to do, so the optimizations
make sense.)
John
=:->
Jeremiah Jahn wrote:
> Sorry about the formatting.
>
> On Thu, 2005-08-18 at 12:55 -0500, John Arbash Meinel wrote:
>
>>Jeremiah Jahn wrote:
>>
>>
...
>>The expensive parts are the 4915 lookups into the litigant_details (each
>>one takes approx 4ms
Ron wrote:
> At 01:18 PM 8/19/2005, John A Meinel wrote:
>
>> Jeremiah Jahn wrote:
>> > Sorry about the formatting.
>> >
>> > On Thu, 2005-08-18 at 12:55 -0500, John Arbash Meinel wrote:
>> >
>> >>Jeremiah Jahn wrote:
>> >>
&
Jeremiah Jahn wrote:
> On Fri, 2005-08-19 at 12:18 -0500, John A Meinel wrote:
>
>>Jeremiah Jahn wrote:
>>
...
>>
>>Well, in general, 3ms for a single lookup seems really long. Maybe your
>>index is bloated by not vacuuming often enough. Do you tend to get a
memory usage? Can you give the output
of that command, just to make sure you are reading it correctly.
John
=:->
, I may be wrong about whether the indexes are cached, but I sure
would expect them to be.
What is the time for a cached query on your system (with normal nested
loops)? (give the EXPLAIN ANALYZE for the *second* run, or maybe the
fourth).
I'm glad that we aren't seeing something
Ron wrote:
At 02:53 PM 8/20/2005, Jeremiah Jahn wrote:
On Fri, 2005-08-19 at 16:03 -0500, John A Meinel wrote:
> Jeremiah Jahn wrote:
> > On Fri, 2005-08-19 at 12:18 -0500, John A Meinel wrote:
> >
>
> > it's cached alright. I'm getting a read rate of
Jeremiah Jahn wrote:
On Sat, 2005-08-20 at 21:32 -0500, John A Meinel wrote:
Ron wrote:
At 02:53 PM 8/20/2005, Jeremiah Jahn wrote:
Well, since you can get a read of the RAID at 150MB/s, that means that
it is actual I/O speed. It may not be cached in RAM. Perhaps you could
try the same
it for you.
>
>
> 12GB is my max. I may run with it for a while and see.
If your working set truly is 10GB, then you can get a massive
performance increase even at 12GB. If your working set is 10GB and you
have 6GB of RAM, it probably is always swapping out what it
ing.
This is important from a planner perspective, because the planner can then
expect that the OS is doing its job and caching the tables, so index
scans are cheaper than they would be otherwise.
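The parameter being discussed here is most likely effective_cache_size; that
is my assumption, and the value below is purely illustrative (in the 8.0 era
it is measured in 8 kB disk pages):

-- Per session for testing; normally set in postgresql.conf:
SET effective_cache_size = 262144;   -- roughly 2 GB of expected OS cache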
John
=:->
>
> So kindly help me in pointing me to the correct
> parameter to set.
>
> It
, then deleting from the
referenced table, will cause a sequential scan on the referring table
for *each* deleted row. (IIRC).
John
=:->
>
> Thanks for your (and anyone else's) help!
> Martin Nickel
sense. This may or may not be your problem, without
knowing more about your setup. But the symptoms seem similar.
John
=:->
istic_id IS NOT NULL
>
Why can't you do:
DELETE FROM statistics_sasme s USING temp_table t
WHERE s.statistic_id = t.statistic_id;
Or possibly:
DELETE FROM statistics_sasme s
WHERE s.statistic_id IN (SELECT t.statistic_id FROM temp_table t);
I'm not sure how delete exactly works with joins, but the IN form should
be approximately correct.
John
=:->
t;
The recent *cheap* version of a ramdisk had battery backup for 16 hours.
(Very expensive ramdisks actually have enough battery power to power a
small hard drive to dump the contents onto.)
I'm guessing that for a RAID controller, the time would be in the 1-day
range at most.
John
=:->
fewer tables which house more rows each?
If all data is on a single drive, you are nowhere near needing a cluster
to improve your database. What you need is a 14-drive RAID array. It's
probably cheaper than 4x powerful machines, and will provide you with
much better performance. An
ageous, as
when postgres reads a page, it only reads a single stripe. So if it were
reading a series of pages, each one would come from a different disk.
I may be wrong about that, though.
John
=:->
>
> Am i right ?
S
> on C.SupplierId = S.SupplierId
>
> left outer join Price P
> on C.PriceId = P.PriceId;
>
> Any ideas why it's slower?
You really have to post the results of "EXPLAIN ANALYZE", not just
EXPLAIN, so that we can tell what the planner is expecting versus what
reall
nt to show us your schema.
John
=:->
Does the psql executable have any ability to do a "fetch many", using a
server-side named cursor, when returning results? It seems like it tries to
retrieve the query result entirely into local memory before printing to standard out.
Specifically, I've tried running the following command on my desktop, w
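Two ways this is commonly handled (a sketch with a hypothetical table and
cursor name; the FETCH_COUNT variable assumes psql 8.2 or later):

-- Let psql fetch in batches:
--   \set FETCH_COUNT 1000
--   SELECT * FROM big_table;

-- Or do it by hand with a server-side cursor:
BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM big_table;
FETCH 1000 FROM big_cur;   -- repeat until no rows come back
CLOSE big_cur;
COMMIT;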
On 07/29/10 2:58 PM, Dino Vliet wrote:
Dear postgresql list,
I am having some trouble generating data for an analysis task at hand.
I have a table (table A) containing 5 million records and 28
attributes. This table is 461MB if I copy it to a CSV file.
I want to create another
oad:
http://www.fusionio.com/products/iodrive/ - BEST in slot currently IMHO.
http://www.intel.com/design/flash/nand/extreme/index.htm?wapkw=(X25-E) - not a
bad alternative.
There are other SSD controllers on the market but I have experience with both
so I can recommend both as well.
- J
over 600MB/sec on these for a whopping cost of $400 and a little
of my time.
SSDs can be very useful, but you have to be aware of the shortcomings and how
to overcome them.
- John
-Original Message-
From: Marti Raudsepp [mailto:ma...@juffo.org]
Sent: Thursday, December 09, 2010
How can you tell when your indexes are starting to get bloated and when you
need to rebuild them? I haven't seen a quick way to tell, and I'm not sure if
it's being tracked.
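One rough way to check is to compare the on-disk size before and after a
rebuild (a sketch with a hypothetical index name; pg_relation_size exists in
8.1 and later):

SELECT pg_relation_size('my_index');   -- size including any bloat
REINDEX INDEX my_index;
SELECT pg_relation_size('my_index');   -- freshly packed size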
___
| John
they would like to see done please let me
know.
- John
-Original Message-
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Przemek Wozniak
Sent: Thursday, December 23, 2010 11:38 AM
To: pgsql-performance@postgresql.org
Subject
ve attached the randwrite performance test
below; as you can see, going from 8K -> 32K -> 1M block sizes the drives really
start to move.
Thanks,
- John
[v025...@athenaash05 /]$ fio --filename=/fusionIO/export1/test1 --size=25G
--bs=8k --direct=1 --rw=randwrite --numjobs=8 --runtime=30 --gr
This has gotten a lot better with the 2.x drivers as well.
I'm completely aware of the FusionIO and its advantages/disadvantages. I'm
currently getting the following pgbench results, but still only hitting the
array for about 800MB/sec, short of the 3GB/sec that it's capable of. This is
simpl
Test.
Sorry, trying to figure out why my email is getting formatted to bits when
posting to the list.
- John
-Original Message-
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Strange, John W
Sent: Tuesday, January 04, 2011 1:01 PM
To
d the word fusionIO :)
- John
-Original Message-
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Lars
Sent: Tuesday, January 18, 2011 4:57 AM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Migrating to Postgresql an
segments
temporarily during bulk data loads, the number of checkpoints that are required
can be reduced.
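A sketch of what such a load looks like (table and file path are hypothetical;
checkpoint_segments is the pre-9.5 setting, raised temporarily in
postgresql.conf, e.g. checkpoint_segments = 64, followed by a reload):

BEGIN;
COPY big_table FROM '/data/load_file.csv' WITH CSV;
COMMIT;
-- afterwards, set checkpoint_segments back down and reload again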
- John
[ 2011-02-15 09:34:30.549 GMT ] :4d404b6e.47ef LOG: checkpoint starting: xlog
[ 2011-02-15 09:34:43.656 GMT ] :4d404b6e.47ef LOG: checkpoint complete:
wrote 36135 buffers (0.4%); 0 t
from pg_stat_activity, and check what the largest objects are based on
relpages from pg_class.
- check index scans/table scans from pg_statio tables if you have
track_activities on in the .conf file.
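For example (queries against the standard system catalogs and statistics
views; the LIMITs are arbitrary):

-- Largest relations by on-disk pages:
SELECT relname, relpages
FROM pg_class
ORDER BY relpages DESC
LIMIT 20;

-- Heap vs. index block reads/hits for user tables:
SELECT relname, heap_blks_read, heap_blks_hit, idx_blks_read, idx_blks_hit
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC
LIMIT 20;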
- John
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Beha
Scott, are you really moving that much data through memory? 70-80GB/sec is the
limit of the new Intel 7500 series in a best-case scenario.
- John
-Original Message-
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Scott Marlowe
If it's an HP box you can also turn this off in the BIOS via your RBSU:
Starting with HP ProLiant G6 servers that utilize Intel® Xeon® processors,
setting the HP Power Profile
Option in RBSU to Maximum Performance Mode sets these recommended additional
low-latency options
for minimum BIOS late
This can be resolved by partitioning the disk with a larger write spare area so
that the cells don't have to be recycled so often. There is a lot of
"misinformation" about SSDs; there are some great articles on AnandTech that
really explain how the technology works and some of the differences b
Just some information on our setup:
- HP DL585 G6
- 4 x AMD Opteron 8435 (24 cores)
- 256GB RAM
- 2 FusionIO 640GB PCI-SSD (RAID0)
- dual 10GB ethernet.
- we have several tables that we store calculated values in.
- these are inserted by a compute farm that calculates the results and stores
the
provide I would greatly appreciate it.
I know that the COPY performance was bumped up in 8.1 but I'm stuck on
this 8.0 box for a while longer.
Thanks,
John E. Vincent
---(end of broadcast)---
TIP 4: Have you searched our list archives?
http://archives.postgresql.org
to be short connections, then leave the max low.
The other possibility is having the webserver use connection pooling, so
it uses a few long lived connections. But even then, you could limit it
to something like 10-20, not 120.
John
=:->
sting like
INFO: free space map: 167 relations, 423 pages stored; 2912 total pages
needed
DETAIL: Allocated FSM size: 1000 relations + 2 pages = 186 kB
shared memory.
(Yes, mine is done on a very static table.)
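To get the same summary on your own database, and to adjust the FSM if "total
pages needed" exceeds what is allocated (the values below are illustrative for
an 8.0/8.1-era postgresql.conf):

VACUUM VERBOSE;
-- then, in postgresql.conf (requires a restart):
--   max_fsm_pages     = 50000
--   max_fsm_relations = 1000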
John
=:->
query twice in a row, is it still slow? (Sometimes it takes a bit
of work to get the right indexes loaded into ram, but then it is faster.)
Just some guesses,
John
=:->
ithout a vacuum full, but there might
also be something like a REINDEX, or maybe dropping and re-creating
the index.
John
=:->
Pallav Kalva wrote:
John A Meinel wrote:
Pallav Kalva wrote:
Hi Everybody.
I have a table in my production database which gets updated
regularly and the stats on this table in pg_class are totally
wrong. I used to run vacuumdb on the whole database once daily, and
when I posted the same
SpaceBallOne wrote:
Thanks for the reply John,
There are approximately 800 rows total in our job table (which stays
approximately the same because 'completed' jobs get moved to a
'job_archive' table). The other jobs not shown by the specific query
could be on backorder statu
dex scan may seem faster, but in actual production, that
index may not be cached, depending on what other queries are done.
John
=:->
For completeness' sake I also did a select ordernumber without any ordering.
That only took 98 seconds for 1104379 records (11222 records per second,
comparable wi
lot of extra
care that they are included in both the dump and the restore. I believe
it is better to create your own "id" per table (say SERIAL or BIGSERIAL).
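For example (a hypothetical table with its own surrogate key):

CREATE TABLE items (
    id   BIGSERIAL PRIMARY KEY,
    name TEXT NOT NULL
);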
John
=:->
ALYZE.
John
=:->
ll realize ahead of
time that there won't be that many rows, and can use indexes, etc. But
even if it doesn't use an index scan, if you have a query that doesn't
use a lot of rows, then you won't need a lot of disk space.
John
=:->
explain an
saction.
BEGIN;
DELETE FROM detail WHERE ...
DELETE FROM summary WHERE collect_date = '2005-02-05';
COMMIT;
Regards
Gaetano Mendola
John
=:->
should probably post your postgres
version, and more information about how much CPU load there is while
your load is running.
John
=:->
Alex wrote:
Thanks John.
Well, as I mentioned, I have a dual AMD Opteron 64 2.4GHz, 15k RPM
SCSI disks, and 4GB of memory.
Disks are pretty fast and memory should be more than enough. Currently
we don't have many concurrent connections.
Well, you didn't mention Opteron before (it makes a diffe
they
are in memory (which also makes index scans much cheaper).
Also, this query may sort itself out in time. As the tables grow, the
relative fraction that you desire probably decreases, which makes index
scans more attractive.
John
=:->
you would see, but what does "explain analyze select
count(*) from blah;" say? If it is an index scan, you have your machine
mistuned. select count(*) always grabs every row, and this is always
cheaper with a sequential scan.
John
=:->
time, you might
want to resend with an attachment. I know I had trouble reading your
explain analyze.
John
=:->
John Allgood wrote:
I think maybe I didn't explain myself well enough. At most we will
service 200-250 connections across all the 9 databases mentioned. The
database we are building is for a trucking company. Each of the
databases represents a different division. With one master database
John Allgood wrote:
This some good info. The type of attached storage is a Kingston 14 bay
Fibre Channel Infostation. I have 14 36GB 15,000 RPM drives. I think
the way it is being explained is that I should build a mirror with two
disks for the pg_xlog, and stripe and mirror the rest, and put
John Allgood wrote:
Here is a summary about the cluster suite from redhat. All 9 databases
will be on the primary server the secondary server I have is the
failover. They don't actually share the partitions at the same time.
When you have some type of failure the backup server takes over. Onc
John Allgood wrote:
Hello Again
In the below statement you mention putting each database on its own
raid mirror.
"However, sticking with your arrangement, it would seem that you might be
able to get some extra performance if each database is on its own raid,
since you are fairly likely
le), rather than just one join against 3 months of data.
Any takes on which approach makes the most sense from a performance and/or
maintenance point of view, and are there other options I may have missed?
Sven Willenberger
If you can get away with it, 2 is the best.
John
=:->
t would be possible, but there are a lot of design issues with
a system like this. You can't go into it thinking that you can design a
multi-billion-row database the same way you would design a million-row db.
John
=:->
probably get a good enough tps.
But you're right, fsync=false could certainly give you the performance,
though a power outage means potential *real* corruption. Not just
missing transactions, but duplicated rows, all sorts of ugliness.
John
=:->
Sven Willenberger wrote:
On Tue, 2005-03-01 at 09:48 -0600, John Arbash Meinel wrote:
Sven Willenberger wrote:
Trying to determine the best overall approach for the following
scenario:
Each month our primary table accumulates some 30 million rows (which
could very well hit 60+ million rows per
Markus Schaber wrote:
Hi, John,
John Arbash Meinel schrieb:
I am doing research for a project of mine where I need to store
several billion values for a monitoring and historical tracking system
for a big computer system. My currect estimate is that I have to store
(somehow) around 1 billion
iggest "non-bug" performance improvements are from tuning the SQL.
But if postgres isn't using the right indexes, etc, you can probably fix
that.
John
=:->
select s.*
from shipment s
inner join carrier_code cc on s.carrier_code_id = cc.id
inner join carrier c on cc.carrie
is also the possibility that you are having problems with
cross-column correlation, or poor distribution of a column. Postgres
doesn't keep cross-column statistics, so if 2 columns are correlated,
then it mis-estimates selectivity, and might pick the wrong plan.
In general your query looks decent, we just need to figure out what is
going on.
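As a hypothetical illustration of the cross-column problem: if city and
zip_code are strongly correlated, the planner estimates the combined
selectivity as roughly the product of the individual selectivities, so it
underestimates the row count and can be pushed toward the wrong plan.

EXPLAIN ANALYZE
SELECT * FROM addresses
WHERE city = 'Chicago' AND zip_code = '60601';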
John
=:->
ust need to switch to QueryPerformanceCounter()
[/Frequency].
John
=:->
post another explain
analyze with one (or several) different subselect forms.
John
=:->
n you just put a limit on the query? And then change
the client app to recognize when the limit is reached, and either give a
link to more results, or refine the query, or something like that.
John
=:->
Jeremiah Jahn wrote:
On Thu, 2005-03-03 at 11:46 -0600, John A Meinel wrote:
...
Not really, about 2% of the returned rows are thrown away for security
reasons based on the current user, security groups they belong to and
different flags in the data itself. So the count for this is generated
on
.
But remember, the biggest bottleneck is almost *always* the I/O. So put
more & faster disks into the system first.
John
=:->
Here is the new explain analyze.
Sort (cost=1996.21..1996.26 rows=17 width=165) (actual
time=297.000..297.000 rows=39 loops=1)
Sort Key: ss.date
-> Merge Right Joi
e is your "60s"
statement. With that in hand, I think you can do a lot of caching
optimizations.
John
=:->
k, so we'll be able to replace our server within the next 2
months. It'll be a necessity because we are signing on some big
clients now and they'll have 40 or 50 users for a single company. If
they are all logged in at the same time, that's a lot of queries.
Sure. Just
John Arbash Meinel wrote:
Ken wrote:
Richard,
What do you mean by summary table? Basically a cache of the query
into a table with replicated column names of all the joins? I'd
probably have to wipe out the table every minute and re-insert the
data for each carrier in the system. I'
Tom Lane wrote:
"Magnus Hagander" <[EMAIL PROTECTED]> writes:
There is. I believe QueryPerformanceCounter has sub-microsecond
resolution.
Can we just replace gettimeofday() with a version that's basically:
No, because it's also used for actual time-of-day calls. It'd be
necessary to hack executor
Tom Lane wrote:
John A Meinel <[EMAIL PROTECTED]> writes:
Can we just replace gettimeofday() with a version that's basically:
No, because it's also used for actual time-of-day calls. It'd be
necessary to hack executor/instrument.c in particular.
Or we modify the win32
0.125
141 0.141
156 0.156
Which is about 1/0.016 = 62.5 clocks per second.
I'm pretty sure this is slightly worse than what we want. :)
It might be better on other platforms, but on win32 clock() is most
definitely *not* what you want.
John
=:->
Tom Lane wrote:
John A Meinel <[EMAIL PROTECTED]> writes:
Dave Held wrote:
There is always clock().
My experience with clock() on win32 is that CLOCKS_PER_SEC was 1000, and
it had a resolution of 55 clocks/s. When I just did this:
The other problem is it measures p
ou need it to find all of
the barId=412 first, since apparently that is more selective than the limit.
It really sounds like the postgres statistics are out of date. And
either you haven't run vacuum analyze recently, or you need to keep
higher statistics on either one or both
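Raising the per-column statistics target looks roughly like this (the table
and column names are hypothetical):

ALTER TABLE bars ALTER COLUMN bar_id SET STATISTICS 200;
ANALYZE bars;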
nction. So you don't have that per-row
overhead.
For an alternate accurate view. Try:
# \timing
# explain analyze select count(1) from objects where class = 'Picture';
# explain analyze select class_get_number_of_objects('Picture');
\timing will also give you the t
what pg does.
Why is this?
(This is with pg 8.0.1 on a PC running FC3 with 1GB ram…if it matters)
If you think there is truly a performance problem, try attaching the
results of "explain analyze" in which we might be able to tell you that
your statistics inaccurate (run vacuum an
ir definition,
sometimes postgres can flatten them out and optimize the query, and
sometimes it can't. Order by is one of the big culprits for bad queries
involving views.
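A hypothetical example of the kind of view that can get in the way: the ORDER
BY inside the view may keep the planner from flattening it into the outer
query.

CREATE VIEW recent_orders AS
    SELECT * FROM orders ORDER BY created_at DESC;

SELECT * FROM recent_orders WHERE customer_id = 42;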
John
=:->
d as to provide insight on how to better
optimize this query, I will, again, be deeply grateful.
Thanks in advance.
terrakit
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
John
=:->
eneral I would recommend RAID1, because that is the safe bet. If
your db is the bottleneck, and your data isn't all that critical, and
you are read heavy, I would probably go with RAID1; if you are write
heavy, I would say 2 independent disks.
John
=:->
performance gains might be small, and you potentially lose everything.
But your update strategy seems dead on.
Do I grok it?
Thanks again,
John
=:->