e used in the USING subclause but not outside it.
The manual does seem to hint at this:
"after level if any, you can write a format (which must be a simple string
literal, not an expression)"
Anyway, RAISE EXCEPTION USING message = v_msg, errcode = v_sqlstate; works a
treat!
Many thanks Tom & Pavel.
Mike
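For the archives, the whole pattern ends up looking something like this (the
table and function names below are just illustrative, not from the thread):

-- lookup table of application-defined error codes and messages
CREATE TABLE app_errors (
    err_code text PRIMARY KEY,  -- five-character SQLSTATE-style code, e.g. 'AP001'
    err_msg  text NOT NULL
);

-- raise the stored message/code for a given application error
CREATE OR REPLACE FUNCTION raise_app_error(p_code text) RETURNS void AS $$
DECLARE
    v_msg      text;
    v_sqlstate text;
BEGIN
    SELECT err_msg, err_code INTO v_msg, v_sqlstate
      FROM app_errors
     WHERE err_code = p_code;
    -- the format string after RAISE must be a literal, but the USING options
    -- accept expressions, which is what makes the dynamic form possible
    RAISE EXCEPTION USING message = v_msg, errcode = v_sqlstate;
END;
$$ LANGUAGE plpgsql;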
I’m trying to get a dynamic version of the RAISE command working so that I can
use a table of custom application error messages and codes for use by all
developed plpgsql functions. In this way the custom error codes and messages
are not hard coded and are defined consistently in one pl
On Tue, Apr 25, 2017 at 12:53 PM, Tom Lane wrote:
> Yeah. The core problem here is that the parser has to disambiguate the
> || operator: is it "anyarray || anyelement" or "anyarray || anyarray"?
>
<...>
> Peeking at the contents of the literal would make the behavior very
> unpredictable/da
'c'] || 'd'::TEXT;
 ?column?
-----------
 {a,b,c,d}
(1 row)
The assumption that the second argument is an array constant seems
surprising.
Mike Blackwell | Technical Analyst, Distribution Services/Roll
le, order does
matter...it all ties together nicely, making it easier for other developers
to follow an identical pattern across all of the database objects.
All of that said, the notion of embedding Tetris functionality into a
codebase makes me smile, for some reason...
Mike Sofen
ans are created, etc. ORMs are a shortcut to getting an app talking to
data, but aren't a substitute for a proper, scalable data tier. IMO...being a
data specialist... :-)
Mike Sofen (Synthetic Genomics)
tailored the
model to match.
Mike Sofen (Synthetic Genomics)
g all table names and references to them, or
double-quoting all identifiers.
Mike
,
this is all real time web app stuff.
This is a model that could work for anyone dealing with large objects (text or
binary). The nice part is, the original 25TB of data storage drops to 5TB – a
much more manageable number, allowing for significant growth, which is on the
horizon.
Mike
isn’t skilled in sql,
the requests you’ve made won’t assist them at all.
Mike Sofen (Synthetic Genomics)
small - up to 100k rows or less, and
processed in under 100ms. But even when there was a network outage and we
had to do a catch up load with millions of rows, it ran very quickly. IOWs,
the double write overhead was very modest, especially with modern disk
performance.
Mike Sofen (Synthetic G
), and the fully
comprehensive, with a very modern looking UI.
In contrast, there are the over-priced dinosaurs with old ugly UIs. A while
back I reviewed some of the modeling tools, and none did it for me, so I went
ahead and got another license for xcase.
Mike Sofen (Synthetic Genomics)
From: Pavel Stehule
Sent: Tuesday, September 27, 2016 9:18 PM
2016-09-28 6:13 GMT+02:00 Pavel Stehule <pavel.steh...@gmail.com>:
Hi
2016-09-27 23:03 GMT+02:00 Mike Sofen <mso...@runbox.com>:
Hi gang,
how to view the state of a transaction in flight, seeing how ma
t eliminate my need: how to view the state of a transaction in
flight, seeing how many rows have been read or inserted (possible for a
transaction in flight?), memory allocations across the various PG processes,
etc.
Possible or a hallucination?
Mike Sofen (Synthetic Genomics)
From: Tim Uckun Sent: Saturday, September 03, 2016 2:37 AM
Does anybody use an IDE for doing heavy duty stored proc development? PGadmin
is decent but I am looking for something better.
I have been using the Datagrip app (from Jetbrains), from its beta release up
through now v 2016.2 and lov
server
that we are demoing that costs ~$100k, as long as our batch sizes don’t exceed
available memory – that’s where the larger Cisco pulls ahead. The $620/mo is
the on-demand price, btw…the reserved price is much lower.
$100k/ $620 = 161 months of operation before cost parity.
Mike S
ign pattern.
In my mind primary keys are supposed to be static, stable, non-volatile...aka
predictable. It feels like an alien invading my schema, to contemplate such an
activity. I hope PG never supports that.
Postgres allows developers incredible freedom to do really crazy things. That
d
From: George Neuner Sent: Tuesday, August 30, 2016 5:54 PM
>Mike Sofen wrote: So in this scenario, I'm using
>BOTH bigserials as the PK and uuids as AKs in the core tables. I
>reference the bigints for all joins and (have to) use the uuids for the
>filters. It's
s are also design insurance for me in case I need to shard, since
I'll need/want that uniqueness across servers.
Mike Sofen
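Roughly the shape of that design (names are hypothetical; gen_random_uuid()
comes from the pgcrypto extension on the versions discussed here):

CREATE EXTENSION IF NOT EXISTS pgcrypto;

CREATE TABLE core_item (
    item_id   bigserial PRIMARY KEY,        -- used for all joins
    item_uuid uuid NOT NULL UNIQUE
              DEFAULT gen_random_uuid(),    -- alternate key: filters, cross-server uniqueness
    payload   text
);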
onster loaner Cisco UCS
server. Should have that posted to the Perform list later this week.
Mike Sofen (USA)
window,
allowing me to directly edit/clone without leaving the editor.
My coding efficiency using this model is quite high...the overhead of using
git is trivial.
For rollbacks, we can simply point to the prior stored proc version and
recompile those. For DDL rollbacks, I have to code those scripts
e has said, just buy a 1 TB Samsung EVO 850 for $300 (USD) and
call it a day. :)
Mike
on
securing PHI on relational databases, you'll find lots of details around
data access roles, documentation, processes, data obfuscation, etc.
Mike
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of David G. Johnston
Sent: Wednesday, June 15, 2016 1:31 PM
To: Durgamahesh Manne
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] regarding schema only migration from sqlserver to
postgres w
o disk constantly. And since you're working on a single row at a
time, it will take forever.
Convert the cursor into a normal query and you should see BIG (10-1000x)
gains in speed. A cursor can always be converted to normal
sql...always...it's not always easy but it's always worth the
es like windowing functions.
So I guess you’d say I’m in the entirely opposite camp, since it’s proven to be
such an effective solution architecture for many applications that leverage
relational database engines.
Mike Sofen (San Diego, CA USA)
a...where there is zero tolerance for slack db
design that could cause scalability or performance issues. My stored
functions are...relatively simple.
Mike Sofen (San Diego, CA USA)
eir peers soon lose
interest in collaborating with them, if you catch my drift…
Mike Sofen
re this table directly (requiring
a change in app code that was filling it), or use it to fill a proper table
that already has everything decomposed from the long full_path string via
post-processing after the insert. A third consideration would be to archive
off older/unneeded rows to a history table to reduce row counts. This is about
proper structure.
Mike Sofen
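A sketch of that post-processing step (all names hypothetical):

INSERT INTO files_decomposed (dir_path, file_name)
SELECT regexp_replace(full_path, '/[^/]*$', ''),  -- everything before the last slash
       regexp_replace(full_path, '^.*/', '')      -- everything after the last slash
  FROM staging_paths;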
From: Jayadevan M Sent: Tuesday, April 26, 2016 6:32 AM
Hello,
I have a python script. It opens a cursor…
Thanks,
Jayadevan
save, oh, perhaps 10 years of wasted effort and 12 million emails. This
is as close to bandaids on bandaids on steroids as it comes. Really –
rethink your solution model.
Mike
From: drum.lu...@gmail.com
Sent: Tuesday, April 19, 2016 7:40 PM
Just for
't seem prevalent in PG...instead I see people using functions within
functions within functions, the cascading impact of which becomes very hard to
unravel.
Mike Sofen
d get the behaviors
sorted out, then it should become obvious what needs fixing.
Mike
I have a large table with numerous indexes which has approximately doubled
in size after adding a column - every row was rewritten and 50% of the
tuples are dead. I'd like to reclaim this space, but VACUUM FULL cannot
seem to finish within the scheduled downtime.
Any suggestions for reclaiming th
mal sql set-based operations that
are readable, understandable, maintainable and very fast/scalable.
When I see row by row operations (looping or cursors) in what should be a real
time query…that’s my alarm bell that perhaps the code has wandered off a valid
solution path.
Mike
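To make that concrete, the usual shape of the rewrite (hypothetical names):

-- the row-by-row version that rings the alarm bell:
--   FOR r IN SELECT id, amount FROM orders LOOP
--       UPDATE order_totals SET total = total + r.amount WHERE id = r.id;
--   END LOOP;

-- the equivalent set-based statement:
UPDATE order_totals t
   SET total = t.total + s.amount
  FROM (SELECT id, sum(amount) AS amount FROM orders GROUP BY id) s
 WHERE t.id = s.id;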
', 'g')
, '\s+', ' ', 'g')) as q
FROM public.pg_stat_statements
WHERE dbid IN (SELECT oid FROM pg_database WHERE datname =
current_database())
order by query
Thanks again,
Mike.
On 28/10/2015 22:43, Marc Mamin wrote:
', '\/\*.+\*\/
is */ <-- cannot get a regex to do this
FROM to_clean ORDER BY q
I'm now thinking it may be better to do this in a plpgsql function, as I think
if the comments are inside queries then they need to be ignored.
Has anyone done anything like this?
Thanks,
Mike.
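For what it's worth, one way to strip /* ... */ comments and collapse
whitespace in a single query (a sketch; the non-greedy .*? keeps one match from
swallowing everything between two separate comments, though it will still
mangle comment-looking text inside string literals - the case a plpgsql
function could handle more carefully):

SELECT regexp_replace(
           regexp_replace(query, '/\*.*?\*/', '', 'g'),  -- drop block comments
           '\s+', ' ', 'g') AS q                         -- collapse whitespace
  FROM public.pg_stat_statements;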
1 with 102
table(s) from provider 1
2015-08-24 06:50:33 UTC ERROR remoteWorkerThread_1_1: error at end of COPY
IN: ERROR: invalid memory alloc request size 1970234207
CONTEXT: COPY sl_log_1, line 97033:
Clutch Holdings, LLC <http://www.clutch.com>
Mike James | Manager of Infra
Thanks for the responses
For anyone searching in the future I'll answer Tom's questions and list the
boneheaded fix that it ended up actually being (really painful as I've been
fighting this for a week).
1) According to amazon they run stock postgres as far as the query planner
is concerned.
2) Y
Hi there,
I'm having an issue with query performance between 2 different pgsql
environments.
The first is our current production postgres server, which is running 9.3.5
on CentOS 5 x64. The second system is Amazon's RDS postgres as a service.
On our local DB server we have a query that executes
this helps
it would be possible to separate them by tablespaces.
Regards
Sven
--
Mike
The database is not crashing thankfully. We are waiting for the errors to
come back to turn up logging in the hopes of creating the reproducible set.
I will do my best to provide a reproducible test case. Is there any more
information I can supply in the meantime that would help?
exes) or a problem with
an inadequate fill factor setting. It doesn't look like there is a
specified fill factor for this index and I'm not sure what the gist default
is.
CREATE INDEX index_nodes_on_tree_path ON nodes USING gist (tree_path)
The table in question has about 94k rows, an example of the widest
tree_path tuple is 69066.69090.69091.69094
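If I do end up needing to change it, I assume the syntax would just be
something like:
CREATE INDEX index_nodes_on_tree_path ON nodes USING gist (tree_path)
    WITH (fillfactor = 70);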
Any advice is appreciated, happy new year!
Mike
> which are way more reliable? In that case, you do have a 1:1 lookup
> and you shouldn't have a problem.
I was unaware of the different versions of IDNA. I basically started using
the Perl module IDNA::Punycode in my project and assumed that this was the
only type. Seems like I need
possibly interesting for collaboration, let me know & I'll try
> to put together the relevant people.
Those functions would be very useful to me. I know a bit of C, but probably not
enough to produce an acceptable patch. If there are people who would also find
these functions useful, an
>
> I'm not for knowing the rules of punycode but I'm not seeing what value
> lower() provides here...
Case insensitive matching. So that "EXAMPLE.COM" = "example.com"
--
Mike Cardwell https://grepular.com https://emailprivacytester.com
OpenPGP Key
inal and creating an index on the punycode version.
This is exactly the same method that we commonly use for performing case
insensitive text searches using lower() indexes.
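Spelled out (the table name is a placeholder, and a punycode_encode() function
would have to exist and be IMMUTABLE for this to work):

CREATE INDEX hosts_hostname_punycode_idx
    ON hosts (lower(punycode_encode(hostname)));

-- equality searches written against the same expression can then use the index:
SELECT *
  FROM hosts
 WHERE lower(punycode_encode(hostname)) = lower(punycode_encode('Example.COM'));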
--
Mike Cardwell https://grepular.com https://emailprivacytester.com
OpenPGP Key 35BC AF1D 3AA2 1F84 3DC3 B0CF 70A5 F5
as input, I would just do:
WHERE lower(punycode_encode(hostname)) =
lower(punycode_encode('any-representation'))
There doesn't need to be any extra table storage for the punycode encoded
version.
--
Mike Cardwell https://grepular.com https://emailprivacytester.com
OpenPGP Key 35
speed.
I'm new to Postgres, and to this list, so if there is a better way for me to
submit this suggestion or solve my problem, please point me in the right
direction.
[1] http://www.unicode.org/Public/UNIDATA/CaseFolding.txt
Regards,
--
Mike Cardwell https://grepular.com https://emailprivac
 pg_catalog | now | timestamp with time zone |  | normal | invoker | stable | pgres | internal | now | current transaction time
it's the timestamp at the start of the transaction, so the planner should
have a set value for all rows.
Am I mi
ms
(7 rows)
The types match:
cloud_test2=# select pg_typeof(now() - interval '24 hours');
pg_typeof
--------------------------
timestamp with time zone
Is there something I'm missing?
Thanks!
Mike
Is there a simple notation for comparing most columns in the new and old
records in a pl/pgsql trigger function? Something like
(new.b, new.c, new.d) = (old.b, old.c, old.d)
works to compare all the columns except 'a', but is fragile in that it
needs to be updated any time a column is added to t
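One less fragile possibility, on 9.5 or later where to_jsonb() and the jsonb
"-" operator are available (function name is just illustrative):

CREATE OR REPLACE FUNCTION notify_if_changed() RETURNS trigger AS $$
BEGIN
    -- compare every column except 'a' without naming the rest, so newly
    -- added columns are picked up automatically
    IF to_jsonb(NEW) - 'a' IS DISTINCT FROM to_jsonb(OLD) - 'a' THEN
        RAISE NOTICE 'row % changed outside column a', NEW.a;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;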
check_postgres.pl (--action=autovac_freeze) recently complained that we
needed to run VACUUM FREEZE. Doing so generated a boatload of WAL files -
perhaps on the order of the of the database itself.
Is VACUUM FREEZE something that is normally handled by autovac? If so, how
would we approach findi
I need to get an idea of how much WAL space will be required during a long
(many hours) pg_basebackup over a relatively slow network connection. This
is for a server that's not yet running PITR / streaming.
Any thoughts?
http://meta.stackoverflow.com/questions/270574/an-experiment-stack-overflow-tv?cb=1
Yea looks like Postgres has it right, well.. per POSIX standard anyway.
JavaScript also has it right, as does Python and .NET. Ruby is just weird.
On Thu, Jul 24, 2014 at 1:57 PM, Tom Lane wrote:
> Mike Christensen writes:
> > I'm curious why this query returns 0:
> > S
, David G Johnston <
david.g.johns...@gmail.com> wrote:
> Mike Christensen-2 wrote
> > I'm curious why this query returns 0:
> >
> > SELECT 'AAA' ~ '^A{,4}$'
> >
> > Yet, this query returns 1:
> >
> > SELECT 'AAA'
I'm curious why this query returns 0:
SELECT 'AAA' ~ '^A{,4}$'
Yet, this query returns 1:
SELECT 'AAA' ~ '^A{0,4}$'
Is this a bug with the regular expression engine?
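As the replies point out, Postgres follows POSIX here: "{,4}" is not a valid
bound, so the braces are simply taken literally, e.g.:

SELECT 'AAA' ~ '^A{0,4}$';    -- true: {0,4} is a real bound
SELECT 'A{,4}' ~ '^A{,4}$';   -- true: the braces only match themselves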
Sounds like you just have to wait until it finishes..
On Mon, Jul 7, 2014 at 12:56 PM, Prabhjot Sheena <
prabhjot.she...@rivalwatch.com> wrote:
> Hello
>We are using postgresql 8.3 database for last 5 yrs for this
> production database and its running fine. This is our critical database
We have a need to check certain text fields to be sure they'll convert
properly to EBCDIC. A check constraint with a convert() was the initial
thought, but there doesn't seem to be a default conversion from UTF8 to
EBCDIC. Does anyone have an implementation they'd care to share, or
suggestions on
Oh. The CREATE CAST command. Wow, I was totally unaware of this entire
feature!
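For anyone else who was unaware, here is roughly what it looks like for the
text/enum case discussed elsewhere in this thread (AS IMPLICIT is exactly the
sort of thing the caveats in this thread are about, so use with care):

CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');
CREATE TABLE person (name text, current_mood mood);

-- reuse the types' text I/O functions instead of writing a conversion function
CREATE CAST (text AS mood) WITH INOUT AS IMPLICIT;

INSERT INTO person VALUES ('Moe', 'happy'::text);  -- now accepted without ::mood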
On Tue, Jan 28, 2014 at 3:36 PM, Mike Christensen wrote:
> How do you create casts in Postgres?
>
>
> On Tue, Jan 28, 2014 at 3:24 PM, Andrew Sullivan wrote:
>
>> On Tue, Jan 28, 2014 at 0
How do you create casts in Postgres?
On Tue, Jan 28, 2014 at 3:24 PM, Andrew Sullivan wrote:
> On Tue, Jan 28, 2014 at 02:55:03PM -0800, Mike Christensen wrote:
>
> > I'd be curious as to what types of bugs were caused by these implicit
> > casts..
>
> Typically, t
es incompatible with certain ORMs
out there, which is a bummer. I'm wondering if these ambiguities you speak
of could be solved in other ways. Such as implicitly cast iff the
intention is not ambiguous, otherwise raise some sort of "ambiguous" error
or default to some behavior.
Mike
reted as correct (like an implicit cast between numeric and enum and
string and enum) then we wouldn't have these issues..
Mike
On Tue, Jan 28, 2014 at 1:37 PM, John R Pierce wrote:
> On 1/28/2014 1:20 PM, Tom Lane wrote:
>
>> I think you can fix it by explicitly casting you
see if that reveals any noticeable performance difference.
Thanks again
Mike
On Mon, Jan 13, 2014 at 7:11 PM, Michael Paquier
wrote:
> On Tue, Jan 14, 2014 at 1:50 AM, Mike Broers wrote:
> > Hello, I am in the process of planning a 9.3 migration of postgres and I
> am
> >
what objects are identified by postgres as corrupt or not
corrupt?
Are there any other features of the checksum I am missing besides the log
entry?
Thanks
Mike
Thanks! Got it working after messing around for a while..
I decided to check out EF for a new project I'm working on. So far, I
think I like NHibernate better. A lot more control, and works flawlessly
with Postgres and Npgsql.
Mike
On Thu, Dec 12, 2013 at 8:34 AM, Francisco Figueire
dding 'Npgsql 2.0.14.1' to EFTest.
Successfully added 'Npgsql 2.0.14.1' to EFTest.
However, this doesn't have it. I've also tried installing the beta from:
http://pgfoundry.org/frs/download.php/3494/Npgsql2.0.13.91-bin-ms.net4.5Ef6.zip
No luck there. Any ideas?
Mike
replication at the time of the crash have prevented this
from cascading or was it already too late at that point?
Thanks again for the input, it's been very helpful!
Mike
On Mon, Nov 25, 2013 at 12:20 PM, Mike Broers wrote:
> Thanks Shaun,
>
> Im planning to schedule a time to do the vacu
forward.
I'll update the list if I uncover anything interesting in the process
and/or need more advice, thanks again for your input - it's much appreciated
as always. Nothing like a little crash corruption to get the blood flowing!
Mike
On Mon, Nov 25, 2013 at 10:29 AM, Shaun Thomas wrote:
>
needs to be run in production.
On Thu, Nov 21, 2013 at 5:09 PM, Kevin Grittner wrote:
> Mike Broers wrote:
>
> > Is there anything I should look out for with vacuum freeze?
>
> Just check the logs and the vacuum output for errors and warnings.
>
> --
>
ck in 2007, there was an issue
with the quantum support for postgres. Perhaps that's still the case
with the version I am using.
On Fri, Nov 22, 2013 at 10:28 AM, Mike Kienenberger wrote:
> I wanted to make sure that it wasn't a permission configuration
> problem in postgres fi
I wanted to make sure that it wasn't a permission configuration
problem in postgres first, since all of the other databases have
worked without a similar issue.
On Fri, Nov 22, 2013 at 9:54 AM, Adrian Klaver wrote:
> On 11/22/2013 05:46 AM, Mike Kienenberger wrote:
>>
>> Has
Has anyone successfully connected and browsed a postgres database
using the Eclipse QuantumDB plugin?
I can connect and execute sql, but the existing table list is always
empty as if no meta information is ever provided to the browser
plugin. At first, I thought it might be a permission problem
for with vacuum freeze?
Much appreciated,
Mike
On Thu, Nov 21, 2013 at 4:51 PM, Kevin Grittner wrote:
> Mike Broers wrote:
>
> > Thanks for the response. fsync and full_page_writes are both on.
>
> > [ corruption appeared following power loss on the machine hosting
> >
I am planning on running the reindex in actual production tonight during
our maintenance window, but was hoping if that worked we would be out of
the woods.
On Thu, Nov 21, 2013 at 3:56 PM, Kevin Grittner wrote:
> Mike Broers wrote:
>
> > Hello we are running postgres 9.2.5 on RHEL
see if there is a way to force vacuum to continue on error; worst case I
might have to write a table-by-table vacuum script, I guess. If anyone
has a better suggestion for determining the extent of the damage I'd
appreciate it.
On Thu, Nov 21, 2013 at 2:10 PM, Mike Broers wrote:
> Hello we
ion but I'm not sure about other tables.
Any suggestions for how to handle the tuple concurrently updated error? Or
if a reindex is likely to help with the unexpected chunk error?
Thanks
Mike
Oooh can we make the handle an elephant trunk? (Ok, now I'm sure I'm adding
all sorts of expense - but hey you'll save so much money using Postgres you
can afford an expensive coffee mug!)
On Thu, Sep 12, 2013 at 5:30 AM, Andreas 'ads' Scherbaum <
adsm...@wars-nicht.de> wrote:
> On 09/10/2013 10
How about something incredibly cheesy like
SELECT * FROM Mug;
On Mon, Sep 9, 2013 at 8:22 AM, Karsten Hilbert wrote:
> Inside of the mug:
>
> - runs of 0's and 1's = data
> - neatly aligned or in compartments/boxes/shelved ?
>
> Outside of mug:
>
> 10 elephants
>
>
I understand from one of our developers there may be issues using VIEWs
with Entity Framework and npgsql. Can anyone with some experience using
PostgreSQL in a .NET environment comment?
Mike Blackwell | Technical
You passed in:
22/1/2013
Which is 22 divided by 1, divided by 2013 - which is an integer..
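What was probably intended needs quotes (and a DateStyle where the day comes
first), e.g.:

SET datestyle = 'ISO, DMY';
SELECT '22/1/2013'::date;   -- 2013-01-22
SELECT 22/1/2013;           -- unquoted: integer division, result 0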
On Tue, Jul 9, 2013 at 10:17 AM, giozh wrote:
> ok, it works. But why on error message i had that two unknown data type? if
> was an error on date type, why it don't signal that?
>
>
>
> --
> View this
Ah ok that makes sense. The FAQ wasn't exactly clear about that.
On Mon, Jul 8, 2013 at 9:38 PM, Tony Theodore wrote:
>
> On 09/07/2013, at 2:20 PM, Mike Christensen wrote:
>
>
> PERFORM MyInsert(1,101,'2013-04-04','2013-04-04',2,'f' );
>
4-04',2,'f' );
I get the error:
ERROR: syntax error at or near "PERFORM"
SQL state: 42601
Character: 1
Is the FAQ out of date or was this feature removed? I'm using 9.2.1.
Thanks!
Mike
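For reference, PERFORM is PL/pgSQL-only syntax; from plain SQL the same call
can be written as either:

SELECT MyInsert(1, 101, '2013-04-04', '2013-04-04', 2, 'f');

or, keeping the PERFORM form, inside an anonymous block:

DO $$
BEGIN
    PERFORM MyInsert(1, 101, '2013-04-04', '2013-04-04', 2, 'f');
END
$$;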
d by a Rails app.
We'll see what else we can come-up with.
Thanks again.
On Wed, Jun 5, 2013 at 9:16 AM, Tom Lane wrote:
> Mike Summers writes:
> > Other than the tests in the original post do you have any suggestions?
>
> If you're speaking of
>
> http://www.p
Thanks Scott, interesting.
Other than the tests in the original post do you have any suggestions?
Thanks for your time.
From what I'm reading the View is frozen when it's created, including its
plan, and the usual solution is to use a set returning function... is this
not true?
I've double checked all schemas and the view is only defined once.
Thanks.
It appears that the culprit is a cached query plan, the tables in the UNION
have changed and no longer match; however, the View does not throw an "each
UNION query must have the same number of columns" error.
Is there a way to force the View's query plan to be updated on each access?
I have a VIEW that does not appear to take advantage of the WHERE when
given the opportunity:
db=# explain select * from best_for_sale_layouts;
QUERY PLAN
A
On Thu, May 23, 2013 at 2:51 PM, Steve Crawford <
scrawf...@pinpointresearch.com> wrote:
> On 05/23/2013 02:36 PM, Oscar Calderon wrote:
>
>> Hi, this question isn't technical, but is very important for me to know.
>> Currently, here in El Salvador our company brings PostgreSQL support, but
>> Ora
Ah, gotcha! I guess whatever sample I was originally copying from used
hostaddr for some reason.. Thanks for the clarification, Tom!
On Wed, May 15, 2013 at 6:08 AM, Tom Lane wrote:
> Mike Christensen writes:
> > Though I'm a bit curious why there's a host and host
Though I'm a bit curious why there's a host and hostaddr. Why can't it
just resolve whatever you give it?
On Tue, May 14, 2013 at 9:31 PM, Mike Christensen wrote:
> Excellent! Thanks so much.
>
>
> On Tue, May 14, 2013 at 9:25 PM, Adrian Klaver wrote:
>
&
Excellent! Thanks so much.
On Tue, May 14, 2013 at 9:25 PM, Adrian Klaver wrote:
> On 05/14/2013 09:17 PM, Mike Christensen wrote:
>
>> If I have this:
>>
>> CREATE OR REPLACE VIEW Link.Foo AS
>>select * from dblink(
>> 'hostaddr=123.
If I have this:
CREATE OR REPLACE VIEW Link.Foo AS
select * from dblink(
'hostaddr=123.123.123.123 dbname=KitchenPC user=Website
password=secret',
'select * from Foo') as ...
Then it works. However, if I do:
CREATE OR REPLACE VIEW Link.Foo AS
select * from dblink(
'hostaddr=db.d
According to
http://www.postgresql.org/docs/9.2/static/storage-file-layout.html
"When a table or index exceeds 1 GB, it is divided into gigabyte-sized
segments. The first segment's file name is the same as the filenode;
subsequent segments are named filenode.1, filenode.2, etc."
I was wondering w
Perfect thanks Bruce that worked.
I just extern'd PostPortNumber in my module and everything seems to be
working.
--Mike
> SHOW PORT;
>
test=> SELECT setting FROM pg_settings WHERE name = 'port';
setting
---------
5432
Both of these are from a query context. This is in a C module, I suppose I
could run a query but there has to be a direct C function to get this data.
Hi There,
I'm having a bit of an issue finding a C function to fetch the
configured server port from a C module.
We have written a C module to allow for remote clients to call a function
to run pg_dump/pg_restore remotely but create files locally on the db
server.
Currently it works fine if th