Re: [GENERAL] Table Bloat still there after the Vacuum

2010-04-26 Thread Chris . Ellis
pgsql-general-ow...@postgresql.org wrote on 04/26/2010 03:43:03 PM:

> Hi All -
> 
>   I have a table bloated with following details
> rows:29431 pages:516039 shouldbe:534 (966.4X) wasted size:4223016960 (3 GB)
> 
>   I did  a vacuum on the database and also I did 
> vacuumdb full on the table. Still there is no change. Can you please
> suggest if there is any other operation that can be done to take 
> care of the issue
> 
> Thanks for the help
> 
> Regards

Try a CLUSTER (http://www.postgresql.org/docs/8.4/static/sql-cluster.html); 
that will physically rewrite the table. 
Note, however, that it requires an ACCESS EXCLUSIVE lock, 
preventing any other activity on the table while it runs.
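As a sanity check on the numbers in the report: with PostgreSQL's default 8 kB page size, the wasted-size figure follows directly from the page counts (figures below taken from the post):

```python
# PostgreSQL's default block (page) size is 8 kB.
PAGE_SIZE = 8192

pages, should_be = 516039, 534           # page counts from the bloat report
wasted = (pages - should_be) * PAGE_SIZE
print(wasted)                            # 4223016960 bytes, roughly 3.9 GiB
```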

Chris Ellis
**
If you are not the intended recipient of this email please do not send it on
to others, open any attachments or file the email locally. 
Please inform the sender of the error and then delete the original email.
For more information, please refer to http://www.shropshire.gov.uk/privacy.nsf
**
Help prevent the spread of swine flu. CATCH IT. BIN IT. KILL IT.
**



Re: [GENERAL] Are there plans to add data compression feature to postgresql?

2008-10-27 Thread Chris . Ellis
Note that most data stored in the TOAST table is compressed.

I.e. a text value with length greater than around 2 kB will be stored in the 
TOAST table.  By default, data in the TOAST table is compressed; this can 
be overridden.

However, I expect that compression will reduce the performance of certain 
queries.

http://www.postgresql.org/docs/8.3/interactive/storage-toast.html
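For completeness, the per-column storage strategy mentioned above can be changed with ALTER TABLE; a sketch, with hypothetical table and column names:

```sql
-- EXTENDED (the default) allows both compression and out-of-line storage;
-- EXTERNAL stores the value out-of-line but uncompressed.
ALTER TABLE documents ALTER COLUMN body SET STORAGE EXTERNAL;
```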

Out of interest, in what context did you want compression?




Ron Mayer <[EMAIL PROTECTED]> 
Sent by: [EMAIL PROTECTED]
27/10/2008 07:34

To
小波 顾 <[EMAIL PROTECTED]>
cc
"pgsql-general@postgresql.org" 
Subject
Re: [GENERAL] Are there plans to add data compression feature to 
postgresql?






You might want to try using a file system (ZFS, NTFS) that
does compression, depending on what you're trying to compress.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general






Re: [GENERAL] combine SQL SELECT statements into one

2010-02-01 Thread Chris . Ellis
Hi

pgsql-general-ow...@postgresql.org wrote on 02/01/2010 07:36:55 AM:

> Good Evening, Good Morning Wherever you are whenever you may be reading this. 
> 
>
snip
> 
> count1 |  count2  | count3
> ---
>  2  2  4 
> 
> Can this be done with ONE SQL STATEMENT? touching the database only ONE time?

You can do the following:

SELECT
    (SELECT COUNT(DISTINCT model) FROM inventory
     WHERE modified >= '2010-02-01') AS "COUNT_1",
    (SELECT COUNT(DISTINCT model) FROM inventory
     WHERE modified >= '2010-01-20') AS "COUNT_2",
    (SELECT COUNT(DISTINCT model) FROM inventory
     WHERE modified >= '2010-01-01') AS "COUNT_3"
;

PostgreSQL allows sub-queries in the select list, as long as each sub-query 
returns a single column and at most one row.

Job done

> Please let me know. 
> 
> Thanx> :)
> NEiL 
> 

Chris Ellis



Re: [GENERAL] combine SQL SELECT statements into one

2010-02-01 Thread Chris . Ellis
> > -Original Message-
> > From: chris.el...@shropshire.gov.uk 
> > [mailto:chris.el...@shropshire.gov.uk] 
> > Sent: Monday, February 01, 2010 4:08 AM
> > To: neilst...@yahoo.com
> > Cc: pgsql-general@postgresql.org
> > Subject: Re: combine SQL SELECT statements into one
> > 
> > 
> > Hi 
> > 
> > pgsql-general-ow...@postgresql.org wrote on 02/01/2010 07:36:55 AM:
> > 
> > > Good Evening, Good Morning Wherever you are whenever you may be reading this. 
> > > 
> > > 
> > snip 
> > > 
> > > count1 |  count2  | count3
> > > ---
> > >  2  2  4 
> > > 
> > > Can this be done with ONE SQL STATEMENT? touching the database only ONE time? 
> > 
> > You can do the following: 
> > 
> > SELECT 
> >     (SELECT COUNT(distinct model) FROM inventory WHERE modified >= '2010-02-01') AS "COUNT_1",
> >     (SELECT COUNT(distinct model) FROM inventory WHERE modified >= '2010-01-20') AS "COUNT_2",
> >     (SELECT COUNT(distinct model) FROM inventory WHERE modified >= '2010-01-01') AS "COUNT_3" 
> > ; 
> > 
> > PostgreSQL allows sub-queries in the select list as long as the sub-query returns one column 
> > 
> > Job done 
> > 
> > > Please let me know. 
> > > 
> > > Thanx> :)
> > > NEiL
> > > 
> > 
> > Chris Ellis 
> > 
> 
> Original poster asked for the sql that will touch inventory table only
> once.
> 
> Your statement (with 3 subqueries) will do it 3 times.
> Igor Neyman
> 

---
I think you will find that the poster asked to touch the DATABASE not the 
TABLE only once:

'Can this be done with ONE SQL STATEMENT? touching the database 
only ONE time?'

While the suggested query might not be as optimised as possible, it 
demonstrates a possible method of folding multiple SELECT statements into 
one.  That seemed to be the main purpose of the post.  I made the 
assumption that the intent was to reduce the overhead and latency caused 
by sending multiple statements.
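For what it's worth, if touching the table only once matters too, the three counts can be folded into a single scan with conditional aggregates; a sketch against the poster's inventory table (COUNT ignores the NULLs produced when a CASE has no match):

```sql
SELECT
    COUNT(DISTINCT CASE WHEN modified >= '2010-02-01' THEN model END) AS "COUNT_1",
    COUNT(DISTINCT CASE WHEN modified >= '2010-01-20' THEN model END) AS "COUNT_2",
    COUNT(DISTINCT CASE WHEN modified >= '2010-01-01' THEN model END) AS "COUNT_3"
FROM inventory
WHERE modified >= '2010-01-01';
```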
 
Chris Ellis

Re: [Fwd: Re: [GENERAL] fulltext search stemming/ spelling problems]

2010-04-09 Thread Chris . Ellis
Hi Corin

It looks like you have it working correctly; however, you are expecting the 
FTS to do a task it does not.  The FTS will not automatically correct a 
spelling error.  If the FTS auto-corrected search tokens, this would likely 
lead to undesirable results.  I believe you are approaching the problem in 
the wrong manner: a system should not assume the input is incorrect, as a 
user may genuinely wish to search for 'gitar'.  Additionally, an incorrect 
term expands into a range of possible terms, so user interaction is needed 
to pick the desired search term.

Therefore your application will need to provide the spell-checking 
support.  There are many libraries available, like GNU Aspell 
(http://aspell.net/) or Jazzy (http://jazzy.sourceforge.net/).
I use Jazzy to provide spelling correction within the search engine I have 
developed, for example:

http://search.shropshire.gov.uk/RICE/index.jsc?p=0&q=gitar&simple=Search&simplebutton=Search

The system checks tokenised input queries by using Jazzy.  So a search for 
'gitar' will offer the user a choice to search for 'guitar'.
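As a toy illustration of the suggestion step (using Python's standard difflib rather than Jazzy; the vocabulary here is made up):

```python
import difflib

# A small vocabulary standing in for the terms known to the search index.
vocabulary = ["guitar", "sitar", "banjo", "violin"]

# Offer the closest known term as a "did you mean" suggestion.
suggestion = difflib.get_close_matches("gitar", vocabulary, n=1, cutoff=0.6)
print(suggestion)  # ['guitar']
```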

Obviously, I do not fully know what you are trying to implement and 
therefore can only tell you how I approached a similar problem.

I hope that is of some use.

Chris Ellis

pgsql-general-ow...@postgresql.org wrote on 04/09/2010 02:53:19 PM:

> Hi,
> 
> nobody here who knows how to get the postgre fulltext working with 
> ispell and stemming? :-(
> 
> So that when I search for 'gitar' also records containing 'guitar', 
> 'guitars', ... will be found.
> 
> Any help would be really appreciated! :)
> 
> Thanks,
> Corin

> 
> - Message from Corin  on Thu, 08 Apr 2010 
> 22:54:38 +0200 -
> 
> To:
> 
> Oleg Bartunov , pgsql-general@postgresql.org
> 
> Subject:
> 
> Re: [GENERAL] fulltext search stemming/ spelling problems
> 
> On 08.04.2010 21:27, Oleg Bartunov wrote:
> > it means, that (from
> > http://www.postgresql.org/docs/current/static/textsearch-dictionaries.html#TEXTSEARCH-ISPELL-DICTIONARY)
> >
> > 12.6.5. Ispell Dictionary
> >
> > The Ispell dictionary template supports morphological dictionaries,
> > which can normalize many different linguistic forms of a word into the
> > same lexeme. For example, an English Ispell dictionary can match all
> > declensions and conjugations of the search term bank, e.g., banking,
> > banked, banks, banks', and bank's.
> I already read this but I don't know how to solve my problems with this 
> information.
> 
> SELECT ts_lexize('english_ispell','guitar');
> {guitar}
> (1 line)
> 
> SELECT ts_lexize('english_ispell','bank');
> {bank}
> (1 line)
> 
> SELECT ts_debug('english_ispell','bank');
> (asciiword,"Word, all 
> ASCII",bank,"{english_ispell,english_stem}",english_ispell,{bank})
> (1 line)
> 
> SELECT plainto_tsquery('english_ispell','bank');
> 'bank'
> (1 line)
> > Regards,
> > Oleg
> It would be very nice if you (or anyone else) could provide me with 
> concrete instructions or any howto. What can I do to find the error in 
> my setup? What output should I expect from the above comments if 
> everything worked correctly?
> 
> Thanks,
> Corin
> 
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general




Re: [GENERAL] field with Password

2009-02-04 Thread Chris . Ellis
You should always salt your password hashes.

I.e. randomly generate a salt string, then store the salt and the password 
hash (note that PostgreSQL's string concatenation operator is ||, not +):

insert into auth (user_id, salt, password) values 
(1, 'blah', md5('blah' || 'test'));

then to check the password

select true from auth where user_id = 1 and password = md5(salt || 'test');


I tend to set a trigger function to auto generate a salt and hash the 
password.



If you want to be more robust, use both an md5 and a sha1 hash, since it 
has been shown that collisions can be generated against a single hash. 
sha1 is not built into core PostgreSQL, but the pgcrypto module's digest 
function provides it:

insert into auth (user_id, salt, password) values 
(1, 'blah', md5('blah' || 'test') || encode(digest('blah' || 'test', 'sha1'), 'hex'));

then to check the password

select true from auth where user_id = 1 
and password = md5(salt || 'test') || encode(digest(salt || 'test', 'sha1'), 'hex');
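The same scheme can equally be mirrored in application code; a minimal Python sketch of the salt-then-hash idea (the function name is mine, and a modern application should prefer a dedicated password-hashing algorithm such as bcrypt over plain MD5/SHA-1):

```python
import hashlib

def hash_password(salt: str, password: str) -> str:
    # Concatenate salt and password, as in the SQL above, then hash.
    data = (salt + password).encode("utf-8")
    return hashlib.md5(data).hexdigest() + hashlib.sha1(data).hexdigest()

# Store hash_password(salt, password) alongside the salt; to verify a
# login, recompute from the stored salt and compare.
stored = hash_password("blah", "test")
print(stored == hash_password("blah", "test"))  # True
```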

Chris Ellis





"Raymond C. Rodgers"  
Sent by: pgsql-general-ow...@postgresql.org
04/02/2009 14:34

To
Iñigo Barandiaran 
cc
pgsql-general@postgresql.org
Subject
Re: [GENERAL] field with Password






Iñigo Barandiaran wrote: 
Thanks! 


Ok. I've found http://256.com/sources/md5/ library. So the idea is to 
define in the dataBase a Field of PlainText type. When I want to insert a 
new user, I define a password, convert to MD5 hash with the library and 
store it in the DataBase. Afterwards, any user check should get the 
content of the DataBase of do the inverse process with the library. Is it 
correct? 

Thanks so much!! 

Best, 

Well, you can use the built-in md5 function for this purpose. For 
instance, you could insert a password into the table with a statement 
like:

insert into auth_data (user_id, password) values (1, md5('test'));

And compare the supplied password with something like:

select true from auth_data where user_id = 1 and password = md5('test');

You don't need to depend on an external library for this functionality; 
it's built right into Postgres. Personally, in my own apps I write in PHP, 
I  use a combination of sha1 and md5 to hash user passwords, without 
depending on Postgres to do the hashing, but the effect is basically the 
same.

Raymond




[GENERAL] Server Performance

2009-03-31 Thread Chris . Ellis
Hi

Been having interesting times with an IBM x3650 with 8 15k RPM 73GB drives 
in RAID 10 and a ServRAID 8K controller with Write-Back cache enabled 
(battery installed and working).  Currently getting a pgbench score of 4.7 
transactions per second!  After playing with the postgresql configuration 
file, I'm certain that this is not a PostgreSQL problem.  I have tried two 
different Linux distros on the server, both with the same problems.  I'm 
fairly certain that this is a problem with the hardware configuration / 
setup; however, I'm still waiting for IBM to contact me!

Initially I started with the OS on a RAID 1 array and a 6-drive RAID 10 
array for PostgreSQL.  With this setup I got 3tps.  After altering the RAID 
configuration to a single 8-drive array running both the OS and 
PostgreSQL, I was able to reach 700tps; however, after upgrading to the 
latest RAID controller firmware this has now fallen back to 4tps.

Benchmarking another server I have access to, with 4 15k RPM 73GB disks and 
a Dell Perc 5/i controller, I consistently get a pgbench score of 1400tps. 
Extrapolating linearly, I would therefore expect the IBM x3650 to manage 
~3000tps.  Additionally, my laptop with a 5400 RPM SATA disk was able to 
score ~200tps.

I have two of these IBM x3650's running the following configurations:

1)  IBM x3650
IBM ServRAID controller (Rebranded Adaptec card, using the aacraid 
driver)
2 15k RPM 73GB RAID 1  (OS array)
6 15k RPM 73GB RAID 10 (Postgresql data array)
2 quad core 3.0GHz Intel Xeons
8 GB ram
SuSE Linux Enterprise Server 10 (2.6.16 kernel)
Postgresql 8.3.4 (compiled from source)

2)
IBM x3650
IBM ServRaid controller (Rebranded Adaptec card, using the aacraid 
driver)
8 15k RPM 73GB RAID 10 (OS and Postgres data array)
2 quad core 3.0GHz Intel Xeons
8 GB ram
Mandriva 2009 Free (2.6.27.19 kernel)
Postgresql 8.3.7

As I said, I have the same problem on both machines; I expect that 
this is caused by the low-quality RAID controllers IBM has flogged us.


I'm interested to find out whether any one out there has had similar 
problems with IBM ServRAID controllers, or IBM hardware in general?

What SAS RAID controllers are people using?

What RAID configurations are people using?

What SAS RAID controllers would anyone recommend purchasing?


Any information is gratefully received


Chris Ellis
Shropshire Council
chris.el...@shropshire.gov.uk






Re: [GENERAL] Server Performance

2009-03-31 Thread Chris . Ellis
Scott Marlowe  wrote on 31/03/2009 15:16:01:

> On Tue, Mar 31, 2009 at 3:37 AM,   wrote:
> >
> > Hi
> >
> > Been having interesting times with an IBM x3650 with 8 15k RPM 73GB drives
> > in RAID 10 and a ServRAID 8K controller with Write-Back cache enabled
> > (battery installed and working).  Currently getting a pgbench score of 4.7
> > transactions per second!  After playing with the postgresql configuration
> > file, I'm certain that this is not a postgresql problem.  I have tried two
> > different Linux distro's upon the server both with the same problems.  I'm
> > fairly certain that this is a problem with the hardware configuration /
> > setup, however I'm still waiting for IBM to contact me!
> >
> > Initially I started with the OS on a RAID 1 array and a 6 drive RAID 10
> > array for postgresql.  With this setup I got 3tps, altering the RAID
> > configuration to a single 8 drive array, running both the OS and postgresql.
> > I was able to reach 700tps, however after upgrading to the latest RAID
> > controller firmware this has now fallen back to 4tps.
> >
> > Benchmarking another server I have access to, 4 15k RPM 73GB disks with a
> > Dell Perc 5/i controller. I consistently get a pgbench score of 1400tps.
> > Therefore taking a linear extrapolation I expect the IBM x3650 to manage
> > ~3000tps.  Additionally my Laptop with a 5400 RPM sata disk was able to
> > score ~200tps.
> 
> SNIP
> 
> > What SAS RAID controllers are people using?
> >
> > What RAID configurations are people using?
> >
> > What SAS RAID controllers would anyone recommend purchasing?
> 
> I am using an Areca 1680 series controller.  16 SAS 15k5 disks.  2
> RAID-1, 12 RAID-10, 2 hot spares.  512Meg bbu.  RHEL 5.2 I can sustain
> around 3000 tps with pgbench and 30 minute runs.

Thanks for the Info.

> 
> I'd call IBM and ask them to come pick up their boat anchors.

My sentiments exactly, unfortunately, I seem stuck with them :(

Chris Ellis



Re: [GENERAL] Server Performance

2009-03-31 Thread Chris . Ellis
Scott Marlowe  wrote on 31/03/2009 15:53:34:

> On Tue, Mar 31, 2009 at 8:21 AM,   wrote:
> >
> > Scott Marlowe  wrote on 31/03/2009 15:16:01:
> >
> >> I'd call IBM and ask them to come pick up their boat anchors.
> >
> > My sentiments exactly, unfortunately, I seem stuck with them :(
> 
> Can you at least source your own RAID controllers?

Yes I will be, I never really did trust IBM and I certainly don't now!

I just need to choose the correct RAID card now, good performance at the 
right price.

Chris Ellis




Re: [GENERAL] Server Performance

2009-04-01 Thread Chris . Ellis
Stefan Kaltenbrunner  wrote on 01/04/2009 06:53:07:

> chris.el...@shropshire.gov.uk wrote:
> > 
> > Scott Marlowe  wrote on 31/03/2009 15:53:34:
> > 
> >  > On Tue, Mar 31, 2009 at 8:21 AM,   wrote:
> >  > >
> >  > > Scott Marlowe  wrote on 31/03/2009 15:16:01:
> >  > >
> >  > >> I'd call IBM and ask them to come pick up their boat anchors.
> >  > >
> >  > > My sentiments exactly, unfortunately, I seem stuck with them :(
> >  >
> >  > Can you at least source your own RAID controllers?
> > 
> > Yes I will be, I never really did trust IBM and I certainly don't now!
> > 
> > I just need to choose the correct RAID card now, good performance at the
> > right price.
> 
> you are jumping to conclusions too quickly - while the 8k is not the
> worlds fastest raid card available it is really not (that) bad at all.
> we have plenty of x3650 in production and last time I tested I was
> easily able to get >>2000tps even on an untuned postgresql install and
> with fewer disks.

Could you provide any more information upon your configurations if 
possible, please?

> 
> So I really think you are looking at another problem here (be it 
> defective hardware or a driver/OS level issue).

Hardware is always a possibility; I finally managed to get hold of IBM too.
I have tried two different Linux distros with different kernels; my 
current Mandriva test uses a fairly up-to-date kernel.
I may try a custom kernel.
 
> is your SLES10 install updated to the latest patch levels available and 
> are you running the recommended driver version for that version of SLES?

Yes

> 
> 
> Stefan




Re: [GENERAL] Looking for advice on database encryption

2009-04-17 Thread Chris . Ellis
> What are folks doing to protect sensitive data in their databases?
> 
> We're running on the assumption that the _really_ sensitive data
> is too sensitive for us to just trust the front-end programs that
> connect to it.
> 
> The decision coming down from on-high is that we need to encrypt
> certain fields.  That's fine, looked at pgcrypto, but found
> the requirement to use pgp on the command line for key management
> to be a problem.
> 
> So we're trying to implement the encryption in the front-end, but
> the problem we're having is searching on the encrypted fields.  Since
> we have to decrypt each field to search on it, queries that previously
> took seconds now take minutes (or worse).
> 
> We've tested a number of cryptographic accelerator products.  In
> case nobody else has tried this, let me give away the ending: none
> that we've found are any faster than a typical server CPU.
> 
> So, it's a pretty open-ended question, since we're still pretty open
> to different approaches, but how are others approaching this problem?
> 
> The goal here is that if we're going to encrypt the data, it should
> be encrypted in such a way that if an attacker gets ahold of a dump
> of the database, they still can't access the data without the
> passphrases of the individuals who entered the data.

Take the performance hit.  If the people on high want the data encrypted, 
then they have to suffer the performance penalty, however bad.

Could you not write some server extensions to encrypt / decrypt the data 
server side, coupled with a custom index implementation?

Can you use a global server side key or do you need fine grained 
encryption?

Is a database the correct tool for the job if you want this level of 
encryption and granularity?

Also, how secure are your communication channels?  What stops me snooping 
the data in transit: ARP poisoning and other techniques, etc.?

Chris Ellis




Re: [GENERAL] Looking for advice on database encryption

2009-04-17 Thread Chris . Ellis
> > Take the performance hit, If people on high want the data encrypted, then
> > they have to suffer the performance penalty, however bad.
> 
> As reasonable as that sounds, I don't think it's true.  We've already
> brainstormed a dozen ways to work around the performance issue (creative
> hashing, backgrounding the decryption and using ajax to display the
> results as they're decrypted ...)
> 
> Problem is that all of these methods complicate things in the
> application.  I was hoping there were better approaches to the
> solution, but I'm starting to think that we're already on the
> right path.
> 
> > Could you not write some server extensions to encrypt / decrypt the data
> > server side, coupled with a custom index implementation?
> 
> Not sure how the index implementation would work.  The server-side
> encryption doesn't really help much ... it's difficult to add more
> DB servers in order to improve throughput, but adding more web
> servers fits easily into our load balanced setup.  In any event,
> the addition of processing cores (not matter where) doesn't speed
> up the decryption of individual items, it only allows us to do more
> in parallel.

Move all DB calls to stored procedures, let the stored procedures handle 
the encryption / decryption with a given key.
If your communication channels are secure then this is just as secure as 
decrypting the data in the application.

This also allows DBs to be clustered, with the likes of PL/Proxy.

You could create a custom datatype to hold the encrypted data, then 
functions to access it.
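For instance, with the pgcrypto contrib module the access functions could wrap symmetric encryption; a sketch with hypothetical table, column, and key:

```sql
-- Requires the pgcrypto module; pgp_sym_encrypt returns bytea.
INSERT INTO secrets (user_id, payload)
VALUES (1, pgp_sym_encrypt('sensitive data', 'per-user-key'));

SELECT pgp_sym_decrypt(payload, 'per-user-key')
FROM secrets WHERE user_id = 1;
```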

> > Can you use a global server side key or do you need fine grained 
> > encryption?
> > 
> > Is a database the correct tool for the job if you want this level of 
> > encryption and granularity?

> The global side key puts us in pretty much the same situation that
> filesystem encryption does, which is not quite as strong as we're
> looking for.

> I've considered the possibility of using something other than the
> DB, but I can't think of any storage method that gains us anything over
> the DB.  Also, if we use something different than the DB, we then have
> to come up with a way to replicated it to the backup datacenter.  If
> we put the data in the DB, slony is already set up to take care of that.

File system: leave the replication up to the SAN.  Store your data in flat 
files encrypted with each key, with an index per user, etc.

> 
> > Also, how secure are you communication channels, what stops me snooping
> > the data in transit, ARP posioning and other techniques etc.
> 
> We do what we can.  Everything is transferred over HTTPS, and we log and
> monitor activity.  We're constantly looking for ways to improve that
> side of things as well, but that's a discussion for a different forum.
> 
> -- 
> Bill Moran
> http://www.potentialtech.com
> http://people.collaborativefusion.com/~wmoran/
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
