Hello Everyone,

We have deployed our Tapestry application to production. After the
deployment we started getting exceptions whenever search engine spiders
access our domains with incomplete or invalid URLs.
E.g., below is a valid URL to one of the public pages of the website, but
if you chop off the last parameter, completely or even partially, we get
exceptions.
http://www.testsite.com/external.svc;jsessionid=DD4CD6B63F6C5D7C8DCEE358701C2F48?page=ArticleListPage&sp=l3
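To illustrate the kind of check we are thinking of doing ourselves as a
stop-gap (a sketch only, not Tapestry's API: the parameter names "page" and
"sp" are taken from the URL above, and the query parsing is simplified), a
request could be rejected up front when a required parameter is missing or
has been chopped:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch of a pre-check that a servlet filter could apply before the
// request reaches Tapestry: verify that every required query parameter
// is present with a value, and answer 404 otherwise instead of letting
// the framework throw.
public class QueryParamCheck {

    // Returns true only if every required parameter name appears in the
    // query string with a non-empty value
    // (e.g. "page=ArticleListPage&sp=l3").
    public static boolean hasRequiredParams(String query, Set<String> required) {
        Set<String> present = new HashSet<>();
        if (query != null) {
            for (String pair : query.split("&")) {
                int eq = pair.indexOf('=');
                // Only count a parameter when it has a value, so a
                // partially chopped URL like "...&sp=" is rejected too.
                if (eq > 0 && eq < pair.length() - 1) {
                    present.add(pair.substring(0, eq));
                }
            }
        }
        return present.containsAll(required);
    }

    public static void main(String[] args) {
        Set<String> required = new HashSet<>(Arrays.asList("page", "sp"));
        System.out.println(hasRequiredParams("page=ArticleListPage&sp=l3", required));
        System.out.println(hasRequiredParams("page=ArticleListPage&sp=", required));
        System.out.println(hasRequiredParams("page=ArticleListPage", required));
    }
}
```

This only hides the symptom, of course; we would still like to know the
proper configuration.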

We tried to include a robots.txt in the application as well, but the
requests keep coming in and generating the exceptions.
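Roughly, a robots.txt disallowing the service URL would look like this
(sketch only; the /external.svc path is taken from the URL above, and note
that well-behaved spiders may take days to pick it up while others ignore
it entirely):

```
User-agent: *
Disallow: /external.svc
```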

What configuration are we missing here? Does anyone have any hints or
pointers?

Thanks,
Sunil M
-- 
View this message in context: 
http://www.nabble.com/Incomplete-URL-requests-resulting-in-Excceptions-tf3567555.html#a9966053
Sent from the Tapestry - User mailing list archive at Nabble.com.


---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]