Hi Peter

It looks like Google's infrastructure for crawling the web can't access any
URLs at all from forum.openoffice.org, including the homepage. Sometimes
this is due to a firewall or abuse-protection system recognizing these
requests as malicious. Over time, as we attempt to update the pages in the
search results by crawling URLs from the site, if we see that we can't
access them at all, they generally get removed from our search results. In
practice, this means that users won't be able to find your pages in Google
Search. Sometimes websites do that on purpose, if they don't want to be
found in search, but I suspect it's more of an accident here. A simple way
to test is to use https://search.google.com/test/mobile-friendly to check
URLs from your site (better would be to use
https://support.google.com/webmasters/answer/9012289 , though that would
require verifying the site in Google Search Console first).
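
If you'd like to check from the server side as well, here is a rough sketch
(not an official tool, just an illustration). It assumes two things: that
the firewall might be filtering on the crawler's user-agent string, and
that you want to verify IPs from your server logs using the reverse/forward
DNS procedure described at
https://support.google.com/webmasters/answer/80553 . The URL and IP below
are placeholders; substitute real values from your logs.

import socket
import urllib.error
import urllib.request

# Classic Googlebot user-agent token; the exact string Googlebot sends
# varies by crawl type, so treat this as an approximation.
GOOGLEBOT_UA = "Googlebot/2.1 (+http://www.google.com/bot.html)"


def fetch_as_googlebot(url):
    """Fetch a URL with a Googlebot-style user-agent and print the HTTP status.

    This only detects user-agent-based blocking; a firewall that filters
    on IP ranges will still let this request through.
    """
    req = urllib.request.Request(url, headers={"User-Agent": GOOGLEBOT_UA})
    try:
        with urllib.request.urlopen(req, timeout=15) as resp:
            print(f"{url} -> HTTP {resp.status}")
    except urllib.error.HTTPError as err:
        print(f"{url} -> HTTP {err.code} (blocked or server error)")
    except urllib.error.URLError as err:
        print(f"{url} -> connection failed: {err.reason}")


def is_real_googlebot(ip):
    """Verify a crawler IP via reverse DNS, then a confirming forward lookup."""
    try:
        host = socket.gethostbyaddr(ip)[0]  # reverse DNS
    except socket.herror:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        return socket.gethostbyname(host) == ip  # forward DNS must match
    except socket.gaierror:
        return False


if __name__ == "__main__":
    fetch_as_googlebot("https://forum.openoffice.org/")
    print(is_real_googlebot("66.249.66.1"))  # placeholder IP from a log line

If the fetch above comes back with a 403 or a connection reset while a
normal browser request works, that points at the abuse-protection layer
rather than the forum software itself.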

Hope this helps!
John




On Tue, May 12, 2020 at 10:33 AM Peter Kovacs <pe...@apache.org> wrote:

> Hello Mr Mueller,
>
>
> forum.openoffice.org is our support forum. When people have issues,
> they are often directed to this page for solutions.
>
> Do you have a list of URLs that Googlebot has not been able to crawl? We can
> then check whether the behavior is intended or not, and we can tell you the
> reason for this measure.
>
> I am not particularly skilled with the Google search engine. I do not
> understand the sentence:
>
> This will cause those pages to drop out of Google's search results, and
> will prevent new pages from being picked up for Search.
>
> Can you explain this with an example, please?
>
>
> Thanks for the support.
>
> All the best
>
> Peter
>
>
> On 11.05.20 at 13:37, John Mueller wrote:
>
> Dear webmaster of forum.openoffice.org
>
> I'm an analyst at Google in Switzerland. We wanted to bring to your
> attention a critical issue with your website and its availability in
> Google's web search.
>
> In particular, Googlebot has been unable to crawl URLs from
> https://forum.openoffice.org/ . This will cause those pages to drop out
> of Google's search results, and will prevent new pages from being picked up
> for Search. If you're not aware of this issue, you may be accidentally
> blocking these pages from Google Search due to a server issue. If you need
> to block Googlebot from crawling pages on your website, we'd recommend
> using the robots.txt file instead.
>
> Should you need to verify the IP addresses of Googlebot requests, you can
> use a reverse DNS lookup to do so:
> https://support.google.com/webmasters/answer/80553
>
> Should you have any questions, feel free to contact me directly. For
> verification purposes, we are sending a copy of this message to your site's
> Search Console account.
>
> Thank you,
> John Mueller (joh...@google.com)
> Webmaster Trends Analyst
>

-- 

John Mueller, He/Him, Search Relations Team - go/search-rel
<https://goto.google.com/search-rel>
WTA is now Search-Rel (info
<https://sites.google.com/corp/google.com/search-rel/Home/reorg-2020-01>)

*Time-critical? Resend with "URGENT" in the subject.*

Google Switzerland GmbH
Gustav-Gull-Platz 1, 3. Stock
8004 Zurich, Switzerland

Identifikationsnummer:
CH-020.4.028.116-1
