
Google's Crawler Won't Understand Own Maps. How To Workaround?

I found strange words (have, here, imagery, sorry) that were not supposed to be on my site being picked up as keywords by Google's crawler. It seems like Google is having errors loading the map and is indexing its own error text instead.

Solution 1:

Unfortunately, I have seen this a lot too...

My assumption is that Googlebot does not fully evaluate all JS code on a page, but uses heuristics as well. The map therefore falls back to its "no imagery" error text, which gets indexed. Based on this assumption I did the following:

  1. Create a div with a "random" ID (for the map) and style="display: none;"

  2. Create a noscript tag containing an img tag with the SAME "random" ID (I used a static map image as the fallback here, which also works as a no-JS fallback)

  3. Create a (custom) JavaScript function that takes the unique ID, initializes your map AND toggles the display style of the map element to block.

So far, none of the maps' "sorry we have no imagery" error text has been indexed.
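A minimal sketch of those three steps, assuming the Google Maps JavaScript API is already loaded on the page; the id "map-a1b2c3", the fallback image path and the coordinates are illustrative:

<!-- 1. Hidden div with a "random" id for the map -->
<div id="map-a1b2c3" style="display: none;"></div>

<!-- 2. noscript fallback: a static map image with the same id -->
<noscript>
  <img id="map-a1b2c3" src="/img/static-map.png" alt="Map of our location">
</noscript>

<script>
// 3. Pass the unique id in, initialize the map AND switch the element to display: block
function initMap(mapId) {
  var el = document.getElementById(mapId);
  el.style.display = 'block';
  new google.maps.Map(el, {
    center: { lat: 47.913, lng: -122.098 },  // illustrative coordinates
    zoom: 14
  });
}
initMap('map-a1b2c3');
</script>

Because the display toggle only happens inside initMap, a crawler that skips or only partially evaluates the script never renders the map element (or its error text) at all.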

Hope it helps

Solution 2:

Perhaps you can add some more specific meta tags, such as:

<metaname="geo.region"content="US-WA" /><metaname="geo.placename"content="Snohomish" /><metaname="geo.position"content="-57.954900;-34.917000" />

Also add the meta description Matt Rowles mentioned, and set up some of the word filters in Google Webmaster Tools.

Solution 3:

This answer won't help you remove the words from the already-crawled pages, but it might prevent them from being added again after the next crawl.

Your problem might be related to the crawler not being able to load a valid map; it's not exactly clear why it can't. The map provider might be blocking Googlebot.

Anyway, if it's not too hard, I'd have a look here:

https://support.google.com/webmasters/answer/1061943?hl=en

Create a list of the user agents documented there.

I'll use 'Googlebot' as an example, but you should check against every user agent you want to block.

// Real Googlebot user-agent strings only contain 'Googlebot', so check for the substring
if (navigator.userAgent.indexOf('Googlebot') === -1) {
   // load the map and other stuff
} else {
   // show a picture where the map should be, or do nothing
}

Googlebot executes JS, so this should prevent the errors that appear when Googlebot can't load the map.
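A sketch of the list-based check described above; 'Googlebot-Image' is only an illustrative second entry, and the real list should come from the Google support page linked earlier:

// Build the list from the crawler user agents documented by Google
var blockedAgents = ['Googlebot', 'Googlebot-Image'];

var isBlockedAgent = blockedAgents.some(function (agent) {
  return navigator.userAgent.indexOf(agent) !== -1;
});

if (!isBlockedAgent) {
   // load the map and other stuff
} else {
   // show a static picture where the map should be, or do nothing
}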

One thing you could do is change your browser's user agent to 'Googlebot' and load your page. If the map provider is blocking any browser with this user agent, you should see exactly what Googlebot sees. The other problem is that Googlebot may also have a timeout to avoid loading too much data, in which case it won't load the images.

Adding these guards might help by preventing Googlebot from actually loading the map, if the problem really is in the map.

Solution 4:

1) Perhaps setting your meta description inside your <head> tags will supersede this:

<metaname="description"content="This is an example of a meta description. This will often show up in search results.">

2) If the meta tag doesn't work, I would also suggest that this is possibly because the very first thing rendered in the <body> (or rather, attempted, by the looks of your screenshot) is the Maps display, before any other content is loaded.

For example, if you place a <div> or <p> tag with some introductory content about your website before the map in your <body>, you may avoid this. However, I am not 100% sure about this; you will have to test and see the results (keep us posted).
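A minimal sketch of that ordering, with illustrative content and a hypothetical map element:

<body>
  <!-- Indexable introductory content comes first -->
  <p>Welcome to our Snohomish, WA location. Opening hours, directions and contact details are below.</p>

  <!-- The map is only rendered after the real content -->
  <div id="map" style="width: 100%; height: 400px;"></div>
</body>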

If you plan on doing this and want a) the Google crawler to still pick it up, and b) to hide the actual block of words from viewers (style="display: none;" or style="position: absolute; left: -9999px;"), do so at your own discretion (more info here).

Solution 5:

Did you try adding spider meta tags? They really help a lot; try this out in the head section:

<metaname="robots"content="index, follow">

The spider will now index your whole website: not only the first page, but all your other pages as well.

Also try to make your description more unique! It is much more powerful, but do not overdo the keywords.

thanks
