Does PageRank take into account cross-browser compatibility?

Today's question comes from landlord in Colorado, who asks: does PageRank take into account cross-browser compatibility?

If a site isn't compatible with certain browsers, does that make a difference for Googlebot?

The answer is no on both counts.

So I've mentioned this in another video.

But let me just reiterate: PageRank is based on the number of people who link to you and how reputable the links that come to your site are.

It is completely independent of the content of your site.

So PageRank doesn't take into account cross-browser compatibility, because it doesn't take into account the content of the website or the web page.

It only takes into account the links.

That's the essence of PageRank.

It looks at what we think, our opinion of, the reputation of the links.
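As a rough illustration, here is a minimal PageRank sketch in Python. The link graph, damping factor, and iteration count are illustrative assumptions, and this is nothing like Google's actual implementation; the point is simply that only links appear in the computation, never page content.

    # Minimal PageRank sketch: scores depend only on the link graph,
    # never on page content. The graph, damping factor, and iteration
    # count below are illustrative assumptions.
    def pagerank(links, damping=0.85, iterations=50):
        """links maps each page to the list of pages it links to."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}

        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outlinks in links.items():
                if not outlinks:
                    continue
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share  # reputation flows along links
            rank = new_rank
        return rank

    # Example: pages that attract links from well-linked pages score higher.
    graph = {
        "a.html": ["b.html", "c.html"],
        "b.html": ["c.html"],
        "c.html": ["a.html"],
    }
    print(pagerank(graph))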

So now the next question, if a site isn't compatible with certain browsers, does that make a difference for Googlebot?

Well, let's play it through.

Suppose Googlebot comes to your site, and Googlebot says, "I would like to crawl a page from your site. Please give it to me so that I may index it."

We take that page, look at it, and look for textual content on that page.

But we're almost always crawling as Googlebot.

Maybe we'll crawl as Googlebot Mobile or AdsBot or Googlebot Image or something like that.

But we try to provide very nice, descriptive ways that you can tell Google is coming to your site, unless we're doing a spam check or something like that, or coming to your site to sort of see whether you're cloaking, something like that.

So Googlebot comes to your page.

It tells you it's Googlebot and it tries to index the page that it gets.

So it really doesn't have much of a notion of how things render differently in a mobile browser versus Internet Explorer 6 versus Netscape 2 versus Firefox 4, or whatever.

We're just going to take a look at the textual content and try to make sure that we index it.

Now, if you want to make sure that you don't get in trouble in terms of cloaking or anything like that, you want to make sure that you return the same page to Googlebot that you return to regular users.

So just make sure that you don't have any special code that's doing "if Googlebot," checking whether the user agent is Googlebot or the IP address is from Google.

If you're not doing anything special for Google, and you're just doing whatever you would normally do for your users, then you're not going to be cloaking, and you shouldn't be in any trouble as far as that goes.
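To make that concrete, here is a sketch of the kind of server-side pattern to avoid. The handler and render function are hypothetical, not any particular framework's API; the point is simply that branching on a Googlebot user agent or a Google IP address to serve different content is what gets treated as cloaking.

    # Hypothetical request-handler sketch (not any specific framework's API).
    # GOOGLE_CRAWLER_IPS is a placeholder, not a real published list.
    GOOGLE_CRAWLER_IPS = set()

    def render(template_name):
        # Stand-in for whatever templating your site actually uses.
        return "<html>...contents of %s...</html>" % template_name

    def handle_request(user_agent, remote_ip):
        # DON'T do this -- branching on the crawler's identity is cloaking:
        # if "Googlebot" in user_agent or remote_ip in GOOGLE_CRAWLER_IPS:
        #     return render("special_version_for_google.html")

        # DO this -- return the same page to every visitor, crawler or not:
        return render("page.html")

    # Example: every caller gets the same page, crawler or regular browser.
    print(handle_request("Mozilla/5.0 (compatible; Googlebot/2.1)", "192.0.2.1"))
    print(handle_request("Mozilla/5.0 (Windows NT 6.1)", "192.0.2.2"))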

So Google doesn't look into cross-browser site compatibility or things like that.

And in fact, Google tries to be relatively liberal in accepting even somewhat broken HTML, because not everybody writes perfect HTML.

That doesn't mean the information on the page isn't good. There were some studies that showed that 40% of all web pages had at least some sort of syntactic error.

But if we threw out 40% of all pages, you'd be missing 40% of all the content on the web.

So Google tries to interpret content even if it's not syntactically valid, even if it's not well formed, even if it doesn't validate.
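As an illustration of that kind of tolerance, here is a small sketch using Python's standard-library html.parser module, which, like most real-world parsers, will happily pull text out of markup that would never validate. This is only an analogy for the behavior described, not Google's actual parsing code.

    # Sketch: extracting textual content from broken, non-validating HTML.
    # Python's lenient standard-library parser stands in for the kind of
    # tolerant parsing described above; this is not Google's code.
    from html.parser import HTMLParser

    class TextExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.chunks = []

        def handle_data(self, data):
            text = data.strip()
            if text:
                self.chunks.append(text)

    # Unclosed tags, no doctype, no closing body/html -- far from valid HTML.
    broken_html = "<html><body><p>Useful content here<p>More content <b>still readable"

    extractor = TextExtractor()
    extractor.feed(broken_html)
    print(" ".join(extractor.chunks))
    # -> Useful content here More content still readable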

For all these sort of reasons, we have to take the web as it is and try to return the best page to users, even if the results that we see are kind of noisy.

So historically, we haven't applied any sort of penalty by saying, oh, you didn't validate, or it's not clean HTML.

We don't have any sort of factor, to the best of my knowledge, that looks at compatibility with certain browsers or cross-browser compatibility of a site.

Hope that helps.