If I don't need to block crawlers, should I create a robots.txt file?

Today's question comes from Pennsylvania.

Corey S. asks: is it better to have a blank robots.txt file, a robots.txt file that contains "User-agent: *" with nothing disallowed, or no robots.txt file at all?

Really good question, Corey.

I would say either of the first two.

So not having a robots.txt file is a little bit risky.

Not very risky at all, but a little bit risky.

Because sometimes when you don't have the file, your web host will fill in its own 404 page, and that can lead to various weird behaviors.

And luckily, we are able to detect that really, really well, so even that is only a 1% risk.
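
If you want to see what your host actually returns when robots.txt is missing, one quick way to check from the command line (assuming you have curl installed; example.com is a stand-in for your own domain) is:

curl -I https://example.com/robots.txt

A plain 200 with the file or a clean 404 is fine; a redirect to an HTML error page is the kind of weird behavior I'm talking about.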

But if possible, I would have a robots.txt file.

It's pretty much equal whether it's blank or whether you specifically say user-agent star, disallow nothing, which means everybody is allowed to crawl anything they want.

We'll treat those syntactically as being exactly the same.

Personally, I'm a little more comfortable with user-agent star and then disallow colon, just so you're being very specific that yes, crawlers are allowed to crawl everything.
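
Written out, that allow-everything file is just these two lines:

User-agent: *
Disallow: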

If it's blank, then sure, somebody was smart enough to make the robots.txt file, but it would be great to have that explicit indicator that says, OK, here is exactly the behavior that's intended.

Otherwise, it could be that somebody deleted everything in the file by accident.

So given a choice between all of them, I would have a robots.txt file that did have a user-agent star and spelled out exactly what was allowed and disallowed.

That said, having a blank one is perfectly fine.

If you don't have one at all, there's just that little, tiny bit of risk that your web host might do something strange or unusual, like return a "you don't have permission to read this file" error, and things get a little strange at that point.

So that's just some very quick advice about how to set up a robots.txt file, assuming you're totally happy if Googlebot crawls your content.