Can I disallow crawling of my CSS and JavaScript files?

CUTTS: A fun question from SEOmofo in Simi Valley. They ask, "If I externalize all CSS style definitions and JavaScript files and disallow all user agents from accessing these external files (via robots.txt), would this cause problems for Googlebot?

Does Googlebot need access to these files?" I personally would recommend not blocking that. So, for example, the White House recently rolled out a new robots.txt, and, you know, I think they blocked the images directory, or CSS, or JavaScript, or something like that.
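To make the question concrete: a robots.txt of the kind SEOmofo describes, one that keeps every crawler away from external CSS and JavaScript files, would look something like the sketch below. The /css/ and /js/ directory paths are hypothetical examples, not paths taken from the question or from the White House site.

    User-agent: *
    Disallow: /css/
    Disallow: /js/

Cutts's advice in the rest of the answer is to leave rules like these out of robots.txt.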

You really don't need to do that, and in fact sometimes it can be very helpful if we think something spammy is going on with JavaScript, you know, if somebody is doing a sneaky redirect or something like that. So my personal advice would be to let Googlebot go ahead and crawl that, and then, you know, it's not like these files are huge anyway, so it doesn't consume a lot of bandwidth.

So, my personal advice: just go ahead and let Googlebot have access to all that stuff. Most of the time we won't ever fetch it, but on the rare occasion when we're doing a quality check on behalf of someone, or we receive a spam report, then we can go ahead and fetch that and make sure that your site is clean and isn't having any sort of problems.