Crawling is the process of sending an HTTP request to a server and downloading the response body (the HTML) that the server returns. Your robots.txt file can control the crawling of your site to some degree.
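At its simplest, a crawler does the same thing your browser does when it loads a page: it requests a URL and reads the HTML that comes back. Here is a minimal sketch using Python's standard library; the URL and User-Agent string are only placeholders for illustration.

```python
from urllib.request import urlopen, Request

# Placeholder URL; a real crawler works through a queue of discovered links.
url = "https://example.com/"

# Crawlers identify themselves with a User-Agent header, as Googlebot does.
request = Request(url, headers={"User-Agent": "MyCrawler/1.0"})

with urlopen(request) as response:
    html = response.read().decode("utf-8")  # the response body (HTML)

print(html[:200])  # first part of the downloaded page
```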
If your website is blocked from crawling, Googlebot and other search engine crawlers can’t see what is on it, so crawling is critical for your website.
When it comes to crawling, robots.txt plays an important part: you can partially control crawling with the rules written in your robots.txt file.
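For example, a robots.txt file like the one below tells crawlers which paths they may and may not request. This sketch uses Python's built-in urllib.robotparser to check URLs against those rules; the paths and URLs are only examples.

```python
from urllib.robotparser import RobotFileParser

# Example robots.txt rules: block /private/ for all crawlers, allow everything else.
robots_txt = """\
User-agent: *
Disallow: /private/
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A well-behaved crawler checks these rules before requesting a URL.
print(parser.can_fetch("Googlebot", "https://example.com/private/report.html"))  # False
print(parser.can_fetch("Googlebot", "https://example.com/blog/post.html"))       # True
```

Keep in mind that robots.txt is a request, not an enforcement mechanism: reputable crawlers like Googlebot respect it, but it does not block access on its own.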