Why are some pages blocked in the robots.txt file?
The robots.txt file we have specified is optimized to help your forum rank as highly as possible in search engines. It tells search engine crawlers not to index certain pages that would negatively impact your placement: pages with very little content (like the Compose New Topic page), slightly different URLs that point to the same content (which search engines would count as duplicate content), and transactional member-only pages (like following a topic or editing your profile). This ensures that search engines primarily see the highly relevant, content-rich pages of your forum, so it ranks higher in the search results.
Other pages listed in the robots.txt file are private pages that are only accessible to logged-in members, so search engines could not access their content anyway. Restricting them at the robots.txt level helps your ranking because the crawler doesn't repeatedly encounter the login page and conclude that your forum has little high-quality content. It also reduces unnecessary page views generated by the search engine crawler, which would cost money without improving search positioning or ad delivery (since the login page is disregarded by the crawler anyway).
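As an illustration, a robots.txt file following this approach might look like the sketch below. The paths shown are hypothetical examples, not the actual rules we use; the real paths depend on how your forum's URLs are structured.

```
User-agent: *
# Thin-content pages (e.g. the Compose New Topic form)
Disallow: /post/
# Alternate URLs that duplicate existing topic content
Disallow: /print/
# Transactional, member-only actions
Disallow: /follow/
Disallow: /profile/edit/
# Private pages that require login anyway
Disallow: /login/
Disallow: /members/
```

Each Disallow line tells compliant crawlers not to fetch URLs under that path prefix; anything not listed remains crawlable, so search engines spend their time on your forum's actual content.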
Contextual advertising networks, like Google AdSense, also prefer that pages with little or duplicate content not be used for contextual ad targeting. Our robots.txt file ensures that only highly relevant pages are used for contextual advertising, so advertisers prefer advertising on your forum, resulting in greater long-term revenue.