Adding a rel="nofollow" attribute to a link prevents Google's crawler from following the link, which in turn keeps it from discovering, crawling, and indexing the target page. While this method may work as a short-term fix, it is not a viable long-term solution.
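As an illustration, a nofollowed link is an ordinary anchor tag with the rel attribute set; the URL below is a placeholder, not one from this article:

```
<!-- Followed link: crawlers may follow it and discover the target page -->
<a href="https://example.com/private-page/">Private page</a>

<!-- Nofollowed link: crawlers are asked not to follow it -->
<a href="https://example.com/private-page/" rel="nofollow">Private page</a>
```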
The flaw with this approach is that it assumes every inbound link to the URL will include a rel="nofollow" attribute. The webmaster, however, has no way to prevent other sites from linking to the URL with a followed link. So the odds that the URL will eventually get crawled and indexed using this method are quite high.
Using robots.txt to prevent Google indexing
Another common method used to prevent Google from indexing a URL is the robots.txt file. A disallow directive can be added to the robots.txt file for the URL in question. Google's crawler will honor the directive, which keeps the page from being crawled and indexed. In some cases, however, the URL can still appear in the SERPs.
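A minimal robots.txt sketch of such a directive, assuming the page to exclude lives at the hypothetical path /private-page/:

```
# Applies to all crawlers; Googlebot honors Disallow rules
User-agent: *
Disallow: /private-page/
```

The file must live at the root of the site (e.g. /robots.txt) for crawlers to find it, and the Disallow path is matched as a prefix, so this rule also covers URLs nested under /private-page/.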
Sometimes Google will show a URL in their SERPs even though they have never indexed the content of that page. If enough sites link to the URL, Google can often infer the topic of the page from the anchor text of those inbound links. As a result, they will show the URL in the SERPs for related searches. So while a disallow directive in the robots.txt file will keep Google from crawling and indexing a URL, it does not guarantee that the URL will never appear in the SERPs.