WinHost Web Hosting
Doorblocks
I am not able to understand why my incoming mails are bouncing.
Regards,
Hi,
I want to activate the Catch All option on my email mailboxes.
Hi,
Log into your control panel and, from the Quick Access menu, click the e-mail icon. A new form will appear; click the MAIL BOX icon next to the mailbox where you want to enable the Catch All option. On the right-hand side of the form you will see the mailbox properties. Click the round OFF button; it is a toggle, and clicking it switches Catch All ON.
Hi,
How can I delete a Catch All mailbox?
To delete a Catch All mailbox, first switch Catch All OFF.
But what is this Catch All mailbox option?
A Catch All mailbox receives any message sent to your domain whose address does not match an existing mailbox, instead of that message bouncing back to the sender.
A robots.txt file is a vital part of any webmaster's battle against getting banned or penalized by the search engines when he or she designs different pages for different search engines.
The robots.txt file is just a simple text file, as the extension suggests. It is created with a simple text editor like Notepad or WordPad; complicated word processors such as Microsoft Word will only corrupt the file.
You insert directives into this text file to make it work. This is the format:
User-Agent: (Spider Name)
Disallow: (File Name)
The User-Agent is the name of the search engine's spider, and Disallow is the name of the file that you don't want that spider to index.
You have to start a new block of directives for each engine, but if you want to list multiple Disallow files you can place them one under another. For example:
User-Agent: Slurp    # Inktomi's spider
Disallow: /xyz-gg.html
Disallow: /xyz-al.html
Disallow: /xxyyzz-gg.html
Disallow: /xxyyzz-al.html
The above directives keep Inktomi's spider away from two pages optimized for Google (gg) and two pages optimized for AltaVista (al). If Inktomi were allowed to spider these pages as well as the pages made specifically for Inktomi, you would run the risk of being banned or penalized. Hence, it's always a good idea to use a robots.txt file.
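You can check the effect of such a block locally with Python's standard-library robots.txt parser, `urllib.robotparser`. The rules and URLs below are hypothetical examples, and the Disallow paths include the leading slash that robots.txt paths are expected to start with:

```python
# Verify which spiders a robots.txt blocks, using Python's standard library.
# The rules and URLs here are hypothetical examples.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-Agent: Slurp
Disallow: /xyz-gg.html
Disallow: /xyz-al.html
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Slurp is kept out of the Google-optimized page...
print(parser.can_fetch("Slurp", "http://example.com/xyz-gg.html"))      # False
# ...while Googlebot, which matches no User-Agent block, is still allowed.
print(parser.can_fetch("Googlebot", "http://example.com/xyz-gg.html"))  # True
```

A quick check like this is cheaper than discovering a mistake after the engines have already crawled the wrong pages.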
The robots.txt file resides on your webspace, but where on your webspace? The root directory! If you upload the file to a sub-directory it will not work. If you want to disallow all engines from indexing a file, simply use the * character where the engine's name would usually be. However, beware that the * character won't work on the Disallow line.
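For example, a minimal robots.txt (with a hypothetical filename) that keeps every spider away from a single file would read:

```
User-Agent: *
Disallow: /private.html
```

Because * only works on the User-Agent line, each file you want hidden still needs its own Disallow entry.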
Here are the names of a few of the big engines:
Excite – ArchitextSpider
AltaVista – Scooter
Lycos – Lycos_Spider_(T-Rex)
Google – Googlebot
Alltheweb – FAST-WebCrawler
Be sure to check over the file before uploading it; a simple mistake could mean your pages are indexed by engines you don't want indexing them, or, even worse, that none of your pages are indexed at all.
Another advantage of the robots.txt file is that spiders request it before they crawl your site, so by checking your server logs for requests for robots.txt you can see which spiders or agents have accessed your web pages. This gives you a list of their host names and agent names; even very small search engines get recorded. Thus, you know which search engines are likely to list your website.
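One practical way to collect this information, assuming your server writes a combined-format access log, is to scan the log for requests for robots.txt and gather the agent names. The sample log lines below are made up for illustration:

```python
# Scan a web server access log (combined log format assumed) for requests
# for /robots.txt and collect the user-agent strings of the spiders.
# The sample log lines are fabricated for this example.
import re

sample_log = """\
66.249.64.1 - - [10/Oct/2023:13:55:36 +0000] "GET /robots.txt HTTP/1.1" 200 120 "-" "Googlebot/2.1"
10.0.0.5 - - [10/Oct/2023:13:56:01 +0000] "GET /index.html HTTP/1.1" 200 5120 "-" "Mozilla/5.0"
68.180.230.1 - - [10/Oct/2023:13:57:12 +0000] "GET /robots.txt HTTP/1.1" 200 120 "-" "Slurp/3.0"
"""

# In combined log format the request is the first quoted field and
# the user agent is the last quoted field on the line.
pattern = re.compile(r'"(?:GET|HEAD) /robots\.txt [^"]*".*"([^"]*)"$')

spiders = set()
for line in sample_log.splitlines():
    match = pattern.search(line)
    if match:
        spiders.add(match.group(1))

print(sorted(spiders))  # ['Googlebot/2.1', 'Slurp/3.0']
```

Only the lines that actually fetched robots.txt are counted, so ordinary browser visits (like the Mozilla line above) are left out.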