CarrotSticks is an online multiplayer game that helps 1st–5th graders improve their math skills as they practice and compete with other students around the world!
Kids love it! (Thanks, winmani.)
Posted by Babu SEO Tips at 3:34 AM
There are three major types of hats in SEO: white hat, black hat, and gray hat.
Posted by Babu SEO Tips at 2:47 AM
Labels: SEO Hat, Types of HATs in SEO
Google has released three Webmaster Tools updates so far in 2011, all of which went live in February 2011. Here are the details:
Posted by Babu SEO Tips at 2:38 AM
Search Engine Optimization (SEO) is the most talked-about term in online marketing. Google keeps changing its algorithms, and SEO professionals break their heads trying to keep up with every change. Good SEO professionals always recommend white hat methods, which help a site stay in the search engine results for the long term. Here are a few white hat SEO tips that follow the latest search engine algorithms of 2011.
Posted by Babu SEO Tips at 2:36 AM
Possible "Walls" for SE Spiders:
The key to ensuring that a site's contents are fully crawlable is to provide direct HTML links to each page you want the search engine spiders to index. Remember that if a page cannot be accessed from the home page (where most spiders are likely to start their crawl), it is likely that it will not be indexed by the search engines. A sitemap (discussed later in this guide) can be of tremendous help for this purpose.
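As a hedged illustration (the guide itself includes no code), here is a minimal Python sketch that generates a basic XML sitemap following the sitemaps.org format; the URLs are placeholders, not taken from any real site:

```python
# Minimal sketch: emit a basic XML sitemap so spiders can discover
# every page directly. The URLs below are illustrative placeholders.
urls = [
    "http://www.example.com/",
    "http://www.example.com/about.html",
    "http://www.example.com/products.html",
]

lines = ['<?xml version="1.0" encoding="UTF-8"?>',
         '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
for u in urls:
    lines.append("  <url><loc>%s</loc></url>" % u)
lines.append("</urlset>")

sitemap = "\n".join(lines)
print(sitemap)
```

Saving this output as sitemap.xml at the site root (and submitting it through Webmaster Tools) gives spiders a direct path to every listed page.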
Measuring Relevance and Popularity
Modern commercial search engines rely on the science of information retrieval (IR). That science has existed since the middle of the 20th century, when retrieval systems powered computers in libraries, research facilities and government labs. Early in the development of search systems, IR scientists realized that two critical components made up the majority of search functionality:
Relevance - the degree to which the content of the documents returned in a search matched the user's query intention and terms. The relevance of a document increases if the terms or phrase queried by the user occurs multiple times and shows up in the title of the work or in important headlines or subheaders.
Popularity - the relative importance, measured via citation (the act of one work referencing another, as often occurs in academic and business documents) of a given document that matches the user's query. The popularity of a given document increases with every other document that references it.
These two items were translated to web search 40 years later and manifest themselves in the form of document analysis and link analysis.
In document analysis, search engines look at whether the search terms are found in important areas of the document - the title, the meta data, the heading tags and the body of text content. They also attempt to automatically measure the quality of the document (through complex systems beyond the scope of this guide).
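The weighting idea above can be sketched in a few lines of Python. The weights (a title hit counting three times as much as a body hit) are made up for illustration and are not taken from any real engine:

```python
# Toy sketch of document analysis: count query-term matches, weighting
# hits in the title more heavily than hits in the body text.
def relevance(query_terms, title, body):
    score = 0.0
    title_words = title.lower().split()
    body_words = body.lower().split()
    for term in query_terms:
        t = term.lower()
        score += 3.0 * title_words.count(t)  # title hits weighted higher
        score += 1.0 * body_words.count(t)   # body hits weighted lower
    return score

print(relevance(["seo"], "SEO basics", "An intro to seo and seo tools"))
# 3.0 (one title hit) + 2.0 (two body hits) = 5.0
```

Real engines fold in many more signals (headings, meta data, quality scores), but the core idea is the same: where a term appears matters, not just whether it appears.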
In link analysis, search engines measure not only who is linking to a site or page, but what they are saying about that page/site. They also have a good grasp on who is affiliated with whom (through historical link data, the site's registration records and other sources), who is worthy of being trusted (links from .edu and .gov pages are generally more valuable for this reason) and contextual data about the site the page is hosted on (who links to that site, what they say about the site, etc.).
Link and document analysis combine and overlap hundreds of factors that can be individually measured and filtered through the search engine algorithms (the set of instructions that tell the engines what importance to assign to each factor). The algorithm then determines scoring for the documents and (ideally) lists results in decreasing order of importance (rankings).
As search engines index the web's link structure and page contents, they find two distinct kinds of information about a given site or page: attributes of the page/site itself and descriptions of that site/page from other pages. Since the web is such a commercial place, with so many parties interested in ranking well for particular searches, the engines have learned that they cannot always rely on websites to be honest about their importance. Thus, the days when artificially stuffed meta tags and keyword-rich pages dominated search results (pre-1998) have vanished, giving way to search engines that measure trust via links and content.
The theory goes that if hundreds or thousands of other websites link to you, your site must be popular, and thus, have value. If those links come from very popular and important (and thus, trustworthy) websites, their power is multiplied to even greater degrees. Links from sites like NYTimes.com, Yale.edu, Whitehouse.gov and others carry with them inherent trust that search engines then use to boost your ranking position. If, on the other hand, the links that point to you are from low-quality, interlinked sites or automated garbage domains (aka link farms), search engines have systems in place to discount the value of those links.
The most well-known system for ranking sites based on link data is the simplistic formula developed by Google's founders - PageRank. PageRank, which relies on log-based calculations, is described by Google in their technology section:
PageRank relies on the uniquely democratic nature of the web by using its vast link structure as an indicator of an individual page's value. In essence, Google interprets a link from page A to page B as a vote, by page A, for page B. But, Google looks at more than the sheer volume of votes, or links a page receives; it also analyzes the page that casts the vote. Votes cast by pages that are themselves "important" weigh more heavily and help to make other pages "important."
PageRank is derived (roughly speaking) by amalgamating all the links that point to a particular page, adding the value of the PageRank they pass (based on their own PageRank), and applying the calculations in the formula (see Ian Rogers' explanation for more details).
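The voting idea can be sketched as a small power iteration over a link graph. The three-page graph below is made up purely for illustration; the damping factor of 0.85 comes from the original PageRank paper:

```python
# Hedged sketch of the PageRank idea: repeatedly pass "votes" along
# links until the scores stabilize. The graph is illustrative only.
links = {            # page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}
pages = list(links)
d = 0.85                                  # damping factor
pr = {p: 1.0 / len(pages) for p in pages}  # start with equal scores

for _ in range(50):
    new = {}
    for p in pages:
        # PageRank passed to p by every page that links to it,
        # split evenly among each linking page's outbound links
        incoming = sum(pr[q] / len(links[q]) for q in pages if p in links[q])
        new[p] = (1 - d) / len(pages) + d * incoming
    pr = new

for p in pages:
    print(p, round(pr[p], 3))
```

Notice that C ends up with the highest score: it is "voted for" by both A and B, while B receives only half of A's vote. That is the "important pages weigh more heavily" effect in miniature.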
PageRank, in essence, measures the brute link force of a site based on every other link that points to it without significant regard for quality, relevance or trust. Hence, in the modern era of SEO, the PageRank measurement in Google's toolbar, directory or through sites that query the service is of limited value. Pages with PR8 can be found ranked 20-30 positions below pages with a PR3 or PR4. In addition, the toolbar numbers are updated only every 3-6 months by Google, making the values even less useful. Rather than focusing on PageRank, it's important to think holistically about a link's worth.
Here's a small list of the most important factors search engines look at when attempting to value a link:
These are only a few of the many factors search engines measure and weight when evaluating links.
Posted by Babu SEO Tips at 1:31 AM
The first basic truth you need to learn about SEO is that search engines are not humans. While this might be obvious to everybody, the differences between how humans and search engines view web pages aren't. Unlike humans, search engines are text-driven. Although technology advances rapidly, search engines are far from intelligent creatures that can feel the beauty of a cool design or enjoy the sounds and movement in movies. Instead, search engines crawl the Web, looking at particular site items (mainly text) to get an idea of what a site is about. This brief explanation is not the most precise, because, as we will see next, search engines perform several activities in order to deliver search results: crawling, indexing, processing, calculating relevancy, and retrieving.
First, search engines crawl the Web to see what is there. This task is performed by a piece of software called a crawler or a spider (or Googlebot, as is the case with Google). Spiders follow links from one page to another and index everything they find on their way. Keeping in mind the number of pages on the Web (over 20 billion), it is impossible for a spider to visit a site daily just to see if a new page has appeared or if an existing page has been modified. Sometimes crawlers will not visit your site for a month or two, and during that time your SEO efforts will not be rewarded. There is nothing you can do about it, so just be patient.
What you can do is check what a crawler sees on your site. As already mentioned, crawlers are not humans: they do not see images, Flash movies, JavaScript, frames, password-protected pages, or directories. If you have tons of these on your site, you'd better run a spider simulator to see whether these goodies are visible to the spider. If they are not visible, they will not be spidered, not indexed, not processed, etc. In a word, they will be non-existent for search engines.
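A minimal sketch, using only Python's standard library, of what a text-driven spider "sees": tags, scripts, and images fall away, leaving only the text. The sample page below is made up for illustration:

```python
# Rough sketch of a spider's-eye view of a page: extract only the text,
# skipping <script>/<style> content. Images and styling never appear.
from html.parser import HTMLParser

class TextOnly(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []
        self.skip = 0            # depth inside <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip:
            self.skip -= 1

    def handle_data(self, data):
        if not self.skip and data.strip():
            self.chunks.append(data.strip())

page = """<html><head><title>SEO Tips</title>
<script>var x = 1;</script></head>
<body><h1>White Hat SEO</h1><img src="logo.png" alt="logo">
<p>Content is king.</p></body></html>"""

parser = TextOnly()
parser.feed(page)
print(" | ".join(parser.chunks))
```

Note what survives: the title, the heading, and the paragraph text. The script code and the image are simply invisible, which is exactly why text content matters so much for SEO.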
Posted by Babu SEO Tips at 1:44 AM
Labels: Babu Natesan, Search engine optimization, SEO