Monday, April 25, 2011

KIDS maths - website

CarrotSticks is an online multiplayer game that improves math skills and understanding for 1st - 5th graders as they practice and compete with other students around the world!

Kids Love it!
Whether your child prefers competing with others or practicing on their own, they'll love designing their own characters and earning points by solving problems. CarrotSticks takes the friction out of motivating your child to go the extra mile with math by making it social and fun!

Rigorous Practice
Playing CarrotSticks means solving math problems, step by step, in a uniform way. Through repeated practice, CarrotSticks kids build valuable speed and "muscle memory," both of which are key to success in mathematics.

Social AND Safe
CarrotSticks makes math fun by making it social, but because CarrotSticks doesn't allow full names or "free chat," you can rest assured that your child is safe online while playing CarrotSticks.

Conceptual Understanding
CarrotSticks kids can practice on their own or compete with their peers, solving elementary math problems. Each problem category is divided into 25 levels designed by doctoral students at the Stanford School of Education to emphasize specific skills and concepts.

The Student is in Control
CarrotSticks kids can choose to practice on their own, or they can go head-to-head with their peers in real-time matches. In both cases, students choose the problem difficulty level that's right for them, building skills and confidence at their own pace.

Thanks: winmani

Types of HATS in SEO

There are three major types of SEO:

  1. White Hat SEO
  2. Grey Hat SEO
  3. Black Hat SEO
1. White Hat SEO:

White Hat SEO means completely following ethical methods to acquire good rankings in search engines for a website. If a website really provides good information to visitors, then white hat SEO works very well
and gives the best results. Now you may ask, "What are white hat SEO methods?" All the methods that don't try to cheat or manipulate search engines or visitors are called white hat SEO methods. Only a few sites, such as Wikipedia and Google, follow purely white hat SEO methods.

2. Grey Hat SEO:

Grey Hat SEO means following a combination of ethical and unethical methods to get good rankings in search engines for a website. Most SEOs use grey hat methods to attain rankings. Grey Hat SEO works fine most of the time, and many professionals recommend it. Grey hat methods aren't outright unethical; they include a little keyword stuffing on the site, back-link building and so on. Nowadays these methods are quite common in website promotion. Most corporate websites, personal blogs and similar sites follow grey hat SEO methods.

3. Black Hat SEO:

Black Hat SEO means following completely unethical methods that are used to cheat and manipulate search engines as well as users. In past years, search engines weren't able to recognize these methods, and it was difficult to find and filter the websites that used them. Search engines have become smarter now and are able to detect and block websites that use black hat techniques. Search engine quality teams work hard and regularly update the algorithms to give the best results to visitors.

Thanks: SEO Daily Update

Google Webmaster Tools Updates in 2011

There have been several Google Webmaster Tools updates in 2011 so far. Following are the details:

  1. Google Panda Update goes Global for English Users - Apr 11th, 2011
  2. Now Pagespeed online with Mobile Support - Mar 31st, 2011
  3. Detecting Mobile User Agent - Mar 30th, 2011
  4. Introducing Google Social +1 button - Mar 28th, 2011
  5. Video Sitemaps Help - Mar 24th, 2011
  6. Google Panda / Farmer Update - Feb 24th, 2011
  7. Making Websites Mobile Friendly - Feb 22nd, 2011
  8. Advanced features in Webmaster tools with analytics data - Feb 18th, 2011
  9. Single Login for Analytics and Webmaster Tools - Feb 07, 2011
  10. Webmaster Tools Search Queries Update - Feb 03, 2011
1. Advanced features in Webmaster tools with analytics data:

Two weeks ago the Webmaster Tools team released a feature to merge Analytics data with Webmaster Tools data. As expected, the Google team has started releasing more advanced features by combining data from GA and GWT.


Now we can segment search query information by search type, location, and impressions vs. clicks. This kind of data segmentation helps SEO teams better understand search queries so that they can deliver quality results.

2. Analytics to Webmaster Tools - Feb 07, 2011

Another update from the Webmaster team is the ability to link Google Analytics to Webmaster Tools. Many SEOs have a common question: "Is it a good idea?" Yes, it is. Linking Analytics to Webmaster Tools gives more accurate impression and click counts in Webmaster Tools. It helps SEO professionals find the keywords that are converting well and delivering results.

In the future we can expect many more changes in Webmaster Tools related to Analytics data, such as updates to current sections and entirely new sections. Let us see what changes come to the Webmaster Tools dashboard.

Thanks: Analytics to Webmaster Tools

SEO Process Tips to Rank in 2011

Search Engine Optimization (SEO) is one of the most trending terms in online marketing. Google keeps changing its algorithms, and SEO professionals scramble to keep up with the latest changes. Good SEO professionals always suggest using white hat methods, as they help a site stay longer in search engine results. Here are a few white hat SEO tips that follow the latest search engine algorithms in 2011.

  1. Research and pick the right keywords that drive quality traffic, leads and conversions
  2. Analyze competitors who are succeeding in a similar business; check out the methods they follow and the keywords they optimize for
  3. Pick the right domain extension based on the geo-targeted business, like '.in' for India, '.com' for the USA, '.co.uk' for the UK, etc.
  4. Plan a user- and search-engine-friendly design and navigation with good link architecture
  5. Hire a professional content writer to produce good-quality content that catches user as well as search engine attention
  6. Plan different sub-sections with relevant content for the most targeted keywords
  7. Make sure the site has no broken links, timed-out URLs, high load times, etc. Tools like Xenu and extensions like PageSpeed and YSlow help in building a technically clean site.
  8. Use a robots.txt file to block unnecessary pages from being indexed by search engines
  9. Create XML and HTML sitemaps for users and search engines (a small sitemap-generation sketch follows this list)
  10. Submit the XML sitemap to each search engine's webmaster tools account and fix any errors
  11. Build only quality back links using white hat link building methods
  12. Analyze traffic and adjust strategies to drive quality traffic that converts to sales
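To go with tips 8-10, here is a minimal sketch of generating an XML sitemap with Python's standard library. The page URLs and output filename are hypothetical placeholders; a real site would enumerate its own pages (for example from a CMS or a crawl).

    # Minimal XML sitemap generator (sketch).
    # The URLs below are hypothetical placeholders for your own pages.
    from xml.etree import ElementTree as ET

    PAGES = [
        "http://www.example.com/",
        "http://www.example.com/products/",
        "http://www.example.com/contact/",
    ]

    def build_sitemap(urls):
        # Root element with the standard sitemaps.org namespace
        urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
        for page in urls:
            url_el = ET.SubElement(urlset, "url")
            ET.SubElement(url_el, "loc").text = page
        return ET.ElementTree(urlset)

    if __name__ == "__main__":
        build_sitemap(PAGES).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
        print("Wrote sitemap.xml with", len(PAGES), "URLs")

Once the file is uploaded to the site root, its URL can be submitted in webmaster tools as described in tip 10.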
Thanks: SEO Daily Update

Monday, March 22, 2010

What is SEO?

SEO is the active practice of optimizing a web site by improving internal and external aspects in order to increase the traffic the site receives from search engines. Firms that practice SEO can vary; some have a highly specialized focus while others take a more broad and general approach. Optimizing a web site for search engines can require looking at so many unique elements that many practitioners of SEO (SEOs) consider themselves to be in the broad field of website optimization (since so many of those elements intertwine).

This guide is designed to describe all areas of SEO - from discovery of the terms and phrases that will generate traffic, to making a site search engine friendly to building the links and marketing the unique value of the site/organization's offerings.

Why does my company/organization/website need SEO?
The majority of web traffic is driven by the major commercial search engines - Yahoo!, MSN, Google & AskJeeves (although AOL gets nearly 10% of searches, their engine is powered by Google's results). If your site cannot be found by search engines or your content cannot be put into their databases, you miss out on the incredible opportunities available to websites provided via search - people who want what you have visiting your site. Whether your site provides content, services, products or information, search engines are a primary method of navigation for almost all Internet users.
Search queries, the words and phrases that users type into the search box, carry extraordinary value when they are well suited to your site. Experience has shown that search engine traffic can make (or break) an organization's success. Targeted visitors to a website can provide publicity, revenue and exposure like no other. Investing in SEO, whether through time or finances, can have an exceptional rate of return.

Why can't the search engines figure out my site without SEO help?
Search engines are always working towards improving their technology to crawl the web more deeply and return increasingly relevant results to users. However, there is and will always be a limit to how search engines can operate. Whereas the right moves can net you thousands of visitors and attention, the wrong moves can hide or bury your site deep in the search results where visibility is minimal. In addition to making content available to search engines, SEO can also help boost rankings, so that content that has been found will be placed where searchers will more readily see it. The online environment is becoming increasingly competitive and those companies who perform SEO will have a decided advantage in visitors and customers.

How Search Engines Operate
Search engines have a short list of critical operations that allows them to provide relevant web results when searchers use their system to find information.

Crawling the Web
Search engines run automated programs, called "bots" or "spiders" that use the hyperlink structure of the web to "crawl" the pages and documents that make up the World Wide Web. Estimates are that of the approximately 20 billion existing pages, search engines have crawled between 8 and 10 billion.
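To make the crawling step concrete, here is a minimal breadth-first crawler sketch using only the Python standard library. The seed URL is a hypothetical placeholder, and a production spider would also respect robots.txt, throttle its requests, and handle far more edge cases.

    # Minimal breadth-first crawler sketch (standard library only).
    # The seed URL is a hypothetical placeholder; real spiders also obey robots.txt.
    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkExtractor(HTMLParser):
        # Collects href targets from anchor tags.
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.links.extend(value for name, value in attrs if name == "href" and value)

    def crawl(seed, max_pages=10):
        seen, queue = set(), deque([seed])
        while queue and len(seen) < max_pages:
            url = queue.popleft()
            if url in seen:
                continue
            seen.add(url)
            try:
                html = urlopen(url, timeout=5).read().decode("utf-8", errors="ignore")
            except Exception:
                continue  # skip pages that error out or time out
            parser = LinkExtractor()
            parser.feed(html)
            queue.extend(urljoin(url, href) for href in parser.links)  # resolve relative links
        return seen

    if __name__ == "__main__":
        for page in crawl("http://www.example.com/"):
            print(page)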

Indexing Documents
Once a page has been crawled, its contents can be "indexed" - stored in a giant database of documents that makes up a search engine's "index". This index needs to be tightly managed, so that requests which must search and sort billions of documents can be completed in fractions of a second.
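A toy version of such an index is an inverted index that maps each term to the set of documents containing it; the sample documents below are invented for illustration.

    # Toy inverted index: term -> set of document ids containing that term.
    # The sample documents are invented for illustration.
    from collections import defaultdict

    DOCS = {
        1: "car and driver magazine reviews new cars",
        2: "the magazine for classic car collectors",
        3: "driver safety tips for new drivers",
    }

    def build_index(docs):
        index = defaultdict(set)
        for doc_id, text in docs.items():
            for term in text.lower().split():
                index[term].add(doc_id)
        return index

    index = build_index(DOCS)
    print(index["car"])       # {1, 2}
    print(index["magazine"])  # {1, 2}

A real index adds term positions, compression and sharding so that billions of documents can be searched in fractions of a second, but the term-to-documents mapping is the core idea.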

Processing Queries
When a request for information comes into the search engine (hundreds of millions arrive each day), the engine retrieves from its index all the documents that match the query. A match is determined if the terms or phrase are found on the page in the manner specified by the user. For example, a search for car and driver magazine at Google returns 8.25 million results, but a search for the same phrase in quotes ("car and driver magazine") returns only 166 thousand results. In the first system, commonly called "Findall" mode, Google returned all documents which had the terms "car", "driver" and "magazine" (it ignored the term "and" because it isn't useful for narrowing the results), while in the second search, only those pages with the exact phrase "car and driver magazine" were returned. Other advanced operators (Google has a list of 11) can change which results a search engine will consider a match for a given query.
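The difference between "Findall" matching and exact-phrase matching can be sketched as follows; the documents are again invented for illustration, and real engines use positional indexes rather than raw text scans.

    # Sketch of "Findall" (all terms anywhere) vs exact-phrase matching.
    # The documents are invented for illustration.
    DOCS = {
        1: "car and driver magazine reviews new cars",
        2: "the magazine every car driver should read",
        3: "driver safety tips from a car magazine",
    }

    def findall(query, docs):
        # Match documents containing every query term, ignoring "and"
        terms = [t for t in query.lower().split() if t != "and"]
        return {d for d, text in docs.items() if all(t in text.split() for t in terms)}

    def phrase(query, docs):
        # Match only documents containing the exact phrase as a substring
        return {d for d, text in docs.items() if query.lower() in text}

    print(findall("car and driver magazine", DOCS))  # {1, 2, 3}
    print(phrase("car and driver magazine", DOCS))   # {1}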

Ranking Results
Once the search engine has determined which results are a match for the query, the engine's algorithm (a mathematical equation commonly used for sorting) runs calculations on each of the results to determine which is most relevant to the given query. They sort these on the results pages in order from most relevant to least so that users can make a choice about which to select.
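A drastically simplified version of that ranking step is to score each matching document by how often the query terms appear and sort the scores in descending order; real engines combine hundreds of signals, and the documents below are invented for illustration.

    # Toy ranking: score matches by query-term frequency, best score first.
    # The documents and scoring rule are illustrative only.
    DOCS = {
        "page-a": "car magazine with car reviews and car news",
        "page-b": "a magazine about cars and drivers",
        "page-c": "driver profiles in a motoring magazine",
    }

    def score(query, text):
        words = text.lower().split()
        # Count how many times any query term appears in the document
        return sum(words.count(term) for term in query.lower().split())

    def rank(query, docs):
        scored = [(score(query, text), doc_id) for doc_id, text in docs.items()]
        return [doc_id for s, doc_id in sorted(scored, reverse=True) if s > 0]

    print(rank("car magazine", DOCS))  # page-a ranks first; the other two tie on score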

Although a search engine's operations are not particularly lengthy, systems like Google, Yahoo!, AskJeeves and MSN are among the most complex, processing-intensive computers in the world, managing millions of calculations each second and funneling demands for information to an enormous group of users.

Speed Bumps & Walls
Certain types of navigation may hinder or entirely prevent search engines from reaching your website's content. As search engine spiders crawl the web, they rely on the architecture of hyperlinks to find new documents and revisit those that may have changed. In the analogy of speed bumps and walls, complex links and deep site structures with little unique content may serve as "bumps." Data that cannot be accessed by spiderable links qualify as "walls."

Possible "Speed Bumps" for SE Spiders:

  • URLs with 2+ dynamic parameters; i.e. http://www.url.com/page.php?id=4&CK=34rr&User=%Tom% (spiders may be reluctant to crawl complex URLs like this because they often result in errors with non-human visitors)
  • Pages with more than 100 unique links to other pages on the site (spiders may not follow each one)
  • Pages buried more than 3 clicks/links from the home page of a website (unless there are many other external links pointing to the site, spiders will often ignore deep pages)
  • Pages requiring a "Session ID" or Cookie to enable navigation (spiders may not be able to retain these elements as a browser user can)
  • Pages that are split into "frames" can hinder crawling and cause confusion about which pages to rank in the results.


Possible "Walls" for SE Spiders:

  • Pages accessible only via a select form and submit button
  • Pages requiring a drop down menu (HTML attribute) to access them
  • Documents accessible only via a search box
  • Documents blocked purposefully (via a robots meta tag or robots.txt file; a quick way to check a URL against robots.txt is sketched after this list)
  • Pages requiring a login
  • Pages that re-direct before showing content (search engines call this cloaking or bait-and-switch and may actually ban sites that use this tactic)
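For the purposeful blocking mentioned in the list above, the Python standard library's robotparser module offers a quick way to check whether a crawler may fetch a given URL; the site and user agent below are hypothetical placeholders.

    # Check whether a URL is blocked for a given crawler via robots.txt.
    # The site and user agent are hypothetical placeholders.
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser("http://www.example.com/robots.txt")
    rp.read()  # fetches and parses the live robots.txt file

    for path in ("/", "/private/report.html"):
        allowed = rp.can_fetch("Googlebot", "http://www.example.com" + path)
        print(path, "->", "crawlable" if allowed else "blocked")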

The key to ensuring that a site's contents are fully crawlable is to provide direct, HTML links to each page you want the search engine spiders to index. Remember that if a page cannot be accessed from the home page (where most spiders are likely to start their crawl) it is likely that it will not be indexed by the search engines. A sitemap (which is discussed later in this guide) can be of tremendous help for this purpose.

Measuring Relevance and Popularity

Modern commercial search engines rely on the science of information retrieval (IR). That science has existed since the middle of the 20th century, when retrieval systems powered computers in libraries, research facilities and government labs. Early in the development of search systems, IR scientists realized that two critical components made up the majority of search functionality:

Relevance - the degree to which the content of the documents returned in a search matched the user's query intention and terms. The relevance of a document increases if the terms or phrase queried by the user occurs multiple times and shows up in the title of the work or in important headlines or subheaders.

Popularity - the relative importance, measured via citation (the act of one work referencing another, as often occurs in academic and business documents) of a given document that matches the user's query. The popularity of a given document increases with every other document that references it.

These two items were translated to web search 40 years later and manifest themselves in the form of document analysis and link analysis.

In document analysis, search engines look at whether the search terms are found in important areas of the document - the title, the meta data, the heading tags and the body of text content. They also attempt to automatically measure the quality of the document (through complex systems beyond the scope of this guide).
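One crude sketch of that kind of document analysis is weighted-zone scoring, where a query term found in the title or a heading counts for more than one found in the body. The zone weights and the sample document below are arbitrary assumptions for illustration, not the engines' actual values.

    # Toy weighted-zone scoring: title and headings count more than body text.
    # The zone weights and the sample document are illustrative assumptions.
    ZONE_WEIGHTS = {"title": 3.0, "headings": 2.0, "body": 1.0}

    DOCUMENT = {
        "title": "classic car restoration guide",
        "headings": "choosing a classic car  restoring the engine",
        "body": "this guide walks through restoring a classic car step by step",
    }

    def zone_score(query, doc):
        score = 0.0
        for zone, weight in ZONE_WEIGHTS.items():
            words = doc.get(zone, "").lower().split()
            for term in query.lower().split():
                score += weight * words.count(term)
        return score

    print(zone_score("classic car", DOCUMENT))  # 12.0 - the title hits weigh the most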

In link analysis, search engines measure not only who is linking to a site or page, but what they are saying about that page/site. They also have a good grasp on who is affiliated with whom (through historical link data, the site's registration records and other sources), who is worthy of being trusted (links from .edu and .gov pages are generally more valuable for this reason) and contextual data about the site the page is hosted on (who links to that site, what they say about the site, etc.).

Link and document analysis combine and overlap hundreds of factors that can be individually measured and filtered through the search engine algorithms (the set of instructions that tell the engines what importance to assign to each factor). The algorithm then determines scoring for the documents and (ideally) lists results in decreasing order of importance (rankings).

As search engines index the web's link structure and page contents, they find two distinct kinds of information about a given site or page - attributes of the page/site itself and descriptives about that site/page from other pages. Since the web is such a commercial place, with so many parties interested in ranking well for particular searches, the engines have learned that they cannot always rely on websites to be honest about their importance. Thus, the days when artificially stuffed meta tags and keyword rich pages dominated search results (pre-1998) have vanished and given way to search engines that measure trust via links and content.

The theory goes that if hundreds or thousands of other websites link to you, your site must be popular, and thus, have value. If those links come from very popular and important (and thus, trustworthy) websites, their power is multiplied to even greater degrees. Links from sites like NYTimes.com, Yale.edu, Whitehouse.gov and others carry with them inherent trust that search engines then use to boost your ranking position. If, on the other hand, the links that point to you are from low-quality, interlinked sites or automated garbage domains (aka link farms), search engines have systems in place to discount the value of those links.

The most well-known system for ranking sites based on link data is the simplistic formula developed by Google's founders - PageRank. PageRank, which relies on log-based calculations, is described by Google in their technology section:

PageRank relies on the uniquely democratic nature of the web by using its vast link structure as an indicator of an individual page's value. In essence, Google interprets a link from page A to page B as a vote, by page A, for page B. But, Google looks at more than the sheer volume of votes, or links a page receives; it also analyzes the page that casts the vote. Votes cast by pages that are themselves "important" weigh more heavily and help to make other pages "important."

PageRank is derived (roughly speaking) by amalgamating all the links that point to a particular page, adding the value of the PageRank that they pass (based on their own PageRank), and applying the calculations in the formula (see Ian Rogers' explanation for more details).
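A minimal sketch of that calculation is the classic power-iteration form of PageRank, shown below on a tiny, made-up link graph; real engines operate at web scale and add many refinements (handling for dangling pages, spam filtering, and so on).

    # Simplified PageRank by power iteration on a tiny, made-up link graph.
    # graph maps each page to the pages it links out to.
    graph = {
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
        "D": ["C"],
    }

    def pagerank(graph, damping=0.85, iterations=50):
        pages = list(graph)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, outlinks in graph.items():
                share = rank[page] / len(outlinks)  # each outgoing link passes an equal share
                for target in outlinks:
                    new_rank[target] += damping * share
            rank = new_rank
        return rank

    for page, value in sorted(pagerank(graph).items(), key=lambda kv: -kv[1]):
        print(page, round(value, 3))

In this toy graph, page C ends up with the highest score because every other page links to it, which mirrors the "votes" description quoted above.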

PageRank, in essence, measures the brute link force of a site based on every other link that points to it without significant regard for quality, relevance or trust. Hence, in the modern era of SEO, the PageRank measurement in Google's toolbar, directory or through sites that query the service is of limited value. Pages with PR8 can be found ranked 20-30 positions below pages with a PR3 or PR4. In addition, the toolbar numbers are updated only every 3-6 months by Google, making the values even less useful. Rather than focusing on PageRank, it's important to think holistically about a link's worth.

Here's a small list of the most important factors search engines look at when attempting to value a link:

  • The Anchor Text of Link - Anchor text describes the visible characters and words that hyperlink to another document or location on the web. For example, in the phrase, "CNN is a good source of news, but I actually prefer the BBC's take on events," two unique pieces of anchor text exist - "CNN" is the anchor text pointing to http://www.cnn.com, while "the BBC's take on events" points to http://news.bbc.co.uk. Search engines use this text to help them determine the subject matter of the linked-to document. In the example above, the links would tell the search engine that when users search for "CNN", SEOmoz.org thinks that http://www.cnn.com is a relevant site for the term "CNN" and that http://news.bbc.co.uk is relevant to "the BBC's take on events". If hundreds or thousands of sites think that a particular page is relevant for a given set of terms, that page can manage to rank well even if the terms NEVER appear in the text itself (for example, see the BBC's explanation of why Google ranks certain pages for the term "Miserable Failure").
  • Global Popularity of the Site - More popular sites, as denoted by the number and power of the links pointing to them, provide more powerful links. Thus, while a link from SEOmoz may be a valuable vote for a site, a link from bbc.co.uk or cnn.com carries far more weight. This is one area where PageRank (assuming it was accurate), could be a good measure, as it's designed to calculate global popularity.
  • Popularity of Site in Relevant Communities - In the example above, the weight or power of a site's vote is based on its raw popularity across the web. As search engines became more sophisticated and granular in their approach to link data, they acknowledged the existence of "topical communities"; sites on the same subject that often interlink with one another, referencing documents and providing unique data on a particular topic. Sites in these communities provide more value when they link to a site/page on a relevant subject rather than a site that is largely irrelevant to their topic.
  • Text Directly Surrounding the Link - Search engines have been noted to weight the text directly surrounding a link with greater importance and relevance than the other text on the page. Thus, a link from inside an on-topic paragraph may carry greater weight than a link in the sidebar or footer.
  • Subject Matter of the Linking Page - The topical relationship between the subject of a given page and the sites/pages linked to on it may also factor into the value a search engine assigns to that link. Thus, it will be more valuable to have links from pages that are related to the site's/page's subject matter than from those that have little to do with the topic.

These are only a few of the many factors search engines measure and weight when evaluating links.

Monday, November 30, 2009

Search engine optimization (SEO)

Search engine optimization (SEO) is the process of improving the volume or quality of traffic to a web site from search engines via "natural" or un-paid ("organic" or "algorithmic") search results as opposed to search engine marketing (SEM) which deals with paid inclusion. Typically, the earlier (or higher) a site appears in the search results list, the more visitors it will receive from the search engine. SEO may target different kinds of search, including image search, local search, video search and industry-specific vertical search engines. This gives a web site web presence.

1. How Search Engines Work

The first basic truth you need to learn about SEO is that search engines are not humans. While this might be obvious to everybody, the differences between how humans and search engines view web pages aren't. Unlike humans, search engines are text-driven. Although technology advances rapidly, search engines are far from intelligent creatures that can feel the beauty of a cool design or enjoy the sounds and movement in movies. Instead, search engines crawl the Web, looking at particular site items (mainly text) to get an idea of what a site is about. This brief explanation is not the most precise because, as we will see next, search engines perform several activities in order to deliver search results: crawling, indexing, processing, calculating relevancy, and retrieving.

First, search engines crawl the Web to see what is there. This task is performed by a piece of software called a crawler or a spider (or Googlebot, as is the case with Google). Spiders follow links from one page to another and index everything they find on their way. Bearing in mind the number of pages on the Web (over 20 billion), it is impossible for a spider to visit a site daily just to see if a new page has appeared or if an existing page has been modified. Sometimes crawlers will not visit your site for a month or two, so during this time your SEO efforts will not be rewarded. There is nothing you can do about it except be patient.

What you can do is check what a crawler sees on your site. As already mentioned, crawlers are not humans and they do not see images, Flash movies, JavaScript, frames, password-protected pages and directories, so if you have tons of these on your site, you'd better run a spider simulator to see if these goodies are viewable by the spider (a rough sketch of one is shown below). If they are not viewable, they will not be spidered, not indexed, not processed, etc. - in a word, they will be non-existent for search engines.
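A rough stand-in for such a spider simulator, using only the Python standard library, is sketched below: it fetches a page and prints only the plain text and link targets that a text-driven crawler would see, ignoring images, scripts and styling. The URL is a hypothetical placeholder.

    # Crude "spider simulator": show only the text and links a crawler sees.
    # The URL is a hypothetical placeholder.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class SpiderView(HTMLParser):
        def __init__(self):
            super().__init__()
            self.skip = False        # True while inside <script> or <style>
            self.text, self.links = [], []

        def handle_starttag(self, tag, attrs):
            if tag in ("script", "style"):
                self.skip = True
            elif tag == "a":
                self.links.extend(v for k, v in attrs if k == "href" and v)

        def handle_endtag(self, tag):
            if tag in ("script", "style"):
                self.skip = False

        def handle_data(self, data):
            if not self.skip and data.strip():
                self.text.append(data.strip())

    if __name__ == "__main__":
        html = urlopen("http://www.example.com/", timeout=5).read().decode("utf-8", errors="ignore")
        view = SpiderView()
        view.feed(html)
        print("TEXT:", " ".join(view.text)[:500])
        print("LINKS:", view.links)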