“Serving our end users is at the heart of what we do and remains our number one priority” – Founders’ Letter from Google, 2004
Eventually, computer scientists developed small programs that went out, scoured the web, and looked at what exists on each web page. They gave these programs the nickname “spiders” because they are constantly “crawling” over the web to discover new content. The spiders gobble up the information on a web page and pass it back to massive storage systems in a process called “indexing”. Once a website has been indexed, the search engine knows what content is on that page, and when a user enters a keyword phrase, the search engine says, “Aha, I know a bunch of pages that have that type of content, let me serve that up for you.”
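To make the crawl-and-index idea concrete, here is a minimal sketch in Python. The tiny two-page “web”, its URLs, and the simple word-splitting are all hypothetical stand-ins for illustration; real spiders fetch live pages over HTTP and real indexes are vastly more sophisticated.

```python
from collections import defaultdict

# Stand-in for the web (hypothetical): page URL -> (text content, outgoing links).
web = {
    "site.com/home":  ("welcome to our paris travel guide", ["site.com/paris"]),
    "site.com/paris": ("paris hotels and the eiffel tower", []),
}

index = defaultdict(set)                 # word -> set of pages containing it
to_visit, seen = ["site.com/home"], set()

# The "crawl": follow links from page to page, visiting each page once.
while to_visit:
    url = to_visit.pop()
    if url in seen:
        continue
    seen.add(url)
    text, links = web[url]
    for word in text.split():            # "indexing": record what's on the page
        index[word].add(url)
    to_visit.extend(links)

# Serving a query: look up which pages contain the keyword.
print(index["paris"])                    # both pages mention "paris"
```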
The user is then presented with a page of links that the search engine thinks most closely match what they are looking for. This page is called the Search Engine Results Page, or “SERP”, and is the familiar list of blue links with short descriptions underneath.
Search Engine Algorithms
An algorithm is a step-by-step process a computer follows to solve a specific problem. Search engines employ constantly evolving algorithms to figure out what a user most likely wants when they type a keyword phrase (commonly referred to as a “query”) into a search box.
The hard problem search engines must solve is determining the user’s intent.
Consider this seemingly simple question – if a user types in “Paris” what might they be looking for?
- Could it be Paris, as in the city in France?
- Or is it Paris Hilton?
When you consider the countless possible interpretations of even a narrow keyword query like this, the challenge of serving up relevant results becomes mind-bogglingly difficult.
So search engines use an ever-changing variety of factors, weighed statistically, to figure out what a user is really looking for. Google rose to prominence by being especially good at this, thanks in large part to its PageRank algorithm, which scores a page by the quantity and quality of the links pointing to it.
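As a rough illustration, here is a toy version of the idea behind PageRank: a page’s score depends on the scores of the pages linking to it. The three-site link graph and the iteration count below are hypothetical, and real-world ranking blends link-based signals like this with many others.

```python
# Hypothetical link graph: page -> pages it links to.
links = {
    "a.com": ["b.com", "c.com"],
    "b.com": ["c.com"],
    "c.com": ["a.com"],
}

damping = 0.85                     # chance the "random surfer" follows a link
pages = list(links)
rank = {p: 1 / len(pages) for p in pages}   # start with equal scores

for _ in range(50):                # iterate until the scores stabilize
    new_rank = {}
    for p in pages:
        # Sum the rank flowing into p from every page that links to it,
        # split evenly across each linking page's outgoing links.
        inbound = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new_rank[p] = (1 - damping) / len(pages) + damping * inbound
    rank = new_rank

# Pages with more (and better-ranked) inbound links score higher.
print(sorted(rank.items(), key=lambda kv: -kv[1]))
```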