Wednesday 29 June 2011

10 Basic SEO Tips Every Web Developer Should Follow

Search Engine Optimization (SEO) is, next to high-quality content, one of the most important ingredients for getting large amounts of traffic to your website. Once your articles are a few weeks old, the majority of their traffic will come from search engines, which is why it's important to rank high.

In this article I'm going to show you 10 basic SEO tips that you should keep in mind when writing a new article or developing a new website to make sure your blog ranks high in the search engines.


1. Place Your Keywords In the Title Tag

The title tag is one of the most important factors for ranking high for particular keywords, so it's very important to write a title that contains your important keywords and grabs people's attention. When people search on engines like Google, the title of your page is displayed in the results, and people click the most attractive links first. It's also rewarding to have a unique title, so before you use one, Google it and check whether it already appears in the top 10 results.

Please keep in mind that Google only displays the first 70 characters (about 8 words) of your page's title, and does not even index the keywords in your title after the 12th or so word. Yahoo and MSN don't even index pages whose title is too long. Source: Google – How Many Words In A Page Title Tag?

Note: WordPress users should use the All in One SEO Pack plugin. It lets you easily place the title of your post at the beginning of your <title> tag.
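
As an illustration, a title that puts the post's keywords first and stays under the 70-character limit might look like this (the blog name here is made up):

```html
<!-- Keywords first, site name last, well under 70 characters -->
<title>10 Basic SEO Tips for Web Developers | Example Blog</title>
```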


2. Optimize Your Robots.txt

Having an optimized robots.txt is important because this file can control which pages of your website get crawled by search engine spiders. For those of you who use WordPress, I recommend preventing spiders from crawling the wp-admin and plugin folders.

Take a look at my robots.txt to get an idea of how it works. Its code is pretty much self-explanatory, but if you want a better understanding of it, take a look at the links mentioned below.
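
As a rough sketch, a WordPress robots.txt that keeps spiders out of the admin and plugin folders (as recommended above) could look like this; adjust the paths to your own install:

```
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-content/plugins/
```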

Here are some more resources about the robots.txt file:

    * Beginners Guide To Robots.txt Files
    * Robots Text File - robots.txt


3. Proper Use of the ALT Attribute

The alt attribute should be used to describe an image, not be stuffed with keywords that have nothing to do with it.
Why? While it's true that search engine spiders cannot read images, they can read the alt attribute, and stuffing it with keywords that are totally irrelevant to the image tells Google exactly what to derank you for.
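
For illustration (the file name and keywords are made up), compare a descriptive alt attribute with a keyword-stuffed one:

```html
<!-- Descriptive: tells the spider what the image actually shows -->
<img src="moose.jpg" alt="A moose crossing a forest road">

<!-- Spammy: irrelevant keywords that can get you deranked -->
<img src="moose.jpg" alt="cheap seo hosting wordpress themes free">
```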

Here's some more information about the alt attribute:

    * Don’t Optimize Your ALT Tags For Google!
    * Why you should care with the image ALT attribute


4. Anchor Text of Links

The anchor text is the text inside your <a> tags. Whether they're incoming or outgoing links, it's important for the anchor text to describe where the link is taking the visitor. This matters even more for search engines, because it tells their crawlers what the linked page is about.

So avoid phrases like "Click here" as anchor text, as they can hurt both your website and the website you're linking to. One more thing to keep in mind: keep your anchor text under 55 characters.
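
A quick illustration (the URL is made up):

```html
<!-- Vague: tells neither users nor crawlers where the link goes -->
<a href="/seo-tips">Click here</a>

<!-- Descriptive: the anchor text summarizes the target page -->
<a href="/seo-tips">10 basic SEO tips for web developers</a>
```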


5. Quality of Inbound and Outbound Links

While most web developers know that having many websites link to yours can boost your rankings, a lot of them don't know that the quality of those linking websites plays a vital role in your PageRank. Links from random dofollow blogs have a far smaller impact than a link from a high-PageRank website inside an article.

Who you link to matters too: Google has warned webmasters that they can be penalized for linking to spam websites. Linking to really bad websites (malware-injecting websites, etc.) can easily get your site removed from Google's search engine results pages (SERPs). Also keep in mind that Google's spiders might not follow all of your links if a single page has over 100 unique outbound links.

More articles about linking:

    * Free Link Building Tips To Market Your Website Online
    * How To Get Quality Links The Competition has

6. Proper Use of Header Tags

<h1>, <h2> tags, etc. should be used where appropriate. Don't use <h2> tags for paragraphs, and don't use something like <div class="title">...</div> to display the title of your page.

Here's a few things to keep in mind when using header tags:

    * only use one <h1> tag per page
    * you can use as many <h2> - <h6> tags as you like in your pages
    * search engine crawlers only read basic html
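
A minimal sketch of the points above:

```html
<h1>10 Basic SEO Tips</h1>  <!-- exactly one h1: the page title -->

<h2>1. Place Your Keywords In the Title Tag</h2>
<p>The title tag is one of the most important factors...</p>

<h2>2. Optimize Your Robots.txt</h2>  <!-- h2-h6 can repeat as needed -->
<p>Having an optimized robots.txt is important because...</p>
```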

More articles about header tags:

    * How To Use Tags – Proper Use Of H1, H2, H3, H4, H5, H6 Elements
    * How to optimize keywords in header tags

7. Importance of the First Paragraph

Your first paragraph is the most important paragraph in your page. This is because it's the first paragraph search engines scan and they assume that it's somewhat of a summary of the page/article. That's why it's essential to have the most important keywords of your page in the first sentence or two.

If you're using the All in One SEO Pack plugin for WordPress, the plugin will automatically take the content from your first paragraph and place it in the meta description, which is the text that search engines display under the title of your page.
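
The generated tag might look something like this (the description text here is illustrative):

```html
<!-- Built from the first paragraph; shown under your title in the results -->
<meta name="description"
      content="10 basic SEO tips to keep in mind when writing a new article or developing a new website.">
```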


8. Ensure Your Site is Accessible


An accessible website ensures that its content can be delivered successfully as often as possible. The functionality of your pages, the uptime of your server and the validity of your HTML are all part of site accessibility. If these points are ignored or faulty, both your visitors and search engines will pick other websites to visit.

Use tools like the following to find out whether your site contains broken links or is down:

    * Are My Sites Up?
    * Link Validation Spider

9. Use SEO Friendly URLs

The URL of a page should be descriptive, so both users and search engines have an idea of what the page they're being linked to contains. SEO-friendly URLs have the added benefit of serving as their own anchor text when copy-pasted to forums etc., because the page's keywords are right there in the URL. These types of URLs usually rank higher in search engines than pages with a URL like example.com/p=981489894.

Furthermore, when using dynamic URLs, try to keep the number of dynamic parameters under 2; otherwise the chance is high that the page won't get crawled by search engine spiders unless it's perceived as very important (i.e. has a large number of inbound links).
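
One common way to get static-looking URLs on an Apache server is mod_rewrite; a minimal .htaccess sketch (the pattern and parameter name here are illustrative, not specific to any CMS) maps a keyword slug onto a dynamic page:

```apache
# /10-basic-seo-tips is served internally by index.php?p=10-basic-seo-tips
RewriteEngine On
RewriteRule ^([a-z0-9-]+)/?$ index.php?p=$1 [L,QSA]
```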

More articles about SEO Friendly URLs:

    * Dynamic URLs vs Static URLs
    * Optimize Your URL For Search Engines


10. Site Usability

If your website is cluttered with ads and has a terrible layout, poor accessibility and navigation, and low-quality content, chances are that fewer people will link back to your site than to one with better usability, resulting in a lower ranking.
If your website looks like it was designed in the '90s, you seriously need to consider hiring a web designer to redesign it.

Take a look at the following example, which I took from SEOmoz.org.
It clearly shows the impact a decent design has on the way users perceive a website. Users are more likely to buy from Haworth's Furniture because the site simply gives a more trustworthy vibe than Workplace Office's website, and they're also more likely to tell a friend about Haworth's site because of its better usability.

While a nice design doesn't have a direct impact on the rankings of a page, it will indirectly influence it due to users linking back to the site, trusting it etc.


Tuesday 28 June 2011

What is off page seo?


SEO stands for search engine optimization. Off-page SEO is doing things off-site to improve your site's search engine rankings. The only thing you can do off-site to increase your rankings is build up more links. More links will generally lead to better Google PageRank and better search engine rankings.

When you are trying to get more links, you need to think about quality: not all links are created equal, and links from low-quality sites will have little or no impact on your rankings. The best links you can get are from trusted sources such as universities, newspapers and even some of the top-notch directories such as DMOZ and Yahoo.

It is sometimes difficult to spot the good links. Here are a few questions you should ask yourself when you are looking at sites or pages to get links from:
  • Is this site or page relevant to what I do?
  • Is this site linking out to any low quality off topic sites?
  • Is this site or page going to send the right sort of traffic?
  • Is this site or page ranking well in the search engines?
  • Does this site have a lot of links from other websites?
Another important factor is the way that sites link to you. Sites that use rel="nofollow" on their links to you, or that link to you through a redirect, will not help you. The search engines also look at the text that links to you: if you are trying to rank for the phrase "blue widgets" and you can get a site to link to you with that phrase in the link text or alt attribute, that will help you rank higher for that particular phrase.
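
For illustration, here is the difference (example.com stands in for the site being linked to):

```html
<!-- Passes no ranking credit to the target site -->
<a href="http://example.com/" rel="nofollow">blue widgets</a>

<!-- A normal link whose anchor text helps the target rank for "blue widgets" -->
<a href="http://example.com/">blue widgets</a>
```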


Take these things into consideration, then check out our link building articles, and you are on your way towards good off-page SEO and good rankings in the search engines.

SEO Link Building


Link popularity is the main factor major search engines use to rank websites. This makes link building an integral part of any effective search engine optimization strategy. Our link building services are designed to establish and improve your website's link popularity.

Our SEO resources section provides all the information, help and advice you need to launch your own successful link building campaign. It includes detailed information about link building, useful link building tools, and web directory and article site lists you can use to build links. But maybe you don't have the time to do link building for your website?

Let us build your links while you do what you do best, building your business. We will handle every aspect of your link building campaign, from start to finish.

Our link building services use only ethical techniques. We do not employ any method that could violate Google's Guidelines, Yahoo Guidelines or MSN's Guidelines.

Where should you start with our Link Building?

Our manual web directory submission service represents the best value for money, building the largest number of links to your site. Following completion of your web directory submission, our ongoing link building service will build over 80 links to your website every month.

Naveen Jindal (SEO)

Created by Naveen Jindal

Address: Gali No. 5, Vinod Nagar, Mill Gate, Hisar (Haryana), India
Contact no.: 9034349989

Monday 27 June 2011

Meta Tags


Meta tags allow the owner of a page to specify key words and concepts under which the page will be indexed. This can be helpful, especially in cases in which the words on the page might have double or triple meanings -- the meta tags can guide the search engine in choosing which of the several possible meanings for these words is correct. There is, however, a danger in over-reliance on meta tags, because a careless or unscrupulous page owner might add meta tags that fit very popular topics but have nothing to do with the actual contents of the page. To protect against this, spiders will correlate meta tags with page content, rejecting the meta tags that don't match the words on the page.

All of this assumes that the owner of a page actually wants it to be included in the results of a search engine's activities. Many times, the page's owner doesn't want it showing up on a major search engine, or doesn't want the activity of a spider accessing the page. Consider, for example, a game that builds new, active pages each time sections of the page are displayed or new links are followed. If a Web spider accesses one of these pages, and begins following all of the links for new pages, the game could mistake the activity for a high-speed human player and spin out of control. To avoid situations like this, the robot exclusion protocol was developed. This protocol, implemented in the meta-tag section at the beginning of a Web page, tells a spider to leave the page alone -- to neither index the words on the page nor try to follow its links.
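
The exclusion described above is expressed with a robots meta tag in the page's head, for example:

```html
<!-- Tells spiders neither to index this page nor to follow its links -->
<meta name="robots" content="noindex, nofollow">
```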

Building a Search



Searching through an index involves a user building a query and submitting it through the search engine. The query can be quite simple, a single word at minimum. Building a more complex query requires the use of Boolean operators that allow you to refine and extend the terms of the search.

The Boolean operators most often seen are:

    * AND - All the terms joined by "AND" must appear in the pages or documents. Some search engines substitute the operator "+" for the word AND.
    * OR - At least one of the terms joined by "OR" must appear in the pages or documents.
    * NOT - The term or terms following "NOT" must not appear in the pages or documents. Some search engines substitute the operator "-" for the word NOT.
    * FOLLOWED BY - One of the terms must be directly followed by the other.
    * NEAR - One of the terms must be within a specified number of words of the other.
    * Quotation Marks - The words between the quotation marks are treated as a phrase, and that phrase must be found within the document or file.

What Do You Need To Know About Google Analytics


Not many are aware of the aspects that ensure search engine optimization success, and this is where it is worth knowing that Google has all it takes to support a great business plan.
To give online businesses and users a venue to satisfy their needs, and to keep its loyal user base well maintained, Google provides some essential tools for free; a good example is Google Analytics.

This tool helps small businesses by providing extensive online stats that present all of your visitors' moves and their respective outcomes; in effect, the free Google Analytics tool helps businesses plan their future business prospects.

Follow the easy step by step approach to make the most out from Google Analytics:

-> Start the process with a free sign-up for Google Analytics.
-> As soon as you sign up, Google starts gathering the relevant data.
-> Identify and learn the types of reporting that Google Analytics has to offer: Marketer, Executive and Webmaster.
-> These types differ in their reporting, but all compile comprehensive stats to help you understand online users and their requirements.
-> Furthermore, you are presented with the options of Content Optimization and Search Engine Marketing Optimization.

Simple but effective reporting enables you to compare and decide which path is right for you and which is not. The analysis and reporting show how online users start their journey from scratch by searching for precise keywords, and end up on the websites that have attained top rank on Google's search pages.

Therefore, it is essential to learn the ins and outs of Google Analytics; to make it fully useful, it is also important that you have a Google AdSense account. Remember, Google Analytics provides you with detailed reports on your site's visitors (including those who have returned), the performance of your online presence, and the locality of your visitors.

WHAT IS A PPC CAMPAIGN FOR?


PPC (pay-per-click) is a multifaceted, iterative and collaborative process used by many online businesses to satisfy their online marketing requirements; three distinct but interrelated parties are covered by a pay-per-click program.

1) Advertisers

2) Searchers

3) Engines


ADVERTISERS: Businesses/advertisers are given an opportunity to present themselves to online users who are actively involved in online activities. Presenting the online audience with an attractive and clear message/advert will definitely help to attract customers.

SEARCHERS: Recent findings show that paid search ads have a better chance of attracting online users than any other form of digital advertising, because they present online users with exactly the things they are searching for.

ENGINES: The search engines have to cater to the needs of both advertisers and searchers: the former generate revenue for them, while the latter benefit indirectly by locating the products or services of their choice.

Pay-Per-Click Marketing

Well, that is the search engines' ideology, but how do you settle in when venturing into an online business? The most effective technique for settling in quickly is PPC marketing.

Therefore, starting with PPC means identifying the keywords or key phrases: search for and identify the appropriate keywords that help in creating effective, winning AdWords campaigns.

The true reward is achieved when the search engines give incentives to advertisers who successfully plan and execute their objectives with organized, well-thought-out pay-per-click campaigns, which as a result attract more online users.

Thus, when using PPC management techniques, it is of extreme importance that you invest the time and effort to identify the appropriate keywords, which will help your online business achieve its goals in no time.

Friday 24 June 2011

Search Engine Optimization


Monday 20 June 2011

What is an IP address?

Your IP address is one of 4.3 billion unique numbers that identify your computer on the internet. Learn about the different IP classes and why businesses and governments get different numbers.

How does any spider start its travels over the Web?

"Spiders" take a Web page's content and create key search words that enable online users to find the pages they're looking for.

The usual starting points are lists of heavily used servers and very popular pages. The spider will begin with a popular site, indexing the words on its pages and following every link found within the site. In this way, the spidering system quickly begins to travel, spreading out across the most widely used portions of the Web.

Google began as an academic search engine. In the paper that describes how the system was built, Sergey Brin and Lawrence Page give an example of how quickly their spiders can work. They built their initial system to use multiple spiders, usually three at one time. Each spider could keep about 300 connections to Web pages open at a time. At its peak performance, using four spiders, their system could crawl over 100 pages per second, generating around 600 kilobytes of data each second.
Keeping everything running quickly meant building a system to feed necessary information to the spiders. The early Google system had a server dedicated to providing URLs to the spiders. Rather than depending on an Internet service provider for the domain name server (DNS) that translates a server's name into an address, Google had its own DNS, in order to keep delays to a minimum.
When the Google spider looked at an HTML page, it took note of two things:
  • The words within the page
  • Where the words were found
Words occurring in the title, subtitles, meta tags and other positions of relative importance were noted for special consideration during a subsequent user search. The Google spider was built to index every significant word on a page, leaving out the articles "a," "an" and "the." Other spiders take different approaches.
These different approaches usually attempt to make the spider operate faster, allow users to search more efficiently, or both. For example, some spiders will keep track of the words in the title, sub-headings and links, along with the 100 most frequently used words on the page and each word in the first 20 lines of text. Lycos is said to use this approach to spidering the Web.
Other systems, such as AltaVista, go in the other direction, indexing every single word on a page, including "a," "an," "the" and other "insignificant" words. The push to completeness in this approach is matched by other systems in the attention given to the unseen portion of the Web page: the meta tags.
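
The word-and-position bookkeeping described above can be sketched in Python (a toy illustration of the general idea, not Google's actual code):

```python
# Record every significant word on a page together with the positions
# where it occurs, skipping the articles "a", "an" and "the".
STOP_WORDS = {"a", "an", "the"}

def index_page(text):
    positions = {}
    for pos, word in enumerate(text.lower().split()):
        if word not in STOP_WORDS:
            positions.setdefault(word, []).append(pos)
    return positions

print(index_page("The spider indexes the words on a page"))
# {'spider': [1], 'indexes': [2], 'words': [4], 'on': [5], 'page': [7]}
```

An AltaVista-style indexer would simply drop the STOP_WORDS check and record every word.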

Web Crawling

When most people talk about Internet search engines, they really mean World Wide Web search engines. Before the Web became the most visible part of the Internet, there were already search engines in place to help people find information on the Net. Programs with names like "gopher" and "Archie" kept indexes of files stored on servers connected to the Internet, and dramatically reduced the amount of time required to find programs and documents. In the late 1980s, getting serious value from the Internet meant knowing how to use gopher, Archie, Veronica and the rest.

Today, most Internet users limit their searches to the Web, so we'll limit this article to search engines that focus on the contents of Web pages.

Before a search engine can tell you where a file or document is, it must be found. To find information on the hundreds of millions of Web pages that exist, a search engine employs special software robots, called spiders, to build lists of the words found on Web sites. When a spider is building its lists, the process is called Web crawling.

Internet Search Engines

Internet search engines are special sites on the Web that are designed to help people find information stored on other sites. There are differences in the ways various search engines work, but they all perform three basic tasks:
  • They search the Internet -- or select pieces of the Internet -- based on important words.
  • They keep an index of the words they find, and where they find them.
  • They allow users to look for words or combinations of words found in that index.

Sunday 19 June 2011

The Parts Of A Crawler-Based Search Engine

Crawler-based search engines have three major elements. First is the spider, also called the crawler. The spider visits a web page, reads it, and then follows links to other pages within the site. This is what it means when someone refers to a site being "spidered" or "crawled." The spider returns to the site on a regular basis, such as every month or two, to look for changes.

Everything the spider finds goes into the second part of the search engine, the index. The index, sometimes called the catalog, is like a giant book containing a copy of every web page that the spider finds. If a web page changes, then this book is updated with new information.

Sometimes it can take a while for new pages or changes that the spider finds to be added to the index. Thus, a web page may have been "spidered" but not yet "indexed." Until it is indexed -- added to the index -- it is not available to those searching with the search engine.

Search engine software is the third part of a search engine. This is the program that sifts through the millions of pages recorded in the index to find matches to a search and rank them in order of what it believes is most relevant. You can learn more about how search engine software ranks web pages on the aptly-named How Search Engines Rank Web Pages page.

How search engines work

The term "search engine" is often used generically to describe both crawler-based search engines and human-powered directories. These two types of search engines gather their listings in radically different ways.

Crawler-Based Search Engines
Crawler-based search engines, such as Google, create their listings automatically. They "crawl" or "spider" the web, then people search through what they have found.
If you change your web pages, crawler-based search engines eventually find these changes, and that can affect how you are listed. Page titles, body copy and other elements all play a role.
