
Tuesday, July 10, 2007

Using e-Books for Viral Marketing Campaigns - The Answer

Ideal links point to various selected pages on your site, and you should also be able to determine the anchor text, because anchor text can be a very important factor in determining the exact words for which your site ranks on the search engines. Unfortunately, with ordinary link building you do not get to determine your anchor text ALL the time. With an e-book, however, you can determine quite a lot of it.

You provide an e-book, and offer the people who download it the right to redistribute it for free. This gives webmasters a free resource to offer their users. They immediately pull your e-book from your site (of course there is still the problem of getting webmasters to see your valuable resource). You then offer the webmasters the option of selling your e-book, even allowing them to call it what they like as long as they leave the links back to your site intact.

Most webmasters won't sell it, and the few that do won't get many sales. Most webmasters will simply put the resource on their servers and let their users download it. Note that the e-book must be in PDF format.

Some Models

An excellent model for this technique is http://www.seoelite.com/, which holds the top ranking for "SEO software." The entire site's publicity is based on the viral spread of Brad Callen's e-book of SEO lessons. The book contains links back to his site, seoelite.com, with "SEO software" as the anchor text.

I have personally run across the e-book at least three times on different sites; the fellow even ran full banner ads for the e-book on SEO Chat. This book has free re-distribution rights and even has a "deal sweetener" that helps satisfy a webmaster's need for revenue. The whole e-book can either be offered as a single course or as a series of twelve courses via email (with a very strong sell message laced into every chapter).

The Clincher for Seoelite

Really, this deserves a whole chapter. Apart from giving redistribution rights, seoelite.com now offers webmasters PDF re-brander software which they can use to replace standard links and place unique affiliate links back to the landing page. Now note this technique -- they give a free resource, then tell you that if you change the links to affiliate links (they give you the free software to do this) you get paid a certain amount every time someone who clicks on that link buys seoelite software.

So what happens? Discerning webmasters put the resource on their very best pages and do everything in their power to presell the SEO software. That's why despite all the bad press saying "seoelite is obsolete" (it even rhymes), seoelite is still a top (automated and DIY) competitor for many of the very best SEO (manual and outsourcing) companies.

Using e-Books for Viral Marketing Campaigns - The Need

I am a web developer who works with teams that build and market web sites, so I always have a few sites at hand which I am optimizing for the search engines, or just looking to drive traffic towards. And always we face the need, the absolute need for content. Always we start with this need: where will we get the content that the traffic will read?

Web editors, webmasters and generally everyone who owns a web site needs content. Without content, your traffic is guaranteed to dry up. Take it to the bank: "no new content, your traffic will dry up." I have seen it happen countless times; we reach a few thousand hits a day, then our webmaster puts his/her feet up, quits creating new content, and a few months later s/he screams "where did they all go?"

There are different forms of content. There is text content, there are downloadables, graphics and software downloads, online applications that do fun stuff (like give you search results), and more. Most webmasters simply stick with text because of bandwidth considerations. At the risk of sounding melodramatic, webmasters are pretty desperate for content.

The Other Need

Webmasters need revenue. What's the point of paying an SEO's exorbitant fees and then not getting enough money from the site to cover bandwidth costs? Everybody wants to make a little extra green to justify all that time spent in front of the computer screen. So content and revenue are the two primary needs webmasters have. Search rankings are just a way to get traffic (but before traffic there must be content).

Your Need

You need rankings; if you didn't you would not be here. You want high rankings on the SERPs. If you could get them without doing anything you probably would go for it, but that would be bad for everybody.

To get rankings the search engines ask for two things: links and content, in that order. Hopefully you have content, but even after content you need links, and sometimes you may not get links as rapidly as you would like. Links are delicate and sensitive things; they flee from ardent pursuers and come towards gentle "link baiters." As an individual you want to create the ultimate link bait, and get free one way links from a large number of sites with varying page rank. And how will you do it?

Using e-Books for Viral Marketing Campaigns

Now I am sure you have heard what I am about to say before. You probably heard it from some one-page "sell" site that touted its new, easy and probably very expensive product as the panacea for all your problems. Well, this method isn’t new, and it definitely is not easy, but it could be free -- if you don’t count man hours.

I believe in it so much that I am currently compiling a series of programming articles into e-book form, and hopefully in two months' time I will start my own viral marketing campaign for a scripts site using an e-book. What I am trying to say is, this could very well be the single best way to continuously get permanent one-way links back to your site (yes, I actually believe what I am saying).

This technique has two other points to recommend it: it works without any major marketing on your part, or at least with very little marketing. Additionally, you just have to create your product once, though you might have to update further editions for relevancy.

The Question

In this article we will look at the needs of people on the net, the needs of your SEO adept, and then the win-win situation that an e-book provides to all the parties. We will also look at major sites that offer free e-books and have shot to the top of their categories/keywords on the SERPs.

It was actually when I was studying these top sites and asking myself "how" that the idea occurred to me. How does one site go to the top of the lists for "SEO software" and stay there, and how does another site go to the top for "web design" and stay there? I discovered two major ways; right now I am still studying the first one. This is a bit like revealing state secrets so follow closely, do your own research and find a way to create your own e-book.

Netvibes Puts Web Surfers in Control - Letting Users Slice and Dice

As I said earlier, we’ve seen customizable web portals before. There’s even Yahoo Widgets, with literally thousands of widgets you can place on your desktop. The lack of ads is an important difference, but there are other points a good company looking for an opportunity to reach users will keep in mind.

If you want to use Netvibes as a means to reach potential customers, you need to be prepared for the idea that users are going to want to be able to use your content their way, not yours. If eBay built an auction tracker, Netvibers would balk at using it if it significantly limited the auctions available for searching. That’s doubly true if you have a Universe; while it looks like you’d still have your logo on your module, there is nothing stopping someone from copying one or a whole bunch of the modules in your Universe and putting them on their private page – possibly even alongside modules from a competitor.

It does open up a lot of possibilities. Forrester Research analyst Charlene Li notes that “Netvibes provides open access to the world of web 2.0 content. Traditionally, you had to ask each company permission to do this on any Web site. Now you can read Gmail alongside Hotmail and Yahoo Mail.”

For many companies, this may take some getting used to, but in the long run it should do a much better job of getting you the publicity you’re looking for. “With Web 2.0, no one can own the whole space,” Krim noted. “In the past you wanted everyone to come to your site. Right now you need to figure out how to distribute your content to the widest number of platforms.”

The nice thing about this approach is that it doesn’t depend so much on scoring high in Google; it depends on making yourself useful to your customers so that they’ll remember you. It may involve repackaging your content (or at least some of it) so that users can choose to interact with the parts in which they’re most interested. As Krim so charmingly puts it, “People can decompose their newspapers and take the pieces for themselves.” That may sound a little morbid, perhaps, but branching out so that your heart rate doesn’t automatically rise and fall with your position in the SERPs just might help make your rest a little more peaceful.

Netvibes Puts Web Surfers in Control - Developers Take Note

Netvibes has its own Developers’ Network. Just a glance at the page tells you they’re committed to “write once, run everywhere.” Using the Universal Widget API, developers can create widgets for users that will run on a wide range of widget platforms and blog systems. The company specifically lists Google IG and Apple Dashboard, and implies they’ll work with many more.

The documentation page seems to be very complete to my non-programmer eyes. Judging from the skeleton example shown, a Netvibes widget is written with XHTML, JavaScript, and CSS. It includes several parts: a header, a model, a controller, and view parts for structure and style. The sample is very clearly shown, in step-by-step fashion. If you’re used to coding for the web, you shouldn’t have significant difficulties creating a widget. You do have to be very careful about writing well-formed code.

There are plenty of how-to sections, including how to turn an RSS feed into a widget and how to test your new widget. There are at least four different example widgets, including one from Weather.com and one that shows an astronomy picture of the day. Of course you’ll find the specifications and a frequently asked questions page.

There’s a developers’ mailing list and an official UWA forum. When I checked it, there were fewer than 600 posts spread over 160-some threads. Most of the threads centered on questions about the UWA, but there were also plenty of interesting posts in the Widget Wish List forum from those who had ideas for widgets and couldn’t code them. There is also a Widget Showcase section; apparently widgets that deliver the winning numbers for particular lotteries (after the event, of course) are pretty popular, but there was also a virtual fireplace, a chat module, and a widget that let you search for Marmiton recipes.

Marmiton recipes? Well, why not? If there’s someone on the web who built it, there’s probably someone else on the web who wants to see it. In that sense, Netvibes is a middleman like eBay. But it’s not money that’s changing hands, it’s attention, and that’s a far more valuable coin.

Netvibes Puts Web Surfers in Control - A Boon for Publishers

If you have a blog or podcast, you’ll be interested in the Netvibes Ecosystem. This is where Netvibes is trying to build the most complete directory of feeds it can. Netvibes users can then grab your feed and add it to their own pages. If any Netvibes users come across your site while they’re surfing and want to add your content, Netvibes also has the Netvibes Button, which you can set up on your site to let them subscribe to your feed in one click.

Are you somewhat more ambitious? You can put together your own Universe. Krim describes a Netvibes Universe this way: “Imagine building your own rich media portal that anyone can visit, like your own personal Yahoo or MySpace. Imagine going beyond the blog, and unifying your digital life in one single place with podcasts, videos, feeds, games, pictures – but unlike a blog, you don’t have to post to it every day; content gets updated automatically. Imagine launching your own media company in just minutes.”

If you’re a big company like CBS News, it can help you reach a Web savvy audience. Recording artists, publishers, companies offering online services, and even educational and not-for-profit companies have built their own Universes on Netvibes. While the Universes do have the feel of being built off of templates, it’s very clear even from the thumbnails you can view when you browse the directory of Universes that each builder has put their own stamp on their creation.

Of course it wouldn’t really be Web 2.0 if you couldn’t make comments about it. When you browse the list of Universes, you get to see the number of comments each Universe has received, and a click takes you to the comments themselves. So far, very few people have garnered any comments. (I was a little disappointed that Tariq Krim himself did not have a Universe; what, no personal glimpse of the founder?).

Not everyone can use Netvibes Universes just yet; they’re in beta. The intention, according to the Netvibes blog, is that eventually every Netvibes account will have two pages: “a private page, where you subscribe to all the content for your personal, everyday use, and a public page, where you can allow others to access your favorite content – everything that you love on the web.”

Netvibes Puts Web Surfers in Control

Most of us who work at desks all day like to have all of our useful tools in one place, both figuratively and literally. From actual desktops full of paper to virtual desktops full of icons to browsers bulging with bookmarks, we like everything where we can get to it easily. Enter Netvibes.

Netvibes offers its own version of an all-in-one-place solution, and if you’re an online business looking for a new way to reach potential customers, you just might want to pay attention. But first, let me give you a little background.

Netvibes was founded in September 2005 by Tariq Krim. The France-based company has been adding new features like there’s no tomorrow, as a glance at its blog would quickly reveal. The company has at least ten million users in more than 150 countries. They’re attracted by the possibility of simplifying much of their web life onto a single home page.

As the company explains, “Netvibes lets individuals assemble all in one place their favorite websites, blogs, email accounts, social networks, search engines, instant messengers, photos, videos, podcasts, widgets, and everything else they enjoy on the web.” Not only is it all in one place, but in true web 2.0 fashion, users can easily share it with their friends. Granted, this isn’t exactly new; web portals have been around forever, and users have been able to customize their views of Yahoo, Google, and other sites for years. So what makes Netvibes different?

You don’t realize it right away because the home page has so much stuff on it (just waiting for your custom touch), but then it hits you: no ads. Not even one. You may see corporate logos, but those are attached to items such as news feed widgets – and it would look a little strange if your news feed from Reuters didn’t have the Reuters logo on it, wouldn’t it?

Is that truly the company’s policy? “We break all the rules,” explains Krim in an interview with Wired. Firms that want to reach Netvibers have to give them something useful – no mere ad hyping the virtues of the company or its products will do. For instance, if eBay wanted to show Netvibers the extent of its auctions, it couldn’t simply put ads next to related items; it would have to build an auction-tracking module, and it just might find that someone else who found a need for such a module had gotten there first.

Some simple tips to Boost your Internet Speed

1. Remove "Bandwidth Sucking" Applications
Many applications, such as anything related to file sharing (KAZAA, eDonkey, etc.) and even some things like AIM, are filled with add-ons that run in the background while sucking up bandwidth. This slows your connection.


2. Install ZoneAlarm or some other firewall on every PC in your house
The first rule is not to apply these things to just one PC on your Internet connection. Make changes to all computers, since all it takes is one bad apple. I recommend ZoneAlarm, which you can get at any computer store.


3. Avoid opening unknown attachments
Many spyware problems arise when people click on an attachment. My rule? If I cannot identify the sender, I delete it without opening. Maybe you should too!


4. Avoid programs that claim to speed up downloads or browsing
There are many programs that claim to speed up your connection, such as Netsonic and GO!Zilla. But let's think about this. These programs download a ton of pages so that when you go to the website again it loads faster. Maybe so, but what if the website gets updated? Now the local copies have to be updated again, so you are really not saving anything. They might be good on sites that never change, but where are those?


5. Change your browser start page to NOTHING

Many times people sit and wait for their browser to open because they have a favorite site set as the start page. The best thing is to remove this unless you always want to go there. This is easily done under Tools-->Options in either Internet Explorer or Mozilla Firefox.



Please comment here if you have any doubts about this post or have any suggestions. I'll try my best to resolve your doubts.

Viral Marketing Offers Risks, Rewards - Viral Marketing Examples

Creating something artificial doesn’t always backfire. An Adweek story from June 2005 talks about a viral marketing campaign created for Volvo to promote the S60 in Europe. The campaign featured two web sites. One site showed a pseudo-documentary about a small town in Sweden where 32 people purchased the same car in one day. A second site, supposedly made by the director, disputed whether the documentary on the first site was authentic. Are you confused yet?

Confusing or not, the campaign, dubbed “The Mystery of Dalaro,” was a success. According to Tim Ellis, global advertising director for Volvo, the car “broke every sales record.” But it was a risky campaign, and Ellis admits it was hard to convince Volvo to go for the idea.

Wal-Mart learned just how risky viral marketing campaigns can be in late 2006. That’s when not just one but several blogs that showed the super discount retailer in a positive light were revealed to be fake, created by public relations firm Edelman for Wal-Mart. The one most people heard about, “Wal-Marting Across America,” supposedly featured a family traveling across America in an RV and staying in Wal-Mart parking lots; it spoke positively of the store and its employees. Another fake blog, Paid Critics, was devoted to “exposing” links between unions and other vested interests said to be “smearing Wal-Mart” through the media. Once the blogs were found out to be fake, Wal-Mart and Edelman received a tremendous amount of bad publicity.

Sony didn’t get it right either. About a month and a half after the Wal-Mart revelations, their attempt at a viral marketing campaign was revealed as fake. You may have seen the blog site and the YouTube videos with the theme “all I want for Xmas is a PSP.” Well, it surprised almost no one that these were made by a marketing company (Zipatron by name); for one thing, a number of bloggers remarked that the males featured in the videos looked too old to be trying to convince their parents to get them a PSP.

That doesn’t mean that these kinds of sites never work; you just need to get a handle on how to do it right. Take Norelco, for instance. Do I really need to mention ShaveAnywhere.com, the site for the Philips Bodygroom? The somewhat racy site (which might not be safe for work, depending on where you work) racked up hundreds of thousands of visits for the company, surprising and delighting many with its humorous approach to hirsute hygiene. Alas, I don’t have any sales figures for the Bodygroom, but I would be very surprised if the campaign did not encourage sales.

Viral Marketing Offers Risks, Rewards - Foiled by Google

Greenberg wrote with a bit of a guilty conscience, since in one sense it was his story that killed the buzz campaign – and he thought it was a good one too. He’d written an item about an ad and web campaign for MTD Products Cadet Cub riding mowers. The web campaign talks about a fictional kudzu-like strain of grass “that has reportedly begun taking over several states. Per the story, this grass grows several inches per day and defies nearly every effort to cut it,” explains Greenberg.

There’s no one web site devoted to the ad campaign, as there is for the Subservient Chicken. People are supposed to stumble across news of the strain from several different sites: a blog by a scientist, a conspiracy theorist, and an enraged stump clearer. One could also Google the name of the grass strain to find the sites. And that’s where the trouble starts.

You see, Greenberg’s source told him the name of the fictional grass strain. He naturally mentioned it in his story, though not in his “Cautionary Tale.” So what happens when someone puts the name of the grass strain into Google? The search engine, in its infinite wisdom, returns Greenberg’s story as the number one result. “And right below my story, whose slug mentions the three fictional web sites, are those three fictional web sites. Therefore, the fiction is ruined by my story about the fiction. Yes, the company got buzz but not the kind it wanted,” Greenberg elaborated.

It’s a little tricky to find Greenberg’s original story; when you do, you see that he mentions the fictional grass -- Loogootee Strain – only once. Greenberg’s editor insisted that the story couldn’t be taken down, and the name of the grass couldn’t be eliminated from it. The lawn mower company is probably relieved that Greenberg’s story now appears in the fourth position rather than the first one, but that’s still fairly prominent for viral marketing purposes.

If I hadn’t found the original story, I would have been tempted to wonder if Greenberg’s warning was intended more as a tongue-in-cheek punning statement about the hazards of faking your grass roots in viral marketing campaigns. Still, the issue he points out is quite real: you don’t always know what’s going to rank, or how, for a particular viral campaign. This is why SEO often seems like it is as much an art as a science.

“What hits me over and over in my job about getting press for my various clients is all the places online that press shows up in,” Greenberg quotes the Cub Cadet media relations person as saying. “You can’t tell where a story is going to show up.”

Viral Marketing Offers Risks, Rewards

Viral marketing campaigns can be great fun for the consumer and deliver an excellent return for the company. But they can also backfire, sometimes in unforeseen ways. It never hurts to go over the possibilities one more time.

Burger King’s Subservient Chicken web site is still around. It was one of the first to combine advertising with user interaction in a way that was uniquely Web 2.0. It wasn’t the last. While many have been successful, many others have failed – in some cases spectacularly.

That’s one of the hazards of a viral marketing campaign. You’re setting it out in the wilds of the web, to be played with and judged by the people you’re trying to reach. You can do all the market research you want, and try to think of everything, but you’re still giving up a certain degree of control. If you want to test the viral waters, you not only have to be prepared for this; you must embrace it if you wish to be successful.

What brought all this to mind was a recent piece written by Karl Greenberg for Online Media Daily. Titled “A Cautionary Tale For Viral Wannabes,” it’s the first time I’ve heard of a buzz campaign being killed before it even had a chance to start. It’s also the first time I’d ever heard of a buzz campaign being killed in quite this way. Maybe that shows my inexperience in the field, but I thought it was interesting enough to share – and certainly anyone could have a marketing campaign go bad for them in exactly this manner, especially if they forget the old slogan “Loose lips sink ships.”

Most of the time, when you’re doing marketing, advertising, or public relations, you want to be in touch with the press. There’s a reason those things are called “press releases” after all. Speaking as someone who has read innumerable press releases (and even written one or two), they’re certainly appreciated, even if we have to wade through a huge assortment of superlatives to get to the meat of the matter. But sometimes you need to think carefully about those reporters, what they’re going to do with what you tell them, and where things will go from there.

Some White Listing Essentials - Procedures Followed

Armed with an IP address and a client request, the admin first checks the sender domain to see whether or not it is forged, since spammers like varying their return address domain names. This is done using SPF, or Sender Policy Framework, an extension to SMTP; note that the Simple Mail Transfer Protocol by default allows anybody to forge a return address, which hands power to spammers (the design is basically outdated and should be changed). With SPF the sender specifies which servers are authorized to send its mail, so the admin checks the sender policy of the domain the message claims to come from, and if the email does not comply with that policy it is treated as being from a spammer and the "white list" request is rejected.

So if Messrs Spam sends an email claiming to be genius@Whois.com and asks to be white listed for all Jungle.com users, Jungle's admin checks Whois.com's sender policy (after verifying that Whois.com is a real site), and if s/he notices that only certain designated servers are allowed to send email for that domain, but this message comes from Spam's server instead, s/he rejects the whitelist request, since it isn’t really from Whois.com. The above method does not work if Whois.com has compromised machines or if the spammer is actually an account holder on Whois.com (but that leaves a trail by which the spammer can be tracked).
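To make the policy check concrete, here is a minimal sketch of what a publishing domain's SPF policy looks like; it is just a DNS TXT record, and the domain and IP address shown are hypothetical:

; Hypothetical SPF record published in the DNS zone of whois.com
whois.com.   IN   TXT   "v=spf1 ip4:192.0.2.10 mx -all"
; only 192.0.2.10 and the domain's MX hosts may send its mail; "-all" tells receivers to reject everything else

A receiving admin simply looks up this record and compares the connecting IP address against it; if Messrs Spam's server is not on the list, the message fails the check.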

Other Procedures

Other means of authentication include Sender ID and DomainKeys. DomainKeys checks emails by verifying a digital signature attached to the message, as opposed to SPF’s method of simply querying the sender’s DNS to check whether the connecting server is one of the servers tagged as authorized mail servers.
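As a rough sketch (the selector name, domain, and key material here are assumptions, and the key is truncated), the public key that receivers use for that signature check is itself published as a DNS TXT record:

; Hypothetical DomainKeys/DKIM-style key record using a selector named "mail"
mail._domainkey.example.com.   IN   TXT   "k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GN...IDAQAB"
; the receiving server fetches this public key and uses it to verify the signature header on each message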

Protect Your Turf

If you want to be sure that some spammer does not start using your servers to send mail (and you have never bothered to separate your mail-sending servers from the rest), it is best to declare which of your servers do not send mail; if none of your servers send mail, simply say so in your DNS records. Note that this declaration is voluntary, but once it is done, the only thing you should be watching out for is open ports in your mail server that a hacker could use to gain access to it.
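For example, a minimal sketch of such a declaration, assuming a hypothetical domain example.com that never sends mail, is a one-line SPF record:

; Hypothetical record for a domain that sends no mail at all
example.com.   IN   TXT   "v=spf1 -all"
; any message claiming to come from example.com now fails the check and can safely be rejected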

Third Party Senders

Some agencies forward emails from various IPs. Such a third party throws a wrench into authentication procedures: since only the forwarder's IP addresses appear in the message, procedures such as SPF and Sender ID have problems dealing with them. Most third party senders are trusted by the ESPs to verify that the original senders are not spammers before forwarding their mail. Forwarders will, however, have mail bounced back to them (not to the sender) if it is discovered to be spam, and are in turn obligated to bounce it back to the sender. Email authentication is a big deal. It is a good idea not to take white listing for granted, and it will definitely get more important as time goes on.

Sending an email through a third party (diagram source: www.wikipedia.org)

Some White Listing Essentials - Reputation and Accreditation

The above is simply another name for white listing; it is rapidly going to become more important though as the big web mail providers are putting their weight behind it. It simply means the email service providers and also the big ISPs will not accept an email from senders which are not on a white list. Web sites like Yahoo and Microsoft (Hotmail) are throwing their support behind these email acceptance protocols. As time goes on, if it catches on, other web sites will definitely join the band wagon.

Who Decides What Gets Seen and What Doesn’t?

Right now, every site polices itself. Some use white lists, some use block lists. For small operators it is better to band together in groups and use block lists to screen emails sent to their web sites. Some individuals believe that this arbitrary banding up is unfair (who polices the police?). But if the ESPs have their way, it will definitely get worse, and it may be compulsory to be a member of one white list or another to get your emails delivered.

According to the book New Rules for the New Economy, there is also a numbers based system based on whether the email sender shares the subscriber's email address with other parties. This was in the late nineties and not many models seem to use this method, though it has to be said here that buying a list is a bad idea and may result in a lot of spam and/or unsubscribe requests.

Filters

Most ESPs filter viruses strictly (they screen ALL emails for viruses), but are more lax about the trust level filters described above. Still, most ESPs check all unrated IP addresses for spam and skip medium and high rated IP addresses; unless it is a major ISP/ESP, they also skip checking black lists for all rated IP addresses. Obviously it is a good idea to run over to dnswl.org and get rated. Medium and low trust level sites will get through an ISP, but if spam is reported by the clients, the sender is expected to purge his/her list of addresses that did not specifically request emails.

Note this difference between a white listed sender and a sender that is simply tagged as a “spammer” by a DNSBL. A white listed sender can send spam and will just get a note from the ESP asking for the error to be corrected; an unknown sender who does not respond to authentication requests and who sends spam is flagged as a spammer. Basically the listing is all about relationships.

If it is not known whether a sender is a spammer or not, the sender is grey listed by the ESP or ISP. A request may be sent to the sender asking for some form of authentication. This may be combined with some reverse DNS look up of the sender's connecting IP address.

Administrators who have experience with spam in emails use certain rules when processing a white listing request. They follow these rules to make sure that a spammer does not get a white list rating. If you are registered with a basic web hosting package you should have no problems passing a white list query from a client to a DNSWL, since all the list does is check the IP address. Note that if your site is not black listed, then it is white listed if it has been checked before. If it has not been checked before, it appears on the list after it has been checked. A site that has not been checked before is grey listed and may receive confirmation requests from the receiving ISP administrator.

Some White Listing Essentials - Trust Levels

Note the above phrase carefully; it is extremely important to white lists. It simply means the level of trust the white list can confer to you, given your past behavior and records. In certain cases, they even share your trust level with individuals who ask. According to www.dnswl.org there are four basic trust levels.

  • High Trust Level: Never sends spam.

  • Medium Trust Level: Extremely rare spam occurrences, corrected promptly.

  • Low Trust Level: Occasional spam occurrences, actively corrected but less promptly.

  • No Rating Whatsoever: Legitimate mail server, may also send spam.

The above model is based on input by a team of volunteers. There is no fixed protocol. It is basically based on past performance and the web site which sends email has little or no input in changing its trust level.

Sending an email (diagram source: www.wikipedia.org)

Email verification chart (diagram source: www.wikipedia.org)

Some White Listing Essentials

If you have been white listed with a particular operator, it means that the email you send always gets through. No more getting caught in spam filters! But how do you accomplish this goal? Keep reading.

"Reputation is everything; guard it with your life."
-- Robert Greene

In a previous article I wrote about aligning yourself with block lists, specifically how to avoid being tagged a spammer. Since then I have discovered that most white lists have certain things they look for, and if you discover that most of your clients are with a small number of operators, then it is always better to be on their list of acceptable email senders. In the earlier article I talked about things to avoid so that you will not be considered a spammer; here I will talk about things to do so that if you apply to be on a white list (list of acceptable senders to a particular ESP/ISP) you will be accepted in an expedited fashion.

Email Techniques in Brief

Ideally, if you don’t outsource your email creation and sending, you should have a single person responsible for your email campaign. Other individuals in your organization should act as members of your email team. If you send a large number of emails, you will run into problems ranging from the technical (unsubscribe requests not being honored) to the human (somebody blacklisting your IP address).

Not having a single person responsible for your email activities will cause conflict when difficult situations arise, and if a major mistake is made, there will be nobody accountable for correcting it and making sure it does not happen again. There are as many white lists as there are web mail providers, and in some cases DNSBLs also offer DNSWL services (WL for white list), for example MAPS (Mail Abuse Prevention System).

You want to be an accepted member of these various white lists. To do this you must be perceived in a certain way and pass certain requirements (some of them quite stringent). Let's take a look at some models I came across while researching this topic.

Designing Websites For Humans In A World Of Robots!! - Guidelines

Website Designing Guidelines:
Designing websites for humans is a far wider topic than can be covered in the scope of this brief article, and it will vary by website. That said, you might find these general guidelines helpful:
1. Provide headers on each page so that your visitors can see clearly what the page they have loaded relates to. This header should be the largest text on the page.
2. Content text should be no more than 1/3 of the size of the header; this will ensure that the page is not too "monotone".
3. Navigation menus should be very clear and easy to use. They should be presented on every page, and the user should not have to rely on another source (such as a framed page) for navigation.
4. Pages should be as light on images as possible, as many people out there still use 56k connections, and loading images takes time on a 56k.
5. Pages should be no more than 40k in size (HTML coding size) unless they are papers, e.g. technical studies / technical papers / specification sheets.

Designing Websites For Humans In A World Of Robots!! - Balance

When a user visits a web page, they get the full effect of layout and graphical interfacing, and this graphical interfacing is very important in how visitors will use our site and, ultimately, buy products, hence generating revenue. If one were to compromise too much in terms of what the visitors see, we might have a website that simply will not produce profit, even if it does rank in the top slot for competitive terms at some of the larger search engines. This is a condition that I refer to as "Over Robot Optimizing", and it is a common practice on some websites on today's internet.
There is a balance between optimizing websites for robots and keeping a site convenient for users, although sometimes this balance is hard to achieve. With the correct balance, a website ranking at number 6 in the search engines might easily out-perform a more highly ranked competitor in terms of revenue generation and conversion rates. Furthermore, advertising costs will be lower in the long term, leading to higher profit margins.

Designing Websites For Humans In A World Of Robots!! - The Full Potential

Firstly, let us realize the potential of search engines in today's society. We all know that Google currently powers Yahoo!, and we all know that Overture provides its listings to many search engines. Google, being the prime focus of my current work, pulls in an amazing 150 million different search queries every day, or at least so they claim. Add that to Yahoo's many millions of searches per day and we're talking about a lot of traffic. This is important because the Google results are free to be listed in, meaning that with the right properties, your website has the potential to hold the number 1 position for any given search term regardless of money (ignoring the sponsored links).
With such a great deal of traffic in search engines, everybody wants to be listed in the number 1 slot. This is the nature of competitiveness. This forces webmasters & SEOs (including myself) to analyze Google and other search engines to determine what properties are required in order to rank number 1 on the results pages. This in itself is still not a problem, but when conclusions are derived from the analysis of the search engines, we are usually left with a result requiring some page modifications in order to improve the ranking of those pages, and this is where some webmasters & SEOs get confused.
In the attempt to modify their pages to rank number 1, webmasters will often sacrifice how usable a page is to human visitors, and this is the most lethal mistake anybody could make. When a robot visits your site, it will see a whole pile of text, and based on that text, it will rank your site for different terms. This text will have no formatting in the graphical layout sense, although the weight of some text is measured differently.

Designing Websites For Humans In A World Of Robots!!

In this day and age, it can be easy to forget the basics of why your website is online. Crawlers/robots: they come, they go, but they never pay. That's where your visitors come in. With the ever increasing number of web pages & documents available on the internet, it has become difficult to find information fast and without having hundreds of advertisements thrown into our faces (most of which have no relevancy to the information we are seeking). There is quite simply no realistic method of finding material on the internet other than using search engines. At this point, everyone should realize that search engines use "robots" in order to "crawl" through the internet and collect web pages & other documents. The search engine will then use these documents to make up the engine's "index" or "database". This in itself is not a problem, but to every action there is an equal & opposite reaction (Newton's 3rd law of motion!).
With this expansion of information on the web, which has driven more people to use search engines on a daily basis, it has become a requirement for the search engines to become more active in order to keep their databases up to date. This means crawling more web pages at a greater frequency. Website owners have indeed noticed this increase in activity, and they have not stared blankly at it; they have reacted. They now realize that these search engines are producing significant percentages of their traffic (up to 90% in some cases). So what to do?
Again, with the expansion of the web, there has also come more competition in essentially every industry, from computers, travel and food right down to buying pets online. This competition is healthy in that it has pushed prices lower, but this very same competition has indirectly lowered the overall satisfaction level of website visitors.

ROBOTS.TXT Primer - ROBOTS.TXT Analysis

User-agent: EmailCollector
Disallow: /
If you were to copy and paste the above into Notepad, save the file as robots.txt and then upload it to the root directory of your server (where you will find your home page), what you have done is tell a nasty email collector to keep out of your website. Which is good news, as it may mean less spam!
I do not have the space here for a fully fledged robots.txt tutorial, however there is a good one at
http://www.robotstxt.org/wc/exclusion-admin.html
Or simply use the robotsbeispiel.txt I have uploaded for you. Simply copy and paste it into notepad, save it as robots.txt and upload it to your server root directory.
http://www.abakus-internet-marketing.de/robotsbeispiel.txt

ROBOTS.TXT Primer - Reasons To Use A ROBOTS.TXT File

Below are a few reasons why one would use the robots.txt file.
1. Not all robots which visit your website have good intentions! There are many, many robots out there whose sole purpose is to scan your website and extract your email address for spamming purposes! A list of the "evil" ones appears later.
2. You may not be finished building your website (under construction), or sections may be date-sensitive. I, for example, excluded all robots from every page of my website whilst I was designing it. I did not want a half complete, un-optimized page with an incomplete link structure to be indexed, as if found, it would reflect badly on myself and ABAKUS. I only let the robots in when the site was ready. This is not only useful for new websites being built but also for old ones getting re-launched.
3. You may well have a membership area that you do not wish to be visible in Google's cache. Not letting the robot in is one way to stop this.
4. There are certain things you may wish to keep private. If you have a look at the ABAKUS robots.txt file (http://www.abakus-internet-marketing.de/robots.txt) you will notice I use it to stop indexation of unnecessary forum files/profiles for privacy reasons. Some webmasters also block robots from their cgi-bin or image directories; a small sample of such a file appears below.
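Here is a minimal sketch of that last kind of file; the directory names are only assumptions, so substitute the ones that actually exist on your server:

# Hypothetical robots.txt keeping well-behaved robots out of a few private areas
User-agent: *
Disallow: /cgi-bin/
Disallow: /images/
Disallow: /members/

Everything not listed under a Disallow line remains open to compliant crawlers; remember that compliance is voluntary, and badly behaved robots may ignore the file entirely.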

ROBOTS.TXT Primer

There is often confusion as to the role and usage of the robots.txt file. I thought it would be a good idea to dispel some myths and highlight what robots.txt files are all about. Firstly, a robots.txt file is NOT there to let search engine robots and other crawlers know which pages they are allowed to spider (enter); it is primarily there to tell them which pages (and directories) they can NOT spider.
The majority of websites do not have a robots.txt file, and do not suffer from not having one. The robots.txt file does not influence ranking in any way. Its goal is to stop certain spiders from visiting certain pages and taking them back with them.

Protect Against Invaders by SPAM-Proofing Your Website - Blocking Malicious "Good for Nothing" Robots

The robots that you will want to block will depend on your preferences, as well as any bots that frequent your website on a regular basis.  Cutting down on bandwidth costs, preventing robots from collecting your email address, and preventing robots from collecting information from you or your website are all good reasons to block a potential robot.

The best method of deciding which robots to block is to do some quick research about the robots that like to take up residence on your site.  If you cannot find reliable information about a robot, or you find it is being used for something you would not approve of, simply block the robot by using a robots.txt file.  If you find that a robot does not obey the robots.txt file, pull out the big guns and use mod_rewrite to stop it dead in its tracks.

Example Robots

There are several common bots that one might run into frequently, such as "Microsoft URL Control," a robot that ignores the robots.txt file and fetches as many pages as it can before leaving the site.  This SPAMbot is used by many different people, all using the same name. 

 The second robot that frequents websites is the NameProtect (NPbot) robot. This robot's job is to collect information about websites that are potentially violating brand names of clients.  This robot does not obey the robots.txt file, responds to emails sent to the NameProtect company, and serves no good purpose as far as we have determined.

To Block the Microsoft URL Control Robot by User Agent:

RewriteCond %{HTTP_USER_AGENT} "Microsoft URL Control"
RewriteRule .* - [F,L]

To Block the Nameprotect Robot by User Agent:

RewriteCond %{HTTP_USER_AGENT} "NPbot"
RewriteRule .* - [F,L]

Furthermore, once you establish a good number of bots that you would like to block using mod_rewrite, you can compile a list and add comments as well, like so:

RewriteCond %{HTTP_USER_AGENT} "Microsoft URL Control" [OR] #bad bot
RewriteCond %{HTTP_USER_AGENT} "NPbot"
RewriteRule .* - [F,L]

One thing to note about using the examples here: make sure that you know how to correctly insert the script into mod_rewrite and that you do so with the proper rules required for this technique to be effective.  Additionally, one last thing to note is that mod_rewrite rules are not an ultimate solution to SPAM and malicious bot problems. You can, however, effectively block a good majority of the bots out there and dramatically cut down on the amount of SPAM you receive. If you use the JavaScript methods and mod_rewrite, then not only will your website be one heavily guarded anti-SPAM site, but you may actually enjoy downloading all your email messages to find them SPAM free.

Protect Against Invaders by SPAM-Proofing Your Website - Using mod_rewrite

In this section, the use of mod_rewrite is very successful in blocking the SPAMbots and other spybots that visit the website with a mission to either steal your email address or grab information from your website without your permission. Consider this method as a step above using JavaScript, because it stops them before they ever read the webpage itself.  So if you are thinking of using JavaScript on the page to block bots from finding your email, consider the use of mod_rewrite as a primary defense weapon against SPAM and other malicious robots.

One note to readers: The use of mod_rewrite requires that you have it installed on your server, and you have the ability to edit the .htaccess file.  Below is a simple way to locate the .htaccess file while using a program such as CuteFTP (or a similar FTP client that performs the same functions).  If you are unsure whether you have mod_rewrite installed, you should first consult the server administrator with your primary hosting company.  Ask them if you have mod_rewrite and permissions to edit the .htaccess file.

How to Find .htaccess in a Common FTP Client

To locate the .htaccess file, most often you need to display all hidden files present when connecting to your hosting account.

To enable your FTP client to display all hidden files (.htaccess and many other files not normally seen by the user), follow these steps:

  1. First locate your saved site properties.
  2. Right click on the profile of the website for which you want to display hidden files. This is most often located in the "FTP Sites" section of most clients.
  3. Once you right click on the FTP site, select "SITE PROPERTIES" from the menu.
  4. An option box will load up displaying the site properties of your site. Look for a tab called "ACTIONS" and click on it.
  5. It will display the actions of the site. Locate a gray box called "FILTERS" and click on it.
  6. This will display the "Filters" properties of the site.
  7. Locate the "Enable Filtering" from the options available. Make sure this box is checked.
  8. Once you have checked the enable filtering box, a small box at the bottom of the options will be displayed.
  9. It should say something similar to "Enable Server Side Filtering". Make sure this box is checked as well.
  10. Now enter the following into the "Remote Filter" box: -a

Once you have entered in the filtering options, make sure to click "Ok" or "Apply" in order to save your changes.  You should now be able to see all hidden files on the server.  Make sure you start a new connection to view all files.  If you are still having trouble viewing all your files and can't seem to locate the .htaccess file, don't give up, but consult the system administrator of your hosting account to assist.

How to Setup Your .htaccess File

Once you have confirmed that you do have a .htaccess file, and mod_rewrite is turned on, add the following lines to your .htaccess file:

Options +FollowSymlinks
RewriteEngine On
RewriteBase /
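
Putting the pieces together, a complete .htaccess for this purpose might look like the sketch below; the blocked user agent is just the example discussed earlier, so substitute your own list of offenders:

Options +FollowSymlinks
RewriteEngine On
RewriteBase /

# Example: refuse one known bad bot by its user agent (extend this list as needed)
RewriteCond %{HTTP_USER_AGENT} "Microsoft URL Control"
RewriteRule .* - [F,L]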

Protect Against Invaders by SPAM-Proofing Your Website - How to Use the JavaScript Method

The following are examples of JavaScript that you can use to make your email address appear different in the code but still perform the same function as if it were regularly coded in HTML (ie: mailto:support@example.com).  To use these examples, just copy and paste the code into your HTML document and replace the required field(s) with your email address.

1. Basic Email Script

<script language=JavaScript>
<!--
document.write("support" + "@" + "example.com");
//-->
</script>

Result:  support@example.com

2. Basic Mailto: Email Script with Link Text

<script language=JavaScript>
<!--
var username = "support";
var hostname = "example.com";
var linktext = username + "@" + hostname;
document.write("<a href=" + "mail" + "to:" + username +
"@" + hostname + ">" + linktext + "</a>");
//-->
</script>

Result: support@example.com

3. Inline JavaScript

<a href="#" onclick="JavaScript:window.location='mailto:'+'support'+'@'+'example'+'.com'">Link Text</a>

Result: Link Text

The three script options above should give you some flexibility in how you choose to use these on your website.  Remember to insert your own email address into the fields where the support@example.com email address is located.

Problems Associated with JavaScript

There don't appear to be many problems with using the above scripts in the HTML code of your documents.  The biggest issues may be incorrectly coding the scripts or older browsers that do not support JavaScript. One last issue that may eventually arrive is email harvester programmers learning to find email addresses within JavaScript code.  While this may become a reality sooner than we expect, for the most part JavaScript should be SPAM-proof enough to block most malicious SPAM bots.
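
If that last concern worries you, one further step is to assemble the address from character codes so that neither the name nor the domain appears as plain text anywhere in the page source. The sketch below is only an illustration, using the same hypothetical support@example.com address:

<script language=JavaScript>
<!--
// Illustration only: these character codes spell out "support@example.com"
var codes = [115,117,112,112,111,114,116,64,101,120,97,109,112,108,101,46,99,111,109];
var addr = String.fromCharCode.apply(null, codes);
document.write("<a href=\"mailto:" + addr + "\">" + addr + "</a>");
//-->
</script>

The trade-off is the same as with the other scripts: visitors without JavaScript will not see the link at all.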

Protect Against Invaders by SPAM-Proofing Your Website

Benjamin Pfeiffer discusses how to SPAM-proof your website. He explains how to use Javascript and mod_rewrite to stop SPAMbots and Spybots from finding email addresses on your website. He also talks about how to find and set up the .htaccess file and gives examples of robots and how to block them.

Despite recent improvement in tools and programs in the battle against SPAM, most of us cannot escape the menace that plagues most of our inboxes on a regular basis. Each day most of us probably receive more SPAM than actual real email, and with Spammers getting more and more creative in their ways to circumvent traditional anti-SPAM tactics, it's vital webmasters empower themselves with some anti-SPAM tactics for their own websites.

In this article I will discuss a few ways to SPAM-proof your website against malicious SPAM robots that inevitably collect your email address to be sold by the thousands to Spammers worldwide, whether for using your information inappropriately or simply for no good reason.  These tactics are so effective that within a month of implementing them, you should see a dramatic drop in the amount of SPAM that makes it through to your website email addresses, not to mention a decrease in bandwidth.

How to Stop SPAMbots Dead in Their Tracks

1. Using JavaScript
2. Using Mod_Rewrite

Both of these techniques are effective in blocking SPAMbots and Spybots from finding your email address or other personal information on your website. While JavaScript is an easier solution, using mod_rewrite to block SPAMbots is more technical and requires knowledge of editing your .htaccess file. It's best to try the JavaScript method first, and then venture into using mod_rewrite to further block SPAMbots from hitting your website.

Using JavaScript

To understand how to use JavaScript to block SPAMbots from harvesting your email, let's examine the ways that they find your email in the first place.

1. Mailto: Links - these are common links placed in the HTML code of a website, offering a potential visitor the ability to send an email to the webmaster of the site.  A visitor clicks on the email link and it opens an email client with the To: field already filled in with the address specified in the code.  These links are the prime target of SPAMbots harvesting your email address, and simple use of JavaScript can cut down on email harvesters hitting your inbox with SPAM.  The main objective with using JavaScript is to change the appearance of your email address so that email harvesters do not recognize your email, but still retain complete functionality for legitimate visitors to send you an email.

2. Contact Forms - this is another prime location for SPAMbots to leave their tracks, steal your email address and be gone, ready to report back with fresh email addresses.  These forms are another common feature on websites, and the following is what most often causes SPAMbots to find your email.

<input type="hidden" name="recipient" value="support@example.com">

Score One for the Spiders? - Spiders Gone Wild

I imagine you can all see how this has relevance for spiders and those who aggregate information about specific products...or even auctions. So, does this mean that spiders can go out and disregard those software-based warnings from Web sites telling them that they're not welcome? Does this take eBay's winning of an injunction against Bidder's Edge and turn it on its head?

You'd think it would. On the face of it, the two cases look similar. Bidder's Edge, an auction aggregation site, was sending spiders to eBay to collect information on eBay's auctions. BE would then aggregate eBay's auction information along with auction info from many other auction Web sites, making it a "one-stop shop" for that kind of information. eBay wanted them to stop -- and, in fact, was able to legally force them to stop.

That is where the similarities between the two cases end. You see, eBay didn't claim that Bidder's Edge was infringing its copyright -- or, at least, the court didn't grant the injunction based on a claim of copyright infringement. Oh no. eBay claimed that Bidder's Edge was trespassing! Bidder's Edge admitted that it was sending 80,000 to 100,000 queries a day to eBay's Web site, and eBay argued that that was akin to sending 80,000 to 100,000 robots into a bricks-and-mortar business looking for prices and not buying anything. Well, the court was skeptical of that argument, but not of certain other arguments. Specifically, eBay was able to point out that Bidder's Edge's spiders were using up eBay's computer capacity, after eBay had told BE more than once that it was not welcome to use its spiders on eBay's Web site. At that time, it was only using less than two percent of eBay's capacity -- but the principle in this case wasn't the amount of capacity it was using up, but that it was using that capacity after eBay told it not to. A Web site might not actually be real estate in the same way that land is -- but there is no question that computers and servers are property that can be owned. And Bidder's Edge was using eBay's computers in ways that eBay specifically told it that it wasn't allowed to do. What would you do if you told someone not to use your computer in a particular way and they kept doing it? Uh-huh, that's what I thought. In eBay's case, they were granted an injunction against Bidder's Edge in late May 2000.

So what does Nautical Solutions Marketing vs. Boats.com add to the sometimes-controversial issue of spiders? Well, first off, just because something was copied from a Web site -- even by a robot -- doesn't mean that the person or company doing it is committing copyright infringement. Especially if it's essentially factual information. On the other hand, that also doesn't mean that they're not in hot water. If you're a spider wrangler, make sure your spiders check to see whether they're welcome on the Web sites they visit -- and don't try to force the issue. It's a big Web out there; there's plenty of cyberspace for spiders to crawl without looking for trouble. 

Score One for the Spiders? - Copyright Infringement?

The first service is the more interesting one for our purposes (though they're both relevant) because it's that service which directly involves the use of a spider. Cleverly named "Boat Rover," this software program would connect to a targeted Web site, extract specific facts about a yacht for sale from a public yacht listing, collect those facts, and enter them in a searchable database. Boat Rover did not hold onto the HTML used in the listing; it copied it just long enough to get the facts, then discarded it. Yes, this matters -- because this case was dealing with a question of copyright infringement. Specifically, did NSM infringe any copyrights when it sent Boat Rover to Yachtworld.com repeatedly between November, 2001, and April, 2002, to collect information which it then posted on its own Yachtbroker.com Web site?

Not according to Judge Merryday, who presided over the case. I'm sure you'd love to hear that it was a simple "no," thus reaching a decision once and for all in favor of spiders running rampant, but anyone who's ever had reason to consult a lawyer knows how unlikely that would be. Legal cases involving high technology, especially the Internet, tend to be less cut-and-dried. In this case, two important points were raised in consideration of whether any copyrights were infringed. First, what kind of information was Boat Rover collecting? In this case, it was just the facts: manufacturer, model, length, year of manufacture, price, location, and the URL of the Web page that contained the yacht listing. Well, fortunately for NSM (and Joe Friday), there's an existing precedent that states that facts cannot be copyrighted; facts are part of the public domain, and thus there can be no question of copyright infringement. There's no copyright to infringe!

The second aspect of this service that might have opened NSM up to charges of copyright infringement involved displaying these listings on its Web site. Remember when I mentioned that Boat Rover's discarding the HTML information was important? That meant that NSM had to code the information without using Boats.com's coding as a template -- in theory, at least. And in fact, Judge Merryday found enough differences between the "look and feel" of a Yachtworld.com listing and a Yachtbroker.com listing to keep NSM in the clear over possible copyright infringement. The judge wouldn't even grant Boats.com a copyright on its use of descriptive headings like "electrical," "accommodations," and "galley" to describe specific features of a yacht -- because the terms were industry standards, and at least two other yacht brokering Web sites were using those terms in the very same way. The legal reasoning seems to go something like this: ideas themselves do not receive copyright protection. Expression of a particular idea does. However, in some cases, the ways to express a particular idea are so limited that the expression doesn't receive copyright protection, because that would be just like protecting the idea. In this case, there are only so many headings you can use for various areas of a yacht when listing it for sale, so -- like facts -- they don't receive copyright protection.

The second service that NSM offered its customers was a "valet service." With the permission of a yacht broker using the service, NSM would move, delete, or modify the yacht broker's listing. Boats.com complained that NSM's people were copying and pasting listings from Yachtworld.com over to Yachtbroker.com. The court found strong enough evidence that the only items being cut and pasted were descriptions and pictures, not HTML or anything that Boats.com could claim a copyright to. In fact, the court found (and both NSM and Boats.com agreed) that the yacht brokers themselves owned the copyrights to their own listings -- and, since NSM was doing its thing with their permission, they weren't infringing any copyrights.

Interestingly, this case was a "pre-emptive strike;" NSM was the plaintiff. They brought the suit seeking a declaration that they did not infringe any copyright owned by Boats.com. And they won, too.

Score One for the Spiders?

Spiders. Those creepy, crawlies of data mining that scour the web for bits and pieces of information have a habit of getting into trouble. Just ask eBay and Boats.com, who recently had to resort to some legal bug spray in order to get rid of the little pests. Is your data scavenging in danger?

Call them spiders. Call them robots. Call them bargain hunters (or one heck of a nuisance); they're software programs with a mission: to hunt down information and bring it back. Many search engines couldn't live without these electronic assistants to help them keep track of the proverbially explosive growth of the Web. Certainly they can save a lot of time when you're trying to comparison shop online -- just let a spider do the hunting and bring back the results. This is all well and good, unless the owner of the site doesn't take kindly to spiders. eBay won an injunction against Bidder's Edge that forced BE to stop sending spiders to eBay's Web site in search of auctions. But a somewhat similar-looking court case was just recently decided in favor of the spider-wrangling plaintiff. Does this have wider implications for information aggregators who use spiders?

First, let me give my disclaimer: I am not a lawyer, nor do I play one on TV. So check with someone who eats, drinks, and breathes this stuff before you do anything drastic.  That said, let's take a look at the cases at hand.

The more recent case was just decided early in April, in a district court in Florida. It involves two companies with Web sites that list yachts for sale...which may in part explain why this case did not attract the kind of attention that the earlier eBay case did. (There are more people interested in buying Beanie Babies than buying big boats.) Anyway, the older Web site in this case is owned by Boats.com, who, for the past nine years, has owned and operated Yachtworld.com, a Web site on which yacht brokers could post information about the big boats they have for sale -- sort of like electronic classified ads, with more interactivity. Enter Nautical Solutions Marketing, in 2001, with their Web site, Yachtbroker.com -- and two services that Boats.com complained blow them right out of the water.

Spider Guts - What Hurts Ranking

The next set of variables weighed by search engines consists of negatives. These will negatively affect the performance of a page on a search engine without exception. Avoiding these items is crucial to reaching and, more importantly, maintaining a high rank on a search engine.

  1. Broken Links - Whether internal or outgoing, broken links signal to search engines that a page's content is not fresh, and such pages will be scored as less relevant for their keywords.
  2. Spam - This refers to any attempt to trick a search engine, such as using irrelevant keywords to draw extra hits, placing invisible content on the page to boost keyword density, and using meta refreshes (often in combination with irrelevant keywords) to draw a user in for an irrelevant search and then direct them to the page you want them to see. These techniques can result in a ban from search engines if they catch them.
  3. Excessive Search Engine Submittal - Over-submitting a site to a search engine will likely result in a ban. Submit no more than once every three months, according to Google.
  4. Empty Alt Attributes - Empty alt attributes are a major accessibility issue as well as simply poor coding, and will affect a page negatively.
  5. Excessive Punctuation - Excess punctuation in the Title and Description tags wastes valuable space and may cause a problem with the engine.

These negative factors can greatly affect an otherwise relevant page; of course, some of them preclude the page actually being relevant, particularly spam. The biggest pitfalls for an otherwise optimized page are simple typographical errors, broken links (usually due to stale content) and oversights in markup. Simple mistakes could mean the difference between top ten and top fifty for a search on an engine, a difference that could mean thousands of dollars per day in lost revenue for many websites.

Imagine if a site like Amazon.com failed to use alt attributes and stopped using <h> tags (replaced by images, for example). Searches that would typically show the site as a number one result could start bringing the site up as a number fifty result.

Conclusion

Approaching SEO as a holistic process rather than simply a combination of steps is critical. It is simply not enough to use an effective Title tag on every page and stop, or to use keyword relevant URLs and then stop. To achieve and maintain a top ranking on all pages for the appropriate sets of keywords, a page must be optimized completely for the way search engines weigh content relevance, and that involves taking everything discussed earlier in the article seriously.

Spider Guts - The Core Elements for Page Relevance

There are a number of things that a spider expects to see when it looks at a web page, many of which are optional but still important in the big picture. The following is a list that describes the core elements considered by a typical search engine when calculating page relevance.

  1. Title Tag - The title tag should contain a title relevant to the page, not just "Home Page" or "Contact Us". The title should be used for up to five keywords.
  2. Headings - Search engines view <h> tags as terms of emphasis, meaning additional weight is given to terms that appear inside them. Keywords should appear in <h> tags.
  3. Bold - Also viewed as terms of emphasis, but with less weight than headings.
  4. Alt Text - Brief descriptive sentences should be used in image alt attributes. At least one keyword should appear in each alt attribute.
  5. Keyword Meta Tag - Some engines use the keyword meta tags directly, some use them as part of a validation process ensuring that the keywords closely match the page content. The latter is the more typical scenario for modern engines. Keywords should be chosen carefully and be specific to the page they appear on.
  6. Description Meta Tag - Most search engines use this tag in a similar fashion as the keyword tags. Each page should have a unique description. The description should contain a few keywords and briefly summarize the content that appears on the page with a high degree of accuracy.
  7. Keyword Placement - Terms that are higher up on a page are more heavily weighted.
  8. Keyword Proximity - Terms that are close together are probably related, and thus the site will show up in searches for those terms.
  9. Comment Tags - Some search engines use comment tags for content, particularly in graphics rich/text poor sites.
  10. Page Structure Validation - Proper coding is likely to be of better overall quality, and thus rewarded.
  11. Traffic/Visitors - Search engines keep track of how many people follow their links. The more a link is followed for a given search, the more relevant the link is assumed to be.
  12. Link Popularity - Also known as PageRank, this is a measure of how many web pages on the Internet link to your site and the relevance of those pages to the page they are linking to. The popularity of the linking site is also evaluated.
  13. Anchor Text for Inbound Links - This is a measure of the relevance of the anchor text from the referring site.
  14. Page Last Modified - Newer content is regarded as "fresh" and is treated as more relevant.
  15. Page Size - Engines tend to weigh content at the start of a document more than content further down. If a page is too long, typically more than 50k in markup only, then it should be broken up into multiple pages.
  16. Keywords in URL - URLs are considered important by engines. Using hyphens rather than underscores in filenames, and using keywords in filenames and directories, improves a page's potential relevance.

These elements are all poured into an algorithm by the search engine that produces a very specific result: a relevance score for a page based on a given set of keywords. Evaluating page relevance is a constant reciprocal process that involves crawling around all pages indexed by a particular engine and evaluating the relevance of their content and the relevance of references to that content. The items listed above are things search engines expect to find in a page as well as factors that are not necessarily expected, but are considered if available (such as inbound links).
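
To make the list above concrete, here is a bare-bones sketch of a page that applies several of these elements (the keywords, file name and copy are invented purely for illustration):

  <!-- hypothetical file name with keywords: blue-widget-reviews.html -->
  <html>
  <head>
    <title>Blue Widget Reviews and Buying Guide</title>
    <meta name="keywords" content="blue widgets, widget reviews, buy blue widgets">
    <meta name="description" content="Independent blue widget reviews, a buying guide and price comparisons.">
  </head>
  <body>
    <h1>Blue Widget Reviews</h1>
    <p>Our <b>blue widget</b> reviews compare the leading models on price and durability.</p>
    <img src="blue-widget.jpg" alt="Photo of a blue widget from our hands-on review">
  </body>
  </html>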

Spider Guts

What's inside the spiders? To get a good ranking in search engines, a good understanding of the fundamentals of SEO and how search robots crawl web pages is essential. The author includes valuable information such as a list of core elements considered by a typical search engine when calculating page relevance.

In the quest for that elusive nirvana of search engine friendliness, we frequently find ourselves searching for "instant fix" ways to improve a page's ranking without considering the big picture; that is, without looking at the problem of optimizing a web page as a whole and instead looking at several separate optimization steps as part of routine markup development or copy writing. While SEO experts do not tend to fit this mold, the average web developer certainly does. How many web pages have you "optimized" by simply adding keyword and description meta tags, and stopped right there? I imagine a hand count at this point would supply a fairly substantial number. An even better question at this point might be, "How many of you have tried to provide SEO for a web page without having even a basic understanding of search robot logic or what it expects to see in your pages?" Once again, I suspect we would have a healthy hand count.

The steps to optimize a page are well known to the SEO community, and many articles by authors far more knowledgeable than myself on the subject are available to web developers. So with all this knowledge out there, why are there so many developers who lack a big picture understanding of the subject? One word: fundamentals. It is crucial to know how the technology behind the scenes works, but like any other skill, the bulk of people attempting to learn that skill do not start at the bottom. They start somewhere that makes sense in trying to solve a particular problem and then they build up from that point.

If a developer held a greater understanding of the fundamentals of SEO and how search robots went about crawling web pages, they would in turn have a greater understanding of how to populate those alt attributes and meta tags. The objective of this article is to provide a general overview of how search robots (also called spiders) go about crawling and indexing web pages.

The Yahoo SLURP Crawler - Getting SLURP to Come Over

After having said all this about Yahoo SLURP, there is now the little issue of getting your site crawled by this particular search engine spider. There are some ways to go about this task, and here we begin to see the inklings of what would be the order of the day in a search engine market dominated by Yahoo! (who seems to be very, very concerned about its bottom line).

Linking

The first strategy is good old linking; just get a link on a site which Yahoo! regularly crawls, and voila. You have SLURP knocking on your door. This can be done by corresponding with a site which ranks well on Yahoo, or by submitting your web site to directories which SLURP regularly crawls (you can find these by searching for “directories” on Yahoo). If SLURP deep crawls (crawls lots of pages instead of just one or two) your site regularly, you have a high chance of getting a good ranking on the keyword or topic for which you have optimized your site.

Yahoo Companion Toolbar

This is supposed to trigger the SLURP robot to crawl your site. And it also enables searchers to search within your site, offering value for your audience and attracting Yahoo SLURP as well.

Sitematch

This is done by paying Yahoo's fees and submitting your site. This guarantees you will be added to the index (at a price) but is no guarantee of your website's ranking in the SERPs.

This is a scary service, and some reviewers speculate that it is a foretaste of what site owners would face in a market dominated by Yahoo. It is carried over from Overture (which Yahoo purchased) and involves an annual fee for submitted pages. The URLs are submitted into Yahoo’s index and are then crawled by SLURP every 48 hours.

However, apart from the one-off fee, there is a cost-per-click fee charged for each lead driven to your site (so you had better have deep pockets).

Apart from SLURP visiting every two days, you also get listed in searches done on about.com, Excite, Overture and other Yahoo partners. However, there is no guarantee of a high ranking, and frankly I do not like this method (because I absolutely love free stuff).

There is a way to submit your site for free; however, Yahoo does not guarantee that websites submitted through such means will ever be crawled by SLURP.

By now you should know enough about SLURP to spot it, track it, attract it, and prevent it from crawling specific pages of your site.

The Yahoo SLURP Crawler - Stonewalling

Another way of shutting out SLURP is by using the noindex meta-tag. Yahoo SLURP obeys this command in the document's head, and the code inserted in between the head tags of your document is

  <META NAME="robots" CONTENT="noindex">

This snippet will ensure that Yahoo SLURP does not index the document in the search engine database. Another useful command is the nofollow meta-tag. The code inserted is

  <META NAME="robots" CONTENT="nofollow">

This snippet ensures that the links on the page are not followed.

Dynamic Page Indexing

This is the real charm of SLURP. Most search engine crawlers don’t bother crawling and indexing dynamic pages (.php, .asp, .jsp) since their content is subject to rapid change, which can make indexing them a wasted effort. Yahoo SLURP, however, does daily crawls in order to refresh the content of its indexed dynamic pages. It also does bi-weekly crawls, which enable the search engine to discover new content and add it to its index incrementally. This enables a complex site's URLs, generated by forms and content management software, to be indexed.

These frequent crawls show up in your server logs as frequent download requests, as the crawler moves, stops, and restarts. Yahoo says that these frequent download requests should not be a cause for alarm.

SLURP's ability to index dynamic pages and to constantly refresh its content is a great relief to web designers (like me) who like having dynamic pages to enable fast loading and rapid updating. Websites which were not search engine friendly are suddenly in contention to be ranked number one.

However, the down side to this is that SLURP may never deliberately crawl your dynamic pages, unless you trigger the crawler via techniques which Yahoo encourages (to the benefit of their bottom line).

Getting Framed

Yahoo SLURP also has the ability to support frames, although it will not follow SRC attribute links to standalone framesets; it only follows HREF links (as all good crawlers do).

The Yahoo SLURP Crawler - The Robot

SLURP crawls websites, scans their contents and meta tags, and travels down the links contained on the page. It then brings back information for the search engine to index. Yahoo SLURP 2.0 stores the full text of the page it crawls in its memory and then returns it to Yahoo’s searchable database. This is one of the semi-unique points of Yahoo SLURP; not all search engine crawlers store the entire text of the pages they crawl.

While SLURP has some features unique to it, it also obeys robots.txt. This file is very important since it gives you control over which pages the crawler visits and indexes. This lets you protect sensitive pages which you need to keep secure, pages which contain information you would rather not have in the hands of hackers (who regularly try to infiltrate search engine databases), or pages which you don’t want indexed at all (for whatever reason).

Another good thing about the robots.txt file is that it enables you to exclude specific robots, so you can inhibit the Googlebot but allow SLURP to crawl a particular page. This can be useful if you have optimized different pages for separate search engines; doing so gives you flexibility, but without the exclusions a search engine may think you have duplicate pages and penalize you. So careful use of the robots.txt file should definitely be on our list of ways to make your website more search engine friendly. So how do you use the robots.txt file? You open Notepad and type in the following lines:

  User-Agent: Slurp
  Disallow: /whatsisname.html
  Disallow: /page_optimized_for_google.html
  Disallow: /credit_card_list.html
  Disallow: /whatnot.html

Save it as robots.txt and upload it to your root directory. You can disallow as many pages for each crawler robot as you want, but to disallow certain pages for another crawler, you start a new User-Agent block.

  User-Agent: Slurp
  Disallow: /whatsisname.html
  Disallow: /page_optimized_for_google.html
  Disallow: /credit_card_list.html
  Disallow: /whatnot.html

  User-Agent: Googlebot
  Disallow: /page_optimized_for_yahoo.html
  Disallow: /credit_card_list.html
  Disallow: /whatnot.html

If you want to disallow all crawlers, you replace the name of the user agent with the wildcard character (*).
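
For example, the following keeps every compliant crawler away from the same pages:

  User-Agent: *
  Disallow: /credit_card_list.html
  Disallow: /whatnot.html

A single Disallow: / line under the wildcard user agent would shut crawlers out of the entire site.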

Robots.txt is useful for not getting banned on search engines and can also be used to pinpoint crawlers when they come calling. Only crawlers request Robots.txt, and these requests show up on the server logs.

The Yahoo SLURP Crawler

As SEOs and webmasters, we're always looking for ways to get the search engine spiders to crawl our sites, and the deeper, the better. This article shows you how to target Yahoo's crawler and convince it to stop by regularly.

The search engine wars are fought with strategies, alliances, and robots. As Yahoo! primes itself to be the number one contender for market share after Google, websites that want to optimize for Yahoo must study how Yahoo ranks pages and how it indexes pages. The Yahoo web crawler SLURP should be studied; your site server logs should have recorded visits from various robots, including SLURP. If you do not have records of SLURP visiting your site, then this article will give tips on how to get SLURP to crawl (hopefully deep crawl) your site.

The Preamble

Yahoo SLURP evolved from Inktomi’s SLURP; the Yahoo robot is an upgrade of Inktomi’s crawler. Yahoo used Inktomi’s search engine to replace Google, which used to take care of its search results. This officially triggered the second search engine war (the first was won by Google without ever declaring hostilities).

Yahoo has at least 130 million registered users on its network. Granted, Google is the definitive search engine, but Yahoo is large enough that it should not be ignored.

What Do Spiders See in a Hyperlink?

I’ll assume that you are all reasonably familiar with HTML. If you have ever looked at the source code for an HTML page, you probably noticed text like this wherever a hyperlink appeared:
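
  <a href="http://www.seochat.com">SEO Chat</a>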

When a web browser reads this, it knows that the text “SEO Chat” should be hyperlinked to the web page http://www.seochat.com. Incidentally, “SEO Chat” in this case is the “anchor text” of the link. When a spider reads this text, it thinks, “Okay, the page http://www.seochat.com is relevant to the text on this page, and very relevant to the term `SEO Chat.’”

Let’s get a little more complicated. Suppose we give the same link a title attribute and a rel attribute, so that it looks something like this:
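
  <a href="http://www.seochat.com" title="Great Site for SEO Info" rel="nofollow">SEO Chat</a>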

Now what? The anchor text hasn’t changed, so the link will still look the same when the web browser displays it. But a spider will think, “Okay, not only is this page relevant to the term `SEO Chat,’ it is also relevant to the phrase `Great Site for SEO Info.’ And hey, there’s a relationship between the page I’m crawling now and this hyperlink! It says that this link doesn’t count as a ‘vote’ for the page being linked to. Okay, so it won’t add to the page rank.”

That last point, about the link not counting as a vote for the page being linked to, is what rel="nofollow" does. This attribute value evolved to address the problem of people submitting linked comments to blogs that said things like "Visit my pharmaceuticals site!" That kind of comment is an attempt by the commenter to raise his own website's position in the search engine rankings. It's called comment spam, by the way; most major search engines don't like comment spam because it skews their results, making them less relevant. As you may have guessed, then, the “nofollow” value in the “rel” attribute is specifically for search engines; it really isn't there to be noticed by anyone else. Yahoo!, MSN, and Google recognize it, but AskJeeves does not support nofollow; its crawler simply ignores it.

In some cases, a link may be assigned to an image. The hyperlink would then include the name of the image, and might include some alternate text in an “alt” attribute, which can be helpful for voice-based browsers used by the blind. It also helps spiders, because it gives them another clue for what the page is about.
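
Such a link might look something like this (the image file name and alt text here are made-up examples):

  <a href="http://www.seochat.com"><img src="seochat-logo.gif" alt="SEO Chat search engine optimization forums"></a>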

Hyperlinks may take other forms on the web, but by and large those forms do not pass ranking or spidering value. In general, the closer a link is to the classic <a href=”URL”>text</a>, the easier it is for a spider to follow a link, and vice versa.

How Search Engines Work (and Sometimes Don’t)

You know how important it is to score high in the SERPs. But your site isn't reaching the first three pages, and you don't understand why. It could be that you're confusing the web crawlers that are trying to index it. How can you find out? Keep reading.

You have a masterful website, with lots of relevant content, but it isn’t coming up high in the search engine results pages (SERPs). You know that if your site isn’t on those early pages, searchers probably won’t find you. You can’t understand why you’re apparently invisible to Google and the other major search engines. Your rivals hold higher spots in the SERPs, and their sites aren’t nearly as nice as yours.

Search engines aren’t people. In order to handle the tens of billions of web pages that comprise the World Wide Web, search engine companies have almost completely automated their processes. A software program isn’t going to look at your site with the same “eyes” as a human being. This doesn’t mean that you can’t have a website that is a joy to behold for your visitors. But it does mean that you need to be aware of the ways in which search engines “see” your site differently, and plan around them.

Despite the complexity of the web and the speed at which they must handle all that data, search engines actually perform a short list of operations in order to return relevant results to their users. Each of these four operations can go awry in certain ways. It isn’t so much that the search engine itself has gone awry; it may have simply encountered something that it was not programmed to deal with. Or the way it was programmed to deal with whatever it encountered led to less than desirable results.

Understanding how search engines operate will help you understand what can go wrong. All search engines perform the following four tasks:

  • Web crawling. Search engines send out automated programs, sometimes called “bots” or “spiders,” which use the web’s hyperlink structure to “crawl” its pages. According to some of our best estimates, search engine spiders have crawled maybe half of the pages that exist on the Internet.
  • Document indexing. After spiders crawl a page, its content needs to be put into a format that makes it easy to retrieve when a user queries the search engine. Thus, pages are stored in a giant, tightly managed database that makes up the search engine’s index. These indexes contain billions of documents, which are delivered to users in mere fractions of a second.
  • Query processing. When a user queries a search engine, which happens hundreds of millions of times each day, the engine examines its index to find documents that match. Queries that look superficially the same can yield very different results. For example, searching for the phrase “field and stream magazine,” without quotes around it, yields more than four million results in Google. Do the same search with the quote marks, and Google returns only 19,600 results. This is just one of many modifiers a searcher can use to give the database a better idea of what should count as a relevant result.
  • Ranking results. Google isn’t going to show you all 19,600 results on the same page – and even if it did, it would need some way to decide which ones should show up first. Thus, the search engine runs an algorithm on the results to calculate which ones are most relevant to the query. These are shown first, with all the others in descending order of relevance.

Now that you have some idea of the processes involved, it’s time to take a closer look at each one. This should help you understand how things go right, and how and why these tasks can go “wrong.” This article will focus on web crawling, while a later article will cover the remaining processes.


You’re probably thinking chiefly of your human visitors when you set up your website’s navigation, as well you should. But certain kinds of navigation structures will trip up spiders, making it less likely for those visitors to find your site in the first place. As an added bonus, many of the things you do to make it easier for a spider to find your content will also make it easier for visitors to navigate your site.

It’s worth keeping in mind, by the way, that you might not want spiders to be able to index everything on your site. If you own a site with content that users pay a fee to access, you probably don’t want a Google bot to grab that content and show it to anyone who enters the right keywords. There are ways to deliberately block spiders from such content. In keeping with the rest of this article, which is intended mainly as an introduction, they will only be mentioned briefly here.

Dynamic URLs are one of the biggest stumbling blocks for search engine spiders. In particular, pages with two or more dynamic parameters will give a spider fits. You know a dynamic URL when you see it; it usually has a lot of “garbage” in it such as question marks, equal signs, ampersands (&) and percent signs. These pages are great for human users, who usually get to them by setting certain parameters on a page. For example, typing a zip code into a box at weather.com will return a page that describes the weather for a particular area of the US – and a dynamic URL as the page location.
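
For instance, the weather page reached that way might live at an address like the first line below (an invented example), while a static page address looks more like the second:

  http://www.example.com/weather.asp?zip=10001&units=F&sessionid=9F27B3
  http://www.example.com/weather/new-york.html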

There are other ways in which spiders don’t like complexity. For example, a page with more than 100 unique links to other pages on the same site can tire a spider out at a glance; it may not follow every link. If you are trying to build a site map, there are better ways to organize it.

Pages that are buried more than three clicks from your website’s home page also might not be crawled. Spiders don’t like to go that deep. For that matter, many humans can get “lost” on a website with that many levels of links if there isn’t some kind of navigational guidance.

Pages that require a “Session ID” or cookie to enable navigation also might not be spidered. Spiders aren’t browsers, and don’t have the same capabilities. They may not be able to retain these forms of identification.

Another stumbling block for spiders is pages that are split into “frames.” Many web designers like frames; it allows them to keep page navigation in one place even when a user scrolls through content. But spiders find pages with frames confusing. To them, content is content, and they have no way of knowing which pages should go in the search results. Frankly, many users don’t like pages with frames either; rather than providing a cleaner interface, such pages often look cluttered.


Most of the stumbling blocks above are ones you may have accidentally put in the way of spiders. This next set of stumbling blocks includes some that website owners might use on purpose to block a search engine spider. While I mentioned one of the most obvious reasons for blocking a spider above (content that users must pay to see), there are certainly others: the content itself might be free, but should not be easily available to everyone, for example.

Pages that can be accessed only after filling out a form and hitting “Submit” might as well be closed doors to spiders. Think of them as not being able to push buttons or type. Likewise, pages that require use of a drop down menu to access might not be spidered, and the same holds true for documents that can only be accessed via a search box.

Documents that are purposefully blocked will usually not be spidered. This can be handled with a robots meta tag or robots.txt file. You can find other articles that discuss the robots.txt file on SEO Chat.

Pages that require a login block search engine spiders. Remember the “spiders can’t type” observation above. Just how are they going to log in to get to the page?

Finally, I’d like to make a special note of pages that redirect before showing content. Not only will that not get your page indexed, it could get your site banned. Search engines refer to this tactic as “cloaking” or “bait-and-switch.” You can check Google’s guidelines for webmasters (http://www.google.com/intl/en/webmasters/guidelines.html) if you have any questions about what is considered legitimate and what isn’t.

Now that you know what will make spiders choke, how do you encourage them to go where you want them to? The key is to provide direct HTML links to each page you want the spiders to visit. Also, give them a shallow pool to play in. Spiders usually start on your home page; if any part of your site cannot be accessed from there, chances are the spider won’t see it. This is where use of a site map can be invaluable.
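
A plain HTML site map page is often enough; something as simple as the sketch below (the page names are placeholders) gives a spider a direct, crawlable link to every major section of the site:

  <h1>Site Map</h1>
  <ul>
    <li><a href="/index.html">Home</a></li>
    <li><a href="/products.html">Products</a></li>
    <li><a href="/articles.html">Articles</a></li>
    <li><a href="/contact.html">Contact</a></li>
  </ul>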

Page Rank: Prepare your site for the top 40

Meta tags are now extremely important in search engine ranking. Engines search for the invisible tags and use them to rank your site. Yet, only 25% of websites have them.

META tags are HTML tags that are invisible on the Web page. They're important to search engines that use them to rank your site.

Page Rank: Keeping SEO Simple

What is Google’s PageRank? If you have ever done any reading about search engine optimization, or were just curious how you can get your site to the top of the Google search engine results, understanding PageRank is vital. I’m going to introduce you to the basics of PageRank and also provide a brief discussion of how much you should really worry about PageRank if you are running a website or Internet business.

Google’s founders, Larry Page and Sergey Brin, invented PageRank, and it forms the basis for how Google works. Google didn’t become the best search engine in the world by chance; it became the best because it provided the best results. PageRank is in fact the technology that gave Google its competitor-killing edge, a way to greatly improve the accuracy and validity of a search response to a user query.

In essence PageRank provides a means to determine the value of a website for any given search term or keyword phrase. This value is determined by how websites link together with the more popular (and theoretically better) sites receiving more links. It’s these incoming links that help the site have a high PageRank value and thus display higher up in search results.

Let’s read how Google explains their PageRank system:

PageRank relies on the uniquely democratic nature of the web by using its vast link structure as an indicator of an individual page’s value. Google interprets a link from page A to page B as a vote, by page A, for page B. But, Google looks at more than the sheer volume of votes, or links a page receives; it also analyzes the page that casts the vote. Votes cast by pages that are themselves “important” weigh more heavily and help to make other pages “important.”

Important, high-quality sites receive a higher PageRank, which Google remembers each time it conducts a search. Of course, important pages mean nothing to you if they don’t match your query. So, Google combines PageRank with sophisticated text-matching techniques to find pages that are both important and relevant to your search. Google goes far beyond the number of times a term appears on a page and examines all aspects of the page’s content (and the content of the pages linking to it) to determine if it’s a good match for your query.
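
For the curious, the simplified formula Larry Page and Sergey Brin published in their original paper captures the same "votes weighted by importance" idea:

  PR(A) = (1 - d) + d * ( PR(T1)/C(T1) + ... + PR(Tn)/C(Tn) )

Here T1 through Tn are the pages linking to page A, C(T) is the number of outgoing links on page T, and d is a damping factor usually set around 0.85. In plain terms, each linking page passes along a share of its own importance, divided among all the links it casts.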

The key rule to understand is that it is a combination of variables that determine how well your site performs in Google. These are the most important variables to worry about:

  • Incoming links to your site.
  • The relevancy (to your site’s theme) of the pages linking to your site and the PageRank of these pages.
  • The keywords that other sites use to link to your site.
  • The keywords on your website, particularly in places like page titles and headlines.

Some of those factors you can control, others you can manipulate but not directly control. The important thing to understand regarding PageRank is that all those variables will determine how high your site shows up in search engine results. PageRank is the name for the technology that ranks sites and includes all those variables and many more.

PageRank Numbers - The Little Green Bar

If you install the Google Toolbar into your browser you can choose to switch on the PageRank display (it’s in the options). This will make a little green bar appear above web pages you visit. The green bar represents the PageRank of the page you are viewing in your browser. The scale runs from 0 (no ranking) up to 10, the highest ranking, and the bar can be blanked out completely if the page has been banned from Google. If you don’t want to use the toolbar you can try a free PageRank lookup tool to find the ranking for any web address.

Google created quite a storm when it launched its green PageRank bar. Webmasters became obsessed with methods to increase their PageRank and high PageRank sites started selling text links for hundreds of dollars. A link from a high PageRank page, from a PageRank 7, 8, 9 or 10, has been known to make lower PageRank pages increase a full number, even two if the incoming link is from a PageRank 10, and there is no doubt it is good for search engine rankings.

The problem with PageRank being displayed in a little green bar is that it is very hard to gauge how valuable a given ranking really is. The Google PageRank technology is complex, containing many variables (some of which I mentioned above), and interpreting a single number from 0 to 10 is difficult, especially when only Google really knows how it works. Worse still, the visible representation, the green bar that the public can see, only changes on a quarterly basis, while the real PageRank of a page changes on a daily basis. Most of the time you are looking at a very outdated ranking value.

PageRank paranoia is an issue that every webmaster may fall victim to. There are rumours that Google will be changing the PageRank system because they are not happy with how it is being manipulated and interpreted. As a rule of thumb, watch the green bar with interest but don’t take it too seriously or spend too much time trying to force it to increase (staring and yelling at it will do you no good, trust me on that one).

The Randomness Of PageRank

Search engine optimization experts actively track PageRank and investigate things like page backlinks to try and work out what the top search engine ranked sites are doing right so they can replicate and then surpass them in the rankings. This is a very good strategy for any person running a web business looking to improve their search ranking. There is no need to reinvent the wheel - copy what works and do it slightly better than the competition.

This is all good in theory, but unfortunately there is a good amount of randomness in PageRank and search engine results. Google of course would argue that it’s not randomness and their PageRank system is merely using algorithms that we don’t understand, and no doubt that is true, but for the human webmaster trying to get traffic, PageRank and Google can be baffling sometimes.

There are instances of high PageRanked sites having little to no backlinks. Given that incoming links are one of the most important variables used in PageRank calculations, you have to scratch your head and wonder how a site with no links could have a big green bar. Google’s own backlink lookup tool (see this article - Beginners Guide To Backlinks - for details) is another phenomenon that search engine experts often choose to ignore rather than try to evaluate.

Thankfully, the randomness of PageRank can result in positive outcomes as well, with your sites jumping high into search results in places where you wouldn’t expect it. The only consistency is randomness, but there is logic that can be followed, and smart search engine optimization practices that, when implemented well, will work. Just don’t expect it to work precisely how or when you want it to.

What You Should Know And Do About PageRank

This advice I offer from experience as an avid PageRank chaser and search engine optimizer. The key to gaining PageRank is to ignore it and focus on the variables that control it.

Having people link to your site has always been a good thing, and PageRank was in fact a result of this. Don’t get confused about the order of things: first came the Internet and links, and then came PageRank. Focus on amassing quality incoming links from quality sites relevant to your site. This practice will naturally improve your PageRank and also increase the number of visitors coming to your site. Don’t get bogged down chasing links from only high PageRank sites or waste energy adding links from just any site willing to link to you. Do things naturally and your site will grow naturally.

Learn about the importance of keywords. My SEO articles will help you with this. Keywords play a crucial role in bringing the right type of traffic to your site, but you should never spend half an hour in front of a computer trying to come up with the perfect title for your article. Name your content logically, think about what search words your audience would use to find your article, and you can very quickly and easily develop good keywords without spending hours and hours tweaking every little phrase and heading. See what your competitors do in regard to keywords if you are completely lost.

If you build a good website with good content, always keep your important keywords in mind, and proactively work every day to earn and create new backlinks to your site, you will improve your PageRank. The best sites with the highest PageRank never worry about PageRank; they simply keep churning out content that people love to link to. This is a strategy that every webmaster and Internet entrepreneur should emulate for success online.

Yaro Starak
Internet Business Coach