
Top 10 technical SEO issues (and how to fix them)

James A. Martin | May 21, 2014
SEO experts share insights on the most common website technical issues that can negatively impact search rankings and offer advice on how to fix them.

Image credit: Thinkstock

In the age of "Penguin," "Panda," "Hummingbird" and other big Google algorithm updates, winning the search engine optimization (SEO) game means publishing high-quality content that earns links. But all the quality content in the world won't help your search rankings if your site has structural or other technical issues.

We asked the SEO community to weigh in on the most common technical issues that can adversely impact your site's search rankings today. Many of these technical challenges, such as duplicate content, have been around for years. But as the SEO game matures and evolves, it becomes even more important to clean up site messiness. After all, successful SEO is one third on-page optimization, one third off-page optimization (such as backlinks), and one third a clean website structure that's free of technical issues, according to Bill Sebald, owner of Greenlane Search Marketing.

Here are the top 10 SEO technical issues in 2014, according to nearly 30 experts, along with tips on how to address them.

1) Duplicate Content
Almost all of the SEO professionals we queried cited duplicate content as a top technical concern. Simply put, duplicate content is any content that is "appreciably similar" to, or exactly the same as, other content on your site, according to Google Webmaster Tools.

"Google's crawlers must cover a lot of ground," says Michael Stricker, U.S. marketing director for SEMrush. "Google can't consume all of the data, especially when one considers that Google must revisit each page again and again to find changes or new material. Anything that slows Google's discovery or crawling of the Web is unwelcome. Dynamically created websites that create Web pages on the fly from databases are frequently misconfigured from an SEO point of view. These sites may create lots of pages, or URLs, that contain basically the same content, over and over."

Other sources of duplicate content include the use of both "plain" and secure protocol URLs (HTTP and HTTPS); no expressed preference for a www.domain.com versus domain.com (without the www); blog tags; and syndicated RSS feeds.
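For the protocol and hostname variants, the usual fix is a site-wide 301 (permanent) redirect that funnels every variation of a URL to one preferred version. Here is a minimal sketch for an Apache server using mod_rewrite; the domain is a hypothetical placeholder, and a different hosting setup would need a different approach:

    # .htaccess at the site root (assumes Apache with mod_rewrite enabled)
    RewriteEngine On
    # Send any http:// or bare-domain request to the https://www. version
    RewriteCond %{HTTPS} off [OR]
    RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
    RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]

You can also declare your preferred domain (www or non-www) in Google Webmaster Tools, which helps Google consolidate the variants it has already discovered.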

Duplicate content can also result from common content management system (CMS) functionalities, including sorting parameters, according to Johnny Ewton, Web analyst for Delegator.com.
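For parameter-driven duplicates like these, a widely used fix is the rel="canonical" link element, which tells Google which version of a page should be treated as the original. An illustrative snippet, with a hypothetical domain and path:

    <!-- Placed in the <head> of every sorted or filtered variant,
         e.g. /shoes?sort=price or /shoes?color=blue -->
    <link rel="canonical" href="https://www.example.com/shoes" />

Google then consolidates ranking signals from the variant URLs onto the canonical page instead of treating each one as a competing copy.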

The remedy is to crawl your site looking for duplications and apply "crawl directives" to inform Google of the relative value of multiple URLs, Stricker says. You can use "robots.txt" (a file that allows you to control how Google's bots crawl and index your public Web pages) to tell Google the specific folders and directories that are not worth crawling.
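As a sketch, a robots.txt file that keeps crawlers out of directories that only generate duplicate or low-value pages might look like the following; the directory names here are hypothetical placeholders:

    # robots.txt -- lives at the root of the site, e.g. https://www.example.com/robots.txt
    User-agent: *
    Disallow: /print/     # printer-friendly duplicates of existing pages
    Disallow: /search/    # internal site-search results pages

Keep in mind that Disallow only blocks crawling, not indexing, so URLs that must drop out of Google's index entirely usually call for a noindex meta tag or a canonical tag instead.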

 
