Duplicate Content? Tell Google which URL Parameters to Ignore
- September 17, 2009
- by Dave Freeman
Google has recently added a new feature to Google Webmaster Tools (GWT) called “Parameter Handling”. The feature allows site owners, web developers and search agencies to specify up to 15 URL parameters that they don’t want Google to crawl and index.
The new Parameter Handling feature works in two ways:
- When Google crawls your website and finds duplicate content from parameter-driven URLs, it will list suggested parameters that you can tell Google to ignore in future – eliminating the duplicate content produced by those parameter-driven URLs
- Google also lets you specify parameters yourself. So if you’re launching a new site and know there are a number of parameter-driven URLs that could give you a duplicate content headache, you can simply go to the GWT console and manually tell Google to ignore those parameters (the sketch below shows how these duplicates arise)
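To make the duplicate content problem concrete, here is a minimal Python sketch. The URLs and parameter names (sessionid, sort, ref) are hypothetical: it shows how several parameter-driven URLs all serve the same content, and how ignoring those parameters collapses them to a single URL – which is effectively what the GWT setting asks Google to do.

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Hypothetical parameters we would tell Google to ignore: they change the
# URL but not the underlying content, so every combination is a duplicate.
IGNORED_PARAMS = {"sessionid", "sort", "ref"}

def normalise(url: str) -> str:
    """Strip ignorable parameters so duplicate URLs collapse to one."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in IGNORED_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

duplicates = [
    "http://example.com/widgets?id=42&sessionid=abc123",
    "http://example.com/widgets?id=42&sort=price",
    "http://example.com/widgets?id=42&ref=homepage",
]

# All three URLs serve the same page; after normalisation they are one URL.
assert {normalise(u) for u in duplicates} == {"http://example.com/widgets?id=42"}
```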
Why did Google launch parameter handling?
That’s an interesting question. This new feature, whilst helpful for the additional insight it gives into how Google sees your website, certainly isn’t the best method of reducing or completely eliminating duplicate content issues, unless perhaps you’re an SME. From Google’s perspective, getting sites with duplicate content to at least reduce the amount of it immediately frees up computing and crawler resources.
Why’s this useful for my site?
In short, it’s probably not, unless you’re an SME. Yes, it gives you some additional insight into how Google views your website, but if you have an in-house team, web development agency and/or SEO agency working alongside you, you really shouldn’t need to worry about this new feature.
Firstly, it only applies to Google, so it won’t solve any issues in Bing or Yahoo.
Secondly, a website should be designed and structured in a way that minimises duplicate content from the outset. Let’s face it, eliminating all duplicate content through site structure alone isn’t always possible, but you can certainly minimise it.
Thirdly, if your SEO agency or in-house team are worth the money you’re paying them, they should already be aware of any duplicate content issues and should have solved them, or be on their way to fixing them, using one or a combination of the three methods below (a rough sketch of each follows the list):
- 301 redirect
- Canonical link tags (rel="canonical")
- Meta Robots tags
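As a rough illustration of what each method looks like in practice, here is a minimal sketch using Python and Flask. The routes, URLs and page content are hypothetical, not a prescription for your site:

```python
from flask import Flask, redirect

app = Flask(__name__)

# 1. 301 redirect: permanently send a duplicate URL to the primary version,
#    so users, search engines and link juice all end up on one page.
@app.route("/widgets-old")
def old_widgets():
    return redirect("/widgets", code=301)

# 2. Canonical link tag: the duplicate page keeps serving, but its <head>
#    points search engines at the preferred URL.
@app.route("/widgets/print")
def printable_widgets():
    return (
        "<html><head>"
        '<link rel="canonical" href="http://example.com/widgets">'
        "</head><body>Printer-friendly widgets page</body></html>"
    )

# 3. Meta robots tag: tell crawlers not to index this version at all,
#    while still following its links.
@app.route("/widgets/preview")
def preview_widgets():
    return (
        "<html><head>"
        '<meta name="robots" content="noindex, follow">'
        "</head><body>Preview of the widgets page</body></html>"
    )

if __name__ == "__main__":
    app.run()
```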
Perfect for SMEs
Let’s face it: in the UK and US, amongst other countries, Google dominates the search landscape. SMEs and companies without ongoing web development or SEO support may not be able to implement 301 redirects or any of the other methods of eliminating duplicate content. The Parameter Handling feature in GWT, however, provides a free and easy way of reducing duplicate content within Google – and in most cases Google is currently the only search engine SMEs need to worry about.
Got duplicate content? What’s the solution?
As much as I’d like to say there’s a perfect solution, there isn’t – it really depends on the structure of your site and URLs. There are advantages and disadvantages to all three methods listed earlier, but ultimately your web developer or SEO agency will advise you on the best method of eliminating any duplicate content on your site. Personally I’m not a fan of the canonical tag and only use it as a last resort, purely because I think the other options are better – but that’s just my opinion; I’m not saying it’s right or wrong.
In an ideal world the 301 redirect would be used to eliminate all duplicate content, but sometimes it’s just not the right solution. Where it is possible, though, the 301 is the preferred method: it sends users, search engines and a portion of any link juice to the primary page, leaving only one version of that page – which is ideal.
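To show what that looks like, here is a minimal sketch (again in Python and Flask, with hypothetical parameter names) of a catch-all 301 that strips ignorable parameters so every variant of a URL lands on the one primary version:

```python
from urllib.parse import parse_qsl, urlencode
from flask import Flask, request, redirect

app = Flask(__name__)

# Hypothetical parameters that never change the page content.
IGNORED_PARAMS = {"sessionid", "sort", "ref"}

@app.before_request
def strip_duplicate_params():
    """301 any parameter-driven duplicate to the single primary URL."""
    params = parse_qsl(request.query_string.decode())
    kept = [(k, v) for k, v in params if k not in IGNORED_PARAMS]
    if kept != params:
        target = request.path + ("?" + urlencode(kept) if kept else "")
        return redirect(target, code=301)

@app.route("/widgets")
def widgets():
    return "The one and only widgets page"
```

With this in place, a request for /widgets?sessionid=abc123 answers with a 301 to /widgets, so both visitors and crawlers only ever see one version of the page.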