How To Identify True Search Competitors – SEO Competitive Analysis

At SMX Advanced back in June I attended a session where I heard about a really cool idea. The concept was that you could take a ton of keywords and then map out which sites were showing up for those keywords to get a good idea of who your real search competitors are.

The bad thing was that they said they used software they had developed themselves. I’m always bothered when I get excited about an idea but then have no way of reproducing it. Well, I did some digging around and was able to find a way to do a similar type of report. Sure, it’s more labor intensive, but you can really learn something about who else is competing for the same keyword sets as you.

Below I’m going to outline the techniques I use to get the data, how to organize it, and what to do with it once you have the information. I admit that it may be a little choppy, so if you have any questions on it feel free to hit me up on Twitter: @dan_patterson

This technique was also mentioned by my friend Matt Siltala in his Pubcon presentation last week, which you can view here: Competitive Intelligence Pubcon Las Vegas.

Step 1 – Get a List of Keyword Sets

Since the goal of this whole exercise is to find other sites that are going after the same keyword sets as you, the first step in this process is to come up with a large list of keywords. A good place to start is to go through your analytics to find keywords that you’re already getting traffic from.

Once you have a list, break it up into topic sets. This way, you will be able to tell which sites are full competitors and which are only partial competitors. Come up with a short name for each of these sets, and use it when you’re scraping results to identify which set each scrape belongs to.
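As a purely hypothetical illustration (these set names and keywords are made up, so use whatever fits your niche), a running-shoe retailer might end up with sets like:

    trail   - trail running shoes, best trail running shoes, waterproof trail runners
    racing  - racing flats, lightweight marathon shoes, best racing flats
    apparel - running shorts, compression socks, winter running jackets

Here "trail", "racing", and "apparel" are the shortnames you’ll use to tag each scrape later on.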

Step 2 – Scraping the SERPs

There are plenty of tools out there that you can use to scrape the search results. Some of them use proxies and other tactics that the search engines aren’t fond of, so instead I’m going to go over two tools that shouldn’t cause any of those problems.

The first is a handy little Firefox plugin called OutWit Hub. The second I’ll go over is the SEOmoz Pro Keyword Difficulty & SERP Analysis Tool for those of you that are already members of SEOmoz Pro.

Scraping SERPs with OutWit Hub

1- Download OutWit Hub

Go to http://www.outwit.com and download OutWit Hub (a free Firefox plugin). Technically it’s a site content scraper, and we’re going to use it to scrape URLs from the SERPs.

OutWit Hub

2- Change Your Google Search Settings

In order to use the plugin effectively, you’re going to have to change a few of your Google search settings:

  1. Turn off Google Instant by going to your account Search Settings, and then choosing “Do not use Google Instant”.
    Turn Off Google Instant
    Google Instant and Number of Results
  2. Decide how deep you want to look and set “Number of Results” to match. You can choose 10 (default), 20, 30, 50, or 100.
  3. Save your preferences and go back to Google Search

3- Scrape

Do a search for your first term. Once the SERPs come up, click on the OutWit Button in Firefox.

OutWit Button

This will give you the OutWit Hub Window. In the window, click on the ‘Guess’ option.

Guess Button

This will give you the info from the SERPs. We’re most interested in the “id” (rank) and “URL” columns. You can either export this info or just copy and paste it into Excel.

Ann Smarty did a post about OutWit a while back and also has a custom scraper you can use for Google results. The only problem I’ve found with this is that sometimes you’ll get URLs with spaces in them from breadcrumbs, which makes it a little harder to filter things down in Step 3. If you are in a niche that doesn’t have this problem, this can be a faster way to go.

4- Download Your Scrape and Clean It Up

One problem with OutWit Hub is that it can be inconsistent. Sometimes you get local listings in the export, sometimes you don’t. Sometimes you get paid listings in there. Sometimes you don’t. So you have to watch what you’re scraping and make sure you’re actually getting the right info. The export usually has a heading row, but you still have to do some filtering and cleanup work to get an accurate list. When you do this, make sure you also update the id (rank) column to reflect the real rankings you’re seeing.

You can either export the data to a CSV or just copy and paste it into Excel. I like the copy-and-paste option because if I see some paid ads at the top or bottom of the data, I can simply skip those rows.

5- Rinse and Repeat

This is unfortunately the labor intensive part of the whole process. You’ll have to repeat it for every keyword you want to check. Again, there are other tools that take a more brute-force approach against Google, but OutWit Hub is a great FREE tool that will get you the data you need if you’re willing to take the time.

No matter which method you use, make sure that you add a column at the beginning that holds the set shortname, before the rank and URL of each scrape. This way you can identify which set the rankings and URLs belong to later.

Also, make sure you’re combining all of your data into one spreadsheet so we can do the comparison and filtering later. In Step 3 of this whole process I’ll show you what to do once you have all of your scraping done.

SEOmoz Keyword Difficulty & SERP Analysis Tool

SEOmoz Keyword Difficulty & SERP Analysis

In the long run, I think the SEOmoz tool is a lot easier and cleaner to use for this exercise. One nice thing about the Keyword Difficulty & SERP Analysis Tool is that you can run up to 5 keywords at a time, and you don’t have the cleanup work that you have with OutWit. One difference between the two is that with OutWit you can dig as deep as your Google Search settings allow. With SEOmoz you will get the top 25 and that’s it.

Here are the steps to getting the same data with the SEOmoz Keyword Difficulty & SERP Analysis Tool:

1- Run a Report (up to 5 at a time)

Sometimes I’ve found that the tool will time out if you run 4 or 5, so if you’re having that problem just run 3 and you’ll have an easier time.

2- CSV Export

Once the report loads, click on it and then choose the “Export to CSV” link down towards the bottom. It’s above the table with all the pretty greens and reds.

CSV Export - SEOmoz Tool

3- Rinse and Repeat

The only columns we need are ‘Rank’ and ‘URL’. If you want to start getting into Domain and Page Authority comparisons you could use that data as well, but for this blog post I’m just going to keep it simple.

Just like with the OutWit Hub data, make sure you’re combining all of your CSV downloads into one master file so you can do the filtering you’ll need to do.
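If you end up with a big pile of individual exports, a short script can also do the combining for you. This is only a sketch of an alternative to the manual copy-and-paste approach, and it assumes a folder and file-naming convention of my own (each scrape saved as scrapes/<shortname>_<keyword>.csv with Rank and URL columns), so adjust it to however you saved your data:

    # combine_scrapes.py - a rough sketch for combining individual scrape exports
    import glob
    import os
    import pandas as pd

    frames = []
    for path in glob.glob("scrapes/*.csv"):
        set_name = os.path.basename(path).split("_")[0]  # shortname comes from the file name
        df = pd.read_csv(path)                           # expects Rank and URL columns
        df.insert(0, "Set", set_name)                    # set shortname goes in the first column
        frames.append(df)

    master = pd.concat(frames, ignore_index=True)
    master.to_csv("master_scrape.csv", index=False)      # one combined sheet: Set, Rank, URL

Either way, the end result should be one master sheet with a set shortname column, a rank column, and a URL column.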

Step 3 – Filter Down to Just Domain Names

In order to really do the comparison, you need to filter your SERP scrapes down to just the domain names. With a little Excel formula magic, this is easily done. Since there are so many different versions of Excel and other spreadsheet programs, I’m just going to give you the basic steps and formulas here so you can do what you need to in the program and version you’re using.

Before you do the steps below, make sure to MAKE A COPY OF ALL YOUR SCRAPED DATA. We’re going to filter the copy down to just the domain names so we have a list of unique domains that we can do some counting and averaging on, but you have to leave your original data intact so you can get the counts. So I repeat: make a copy of all your scraped data and do the steps below on the copy.

  1. Use ‘Text to Columns’ and delimit on ‘/’. This is probably the easiest way to break out the http: prefix and any folders in the URLs you’ve scraped. Delete all of the columns that don’t contain just the domain name.
  2. Get rid of www. Since some of your scraped URLs will have www and some won’t, we need to strip it out. Sort your list of domain names alphabetically, then do another Text to Columns on the domains that have the www in them. The easiest way is the ‘Fixed Width’ option, since www. is always the same width. You may also have other subdomains in your list, but honestly I would just treat those as separate sites from the main domain.
  3. De-dupe. Now that you have your list of just domain names without the www and folders, de-dupe this list so that you have a list of unique domain names.
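If you’d rather script this step than fight with Text to Columns, here’s a rough Python equivalent. It assumes the master_scrape.csv layout from the earlier sketch (Set, Rank, URL); the file and column names are my own convention, not part of the original workflow:

    # domains.py - strip the scraped URLs down to bare, de-duped domain names (a sketch)
    import pandas as pd
    from urllib.parse import urlparse

    master = pd.read_csv("master_scrape.csv")               # Set, Rank, URL columns

    def bare_domain(url):
        # netloc handles full URLs; the fallback covers URLs scraped without http://
        host = (urlparse(url).netloc or url.split("/")[0]).lower()
        return host[4:] if host.startswith("www.") else host

    unique_domains = sorted(set(master["URL"].map(bare_domain)))
    pd.Series(unique_domains, name="Domain").to_csv("unique_domains.csv", index=False)

Like the Excel steps, this treats www and non-www as the same site but leaves other subdomains alone.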

Step 4 – Count # of Results and Average Rank For All Unique Domains

Once again, for this part I’m just going to give you the steps rather than screenshots since it might vary a little bit from spreadsheet program to spreadsheet program.

To set up your spreadsheet matrix, you should have all of your unique domains down the left, and then across the top a # Results column and an Avg Rank column for each of your keyword set shortnames. Put each shortname at the top of its pair of columns (merge across the two cells if you want to make it a little prettier), and it will give you something to reference in your formulas.

Getting Number of Results Per Unique Domain

  1. This formula will vary a little bit based on how big your data set is and where your list of original domains is.
  2. For example, say your first unique domain is in cell B3, your full data set of URLs runs from cell D120 to cell D293, the keyword set shortnames sit in cells A120 to A293, and your first shortname column header is in cell C1. Your formula would look like this: =COUNTIFS($D$120:$D$293,"*"&B3&"*",$A$120:$A$293,$C$1)
  3. Notice the absolute references for the cell ranges. They are critical; without them the ranges will shift when you copy the formula down and you won’t get the correct counts.
  4. The "*"&B3&"*" piece is a wildcard that basically says match B3 with anything before or after it, so you’ll catch www, non-www, the home page, and any other page on that domain name.
  5. If your formula looks good, copy it down to all of your unique domains.
  6. Repeat this process for all of your keyword sets.

What this number tells you is the number of times that unique domain shows up in your scrapes for that keyword set. If they show up a lot, then that set is something they are going after (the count can be even higher if they have multiple listings for some keywords). If they don’t show up very much, then it isn’t an important set for them.

Getting the Average Rank Per Unique Domain

  1. This formula will also vary a little bit based on how big your data set is, etc.
  2. Let’s use the same example cells as above, but with your Rank data in cells B120 to B293. Your formula in Excel would look like this: =AVERAGEIFS($B$120:$B$293,$D$120:$D$293,"*"&B3&"*",$A$120:$A$293,$C$1)
  3. Again, notice the absolute references and make sure you have them in there.
  4. If your formula looks good, copy it down for all of your unique domains.
  5. Repeat this process for all of your keyword sets.

What this number tells you is the average rank for that domain name across that keyword set. Naturally, the lower the number, the better they rank on average. The higher the number, the less of a threat they currently are, but it still shows that they are at least showing up for that set.
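If you went the scripting route instead, the same matrix can be built with a couple of pandas pivot tables in place of COUNTIFS and AVERAGEIFS. Again, this is only a sketch that assumes the master_scrape.csv layout used above, not the way the original spreadsheet method works:

    # matrix.py - number of results and average rank per domain, per keyword set (a sketch)
    import pandas as pd
    from urllib.parse import urlparse

    master = pd.read_csv("master_scrape.csv")               # Set, Rank, URL columns

    def bare_domain(url):
        # same domain logic as the Step 3 sketch
        host = (urlparse(url).netloc or url.split("/")[0]).lower()
        return host[4:] if host.startswith("www.") else host

    master["Domain"] = master["URL"].map(bare_domain)

    # rows = unique domains, columns = one value per keyword set shortname
    counts = master.pivot_table(index="Domain", columns="Set", values="Rank", aggfunc="count")
    avg_rank = master.pivot_table(index="Domain", columns="Set", values="Rank", aggfunc="mean")

    matrix = pd.concat({"# Results": counts, "Avg Rank": avg_rank.round(1)}, axis=1)
    matrix.to_csv("competitor_matrix.csv")

The output has a # Results and an Avg Rank column for each set shortname, which is the same matrix you end up sorting and analyzing in Step 5.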

Step 5 – Organize and Analyze

Once you have all of your formulas in place and you’re happy with what you see, I recommend copying your matrix and pasting it back as values. This way you can sort the data any way you want without breaking the formulas (I made this mistake once and it wasn’t pretty).

Now you have a really cool matrix that will show you by keyword set which sites are going after different sets, and how important each set is to them based on how often they show up and what their average ranking is.

Have some fun sorting by different columns and even highlighting the numbers and sites that stand out to you. Here’s a screenshot sample of a matrix I did once to help.

Sample Data

What To Do With This Info

One of the problems with competitive analysis is that site owners and marketers only look at the companies they know, the major players in their space. With this technique you will also see the affiliate sites you may have overlooked, how big a player sites like Wikipedia are in your space, and so on.

If you run this every couple of months, you can also see the changes that are happening in the SERPs and better keep an eye on those sites that are becoming more of a threat.

As you identify new competitors, you also now have another site to analyze for marketing ideas, competitive links, etc.

Let Me Know What You Think

I really hope this post has helped you learn a technique for identifying more of the true competitors in your space, and what to do with that information. I’m sure there are other ways to get this data, and if you have any additional tips please share them in the comments below.

Creating Authority Through Content with Loren Baker (@lorenbaker) on #SEOchat

On Thursday, August 11, 2011 Search Marketing Weekly will be hosting an #seochat on Twitter. Loren Baker (@lorenbaker) will be our guest answering questions about Creating Authority Through Content.

Details:

Starts at 7:00 pm Mountain Time
About 1 hour long
Use hashtag #seochat
Host: Ash Buckles

About Loren Baker

Loren Baker is the Vice President of Services at Blue Glass Interactive, an internet marketing company with offices in Florida, Utah, New York, and California. In his role at Blue Glass, he oversees all of the company’s service offerings and teams.

Prior to assuming his position at BlueGlass, Loren co-founded Search & Social, an agency specializing in innovative search marketing and social media engagement tactics. Loren is a pioneer in the search marketing industry. He is the Managing Editor of Search Engine Journal, an AdAge Top 10 blog that he created in 2003 to cover search marketing news and tactics. Over the last decade, Loren has consulted with Fortune 500 companies, universities, financial institutions, and startups. He has successfully assisted them with the development of their strategic online marketing campaigns.

Loren has been featured on CNN, NPR, PCWorld, BusinessWeek, ZDNet, PRWeek, TechCrunch, Mashable, and AdAge and is a regular speaker at SMX, Pubcon, and other conference series. He was a member of advisory panels at Yahoo and Microsoft Search.

Summary: Website Architecture with @TonyVerre on #SEOchat

Guest: @TonyVerre. @TonyVerre is the CEO of Silver Arc Search Marketing, a local Milwaukee, WI company. He works to help small and mid-sized business owners with their search marketing strategy and tactics. @TonyVerre has been in the SEO industry for the last 6 years. He also writes The Milwaukee SEO, a blog dedicated to SEO, SEM, and online business philosophy. Tony has co-authored eProfitability, a guide for C-level execs to understand search landscape & maximize online profitability. He currently works at Top Floor Technologies as a Sr Search Marketer, measuring SEO/SEM/SMO for over 36 B2B clients. @TonyVerre is an Operation Iraqi Freedom war veteran and was in the military for 8 yrs (THANK YOU FOR YOUR SERVICE!)

What are the most common architecture problems ecommerce sites run into? How can you fix them?

The big 3 things I see: click path problems, category titles (subcategories and details too), and multiple indexation paths (ensure that there are multiple ways for a user or crawler to access a product page). Click paths: I aim for no more than 3, at a maximum 4, clicks to a detail page. Naming conventions: every category, subcategory, and product page needs to be appropriately labeled.

@bryantdunivan: @TonyVerre are you pro-clean url’s, or if you canonize it dont matter?
@TonyVerre: @bryantdunivan cleaner the better, but some CMSs make that a tough job.

@dan_patterson: @TonyVerre Speaking of CMSs, do you have any that you prefer to work with over others?
@TonyVerre: @dan_patterson I personally like Drupal a lot. Gotten better with it as of late. HUGE Learning curve. Like ModX too.

@jasonmun: @TonyVerre What are your thought on Magento?
@TonyVerre: @jasonmun I like it. But if you don’t have a PRO developing on it, even w/ Yoast’s stuff, the directory dupes kill you.

@SEOGroup: We use ExpressionEngine. Thoughts?
@TonyVerre: @SEOGroup Expression Engine is another that can be really advantageous, just you need a PRO on it. Find their Mod Community weak.

@AnnieCushing: @TonyVerre OK, I’ll go there … Pagination … Rel canonical or noindex,follow? What say you?
@TonyVerre: @AnnieCushing on pagination, I’d choose canonical on this one. especially for ecomm

What are some of the unique challenges that B2B sites face that B2C sites don’t have? Any examples?

Informational siloing issues: e.g. do you organize by industry? By services offered? By products you manufacture? Fuzzy conversions, and how to accentuate those conversions through architecture and navigation. IMO, it’s ok to offer multiple paths, but one path, your strongest, must be front and center.

@SEOGroup: My guess would be lower search volume.
@TonyVerre: @SEOGroup Certainly. The volumes are very low, and in some spaces, extremely competitive for a small number of eyes.

@bryantdunivan: @TonyVerre what is your take on nofollowing those silos, is it worthwhile, or does it create too big of a mess?
@TonyVerre: @bryantdunivan IMO, I would never use Nofollow for anything other than login and carts. Too messy and dangerous.
@aknecht: @TonyVerre @bryantdunivan I also sometimes do no index on privacy policies. Helps prevent them from appearing on branded searches.

@lyena: In my experience – longer lead time too. Way longer. And tracking conversions in real life is quite a challenge.
@TonyVerre: @lyena tracking those conversions, I found is nearly impossible after submission. Most B2B have that go right in Sales = GONE.
@lyena: @TonyVerre If it is integrated with SalesForce or similar, it is better. The rest – nightmare.
@TonyVerre: @lyena we’ve worked hard to try and shape sales procedures once submissions come in, but so MUCH pushback. It’s almost not worth it.
@TonyVerre: @lyena we all use call tracking on our sites (well some of them) we find that gives us some really nice metrics.
@AnnieCushing: @TonyVerre Do you have a fave call tracking platform?
@TonyVerre: @AnnieCushing we like Call Tracks. Works well. Some nice options, and execs love it for Big Brothery type stuff too. :)
@jasonmun: We have Advanser here in Australia for call tracking.
@shuey03: @TonyVerre you using mongoose? Or something else for call tracking?
@matt_storms: @shuey03 @TonyVerre I think that for small mom and pop mongoose is too much but there are others.
@aknecht: @matt_storms Think when it comes to Mongoose, it depends on sales volume not if it’s a mom & pop. Need to base call tracking needs on call volume & value of calls in sales cycle.
@lyena: @ashbuckles @TonyVerre @AnnieCushing I think, MyNextCustomer also tracks calls and more.

@jasonmun: @TonyVerre How do you suggest a websites with ALOT of products split up their XML sitemaps?
@TonyVerre: @jasonmun depending on product/SKU count, it could be broken up by top-level categories.
@ashbuckles: @jasonmun Base your XML sitemaps on similar categories/products/etc.

Faceted navigation is a common problem in large ecommerce. What do you think are some good ways to handle it?

The best possible solution is to always try and end up at one single product URL. No matter the paths to the product, one URL. The problem is getting the best crawl possible with the best possible indexation, without wasting crawl or indexation on “junk” URLs. The best way to handle this is, imo, to “noindex” the URLs you don’t want indexed. If it’s a huge site, then I would go with robots.txt and kill them off.
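For readers following along, those two options might look something like this in practice (the URL patterns are made-up examples, not something from the chat). On a faceted page you don’t want indexed, in the page template:

    <meta name="robots" content="noindex,follow">

Or, on a huge site, blocking the junk parameter URLs from being crawled at all via robots.txt:

    User-agent: *
    Disallow: /*?sort=
    Disallow: /*?color=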

@jasonmun: @TonyVerre Would you suggest Ajax based pagination?
@bryantdunivan: @jasonmun ajax still has indexing bugs, can really cost you some flow
@dan_patterson: @bryantdunivan @jasonmun Yeah, I’d be really careful with using ajax that way.

@davidmalmborg: @TonyVerre How do you feel about # tags for facets?
@TonyVerre: @davidmalmborg to be honest, never really work with those. We keep it as KISS as possible. Crowd Advice on this?
@dan_patterson: @TonyVerre @DavidMalmborg my $.02, the more facets you add the more important one base URL for canonical becomes.
@AnnieCushing: @jasonmun Ohhh watch out for hash bang URLs. Hash bang URLs can have disastrous results. Learn from LifeHacker’s mistakes. All their content went missing b/c of a redesign that used hash bang URLs earlier this year, I think.
@lyena: I bet, it’s because GA lets you re-configure the query string – you can use # instead of ?.

Lots of B2B sites have a hard time coming up with good, marketable content. Any advice for these sites?

Still working this one out myself. ;-) The easy, and lame, answer is to be creative about who the products/services serve. Find a way to link your B2B product or service content to current events or pressing social issues. Got a good example too.

Example: CPAP. We created content around the cities with the dirtiest air quality and matched the best CPAP to each city. B2B has a tendency to be “plain Jane”: it does this, it does that. Really drive home benefit-driven content. Make it sexy. B2B is all about product knowledge. You have to get intimate with product groupings, for certain. As far as content goes, you’ll have to pull teeth to get them to think of their product/service as something special. But when you do, it’s worth it. Incorporate a sit-down early in the content writing so you can get to the heart of it faster.

@dan_patterson: @TonyVerre Consistency is a tough thing on those hard niches too, wouldn’t you say?
@TonyVerre: @dan_patterson it is! Especially at niche level, b/c products can be so radically different. Just got to keep on trucking sometimes.

What is the biggest architectural mess you’ve ever come across? How did you fix it?

A site that duplicated directories about 16 levels deep, with content spread across 18 different sites and over 1,000 404 pages. It’s still in progress, but the first step is to move it to a more search-friendly platform (it’s currently on DotNetNuke). Next is to use 301s to create canonical content. Last is to have a meeting to discuss the architecture as a whole and where we can create new content and kill the extra stuff. Bottom line: if we can correct even half the crap going on, we’ll have created a stronger, more solid site. And we’ll have powered more trust, relevance, and authority to the content that needs it. All without one damn link. :-)

@AnnieCushing: @TonyVerre Oh, I’ve been curious about DotNetNuke. What do you think of it?
@TonyVerre: @AnnieCushing Personally? It’s a trash bin. I would never use it. But, Big Corps love it for some reason?

@jasonmun: Companies should engage SEO’s at the planning stage not after the website is launched!
@lyena: True, but in limited capacity.
@SEOGroup: OMG True!
@TonyVerre: @jasonmun it’s an absolute must have. 90% of SEO/SEM is front-loaded strategy and 10% is implementation.

Last Words

Architecture matters. Don’t ignore it! It may not be the difference now, but it’s the difference in about 6 months.

SEO Ethics & Industry Regulation with David Harry (@theGypsy) #seochat

On Thursday, June 16, 2011 Search Marketing Weekly will be hosting an #seochat on Twitter. David Harry (@theGypsy) will be our guest answering questions about SEO Ethics & Industry Regulation.

Details:

Starts at 7:00 pm Mountain Time
About 1 hour long
Use hashtag #seochat
Host: Greg Shuey (@shuey03)

About David Harry

Hi, my name is Dave, and I am an algo-holic.

I am an avid search geek who spends most of his time reading about and playing with search engines. My main passion has always been the technical side of things, from a perspective strongly rooted in IR and related technologies.

I have spent much of the last 5 years writing on my original/personal blog, The Fire Horse Trail. In late 2009 I started the SEO Training Dojo and then… well, I started Search News Central.

With more than 8 years in SEO (started in 1998 in web development) there isn’t much I haven’t seen over the years.

SMX Advanced Recap with Claye Stokes (@claye) #seochat

On Thursday, June 9, 2011 Search Marketing Weekly will be hosting an #seochat on Twitter. Claye Stokes (@claye) will be our guest answering questions about SMX Advanced 2011.

Details:

Starts at 7:00 pm Mountain Time
About 1 hour long
Use hashtag #seochat
Host: Ash Buckles (@ashbuckles)

About Claye Stokes

Claye Stokes is the Director of SEO at SEO.com. Claye has always been intrigued by SEO and the web – especially as it pertains to advertising and accomplishing meaningful results.

Professionally, he has worked for both SEO agencies and full service ad agencies. As a result, his experience ranges from managing the online presence and SEO for small- to enterprise-level companies, to working side by side with business owners to fulfill their entire marketing and advertising campaigns, both on and off the web.

When he isn’t working or spending time with his beautiful wife and two children, you will probably find him researching and practicing online marketing techniques, or outdoors practicing his golf swing.

Analytics Driven SEO with Hugo Guzman (@hugoguzman) #seochat

On Thursday, May 26, 2011 Search Marketing Weekly will be hosting an #seochat on Twitter. Hugo Guzman (@hugoguzman) will be our guest answering questions about Analytics Driven SEO.

Details:

Starts at 7:00 pm Mountain Time
About 1 hour long
Use hashtag #seochat
Host: Dan Patterson (@dan_patterson)

About Hugo Guzman

Hugo is a marketing agency executive with nearly a decade of experience working with enterprise-level brands. He writes about the online marketing industry on his personal blog, www.hugoguzman.com, and is also an avid guest poster who has been featured on sites like seobook.com and techipedia.com.

Optimizing for Google News with Kaila Strong (@cliquekaila) #seochat

On Thursday, May 12, 2011 Search Marketing Weekly will be hosting an #seochat on Twitter. Kaila Strong (@cliquekaila) will be our guest answering questions about Optimizing for Google News.

Details:

Starts at 7:00 pm Mountain Time
About 1 hour long
Use hashtag #seochat
Host: Greg Shuey

About Kaila Strong

As a Campaign Manager at Vertical Measures, Kaila works directly with clients to evaluate and analyze their overall Internet Marketing needs and creates sales proposals and recommendations. In addition, she regularly reports on client rankings, gives SEO advice to brands in a variety of industries, and manages client expectations.

Kaila develops Internet Marketing campaigns for clients, gearing a variety of efforts toward the overall goal of increasing search engine rankings, brand recognition, traffic, and conversions. Thanks to her background in social media marketing, link building, SEO, and content marketing, she is most skilled at using a mixture of content marketing and social media marketing to help Vertical Measures clients.

In addition to her client work Kaila is a regular author on Search Engine Watch writing about social media topics, and has guest blogged on Wordstream’s Blog, Bruce Clay, Stay On Search, The Social Robot, and many others. Her book “Facebook For Business: A How-To Guide” is available now in the Vertical Measures store. She also loves to present webinars on topics related to Internet Marketing for Vertical Measures.

Getting the Most SEO Benefit Out of Social Media with Kevin Mullett (@kmullett) #seochat

On Thursday, April 28, 2011 Search Marketing Weekly will be hosting an #seochat on Twitter. Kevin Mullett (@kmullett) will be our guest answering questions about Getting the Most SEO Benefit Out of Social Media.

Details:

Starts at 7:00 pm Mountain Time
About 1 hour long
Use hashtag #seochat
Host: Dan Patterson

About Kevin Mullett

Kevin Mullett leads the Cirrus ABS vertical market product development efforts and social media programs. Kevin is a proven and experienced web developer, marketer, and user interface designer with over 12 years of website development and internet marketing experience. He has earned many professional accolades for his work, including an ADDY award for interactive design excellence, and has been an integral part of over 300 custom websites and e-business solutions. Kevin leverages his 15 years of experience in sales and marketing, along with a passion for social media, search engine marketing, great design, and usability, to help Cirrus ABS create web marketing solutions that make its clients successful.

Content Is King, Autobloggers Fail

Well, in my whole life as a webmaster, content has always been king, and traffic drives people to your content. So what does that mean? It means that if you have great, quality content, people will recognize you for the things you write about and are passionate about.

With an autoblog, you’re just copying other people’s work and not bringing any true “content” to the webmaster community. When Google started cracking down on autoblogs, people started whining about it. Why? Because they had been cheating the system and making money at the same time, which is very wrong.

So we’re back in the days of “content is king,” where traffic drives people to your content. If you hear someone say that SEO is dead, tell them no, autobloggers are dead, and that’s a fact. Google doesn’t want spam sites in its search results, and we webmasters are sick and tired of trying to battle it out with autobloggers that steal our content. So that means we fight back by stepping up our link building campaigns and our content skills.

Let’s say you ran an autoblog and then decided to change to a “real” blog. Your chances of success are slim to none, because Google already knows where your content came from.

So, content is still king, and will always be king. True webmasters define “SEO” by writing quality content for visitors, not for spiders.

So, do you agree that content is still king and that auto bloggers fail?

- Written by Cpvr, owner of Virtual pet list

Internet Law & How It Affects Search Marketing with David Mink (@dmmink) #seochat

On Thursday, April 21, 2011 Search Marketing Weekly will be hosting an #seochat on Twitter. David Mink (@dmmink) will be our guest answering questions about Internet Law & How It Affects Search Marketing.

Details:

Starts at 7:00 pm Mountain Time
About 1 hour long
Use hashtag #seochat
Host: Greg Shuey

About David Mink

David Mink has been at the forefront of web development for years. David received his Bachelor’s degree in Marketing from Brigham Young University. He then went on to receive his Juris Doctorate at the University of Alabama.

David has personally provided Internet consulting services for over one thousand businesses in the past several years. He has also developed eCommerce training curriculum for a number of business learning companies.

David enjoys group trainings and has presented Internet Marketing and Internet Law seminars at several Universities and corporations across the USA. David is the Chief Legal Officer for Dream Systems Media, and is a licensed attorney in the state of Utah.