Guest: @dr_pete. Here’s his bio: http://bit.ly/nDMr7O . Dr. Pete is the President of User Effect – http://www.usereffect.com/ . He also blogs regularly for the SEOmoz blog and works part-time with their marketing team.
What are some indicators that your site has been hit by Panda vs. over-optimization, etc.?
The first clue is the Panda release dates – 1.0 (2/23), 2.0 (4/11), 2.1 (5/9), 2.2 (6/21), 2.3 (7/23), Global (8/12). Panda also tends to hit site-wide and isn’t query-specific, unlike many over-optimization issues. The toughest part of Panda is that data updates aren’t real-time. It’s more like the old “Google Dance”.
Starting to say “Post-Panda” as if there was an apocalypse. “In a Post-Panda world, one man must fight to save SEO…”
@lyena: If I am not hit by Panda, is there a chance that Google just did not get to me yet?
@dr_pete: Unfortunately, it seems that way in some cases – Panda could hit you weeks after a version roll-out. That’s anecdotal; I can’t prove it.
@shuey03: Are there any good references out there that list the symptoms of Panda that you can share with us?
@dr_pete: The Wired article, oddly, is pretty thorough – http://t.co/Sroz4PH9.
@dohertyjf: @cyrusshepard did a great post here about Panda dangers: bit.ly/on85rU . This post by @tomcritchlow was awesome too: Google’s Panda/Farmer Update – What To Do About It on @distilled http://bit.ly/nSvWG4 .
@aknecht: What is a high ad-to-content ratio? Is it site-wide, or just specific pages (i.e., the home page)?
@JadedTLC: Above-the-fold ads are looked down on, so less is more.
@dr_pete: I think it takes a site-wide density issue – not every page, but critical mass.
@dan_patterson: Question for you agency types: did you see any ecommerce sites hit or mainly just content sites?
@AlanBleiweiss: I’ve had lots of both info and ecom sites come to me after getting panda poo on them.
@dan_patterson: Mainly large ecomm selling others’ products, or manufacturers, too?
@AlanBleiweiss: Both – crappy sites are crappy sites. Panda takes a dump on all of them.
@ashbuckles: Either large ecommerce with little content or affiliates.
What is the proper way to target long-tail keywords post-Panda?
You need to consolidate, and you need to support long-tail keywords with unique content. We have to give up the idea that every term needs a page. Indexed pages cost Google money. During the search engine wars, SEs raced for the biggest index. Now, they want to cut costs. The days of low-value long-tail (“casinos in City X”, “casinos in City Y”, etc., x 1000) are over, IMO. I know that last one will rub some people the wrong way. That money aspect was a real eye-opener for me. Google wants fast pages and consolidated content because slow crap = $.
@BrettASnyder: Same way as pre-Panda… with good content that people would want to link to and that adds value to the end user.
@bryantdunivan: Look at historic search data, create new content based on the long tail searches you can get the most bang for your buck from.
@ashbuckles: I think a focus on well-written, full-featured content is more than necessary going forward.
@dr_pete: 70% reality, 30% wishful thinking?
@ashbuckles: IDK. I’m seeing more evidence of content saving rankings right now – often when it shouldn’t, too.
@aknecht: Yet crap writing & unverified facts in Wikipedia continue to outrank better-quality sites. No #panda there.
@b_gardiner: Short of google places, how do you “prove” authority for longtail w/ location? No physical address in city = no citations.
@dr_pete: Unique content per location, local links, links with geo-targeted anchor text.
@kenjansen: So more text on a page and more than one target phrase now? Is that what I am understanding?
How should one tackle cleaning up duplicate/low-quality content on-site?
Start with the easy wins, like URL-based duplicates. Use the canonical tag – it’s very powerful. Then, tackle paginated search results. Check out Google’s new rel=prev, rel=next. Lastly, take a hard look at “near duplicates”. It may be time to 301 low-value pages. This is a tough and very situational topic, BTW. Canonical in the wrong hands = #pandapoo everywhere.
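For reference, a minimal sketch of both tags being described, using a hypothetical example.com catalog (the URLs are illustrative, not from the chat):

    <!-- On a URL-based duplicate such as /shoes?sort=price, point
         search engines at the preferred version of the page -->
    <link rel="canonical" href="http://www.example.com/shoes" />

    <!-- On page 2 of a paginated series, declare its neighbors -->
    <link rel="prev" href="http://www.example.com/shoes?page=1" />
    <link rel="next" href="http://www.example.com/shoes?page=3" />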
@AlanBleiweiss: Do a site:domain.com -www search to find possible duplicate subdomains of main content.
@shuey03: Is canonical better, or is 301ing all pages into one better?
@dr_pete: If the pages really are dupes, or 95% similar, I think canonical can be superior. 301s or 404s for content that just needs to go.
@AlanBleiweiss: Canonical has value, but better to 301 if there are links to the dupe content URLs.
@dr_pete: My gut is that canonicals are carrying link-juice well, but generally agreed on links being the big factor.
@matt_storms: Canonicals are only good if done right. Tracking code in the URL should not be included.
@dr_pete: You can seriously f— up a canonical implementation. No doubt. I did – http://t.co/RpjCEqqV . I did that on purpose, of course. People hit us with terrifying questions about canonical in Moz Q&A, where I go “Sweet Jesus, don’t do it!” Then, I think – “What would happen if I did that?” That’s why I’m usually in trouble.
@ashbuckles: The exception is if you really want the user to experience a dynamic version but only index one page.
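A hedged sketch of both points above, with made-up URLs: the page may be served with tracking or session parameters, but the canonical should point at the clean, parameter-free version.

    <!-- Served at /widget?utm_source=newsletter&sessionid=123 -->
    <link rel="canonical" href="http://www.example.com/widget" />
    <!-- Wrong: carrying the tracking code into the canonical, e.g.
         href="http://www.example.com/widget?utm_source=newsletter" -->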
@AlanBleiweiss: Maile Ohye said you can have canonical & the new pagination rels on the same pages. I freaked. Most will implement it wrong.
@dr_pete: Yeah, that made me twitchy, too. I’m already running it on a client site.
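Sketched below is that combination for a hypothetical page 2 of a series. The usual mistake is canonicalizing every page in the series to page 1, which asks Google to ignore pages 2+ entirely; the pattern Google documented is a self-referential canonical (or one pointing to a view-all page, if one exists) alongside the pagination rels.

    <!-- In the <head> of http://www.example.com/shoes?page=2 -->
    <link rel="canonical" href="http://www.example.com/shoes?page=2" />
    <link rel="prev" href="http://www.example.com/shoes?page=1" />
    <link rel="next" href="http://www.example.com/shoes?page=3" />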
@TonyVerre: If you have to ask, then rel=canonical is very dangerous. Sometimes the old tools work best & are honored by all: the 301. IMO, of course.
@ashbuckles: Never seen the level of canonical issues on a Linux box that I’ve seen on a Windows box.
@dr_pete: .NET is crap for home-page canonicalization. I’d like to tell MS where to stick default.aspx.
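To illustrate the problem: IIS/.NET sites often serve identical content at / and /default.aspx. A 301 to the root is the cleanest fix; failing that, one stopgap (hypothetical URL) is a canonical on the default document:

    <!-- In the <head> of /default.aspx, which duplicates the root -->
    <link rel="canonical" href="http://www.example.com/" />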
@lyena: Does Bing recognize the canonical tag, or is it just Google?
@dr_pete: Some SEOs tell me it’s unreliable in Bing. Stefan Weitz at Bing told me they do honor it.
@ashbuckles: The canonical tag is supported by Google, Yahoo, and Bing.
@JadedTLC: Bing says they honor it, but I say G + B may not get it completely. They both say default to 301 when possible.
If you got hit by Panda, is moving your content to subdomains really a good fix? Why or why not?
I don’t think so. Short term, it may work, but there are too many long-term consequences. If it works, the subdomain becomes a fragment. So, what’s the value? Evaluate the content. The exception may be massive UGC, but in many cases I’d rather NOINDEX that. It’s a bit like hiding the drugs in the kitchen while the cops search your living room, IMO. More indexed pages does NOT = better. I felt strongly about that long before Panda.
@AlanBleiweiss: DO NOT move content to subdomains just because you were hit by Panda. Consolidate content onto fewer pages, do 301s to retain link value. Do NOT subdomain. Better to turn a massive #pandapood site into a tighter, more focused authority site than go to subdomains. Subdomains are mostly a fool’s errand, only to be implemented in rare non-Panda situations.
@AnnieCushing: I disagree. Subdomains can help bigger sites rank for more competitive head terms.
@AlanBleiweiss: That’s because you know what you’re doing. Most are idiots with half-baked information.
@AnnieCushing: Oh, okay. Then carry on.
@TonyVerre: Certainly can, but not as a switcheroo technique. If you have brand/topic identity, then absolutely.
@BrettASnyder: Couldn’t subdomains be a way to FIND the problem areas, though? A way to segment and prioritize?
@dr_pete: In a “shoot first and let God sort it out” way, probably. Seriously, it can make sense if you’re talking massive scale where you need time to regroup, but there are risks. It’s much easier to undo a META NOINDEX, though. Seriously underused tag, IMO. It’s not 100% honored, but I’ve found it beats alternatives, like robots.txt.
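For clarity, the tag being discussed, in a minimal hypothetical form. Unlike a robots.txt Disallow, which blocks crawling but can leave the URL in the index, this lets the crawler fetch the page while asking engines not to index it – and removing the tag reverses the decision.

    <!-- In the <head> of a low-value page you want out of the index;
         "follow" still lets link value flow through the page -->
    <meta name="robots" content="noindex, follow" />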
@ashbuckles: That’s an awful lot of risk and work to find the problem. Usually analysis will point you correctly pretty quick.
@BrettASnyder: Have you used manual exclude from WMT? I’ve used it infrequently, but it seems to do the trick if NOINDEX doesn’t.
@dr_pete: GWT parameter exclusion is pretty effective, too, but I use GWT methods sparingly.
@TonyVerre: It’s also not a very popular choice, but if you get hit, you insert dynamite into the foundation. IGNITE. Start over.
How should your link building change post-Panda? Does it need to change at all?
- Build relevant links to deep content, to improve Google’s perception of your long-tail quality
- Be even more mindful of cross-linking your own domains, especially if they’re similar
- Don’t forget your internal links and architecture. How you pass your own authority is critical
@AndrewOverseas: I’ve been using this tool: bit.ly/aFAccG to compare internal duplicate content. What’s a good ratio to shoot for?
@joshbachynski: My research indicates less than 40% post-panda.
@AnnieCushing: Screaming Frog is better for determining dupe content en masse, IMO. Uses hash values. I like Screaming Frog and GWT to evaluate internal linking.
@dr_pete: Dr. Screaming Frog is awesome for so many things. Well worth the paid version.
@AlanBleiweiss: Link building is vital to provide new confirmation of on-site corrective actions taken post-Panda. Link building in #SustainableSEO requires proper diversity of link source types and a low link-to-root ratio.