This is a simple way to model your thinking about how to approach a website and breathe life into your analytics numbers. It’s how I approach things and I don’t think it’s terribly original, but I wanted to clarify my thinking on my approach, so here we are.

Let’s say we’re looking at the homepage of a site.

The What: Analytics and Clickstream Data

The ‘what’ consists of the clickstream data around the homepage: visits, uniques, pages per visit, time on page, time on site for visitors who landed there, bounce rate, conversion rate. There are more but those are the ones we usually look at.

We have this data for all the pages on our site, but it doesn’t usually come into play until there’s a problem. The problem usually arrives in the form of someone asking “how come our sales aren’t higher?” So you look, and maybe you see that the bounce rate for the homepage is really high, and hmm, that’s not good, is it?

You dig deeper and see that 80% of traffic is entering the site through the homepage and that the bounce rate for the homepage is 70%. Ouch. Track down the designer and publicly shame them. Kidding. A better idea would be to go ask the designer for the research they conducted during the design process. If you get a blank stare, then great, you’re going to earn your money.
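Those two numbers compound. A quick back-of-the-envelope multiplication (using the example figures above) shows how much of the site’s total traffic is evaporating on the homepage:

```python
# Share of ALL site visits that land on the homepage and bounce,
# using the example numbers above.
homepage_entry_share = 0.80  # 80% of traffic enters via the homepage
homepage_bounce_rate = 0.70  # 70% of those visits bounce

lost_share = homepage_entry_share * homepage_bounce_rate
print(f"{lost_share:.0%} of all visits see one page and leave")  # → 56% of all visits see one page and leave
```

In other words, more than half of everyone who comes to the site sees exactly one page. That’s the kind of framing that gets a client’s attention.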

OK, back to the what. So the analytics data you have will tell you what is going on. People are entering, seeing the homepage, and then leaving. Side note: if you’re using Google Tag Manager, then I recommend setting up an event timer listener so you know if they’re actually just bouncing right away or if they’re sticking around for a bit.

Now, at this point, you can go back to your client and tell them that their bounce rate is high, which will be of zero value to them since you don’t have a solution to address the problem. Besides, if they paid thousands of dollars for the design, they have a vested interest in being right about that decision, so you’ll be swimming upstream, psychologically speaking. It’s tough enough to change a homepage as it is. (Or you’ll be fighting the IKEA effect if they built it themselves, which made me laugh when I discovered that there’s a cognitive bias named after IKEA.)

I digress. My point was that you have to provide solutions, not just tell people that things suck.

The How

The How is where we figure out how the homepage is sucking. In what way does it suck, now that we know that it does indeed suck? There are so many ways that a page can suck, maybe infinite ways. Here, we want to make an educated guess about how people are leaving.

Are they scrolling down the page, not finding what they want, and then leaving? Are they leaving immediately, like in less than 3 seconds? Where is their mouse going? Are they clicking on stuff that isn’t a link and then leaving?
These questions are tough to answer, but heatmaps and session-replay tools (the ones that let you watch a recording of an individual user’s visit) can provide some assistance. Rarely will all visitor behavior fit into one nice clean bucket, so you have to look for patterns and make an educated guess.

The Why

This is where the money is made. Up to this point, we’ve been picking up clues but haven’t really hit on any big insights. The why is the insight-rich territory that can provide you answers and solid clues as to what is going on.

There are two basic approaches (that I can think of) for generating the why: listening and science. Listening is powerful as hell but only science will tell you if you were right.


By listening I mean listening to people that actually use the site. Just ask them and they’ll tell you the most amazing things like “I can’t find the pricing!” and “THIS SITE BLOWS WTF” and “where’s the schedule” and “tried to sign up but can’t find it” and “where do you ship” and “DO YOU TAKE VISA”.

Now, if you ask me, there’s more gold in those statements than in the analytics, and if I had to choose one or the other, I would go with listening instead of the analytics. For instance, how visitors phrase things can tell you a lot about their socioeconomic status and education level.

And if you listen long enough, you’ll start to see clear patterns emerge that will inform your own hypotheses and guide you to the most significant obstacles to conversion on the site.

Listening can be done in many ways at different price points:

  • Do in-person user testing in a “usability lab” (no need for an actual lab, a normal room with a camera will do).
  • Use a service like
  • Use an on-site survey program.
  • Have your mom use the site and talk about it as she does it.
  • Have the people at your company use the site.
  • Talk to the sales team, if there is one, and ask them what kind of things people say on the phone about the site or about their needs in general (their needs on the phone will be similar to their needs on the site).
  • Look back at the research that was originally done when the site was designed (one can hope).
  • Look at available research about your target market.


By science, I mean the scientific method. Formulate a hypothesis about why the bounce rate is high, then design an experiment to test that hypothesis. If you were able to use any of the methods in the Listening section, then you may already have some great hypotheses.
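To make “design an experiment” concrete, here’s a minimal sketch of how you’d check whether an A/B test result is statistically significant, using a hand-rolled two-proportion z-test and entirely made-up numbers (in practice, your testing tool does this math for you):

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing two conversion (or bounce) rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical test: original homepage vs. a variant with a clearer value prop.
# 300 of 1,000 visitors converted on A; 360 of 1,000 converted on B.
z, p = two_proportion_z_test(conv_a=300, n_a=1000, conv_b=360, n_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these numbers the p-value comes out well under 0.05, so you’d have decent evidence that the variant actually changed behavior rather than just getting lucky.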

If not, then you can generate hypotheses on your own.

Look at the homepage. Does it do a good job of establishing the value proposition of the site? Does it clearly communicate what your site is about and why the visitor should be there? These and many other questions can help you generate hypotheses and I recommend reading this book by Chris Goward for more ideas.

Wrapping it up

Now you have something to talk to the client about. “We see that your sales are low and we think that it’s because your homepage isn’t engaging enough and visitors are leaving immediately. But don’t worry, because we did some research and we think that x, y, and z are the reasons that visitors are bouncing. We’re going to set up an A/B test to see if that improves the bounce rate. If not, those hypotheses will be ruled out and we can continue testing until we fix the problem.”


The testing mentality is a philosophy that can be applied to much more than websites and really to any facet of life, whether it’s productivity, new habits, health and nutrition, sleep patterns, art, comedy, etc. For me, the testing mentality consists of four basic tenets, subject to revision:

1. We are often ignorant of why things succeed.

2. Domain knowledge, expertise, experience, and good theory can take us only so far.

3. When something new does succeed, we craft a story around it to cover our ignorance and make it seem like we knew all along that it would succeed.

4. Since we don’t know what will succeed in the future, controlled experimentation is the best way to find out.



I listened to this episode of EconTalk last month, in which Moises Velasquez-Manoff discussed the introduction of parasites into the body to mitigate the symptoms of allergies:

Velasquez-Manoff explores a recent hypothesis in the epidemiological literature theorizing the increase is a response to the overly hygienic environment in rich countries and the absence of various microbes and parasites. Velasquez-Manoff also considers whether reintroducing parasites into our bodies can have therapeutic effects, a possibility currently under examination through FDA trials. The conversation continues a theme of EconTalk–the challenge of understanding causation in a complex world.

This wasn’t the first time I’d heard of this concept–RadioLab did a story a few years ago (with a follow-up here) about a man (a non-scientist, I believe) who did something similar and claimed that it cured him of his allergies.

I have to admit that when I first heard the story on RadioLab, I really wanted to try it. I’ve had some pretty bad eye allergies for years that make my eyes dry and force me to blink and adjust my contacts way more than is socially acceptable. I’m still not going to try it–the idea of ingesting parasites, or allowing them to crawl through my skin intentionally just grosses me out and seems fraught with risk. But what I found really fascinating was that the EconTalk discussion led them to talk about how the scientists testing this stuff out were testing it out on themselves first.


Back when I was mainly doing SEO consulting, a lot of the people I worked with were small-business owners who knew they should be doing something with SEO and in many cases had already been doing something. But at a certain point, they realized that things weren’t working for them and they needed to bring in an expert, or it just became too time-consuming for them to handle on their own.

I think that CRO is about where SEO was in 2007–there are already some great companies that make excellent tools for it, some forward-thinking big companies are already doing it, but it hasn’t really hit the mainstream yet. Most major cities only have a few companies offering CRO as a service and most small-business owners have at best a vague notion that it’s something they should be thinking about.

So, if you’re a bit ahead of the curve and want to get started with CRO for your site, here’s an affordable way to start:

1. Set up Google Analytics on your site (free) and set up some goals to track your conversions.

2. Get the lowest-end option that CrazyEgg offers ($9/month)

3. Do a single-user test on ($49)

4. Get the lowest-end option that Optimizely offers for A/B testing ($19/month)

Total cost: $77

Optimizely and CrazyEgg offer free 30-day trials so you could probably start getting results before you even spend the full $77 and after 60 days of analyzing and testing, there’s a good chance that you’ll have made your money back.
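If you want a rough sense of when that $77 pays for itself, here’s a toy breakeven calculation. The traffic, conversion rate, and revenue-per-conversion numbers below are pure assumptions for illustration, so plug in your own:

```python
# Back-of-the-envelope: how much lift pays for the starter CRO stack?
monthly_visits = 10_000
conversion_rate = 0.02          # 2% baseline conversion rate
revenue_per_conversion = 50.0   # average order / lead value
monthly_cost = 77.0             # the starter stack above, one-time test fee folded into month one

baseline_revenue = monthly_visits * conversion_rate * revenue_per_conversion
# Relative lift in conversion rate needed for the tools to pay for themselves
breakeven_lift = monthly_cost / baseline_revenue
print(f"Baseline: ${baseline_revenue:,.0f}/mo; breakeven lift: {breakeven_lift:.2%}")
```

At those assumed numbers, even a sub-1% relative improvement in conversion rate covers the cost, which is why the barrier to entry here is lower than most people think.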



Yesterday I wrote about calculating the ROI of conversion rate optimization, with an attempt to help people figure out whether or not CRO is something they should be investing in for their websites.

But what about optimizing yourself? Self-experimentation and self-tracking have been growing in popularity, with tons of communities, blogs, and new products and apps aimed at improving self-measurement with the purpose of improving physical and mental health outcomes.

So, how do you know if it’s worth investing in for yourself?

The easy answer is to say “just try it for 30 days and see if it helps or if you find out anything interesting about yourself.”

We could leave it there but I do think it’s important to consider the time and energy spent on self-experimentation without focusing solely on the benefits. And it may be ironic to hear it from me because I have a pretty big Excel spreadsheet that I update 3-4 times per day. I track what I eat, the supplements I take, whether or not I exercised, what time I went to bed and woke up, and my energy level and anxiety throughout the day, in addition to a variety of other inputs.
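For the curious, the low-tech core of a tracking spreadsheet like that can be sketched as a simple CSV logger. The column names here are illustrative examples, not my actual columns:

```python
import csv
from datetime import date
from pathlib import Path

LOG = Path("self_tracking.csv")
FIELDS = ["date", "sleep_hours", "exercised", "energy_1_to_5", "anxiety_1_to_5", "notes"]

def log_entry(**values):
    """Append one observation to the CSV, writing a header row if the file is new."""
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({"date": date.today().isoformat(), **values})

# Example entry (hypothetical values)
log_entry(sleep_hours=7.5, exercised=True, energy_1_to_5=4, anxiety_1_to_5=2, notes="cut caffeine")
```

The point isn’t the tooling; a spreadsheet works just as well. The point is logging the same fields consistently, several times a day if needed, so patterns have a chance to show up.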

The thing is that while I love my spreadsheet and I love the insights that it has helped me formulate (more on those in the future), I only do it out of necessity.

After suffering for years from (previously undiagnosed but now officially diagnosed) sleep apnea, and a whole host of strange and disquieting neurological symptoms (vertigo, brain fog, mood swings, piercing anxiety, I could go on…) I came to a point where I had to start taking my health into my own hands.

For me, that meant a combination of tracking changes in sleep patterns and diet, finding doctors who could help and diagnose me correctly, working with what my doctors found and building on it with research of my own, and ultimately recording the data to measure the actual effects of what my doctors prescribed me.

For me, I came to self-experimentation from desperation, and frankly, I would be quite happy to never have needed it. If the spreadsheet wasn’t so valuable to me, then I wouldn’t bother to update it.

So, calculating the ROI for self-experimentation is a little bit trickier. Websites have clean revenue or lead numbers, while the metrics that measure your quality of life or productivity are more subjective and slippery.

The costs of self-experimentation come in the form of time spent collecting and analyzing the data and in the monetary costs of tracking devices.

Can you put a price on the quality of your life? Yes, I think so. At least at the margins. If you’re pretty happy with your health and vitality, then the time and energy spent on self-experimentation may not be worth the upside. If you’re currently miserable and want to get to the bottom of why, then self-experimentation is probably something you should look into.


Calculating the ROI of Conversion Rate Optimization: Back-of-the-envelope Edition

April 1, 2014

How much should you invest in conversion rate optimization (CRO)? Should you invest in it all? Will it be a big waste of money? As biased as I am, the answer for me is not always “yes, you should invest in CRO!” The opportunity costs of a single test or an entire testing program and […]


The 10 Weirdest A/B Tests?

March 28, 2014

I missed this webinar from Kissmetrics but luckily they put it online and shared the deck on slideshare. There are some good nuggets in there but honestly I was looking forward to some really strange but surprisingly successful tests. Instead, they suggested tests like ‘using images’ or ‘use free trials’. Neither of those are weird […]


Why aren’t people scrolling down the page?

March 27, 2014

One of the nifty features of CrazyEgg is the scroll map, which shows you how far down the page visitors are scrolling (hint: the further down the page, the fewer people there are). Sometimes you’ll see a particularly harsh drop off below the mystical fold, which could mean one of two things (that I can […]


Obligatory first post that nobody will read

March 27, 2014

Here it is. There, feel better?
