
How We Improved Our Conversion Rate by 72%

One of my favourite parts of creating web apps is being able to make subtle changes to our sales pages and see the impact they have on our signups.

With CareLogger being a side project, our time available for marketing is limited. We found conversion optimization to be a good way to spend 30 minutes once a week refining our pitch to customers. It also maximizes the number of signups so we have access to more people to conduct customer development with.

After these 3 experiments, our free signup conversion rate went from 14.5% to 25%, a 72% improvement.

1) Including a pain point in our headline

When we launched the site our headline said Keeping Tabs on Your Diabetes Just Got A Lot Easier:

While this explained a benefit of our product in very general terms, it didn’t hit on the real need people had when they decided to keep a logbook to record their diabetes. Making your life easier is what almost every product promises.

People weren’t looking for our software because it was easier than what they were currently doing. They wanted better insight into their illness so they could stay as healthy as possible.

So we changed our message to Maintain Your Optimal Health by Keeping Tabs on Your Diabetes. We also highlighted the most important benefit, “Optimal Health”, in green.

This change resulted in a 31% increase in conversion after 1000 trials (our metric was signing up). Not bad at all.

2) Changing our signup button from Green to Red

Earlier this week I came across an article by Performable that explained how changing their call-to-action from green to red increased conversion by 21%.

I had to try it out so that day I set up an A/B test on our homepage call-to-action.

So far we’ve had 600 participants and our conversion rate has increased by 34%.

We generally wait until 1000 trials but so far the results have been pretty significant.

We believe that it was so effective because we used green on many other parts of the landing page and the red just stands out so much more. It’s all about contrast.
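Whether a lift like this is statistically significant can be sanity-checked with a two-proportion z-test. Here’s a minimal sketch in plain Ruby; the trial counts are made up for illustration, not our actual data:

```ruby
# Two-proportion z-test: is variant B's conversion rate significantly
# different from variant A's? Pure Ruby, no gems needed.
def z_score(conv_a, n_a, conv_b, n_b)
  p_a = conv_a.to_f / n_a                       # variant A conversion rate
  p_b = conv_b.to_f / n_b                       # variant B conversion rate
  pooled = (conv_a + conv_b).to_f / (n_a + n_b) # pooled conversion rate
  se = Math.sqrt(pooled * (1 - pooled) * (1.0 / n_a + 1.0 / n_b))
  (p_b - p_a) / se
end

# Illustrative numbers only: ~14.5% baseline vs ~19% variant, 300 visitors each.
z = z_score(44, 300, 58, 300)
puts "z = #{z.round(2)}"   # |z| > 1.96 corresponds to ~95% confidence (two-sided)
```

With only 300 visitors per variant, |z| here comes out below the 1.96 threshold, which is exactly why waiting for more trials before calling a winner matters.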

3) Changing our button text from “Signup for Free” to “Get Started Now”

We also experimented with changing the button text on the call to action from “Signup for Free” to “Get Started Now”. This one had a smaller effect: after 1000 trials, our conversion rate improved by 7%.

The difference with this one is that “Get Started Now” sounds like an easier commitment than signing up. “Signing up” also carries connotations of paying (our app is free).

Next up, we’re going to try swapping the stock photo on the homepage from a male/female couple to a doctor and patient.

How to Scrape Websites in Ruby on Rails using scRUBYt!

Ever since launching Contrastream, I’ve wanted to create a script that would automatically add new albums to the site as they were released (saving me a lot of time). The idea was to scrape album titles from review sites and combine them with information from Last.fm.

The task seemed daunting at the time, so I never jumped in and tried it. After a year I finally wrote my first script and realized how wrong I was. Scraping sites with ruby is surprisingly easy.

For my first project, I decided to try the simple task of pulling product information from homedepot.com. Here’s how I did it after some trial and error. See the completed script: http://www.pastie.org/267676

The Plan

  • Go to http://homedepot.com
  • Search for “Hoover Vacuums”
  • Find the First Product in the Results
  • Click Product Details Link
  • Fetch Name + Description
  • Repeat for Each Product in the Results
  • Save Data to MySQL

What You Need

This is purely a ruby script but I used Rails to save the data using Mass Assignment + ActiveRecord (MySQL).

  1. Firefox + Firebug + XPath Checker
  2. Set up basic Rails application framework
  3. Install scRUBYt! plugin (sudo gem install scrubyt --include-dependencies)
  4. Set up the database.yml file
  5. Create homedepot.rb in [rails_root]/scripts folder

Now the fun part begins…

Fetching the site

Add this to the top of the script.

	require 'rubygems'
	require 'scrubyt'
	# load the Rails environment so ActiveRecord is available
	require File.dirname(__FILE__) + '/../config/environment'
	Scrubyt.logger = Scrubyt::Logger.new
	product_data = Scrubyt::Extractor.define do
	  fetch 'http://www.homedepot.com/'

These are the basic scRUBYt includes, plus the Rails environment so we can use ActiveRecord. After that, you direct the script to the site you’re scraping (called fetching).

Searching the site

Now that the page is loaded up, it’s time to search the site.

fill_textfield 'keyword', 'hoover vacuums' 

I highlighted the search input box in Firefox and viewed source. I found that the input name was “keyword”. The second string is the search query “hoover vacuums”.

Find the First Product in the Results (Create a Pattern)

The search results page shows a list of Hoover products. The next step is to create a pattern that the script will follow. In this case, the pattern is: for each product in the results I want to click and see the product details.

	product_row "//div[@class='product']" do

From viewing the source I found that each product is wrapped in div class=”product”. Now for each div with a class=”product” the next action will take place.

Click Product Details Link

The link to the product details page is the name of the product. For example, the link “Hoover Legacy Pet Rewind Vacuum” goes to the details page. Because the link that will be clicked is unique on each row, I need to use the xpath of the link to create a pattern. (Otherwise, I would have used product_link “View Details”.)

    product_link "/form/div[1]/p[1]/a", :generalize => false do

To find the xpath, right-click the first link in Firefox and click view XPath. This is the xpath for the link inside the div: “/form/div[1]/p[1]/a”. This shows that the link was wrapped in a form tag, followed by div, followed by a p (all within div class=”product”).

Now, scRUBYt has a pattern: for each product in the search results, click the link (located inside a form, div and p tag).

Each product in the results has the same HTML structure so the script can easily find the next product in the results.
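To see what these XPath patterns actually match, you can experiment outside scRUBYt with REXML from Ruby’s standard library, using a stripped-down, hypothetical copy of the result markup described above:

```ruby
require 'rexml/document'

# Hypothetical, stripped-down markup mirroring the structure described above:
# each product in a div.product, with the details link at form/div[1]/p[1]/a.
html = <<~HTML
  <html><body>
    <div class="product">
      <form><div><p><a href="/p/1">Hoover Legacy Pet Rewind Vacuum</a></p></div></form>
    </div>
    <div class="product">
      <form><div><p><a href="/p/2">Hoover WindTunnel Upright</a></p></div></form>
    </div>
  </body></html>
HTML

doc = REXML::Document.new(html)
links = []
# The same pattern the script follows: every div.product, then the link inside it.
REXML::XPath.each(doc, "//div[@class='product']") do |product|
  link = REXML::XPath.first(product, "form/div[1]/p[1]/a")
  links << link.text
end
puts links.inspect
```

In the script above the link path is written with a leading slash; here it is passed relative to each matched product element, which is the same idea.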

Grab Title + Description

Now that the script is in the product details page, it’s time to start scraping some data. I wanted two things from this page, the name and description.

	   product_details do
	     product_record "//p[@class='product-name']" do
	       name "Homelite 20 In. Homelite Corded Electric Mower"
	     end
	     parent "//div[@id='tab-features']" do
	       description "/p[1]"
	     end
	   end

First, from looking at the HTML, I found the name was wrapped in an element with the class “product-name”, so now I can grab its contents. I copied the title from the first product so scRUBYt can create a pattern for the rest of the products. Now the script knows exactly what data to kick back (the text within the “product-name” element).

The description is similar except I used the xpath “/p[1]” to create a pattern instead of copying the whole paragraph of text.

The “name” and “description” labels preceding the paths above are attached to the data when you export it later on.

Save to MySQL

The script is pretty much done; it now searches for Hoover vacuums, goes to each product page, and grabs the name/description. Now you can save the data to MySQL (or any ActiveRecord-supported DB).

The first step is to name your database columns after the labels you attached to the data above. In this case, I used “name” and “description”. Mass assignment will automatically save the data to the corresponding DB columns.

	product_data_hash = product_data.to_hash
	product_data_hash.each do |item|
	  @product = Product.create(item)
	end

Now, we save it to the DB by converting the data to a hash. Each item creates a DB entry and mass assignment automatically sorts the data into the columns.

Note: you could also save the data to XML or a basic text file.
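For instance, a plain-text export of the same hash array could look like this, using the CSV class from Ruby’s standard library (the sample data is hypothetical, shaped like the labelled output above):

```ruby
require 'csv'

# Hypothetical sample shaped like product_data.to_hash: one hash per
# product, keyed by the labels defined in the extractor.
product_data_hash = [
  { name: "Hoover Legacy Pet Rewind Vacuum", description: "Bagless upright vacuum" },
  { name: "Hoover WindTunnel Upright",       description: "Upright with HEPA filtration" }
]

CSV.open("products.csv", "w") do |csv|
  csv << %w[name description]                 # header row
  product_data_hash.each do |item|
    csv << [item[:name], item[:description]]  # one row per product
  end
end
```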

The Complete Script

You can view the complete script here: http://www.pastie.org/267654

To run the script, type “ruby homedepot.rb” at the command line.

7 Reasons Why My Social Music Site Never Took Off

I read posts on Hacker News about “Why my start-up failed” and found them interesting because they’re raw start-up stories, not the fairy tales that magazines like to write about. So here is my shot at it.

Last summer, I started developing a social music site, Contrastream. The problem: finding good indie music was difficult, a big reason why the 5 major labels control everything. The solution: a Digg-inspired site where you submit albums you want to share and vote up the ones you think are worth listening to; each album gets its own page with a YouTube music video, comments, etc.

7 Reasons Why it Never Took Off

  1. Design Perfection, I’ve heard it’s common for founders to spend too much time doing what they like or what they’re good at. For me, it was spending too much time perfecting the design: not only how it looked, but every aspect of the user experience. That would have been fine if I were a designer on a team, but there was a lot more I should have been doing: adding content, SEO, getting press, writing blog posts, and getting to know the early community, to name a few.
  2. Underestimated the “Cold Start” problem, I read an article by Bokado Social Design which talks about a big issue you face with a social site, especially one that relies on user-generated content. The value you provide to users centres on the content on the site, so to build a base of users you need to create a lot of content yourself so the first users can kick off the community. In my case, that meant always having interesting new indie albums on the site. But user contribution usually follows the 80/20/1 rule: 80% browse, 20% interact (comment), and 1% contribute (add albums). So even though I had 10,000 people visiting in the early months, I still ended up adding 75% of the content myself. It was very demanding and time-consuming, especially as a solo founder.
  3. Market Size vs Business Model, the market I was targeting, indie fans who knew a lot about music outside the usual review sites, was small. I had planned to monetize the site with ads, and in the process I realized that you need a LOT of people using your site every day to make money off ads, or a certain type of visitor who can’t tell the difference between AdSense and site links. The people I was targeting were also notorious for not clicking on ads (similar to the tech community).
  4. Bad launch, the launch of the site wasn’t planned well at all. I decided to use the “genius” marketing ploy of a private beta to create scarcity. I’d seen a bunch of other sites doing it and figured it was a good strategy. I was wrong. Private betas make sense for sites with complex technology or something that’s hard to scale, and those are about the only times you should do one. Once the site seemed ready to go live, I emailed Techcrunch about it. At the time, I didn’t understand the importance these blogs place on exclusives. I figured they would take a day or two and send some questions, but about an hour or two after I emailed them, Techcrunch posted about Contrastream. Not only was I unprepared to have thousands of people hit the site right away, I also never got the chance to fully pitch it to TC. Michael Arrington called the number I had posted in the whois and privacy policy, which was my co-founder’s cell phone; I had used his number because my phone was broken at the time. He called at about 11 pm while my friend was sleeping, and my friend had no idea who Michael Arrington, an internet celebrity to most tech founders, was. In his half-asleep state, he didn’t know what the call was about. I only found out about it a couple hours later when my friend woke up and decided to mention it… so we were off to a good start.
  5. Competition, we had a lot of high-quality competitors. Our site offered information about great albums, including community voting, but sites like Last.fm and Hype Machine were offering the actual music. That’s a competitive advantage that’s hard to beat, and we lacked the user base to win enough people over.
  6. Motivation, having to constantly find new content was probably the biggest hit to my motivation for the site. As much as I loved indie music, it was draining to constantly dig up new albums to post. It turned something I enjoyed into a chore, mainly because at the same time I was busy marketing the site, redesigning it, and attending classes. (I was in college for business 18 hours a week.)
  7. Co-founder, I had started the site with a friend; he was smart and knew a lot about business, but in the end he couldn’t contribute much to the site. One reason was that he isn’t technical, so he couldn’t help with the development of the site (and knowing who Michael Arrington was might have helped). The other was that he wasn’t really into indie music, so providing content and reaching out to users was a barrier.
  8. Bonus: Derivative Idea, I tried to avoid just repeating the common start-up mistakes Paul Graham has written about, but this one was pretty accurate. The idea for Contrastream wasn’t very innovative or original: it was inspired by Digg.com, which had applied the same model to news and articles on the web. I still believe applying that model to music is interesting and useful, but there were so many other me-too Digg clones. The software was simple to develop and could be applied to almost anything, and there were even open source PHP versions of it available on the web.

4 Important Things I learned in the Process

  1. Ruby on Rails + Mac, I had used PHP+MySQL professionally in my last year of high school but had become rusty over the years. I decided to learn Ruby on Rails to develop Contrastream and I’m completely convinced it was the right decision: it’s a framework that lets you build applications quickly and stay agile. Like most Rails developers, I also bought a Mac, another decision I’m completely convinced about; OS X has some of the best-designed software.
  2. How to Get Press, I learned the importance of having something interesting to say, how to leverage a product launch to get press, and the other basics like press releases and messages.
  3. Practical “Getting Real”, I’ve read this book twice and had the opportunity to apply almost everything with Contrastream. Great way to learn something.
  4. Niche Social Networks are not Businesses, they are communities, and they have to be built like communities. That means spending a lot of time building relationships, knowing everything about the niche, and finding users. You can monetize a niche social network, but it’s really important that you don’t approach it like a business.

Bottom line, I learned more from starting this site than I ever would have from college business classes or reading blogs. As a music site, it hasn’t exactly “failed”; it still pulls in around 3000 people a month, mainly from search engines. I now consider it a hobby site, and I’m looking forward to applying what I learned to my next start-up, Integrate.