The Devver Blog

A Boulder startup improving the way developers work.

Lessons Learned

As we’ve begun to wrap things up here at Devver, we’ve had the chance to reflect a bit on our experience. Although shutting down is not the outcome we wanted, it’s clear to both Dan and me that doing a startup has been an amazing learning experience. While we still have a lot to learn, we wanted to share some of the most important lessons we’ve learned during this time.

The community

When we started Devver, we were hesitant to ask for feedback and help. We quickly found that people are incredibly helpful and generous with their time. Users were willing to take a chance and use our products while giving us valuable feedback. Fellow Rubyists gave us ideas and helped us with technical problems. Mentors made time for meetings and introduced us to others who could assist us. And other entrepreneurs, both new and seasoned, were happy to share stories, compare experiences, and offer support.

If you are working on a startup, don’t be afraid to ask for help! The vast majority of people want to help you succeed, provided that you respect them and their time. That means you need to prepare adequately (do your research and ask good questions), figure out their preferred methods of communication (e.g. don’t call if they prefer email), show up on time, don’t overburden them, and thank them. And when other people need your help, give back!

You can build awesome relationships with various communities on your own, but we strongly recommend joining a mentorship program like TechStars. The program accelerated the process of connecting with mentors, users, and other entrepreneurs by providing an amazing community during the summer (and to this day). The advice, introductions, and support have been simply incredible.

Founding team

Dan and I are both technical founders. Looking back, it would have been to our advantage to have a third founder who really loved the business aspect of running a startup.

There is a belief (among technical founders) that technical founders are sufficient for a successful startup. Or, put more harshly, that “you can teach a hacker business, but you can’t teach a businessman how to hack.” I don’t want to argue whether that’s true or not. Clearly there are examples of technical founders being sufficient to get a company going, but my point is that having solely technical founders is suboptimal. You can teach a hacker business, but you can’t make him or her get excited about it, which means the business side may not get the time or attention it deserves.

Hackers are passionate about, well, hacking. And so we tend to measure progress in terms of features completed or lines of code written. Clearly, code needs to be written, but ideally a startup would have a founder who is working on important non-technical tasks: talking with customers, measuring key metrics, developing distribution channels, etc. I’m not advocating that only one founder works on these tasks while technical founders ignore customer development – everyone needs to get involved. Rather, I’m pointing out that given a choice, technical founders will tend to solve problems technically and having a founder who has the opposite default is valuable.

Remote teams

We embraced working remotely: we hired Avdi to work in Pennsylvania while Dan and I lived in Boulder and later on, Dan moved to Washington, DC. There are many benefits to having a distributed team, but two stood out in our experience. First, we could hire top talent without having to worry about location (in fact, our flexibility regarding location was very attractive to most candidates we interviewed). Secondly, being in different locations allowed every team member to work with minimal distractions, which is invaluable when it comes to efficiently writing good code.

That said, communication was a challenge. To ensure we were all synced up, we had a daily standup as well as a weekly review. When Dan moved to DC, he and I scheduled another weekly meeting with no set agenda to just bring up all the issues, large and small, that were on our minds. We also all got together in the same location every few months to work in the same room and rekindle our team energy.

Also, pair programming was difficult to do remotely, and we never came up with a great solution. As a result, we averaged less than a day of pairing per week.

The most significant drawback to a remote team is the administrative hassle. It’s a pain to manage payroll, unemployment, insurance, etc. in one state. It’s a freaking nightmare to manage in three states (well, two states and a district), even though we paid a payroll service to take care of it. Apparently, once your startup gets larger, there are companies that will manage this with minimal hassle, but for a small team, it was a major annoyance and distraction.

Product development

Most of the mistakes we made developing our test accelerator and, later, Caliper boiled down to one thing: we should have focused more on customer development and finding a minimum viable product (MVP).

The first thing we worked on was our Ruby test accelerator. At the time, we thought we had found our MVP: we had made encouraging technical progress and we had talked to several potential customers who were excited about the product we were building. Anything simpler seemed “too simple” to be interesting.

Our mistake at that point was to go “heads down” and focus on building the accelerator while minimizing our contact with users and customers (after all, we knew how great it was, and time spent talking to customers was time we could be hacking!). We should have been asking, “Is there an even simpler version of this product that we can deliver sooner to learn more about pricing, market size, and technical challenges?”

If we had done so, we would have discovered:

  • whether the need was great enough (and the solution good enough) to convince people to open their wallets
  • that while a few users acutely felt the pain of slow tests, most didn’t care about acceleration. However, many of those users did want a “simpler” application: non-accelerated Ruby cloud testing.
  • that the primary technical challenge was not accelerating tests, but configuring servers for customers’ Rails applications. Not only did we spend time focusing on the wrong technical challenges, we also made architectural decisions that actually made it harder to solve this core problem.

After eventually discovering that setup and configuration was our primary adoption problem (and after trying and failing to implement various strategies to make it simple and easy), we tried to move to the other end of the spectrum. Caliper was designed to provide value with zero setup or configuration – users just provided a link to source code and instantly got valuable data.

Unfortunately, we again made the mistake of focusing on engineering first and customer development second. We released our first version to some moderate success and then proceeded to churn out features without really understanding customer needs. Only later, after finally engaging potential customers, did we realize that the market was too small and the price point too low for Caliper to sustain our company by itself.

Conclusion

This is by no means a comprehensive list, but it is our hope that other startups and founders-to-be can learn from our experiences, both mistakes and successes. Doing a startup has been an incredible learning experience for both Dan and me, and we look forward to learning more in the future – both first-hand and from the amazing group of entrepreneurs and hackers that we’ve been privileged enough to know.

Written by Ben

April 26, 2010 at 11:04 am

Improving Code using Metric_fu

Often, when people see code metrics they think, “that is interesting, but I don’t know what to do with it.” I think metrics are great on their own, but they become even more valuable when you can really use them to improve your project’s code. metric_fu provides a bunch of great metric information, but if you don’t know which parts of it are actionable, it’s merely interesting instead of useful.

One thing to keep in mind when looking at code metrics is that a single measurement may not be that interesting on its own; a metric’s trend over time gives you much more meaningful information. Showing this trending information is one of our goals with Caliper. Metrics act like a second set of eyes watching over the project, alerting you to problem areas before they get out of control. Working with code over time, it can be hard to keep everything in your head (I know I can’t). As the size of the code base increases, it becomes difficult to track all the places where duplication or complexity is building up. Addressing problem areas as code metrics reveal them keeps them from getting out of hand, making future additions to the code easier.

I want to show how metrics can drive changes and improve the code base by working on a real project. I figured there was no better place to look than pointing metric_fu at our own devver.net website source and fixing up some of the most notable problem areas. We have had our backend code under metric_fu for a while, but hadn’t been following the metrics on our Merb code. This, along with some spiked features that ended up turning into Caliper, led to some areas getting a little out of control.
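If you want to follow along on your own project, wiring up metric_fu at the time was just a require in your Rakefile (a minimal sketch; check the metric_fu README for the details of your version):

# Rakefile
require 'metric_fu'  # adds the metrics:* rake tasks to your project

Running rake metrics:all then generates the Flog, Flay, Reek, Roodi, Saikuro, and Churn reports in one shot.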

Flay Score before cleanup

When going through metric_fu, the first thing I wanted to work on was making the code a bit more DRY. The team and I were starting to notice a bit more duplication in the code than we liked. I brought up the Flay results for code duplication and found that four database models shared some of the same methods.

Flay highlighted the duplication. Since we are planning on making some changes to how we handle timestamps soon, it seemed like a good place to start cleaning up. Below are the methods that existed in all four models. A third method ‘update_time’ existed in two of the four models.

def self.pad_num(number, max_digits = 15)
  "%%0%di" % max_digits % number.to_i
end

def get_time
  Time.at(self.time.to_i)
end
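That pad_num format string is a little cryptic: it is a format string that builds another format string. A quick worked example (values illustrative):

"%%0%di" % 15   # => "%015i"  (a zero-pad-to-15-digits format)
"%015i" % 42    # => "000000000000042"

Fixed-width, zero-padded strings sort in the same order as the numbers they encode, which matters because SimpleDB can only compare strings.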

Nearly all of our DB tables store time in a way that can be sorted with SimpleDB queries. We wanted to change our time to be stored as UTC in the ISO 8601 format. Before changing to the ISO format, the easy first step was to pull these methods into a helper module and include it in all the database models.

module TimeHelper

  # Extend the including model with ClassMethods so pad_num is
  # available as a class method after `include TimeHelper`.
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    def pad_num(number, max_digits = 15)
      "%%0%di" % max_digits % number.to_i
    end
  end

  def get_time
    Time.at(self.time.to_i)
  end

  def update_time
    self.time = self.class.pad_num(Time.now.to_i)
  end

end
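With the included hook in place, each model needs only a single line (Project is a hypothetical model name for illustration):

class Project
  include TimeHelper
end

Project.pad_num(42)  # => "000000000000042" (class method via ClassMethods)
# instances now respond to get_time and update_time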

Besides reducing the duplication across the DB models, this also made it easy to pull in the third time method, update_time, which had existed in two of the DB models. All the DB time logic is now consolidated in one file, so changing the time format to UTC ISO 8601 will be a snap. While this is a trivial example of an obvious refactoring, it is easy to see how helper methods can end up duplicated across classes, and Flay comes in really handy at pointing out the duplication that accumulates over time.

Flog gives a score showing how complex the measured code is. The higher the score, the greater the complexity; the more complex code is, the harder it is to read, and it likely contains a higher defect density. After removing some duplication from the DB models, I found that our worst database model by Flog score was our MetricsData model. It included an incredibly high Flog score of 149 for a single method.

File                       Total score   Methods   Average score   Highest score
/lib/sdb/metrics_data.rb   327           12        27              149

The method in question was extract_data_from_yaml. It was doing too much work and was frequently being changed: it extracted the data from six different metric tools and created a summary of that data. After a little refactoring, extract_data_from_yaml dropped from a score of 149 to a series of smaller methods, the largest score being extract_flog_data! (33.6).

The method went from a sprawling 42 lines of code to a cleaner, smaller method of 10 lines plus a collection of helper methods that look something like the code below:

  def self.extract_data_from_yaml(yml_metrics_data)
    metrics_data = Hash.new {|hash, key| hash[key] = {}}
    extract_flog_data!(metrics_data, yml_metrics_data)
    extract_flay_data!(metrics_data, yml_metrics_data)
    extract_reek_data!(metrics_data, yml_metrics_data)
    extract_roodi_data!(metrics_data, yml_metrics_data)
    extract_saikuro_data!(metrics_data, yml_metrics_data)
    extract_churn_data!(metrics_data, yml_metrics_data)
    metrics_data
  end

  def self.extract_flog_data!(metrics_data, yml_metrics_data)
    metrics_data[:flog][:description] = 'measures code complexity'
    metrics_data[:flog]["average method score"] = Devver::Maybe(yml_metrics_data)[:flog][:average].value(N_A)
    metrics_data[:flog]["total score"]   = Devver::Maybe(yml_metrics_data)[:flog][:total].value(N_A)
    metrics_data[:flog]["worst file"] = Devver::Maybe(yml_metrics_data)[:flog][:pages].first[:path].fmap {|x| Pathname.new(x)}.value(N_A)
  end
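The Devver::Maybe wrapper used above is an internal helper that lets us chain lookups into possibly-missing YAML data without nil checks: fmap applies a block only when a value is present, and value unwraps with a default (N_A here). We haven’t shown its implementation, but a minimal sketch with the behavior those calls assume might look like this:

module Devver
  # Hypothetical reconstruction, not necessarily our exact implementation.
  class Maybe
    def initialize(value)
      @value = value
    end

    # Chained lookups propagate emptiness instead of raising.
    def [](key)
      Maybe.new(@value && @value[key])
    end

    def first
      Maybe.new(@value && @value.first)
    end

    # Apply the block only when a value is present.
    def fmap
      @value.nil? ? self : Maybe.new(yield(@value))
    end

    # Unwrap, falling back to a default when empty.
    def value(default = nil)
      @value.nil? ? default : @value
    end
  end

  def self.Maybe(value)
    Maybe.new(value)
  end
end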

Churn gives you an idea of which files might be in need of refactoring. Often, if a file changes a lot, it means the code is doing too much and would be more stable and reliable if broken up into smaller components. Looking through our churn results, it looks like we might need another layout to accommodate some of the different styles on the site. Another thing that jumps out is that both the TestStats and Caliper controllers have fairly high churn. The Caliper controller has been growing fairly large, as it has been doing double duty for user-facing features and admin features, which should be split up. TestStats is admin controller code that has also been growing in size and should be split into more isolated cases.

Churn results

Churn gave me an idea of where it might be worth focusing my effort. Diving into the other metrics made it clear that the Caliper controller needed some attention.

The Flog, Reek, and Roodi Scores for Caliper Controller:

File                          Total score   Methods   Average score   Highest score
/app/controllers/caliper.rb   214           14        15              42

Reek before cleanup

Roodi Report
app/controllers/caliper.rb:34 - Method name "index" has a cyclomatic complexity is 14.  It should be 8 or less.
app/controllers/caliper.rb:38 - Rescue block should not be empty.
app/controllers/caliper.rb:51 - Rescue block should not be empty.
app/controllers/caliper.rb:77 - Rescue block should not be empty.
app/controllers/caliper.rb:113 - Rescue block should not be empty.
app/controllers/caliper.rb:149 - Rescue block should not be empty.
app/controllers/caliper.rb:34 - Method name "index" has 36 lines.  It should have 20 or less.

Found 7 errors.

Roodi and Reek both tell you about design and readability problems in your code. The screenshot of our Reek ‘code smells’ in the Caliper controller shows how it had gotten out of hand: the code smells filled an entire browser page! Roodi similarly had many complaints about the Caliper controller, and Flog showed the file was getting more complex than it should be. After picking off some of the worst Roodi and Reek complaints and splitting up methods with high Flog scores, the code became easily readable and understandable at a glance. In fact, I nearly cut the Reek complaints in half for the controller.
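As one concrete example, five of the seven Roodi errors were empty rescue blocks. The fix is mechanical; this sketch uses a hypothetical fetch_stats method rather than our actual controller code:

# Before: an empty rescue silently swallows every failure.
begin
  stats = fetch_stats(project)
rescue
end

# After: rescue narrowly, log the failure, and degrade explicitly.
begin
  stats = fetch_stats(project)
rescue StandardError => e
  Merb.logger.warn("stats unavailable: #{e.message}")
  stats = nil
end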

Reek after cleanup

Refactoring one controller, which had been quickly hacked together and had grown out of control, brought it from a dizzying 203 LOC down to 138 LOC. The metrics drove me to refactor long methods (one 52-LOC method became 3 methods, the largest being 23 LOC), rename unclear variables (s => stat, p => project), and move some helper methods out of the controller into the helper class where they belong. Yes, all of these refactorings and good code designs can be done without metrics, but it is easy to overlook bad code smells when they start small; metrics can give you an early warning that a section of code is becoming unmanageable and likely prone to higher defect rates. The smaller file was a huge improvement in terms of cyclomatic complexity, LOC, code duplication, and, more importantly, readability.

Obviously I think code metrics are cool, and that your projects can be improved by paying attention to them as part of the development lifecycle. I wrote about metric_fu so that anyone can try these metrics out on their own projects. I think metric_fu is awesome, and my interest in Ruby tools is part of what drove us to build Caliper, which is really the easiest way to try out metrics on your project. Currently, you can think of it as hosted metric_fu, but we are hoping to go even further and make the metrics clearly actionable to users.

In the end, yep, this is a bit of a plug for a product I helped build, but that is really because I think code metrics can be a great tool to help anyone with their development. So submit your repo and give Caliper hosted Ruby metrics a shot. We are trying to make metrics more actionable and useful for all Ruby developers out there, so we would love to hear from you with any ideas about how to improve Caliper – please contact us.

Written by DanM

October 27, 2009 at 10:30 pm

Announcing the Devver API (alpha version 0.1)

Devver is great if you have a large test or spec suite that benefits from massive parallelism. But what if you have a project with fast tests and you just want to run them regularly? In this case, our existing service may be more than you need.

To further our vision of enabling Ruby developers to easily and quickly run their tests in the cloud, we’ve recently released the very first version of an HTTP-based Devver API. Right now, it offers a single service: a GitHub-compatible webhook which will cause Devver to do a continuous integration-style build of your project and email you the results.

The API is still in its infancy, but we already have some cool features:

  • Support for MySQL, PostgreSQL, and SQLite databases
  • A RubyGem installation system that (by default) uses your config/environment.rb and config/environments/test.rb files to install gems
  • An extremely flexible hooks system so you can customize the way we install gems, prepare your database, run your ‘build’ command, and notify you.

We’re releasing very early, so there are some rough spots (notably, the output is too verbose and the setup is too complicated). But we’re also releasing often, so please let us know what you think and which features you’d like to see added.

If you have a project on GitHub, please try out the API and send us feedback. We’d really appreciate it!

Written by Ben

September 16, 2009 at 11:55 am

Posted in Devver

Devver is now in public beta!

We’re very happy to announce that today we released our public beta!

This is a big step forward for us and we’re extremely excited about this release. Please try it out and give us feedback on our support site!

Written by Ben

August 17, 2009 at 7:17 pm

Posted in Devver

Screencast: Setting up Devver on a non-Rails project

In order to show how easy it is to configure Devver for a project, we’ve made a short screencast to walk you through the steps. We’ve used DataMapper as an example application. As you can see, it only takes a few minutes to set up Devver and then the specs complete in a fraction of the time. In fact, the whole process – setup and Devver run – takes less time than running ‘rake spec’.

In order to see the commands clearly, you’ll want to enter full-screen mode. Or, if you prefer, you can download the high-quality version.

Written by Ben

July 9, 2009 at 1:02 pm

Posted in Devver

Devver adds Postgres and SQLite database support

We are working hard to quickly expand our compatibility with Ruby projects. With that goal driving us, we are happy to announce support for Postgres and SQLite databases. With the addition of these database options, along with our existing support for MySQL, Devver now supports the most popular databases commonly used with Ruby. These three are the default databases ActiveRecord is tested against, and we expect they will cover the majority of the Ruby community.

To begin working with Postgres or SQLite on Devver, all you need to do is have a database.yml with the test environment set to the adapter of your choice (see the sketch below). If we don’t support your favorite database, you can still request a beta invite and let us know which database you want us to support. If we just added support for your database, perhaps we can speed up your project on Devver, so request a beta invite.
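For example, a minimal test entry in database.yml might look like this (the database name is illustrative):

test:
  adapter: postgresql  # or sqlite3, or mysql
  database: myapp_test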

Written by DanM

July 6, 2009 at 12:24 pm

Posted in Development, Devver, Ruby, Testing

Devver.net has a new look!

Tonight we just launched the brand new version of Devver at http://devver.net. It’s not perfect (yeah, yeah, we know the blog doesn’t match – that should be fixed in the next week or so), but we’re trying to “release early, release often.” Let us know how you like the new look and how we can improve it!

Written by Ben

May 14, 2009 at 8:40 pm

Posted in Uncategorized
