The Devver Blog

A Boulder startup improving the way developers work.


Speeding up multi-browser Selenium Testing using concurrency

I haven’t used Selenium for a while, so I took some time to dig into the options to get some mainline tests running against Caliper in multiple browsers. I wanted to be able to test a variety of browsers against our staging server before pushing new releases. Eventually this could be integrated into Continuous Integration (CI) or Continuous Deployment (CD).

The state of Selenium testing for Rails is currently in flux: there are multiple gems and frameworks to choose from. I decided to investigate several options to determine which is the best approach for our tests.

selenium-on-rails

I originally wrote a couple of example tests using the selenium-on-rails plugin. This allows you to browse to your local development web server at ‘/selenium’ and run tests in the browser using the Selenium test runner. It is the simplest and most basic Selenium mode, but it obviously has limitations. It wasn’t easy to run many different browsers using this plugin, or to use it with Selenium-RC, and the plugin was fairly dated. This led me to try the next simplest thing, selenium-client.

open '/'
assert_title 'Hosted Ruby/Rails metrics - Caliper'
verify_text_present 'Recently Generated Metrics'

click_and_wait "css=#projects a:contains('Projects')"
verify_text_present 'Browse Projects'

click_and_wait "css=#add-project a:contains('Add Project')"
verify_text_present 'Add Project'

type 'repo','git://github.com/sinatra/sinatra.git'
click_and_wait "css=#submit-project"
verify_text_present 'sinatra/sinatra'
wait_for_element_present "css=#hotspots-summary"
verify_text_present 'View full Hot Spots report'


selenium-client

I quickly converted my selenium-on-rails tests to selenium-client tests, with some small modifications. To run tests using selenium-client, you need to run a Selenium-RC server. I set up Sauce RC on my machine and was ready to go. I configured the tests to run locally on a single browser (Firefox). Once that was working, I wanted to run the same tests in multiple browsers. I found that it was easy to dynamically create a test for each browser type and run them using Selenium-RC, but that it was incredibly slow, since tests run one after another and not concurrently. Also, you need to install each browser (plus multiple versions) on your machine. This led me to use Sauce Labs’ OnDemand.

browser.open '/'
assert_equal 'Hosted Ruby/Rails metrics - Caliper', browser.title
assert browser.text?('Recently Generated Metrics')

browser.click "css=#projects a:contains('Projects')", :wait_for => :page
assert browser.text?('Browse Projects')

browser.click "css=#add-project a:contains('Add Project')", :wait_for => :page
assert browser.text?('Add Project')

browser.type 'repo','git://github.com/sinatra/sinatra.git'
browser.click "css=#submit-project", :wait_for => :page
assert browser.text?('sinatra/sinatra')
browser.wait_for_element "css=#hotspots-summary"
assert browser.text?('View full Hot Spots report')


Using Selenium-RC and Sauce Labs Concurrently

Running on all the browsers Sauce Labs offers (12) took 910 seconds, which is cool, but way too slow. Since I am just running the same tests over in different browsers, I decided it should be done concurrently. If you are running your own Selenium-RC server, this will slow down a lot as your machine has to start and run all of the various browsers, so this approach isn’t recommended on your own Selenium-RC setup unless you configure Selenium-Grid. If you are using Sauce Labs, the tests run concurrently with no slowdown. After switching to concurrently running my Selenium tests, run time went down to 70 seconds.
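The concurrency itself needs nothing exotic. Here is a minimal sketch of the idea using plain Ruby threads (the full example below uses the threadify gem instead; the browser specs here are placeholders):

# Run the same Selenium suite against several browsers at once,
# one thread per browser.
browsers = ['*firefox', '*iexplore', '*safari']  # placeholder specs

threads = browsers.map do |browser_spec|
  Thread.new(browser_spec) do |spec|
    # In the real test below this is run_browser(spec, block); here we
    # just simulate a per-browser run.
    puts "running suite in #{spec}"
  end
end
threads.each(&:join)  # wait for every browser run to finish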

My main goal was to make it easy to write pretty standard tests a single time, but be able to change the number of browsers I ran them on and the server I targeted. One approach that has been offered explains how to set up Cucumber to run Selenium tests against multiple browsers. This basically runs the rake task over and over for each browser environment, as in the sketch below.
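Concretely, that serial approach might look something like this (the task and environment variable names are hypothetical, not taken from the linked article):

# Run the whole Selenium suite once per browser, one rake invocation
# after another.
BROWSERS = %w[firefox iexplore safari]

desc "Run the Selenium suite serially against each browser"
task :selenium_all_browsers do
  BROWSERS.each do |browser|
    sh "SELENIUM_BROWSER=#{browser} rake test:selenium"
  end
end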

Although this works, I also wanted to run all my tests concurrently. One option would be to concurrently run all of the Rake tasks and join the results. Joining the results is difficult to do cleanly, or you end up outputting the full rake test output once per browser (ugly when running 12 times). I took a slightly different approach which just wraps any Selenium-based test in a run_in_browsers block. Depending on the options set, the code can run a single browser against your locally hosted application, or many browsers against a staging or production server. Then simply create a separate Rake task for each of the configurations you expect to use (against local Selenium-RC and Sauce Labs OnDemand).

I am pretty happy with the solution I have for now. It is simple and fast and gives another layer of assurance that Caliper is running as expected. Adding additional tests is simple, as is integrating the solution into our CI stack. There are likely many ways to solve the concurrent Selenium testing problem, but I was able to go from no Selenium tests to a fast multi-browser solution in about a day, which works for me. There is a downside to the approach: the error output isn’t exactly the same when run concurrently, but it is pretty close. As opposed to seeing multiple errors for each test, you get a single error per test which includes the details about which browsers the error occurred on.

In the future I would recommend closely watching Webrat and Capybara which I would likely use to drive the Selenium tests. I think the eventual merge will lead to the best solution in terms of flexibility. At the moment Capybara doesn’t support selenium-RC, and the tests I originally wrote didn’t convert to the Webrat API as easily as directly to selenium-client (although setting up Webrat to use Selenium looks pretty simple). The example code given could likely be adapted easily to work with existing Webrat tests.

namespace :test do
  namespace :selenium do

    desc "selenium against staging server"
    task :staging do
      exec "bash -c 'SELENIUM_BROWSERS=all SELENIUM_RC_URL=saucelabs.com SELENIUM_URL=http://caliper-staging.heroku.com/  ruby test/acceptance/walkthrough.rb'"
    end

    desc "selenium against local server"
    task :local do
      exec "bash -c 'SELENIUM_BROWSERS=one SELENIUM_RC_URL=localhost SELENIUM_URL=http://localhost:3000/ ruby test/acceptance/walkthrough.rb'"
    end
  end
end


require "rubygems"
require "test/unit"
gem "selenium-client", ">=1.2.16"
require "selenium/client"
require 'threadify'

class ExampleTest < Test::Unit::TestCase

  # NOTE: the code between the class definition and the branch below was
  # mangled by HTML escaping when this post was archived. These helpers
  # are a plausible reconstruction driven by the SELENIUM_* environment
  # variables set in the Rake tasks above; substitute your own Sauce Labs
  # credentials and browser specs.
  def selenium_rc_url
    ENV['SELENIUM_RC_URL'] || 'localhost'
  end

  def test_url
    ENV['SELENIUM_URL'] || 'http://localhost:3000/'
  end

  def browsers
    if ENV['SELENIUM_BROWSERS'] == 'all'
      # Sauce Labs expects one JSON spec string per browser (the regexes
      # below pull the browser name and version back out of these).
      ['{"username": "YOUR_USER", "access-key": "YOUR_KEY", "os": "Windows 2003", "browser": "firefox", "browser-version": "3.6"}',
       '{"username": "YOUR_USER", "access-key": "YOUR_KEY", "os": "Windows 2003", "browser": "iexplore", "browser-version": "8"}']
    else
      ['*firefox']
    end
  end

  def run_in_all_browsers(&block)
    if browsers.length > 1
      errors = []
      browsers.threadify(browsers.length) do |browser_spec|
        begin
          run_browser(browser_spec, block)
        rescue => error
          type = browser_spec.match(/browser\": \"(.*)\", /)[1]
          version = browser_spec.match(/browser-version\": \"(.*)\",/)[1]
          errors << {:browser => type, :version => version, :error => error}
        end
      end
      message = ""
      errors.each_with_index do |error, index|
        message +="\t[#{index+1}]: #{error[:error].message} occurred in #{error[:browser]}, version #{error[:version]}\n"
      end
      assert_equal 0, errors.length, "Expected zero failures or errors, but got #{errors.length}\n #{message}"
    else
      run_browser(browsers[0], block)
    end
  end

  def run_browser(browser_spec, block)
    browser = Selenium::Client::Driver.new(
                                           :host => selenium_rc_url,
                                           :port => 4444,
                                           :browser => browser_spec,
                                           :url => test_url,
                                           :timeout_in_second => 120)
    browser.start_new_browser_session
    begin
      block.call(browser)
    ensure
      browser.close_current_browser_session
    end
  end

  def test_basic_walkthrough
    run_in_all_browsers do |browser|
      browser.open '/'
      assert_equal 'Hosted Ruby/Rails metrics - Caliper', browser.title
      assert browser.text?('Recently Generated Metrics')

      browser.click "css=#projects a:contains('Projects')", :wait_for => :page
      assert browser.text?('Browse Projects')

      browser.click "css=#add-project a:contains('Add Project')", :wait_for => :page
      assert browser.text?('Add Project')

      browser.type 'repo','git://github.com/sinatra/sinatra.git'
      browser.click "css=#submit-project", :wait_for => :page
      assert browser.text?('sinatra/sinatra')
      browser.wait_for_element "css=#hotspots-summary"
      assert browser.text?('View full Hot Spots report')
    end
  end

  def test_generate_new_metrics
    run_in_all_browsers do |browser|
      browser.open '/'
      browser.click "css=#add-project a:contains('Add Project')", :wait_for => :page
      assert browser.text?('Add Project')

      browser.type 'repo','git://github.com/sinatra/sinatra.git'
      browser.click "css=#submit-project", :wait_for => :page
      assert browser.text?('sinatra/sinatra')

      browser.click "css=#fetch"
      browser.wait_for_page
      assert browser.text?('sinatra/sinatra')
    end
  end

end



Written by DanM

April 8, 2010 at 10:07 am

Making Rack::Reloader work with Sinatra

According to the Sinatra FAQ, source reloading was taken out of Sinatra in version 0.9.2 due to “excess complexity” (in my opinion, that’s a great idea, because it’s not a feature that needs to be in a minimal web framework like Sinatra). Also, according to the FAQ, Rack::Reloader (included in Rack) can be added to a Sinatra application to do source reloading, so I decided to try it out.

Setting up Rack::Reloader is easy:

require 'sinatra'
require 'rack'

configure :development do
  use Rack::Reloader
end

get "/hello" do
  "hi!"
end
$ ruby hello.rb
== Sinatra/0.9.4 has taken the stage on 4567 for development with backup from Thin
>> Thin web server (v1.2.4 codename Flaming Astroboy)
>> Maximum connections set to 1024
>> Listening on 0.0.0.0:4567, CTRL+C to stop
[on another terminal]
$ curl http://localhost:4567/hello
hi!

If you add another route, you can access it without restarting Sinatra:

get "/goodbye" do
  "bye!"
end
$ curl http://localhost:4567/goodbye
bye!

But what happens when you change the contents of a route?

get "/hello" do
  "greetings!"
end
$ curl http://localhost:4567/hello
hi!

You still get the old value! What is going on here?

Rack::Reloader simply looks at all files that have been required and, if they have changed on disk, re-requires them. So each Sinatra route is re-evaluated when a reload happens.
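To picture what the middleware is doing, here is a minimal sketch of the idea (an illustration, not the actual Rack::Reloader source):

# Before each request, re-load any previously required file whose
# modification time has changed since we last saw it.
class NaiveReloader
  def initialize(app)
    @app = app
    @mtimes = {}
  end

  def call(env)
    $LOADED_FEATURES.each do |file|
      next unless File.exist?(file)
      mtime = File.mtime(file)
      load(file) if @mtimes[file] && mtime > @mtimes[file]
      @mtimes[file] = mtime
    end
    @app.call(env)
  end
end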

However, identical Sinatra routes do NOT override each other. Rather, the first route that is evaluated is used (more precisely, all routes are appended to a list and the first matching one is used, so additional identical routes are never run).

We can see this with a simple example:

require 'sinatra'

get "/foo" do
 "foo"
end

get "/foo" do
 "bar"
end
$ curl http://localhost:4567/foo
foo   # The result is 'foo', not 'bar'

Clearly, Rack::Reloader is not very useful if you can’t change the contents of any route. The solution is to throw away the old routes when the file is reloaded using Sinatra::Application.reset!, like so:

configure :development do
  Sinatra::Application.reset!
  use Rack::Reloader
end
$ curl http://localhost:4567/hello
greetings!

Success!

A word of caution: you MUST call reset! very early in your file – before you add any middleware, do any other configuration, or add any routes.

This method has worked well enough for our Sinatra application. However, code reloading is always tricky and is bound to occasionally produce some weird results. If you want to significantly reduce the chances for strange bugs (at the expense of code loading time), try Shotgun or Rerun. Happy reloading!

Written by Ben

December 21, 2009 at 3:20 pm

Announcing Caliper Community Statistics

For the past few months, we’ve been building Caliper to help you easily generate code metrics for your Ruby projects. We’ve recently added another dimension of metrics information: community statistics for all the Ruby projects that are currently in Caliper.

The idea of community statistics is two-fold. From a practical perspective, you can now compare your project’s metrics to the community’s. For example, Flog measures the complexity of methods. Many people wonder what exactly defines a good Flog score for an individual method. In Jake Scruggs’ opinion, a score of 0-10 is “Awesome”, while a score of 11-20 is “Good enough”. That sounds correct, but with Caliper’s community metrics, we can also compare the average Flog scores for entire projects to see what defines a good average score.

To do so, we calculate the average Flog method score for each project and plot those averages on a histogram, like so:

[Image: flog_average_method_histogram]
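The aggregation behind the graph is simple. As a hypothetical sketch (the method and data layout are illustrative, not Caliper’s code), where each project is represented by the list of its per-method Flog scores:

# Average each project's per-method Flog scores, then bucket the
# averages into bucket_size-wide bins for a histogram.
def flog_histogram(projects, bucket_size = 2)
  averages = projects.map do |scores|
    scores.inject(0) { |sum, s| sum + s } / scores.length.to_f
  end
  averages.group_by { |avg| (avg / bucket_size).floor * bucket_size }.
           sort.map { |bucket, values| [bucket, values.length] }
end

flog_histogram([[4.0, 9.5], [12.1, 8.2, 30.0]])  # => [[6, 1], [16, 1]]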

Looking at the data, we see that a lot of projects have an average Flog score between 6 and 12 (the mean is 10.3 and the max is 21.3).

If your project’s average Flog score is 9, does that mean it has only “Awesome” methods, Flog-wise? Well, remember that we’re looking at the average score for each project. I suspect that in most projects, lots of tiny methods are pulling down the average, but there are still plenty of big, nasty methods. It would be interesting to look at the community statistics for maximum Flog score per project or see a histogram of the Flog scores for all methods across all projects (watch this space!).

Since several of the metrics (like Reek, which detects code smells) have scores that grow in proportion to the number of lines of code, we divide the raw score by each project’s lines of code. As a result, we can sensibly compare your project to other projects, no matter what the difference in size.

The second reason we’re calculating community statistics is so we can discover trends across the Ruby community. For example, we can compare the ratio of lines of application code to test code. It’s interesting to note that a significant portion of projects in Caliper have no tests, but that, for the projects that do have tests, most of them have a code:test ratio in the neighborhood of 2:1.

[Image: code_to_test_ratio_histogram]

Other interesting observations from our initial analysis:
* A lot of projects (mostly small ones) have no Flay duplications.
* Many smaller projects have no Reek smells, but the average project has about 1 smell per 9 lines of code.

Want to do your own analysis? We’ve built a scatter plotter so you can see if any two metrics have any correlation. For instance, note the correlation between code complexity and code smells.

Here’s a scatter plot of that data (zoomed in):

[Image: scatter_plot]

Over the coming weeks, we’ll improve the graphs we have and add new graphs that expose interesting trends. But we need your help! Please let us know if you spot problems, have ideas for new graphs, or have any questions. Additionally, please add your project to Caliper so it can be included in our community statistics. Finally, feel free to grab the raw stats from our alpha API* and play around yourself!

* Quick summary:

curl http://api.devver.net/metrics

for JSON,

curl -H 'Accept:text/x-yaml' http://api.devver.net/metrics

for YAML. The API is under development, so please send us feedback!

Written by Ben

November 19, 2009 at 8:37 am

Posted in Development, Ruby, Tools


Improving Code using Metric_fu

Often, when people see code metrics they think, “that is interesting, but I don’t know what to do with it.” I think metrics are great, but when you can really use them to improve your project’s code, that makes them even more valuable. metric_fu provides a bunch of great metric information, which can be very useful. But if you don’t know which parts of it are actionable, it’s merely interesting instead of useful.

One thing to keep in mind when looking at code metrics is that a single measurement may not be that interesting on its own; looking at a metric’s trend over time gives you much more meaningful information. Showing this trending information is one of our goals with Caliper. Metrics can watch over your project like a second set of eyes, alerting you to problem areas before they get out of control. Working with code over time, it can be hard to keep everything in your head (I know I can’t). As the size of the code base increases, it can be difficult to keep track of all the places where duplication or complexity is building up. Addressing problem areas as they are revealed by code metrics can keep them from getting out of hand, making future additions to the code easier.

I want to show how metrics can drive changes and improve the code base by working on a real project. I figured there was no better place to look than pointing metric_fu at our own devver.net website source and fixing up some of the most notable problem areas. We have had our backend code under metric_fu for a while, but hadn’t been following the metrics on our Merb code. This, along with some spiked features that ended up turning into Caliper, led to some areas getting a little out of control.

[Image: Flay score before cleanup]

When going through metric_fu, the first thing I wanted to work on was making the code a bit more DRY. The team and I were starting to notice a bit more duplication in the code than we liked. I brought up the Flay results for code duplication and found that four database models shared some of the same methods.

Flay highlighted the duplication. Since we are planning on making some changes to how we handle timestamps soon, it seemed like a good place to start cleaning up. Below are the methods that existed in all four models. A third method ‘update_time’ existed in two of the four models.

def self.pad_num(number, max_digits = 15)
  "%%0%di" % max_digits % number.to_i
end

def get_time
  Time.at(self.time.to_i)
end

Nearly all of our DB tables store time in a way that can be sorted with SimpleDB queries. We wanted to change our time to be stored as UTC in the ISO 8601 format. Before changing to the ISO format, it was easy to pull these methods into a helper module and include it in all the database models.

module TimeHelper

  # Hook so that including this module also adds pad_num as a class
  # method on the including model (needed for self.class.pad_num below).
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    def pad_num(number, max_digits = 15)
      "%%0%di" % max_digits % number.to_i
    end
  end

  def get_time
    Time.at(self.time.to_i)
  end

  def update_time
    self.time = self.class.pad_num(Time.now.to_i)
  end

end

Besides reducing the duplication across the DB models, this also made it much easier to share another time method, update_time, which had existed in two of the DB models. It consolidated all the DB time logic into one file, so changing the time format to UTC ISO 8601 will be a snap. While this is a trivial example of an obvious refactoring, it is easy to see how helper methods can end up duplicated across classes, and Flay can come in really handy at pointing out duplication that creeps in over time.
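As an aside, once the logic lives in one helper, the planned switch to ISO 8601 really is a one-line change. A sketch using the stdlib time extensions (illustrative, not our model code):

require 'time'  # adds Time#iso8601

def update_time
  # UTC ISO 8601 timestamps (e.g. "2009-10-27T22:30:00Z") sort correctly
  # as plain strings, so sortable SimpleDB queries keep working without pad_num.
  self.time = Time.now.utc.iso8601
end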

Flog gives a score showing how complex the measured code is; the higher the score, the greater the complexity. More complex code is harder to read and likely contains a higher defect density. After removing some duplication from the DB models, I found our worst database model based on Flog scores was our MetricsData model, which included an incredibly high Flog score of 149 for a single method.

File                      Total score  Methods  Average score  Highest score
/lib/sdb/metrics_data.rb  327          12       27             149

The method in question was extract_data_from_yaml, and after a little refactoring it was easy to take extract_data_from_yaml from a score of 149 down to a series of smaller methods, the largest being extract_flog_data! at 33.6. The method had been doing too much work and was frequently being changed: it extracted the data from six different metric tools and created a summary of the data.

The method went from a sprawling 42 lines of code to a cleaner and smaller method of 10 lines and a collection of helper methods that look something like the below code:

  def self.extract_data_from_yaml(yml_metrics_data)
    metrics_data = Hash.new {|hash, key| hash[key] = {}}
    extract_flog_data!(metrics_data, yml_metrics_data)
    extract_flay_data!(metrics_data, yml_metrics_data)
    extract_reek_data!(metrics_data, yml_metrics_data)
    extract_roodi_data!(metrics_data, yml_metrics_data)
    extract_saikuro_data!(metrics_data, yml_metrics_data)
    extract_churn_data!(metrics_data, yml_metrics_data)
    metrics_data
  end

  def self.extract_flog_data!(metrics_data, yml_metrics_data)
    metrics_data[:flog][:description] = 'measures code complexity'
    metrics_data[:flog]["average method score"] = Devver::Maybe(yml_metrics_data)[:flog][:average].value(N_A)
    metrics_data[:flog]["total score"]   = Devver::Maybe(yml_metrics_data)[:flog][:total].value(N_A)
    metrics_data[:flog]["worst file"] = Devver::Maybe(yml_metrics_data)[:flog][:pages].first[:path].fmap {|x| Pathname.new(x)}.value(N_A)
  end

Churn gives you an idea of files that might be in need of refactoring. Often, if a file is changing a lot it means the code is doing too much and would be more stable and reliable if broken up into smaller components. Looking through our churn results, it looks like we might need another layout to accommodate some of the different styles on the site. Another thing that jumps out is that both the TestStats and Caliper controllers have fairly high churn. The Caliper controller has been growing fairly large, as it has been doing double duty for user-facing features and admin features, which should be split up. TestStats is admin controller code that has also been growing in size and should be split up into more isolated cases.

[Image: churn results]

Churn gave me an idea of where it might be worth focusing my effort. Diving into the other metrics made it clear that the Caliper controller needed some attention.

The Flog, Reek, and Roodi Scores for Caliper Controller:

File                         Total score  Methods  Average score  Highest score
/app/controllers/caliper.rb  214          14       15             42

[Image: Reek report before cleanup]

Roodi Report
app/controllers/caliper.rb:34 - Method name "index" cyclomatic complexity is 14.  It should be 8 or less.
app/controllers/caliper.rb:38 - Rescue block should not be empty.
app/controllers/caliper.rb:51 - Rescue block should not be empty.
app/controllers/caliper.rb:77 - Rescue block should not be empty.
app/controllers/caliper.rb:113 - Rescue block should not be empty.
app/controllers/caliper.rb:149 - Rescue block should not be empty.
app/controllers/caliper.rb:34 - Method name "index" has 36 lines.  It should have 20 or less.

Found 7 errors.

Roodi and Reek both tell you about design and readability problems in your code. The screenshot of our Reek ‘code smells’ in the Caliper controller shows how it had gotten out of hand: the code smells filled an entire browser page! Roodi similarly had many complaints about the Caliper controller, and Flog was also showing the file was getting a bit more complex than it should be. After picking off some of the worst Roodi and Reek complaints and splitting up methods with high Flog scores, the code had become easily readable and understandable at a glance. In fact, I nearly cut the Reek complaints in half for the controller.

[Image: Reek report after cleanup]

Refactoring one controller, which had been quickly hacked together and was growing out of control, brought it from a dizzying 203 LOC to 138 LOC. The metrics drove me to refactor long methods (52 LOC => 3 methods, the largest being 23 LOC), rename unclear variables (s => stat, p => project), and move some helper methods out of the controller into the helper class where they belong. Yes, all of these refactorings and good code designs can be done without metrics, but it can be easy to overlook bad code smells when they start small; metrics can give you an early warning that a section of code is becoming unmanageable and likely prone to higher defect rates. The smaller file was a huge improvement in terms of cyclomatic complexity, LOC, code duplication, and more importantly, readability.

Obviously I think code metrics are cool, and that your projects can be improved by paying attention to them as part of the development lifecycle. I wrote about metric_fu so that anyone can try these metrics out on their projects. I think metric_fu is awesome, and my interest in Ruby tools is part of what drove us to build Caliper, which is really the easiest way to try out metrics for your project. Currently, you can think of it as hosted metric_fu, but we are hoping to go even further and make the metrics clearly actionable to users.

In the end, yep, this is a bit of a plug for a product I helped build, but that is because I really think code metrics can be a great tool to help anyone with their development. So submit your repo and give Caliper hosted Ruby metrics a shot. We are trying to make metrics more actionable and useful for all Ruby developers out there, so we would love to hear from you with any ideas about how to improve Caliper; please contact us.

Written by DanM

October 27, 2009 at 10:30 pm

Spellcheck your files with Aspell and Rake

We recently redid our website. The new site included a new design and much more content explaining what we do. We wanted a quick way to check over everything and make sure we didn’t miss any spelling errors or typos. First I started looking for a web service that could scan the site for spelling errors. I found spellr.us, which is nice, but it would only catch errors once they were live, and it can’t scan pages that require being logged in.

I was pairing with Avdi, who thought we should just run Aspell, which worked out great. We originally tried to create a simple Emacs macro to go through all our HTML files and check them, but in the end created simple Rake tasks, which make it really easy to integrate spellcheck into CI. After Avdi figured out the commands we needed to run on each file to get the information we needed from Aspell, it was easy to wrap the command using Rake’s FileList. To keep everyone on the same setup, we created a local dictionary of words to ignore or accept and keep it checked into source control as well.

The final solution grabs all the files you want to spellcheck, then runs them through Aspell with HTML filtering. We have two tasks: one that runs in interactive mode so the user can fix mistakes, and one for CI that just fails if it finds any errors.

def run_spellcheck(file,interactive=false)
  if interactive
    cmd = "aspell -p ./config/devver_dictionary -H check #{file}"
    puts cmd
    system(cmd)
    [true,""]
  else
    # NOTE: the rest of this method (and the default task below) was
    # mangled by HTML escaping when the post was archived; this branch is
    # a reconstruction. `aspell list` prints any misspelled words, so
    # empty output means the file passed.
    cmd = "aspell -p ./config/devver_dictionary -H list < #{file}"
    puts cmd
    results = `#{cmd}`
    [results.empty?, results]
  end
end

task :default => 'spellcheck:interactive'

namespace :spellcheck do
  files = FileList['app/views/**/*.html.erb']

  desc "Spellcheck interactive"
  task :interactive do
    files.each do |file|
      run_spellcheck(file,true)
    end
    puts "spelling check complete"
  end

  desc "Spellcheck for ci"
  task :ci do
    files.each do |file|
      success, results = run_spellcheck(file)
      unless success
        puts results
        exit 1
      end
    end
    puts "no spelling errors"
    exit 0
  end
end


Written by DanM

May 26, 2009 at 8:33 am

Our Tools & Practices for Remote Collaboration

Last week, we had Avdi, the newest addition to our team, join us in Boulder, CO. It was great to get some face-to-face time, since Avdi will primarily be working from his home in Pennsylvania while Dan and I continue to work in Boulder.

We are excited about the benefits of having a distributed team, but we’re also aware that there are a number of challenges. As a result, one of the things we worked on last week was figuring out the tools and practices we’ll be using to work effectively from across the country. Luckily, both Avdi and Dan have experience working remotely which we can draw upon.

We evaluated a number of options, but settled on the following tools and practices.

Practices

  • Daily Standup. Every day at the same time, we all get on video chat. We cover what we did yesterday, what we’re working on today, and whether or not we’re blocked on anything. The goal is to keep this meeting to 15 minutes or less.
  • Minimize interruptions. Whenever we need to communicate with each other, we try to do so on the channel that is the least disruptive (and disrupts the fewest team members). Of course, sometimes we need to be disruptive: if an issue is pressing, if someone is blocked, or if we need high-bandwidth communication (information, especially cues like body language, doesn’t come across very effectively on channels like email).
  • Keep it simple. We want to use the smallest number of tools and channels that will allow us to work effectively.

Channels and Tools

The channels below are ordered from least to most disruptive.

Passive updates: Present.ly
  • Asynchronous
  • Not required reading

Email: any email client (in practice, Gmail)
  • Asynchronous
  • Required reading (usually)
  • Sometimes time-sensitive, sometimes not

IM: Skype
  • Semi-synchronous (but usually synchronous)
  • Usually time-sensitive

Voice/video chat: Skype
  • Synchronous
  • High bandwidth* (especially video chat)
  • Best for meetings

* By “high bandwidth”, I don’t mean that the tool itself requires a lot of TCP/IP traffic (although this is true, it doesn’t really matter). What I mean is that we can communicate a lot of information between team members in a short amount of time.

Other Tools

  • Lighthouse for issue tracking
  • GitHub for source control and our project wiki
  • RealVNC for screen sharing (essential for remote pair programming)

This is our first attempt at finding a good set of tools and practices for remote collaboration. As time goes on, we’ll undoubtedly iterate and improve upon these.

For another perspective (with a slightly different set of tools), here is a presentation from 2008 about virtual teams.

What tools and practices have worked (and which have not worked) for your team?

Written by Ben

April 28, 2009 at 8:57 am

Managing Amazon EC2 with your iPhone

I wanted a quick and easy way to manage our AWS EC2 instances while out and about. It hasn’t happened often, but occasionally I am away from the computer and need to reboot the instances, or I remember our developer cluster isn’t being used and want to shut it down to save some money.

I didn’t find anything simple and free with a quick Google search, so in about an hour I wrote a nice little Sinatra app that lets me view our instances and shut down or reboot any specific instance or all of them. The tiny framework actually turned out to be even more useful, as I now have options that let us tail error logs, reboot Apache, reboot mongrel clusters, or execute any common system administration task.

I won’t be going into detail on how to build an iPhone webapp using Sinatra and iUI, because Ben already created an excellent post detailing all of those steps. In fact, I used his old project as the template when I created this project. I can’t begin to explain how amazingly simple it is to build an iPhone webapp using Sinatra, so if you have been thinking of a quick project I highly recommend it.

Here are some screenshots showing the final app (screenshots courtesy of iPhoney):

[Image: ec2 manager home view]

[Image: ec2 manager describe instances view]

[Image: ec2 manager instance view]

This app uses the Amazon EC2 API Tools to do all the heavy lifting. So this app assumes that you already have the tools installed and working on the machine you want this app to run on. This normally involves installing the tools and setting up some environment variables like EC2_HOME, so make sure you can run ec2-describe-instances from the machine. After that you should just have to change EC2_HOME in the Sinatra app to match the path where you installed the EC2 tools.

Let me know if you have any issues; it is quick and dirty, but I have already found it useful.

To run the app:
cmd> ruby -rubygems ./ec2_manager.rb

require 'sinatra'

EC2_HOME = '~/.ec2'

use Rack::Auth::Basic do |username, password|
  [username, password] == ['some_user', 'some_pass']
end

get "/" do
  @links = %w{describe_ec2s restart_all_ec2s shutdown_all_ec2s}.map { |cmd|
    cmd_link(cmd)
  }.join
  erb :index
end

get "/describe_ec2s" do
  results = `cd #{EC2_HOME}; ec2-describe-instances`
  instances = results.scan(/INSTANCE\ti-\w*/).each{|i| i.sub!("INSTANCE\t",'')}
  @links = instances.map { |i|
    instance_link(i)
  }.join
  erb :index
end

get "/restart_all_ec2s" do
  @results = `cd #{EC2_HOME}; ec2-describe-instances`
  instances = @results.scan(/INSTANCE\ti-\w*/).each{|i| i.sub!("INSTANCE\t",'')}
  cmd="cd #{EC2_HOME}; ec2-reboot-instances #{instances.join(' ')}"
  @results = `#{cmd}`
  erb :index
end

get "/shutdown_all_ec2s" do
  @results = `cd #{EC2_HOME}; ec2-describe-instances`
  instances = @results.scan(/INSTANCE\ti-\w*/).each{|i| i.sub!("INSTANCE\t",'')}
  cmd="cd #{EC2_HOME}; ec2-terminate-instances #{instances.join(' ')}"
  @results = `#{cmd}`
  erb :index
end

get "/instance/:id" do
  id = params[:id] if params[:id]
  verify_id(id)
  @results = `cd #{EC2_HOME}; ec2-describe-instances #{id}`
  @links = "<li><a href='/shutdown/#{id}' target='_self'>shutdown #{id}</a></li>"
  @links += " <li><a href='/reboot/#{id}' target='_self'>reboot #{id}</a></li>"
  erb :index
end

get "/reboot/:id" do
  id = params[:id] if params[:id]
  verify_id(id)
  @results = `cd #{EC2_HOME}; ec2-reboot-instances #{id}`
  erb :index
end

get "/shutdown/:id" do
  id = params[:id] if params[:id]
  verify_id(id)
  @results = `cd #{EC2_HOME}; ec2-terminate-instances #{id}`
  erb :index
end

helpers do

  def cmd_link(cmd)
    "<li><a href='#{cmd}' target='_self'>#{cmd}</a></li>"
  end

  def instance_link(instance)
    "<li><a href='/instance/#{instance}' target='_self'>#{instance}</a></li>"
  end

  def verify_id(id)
    raise Sinatra::ServerError, 'bad-id, What you doin?' unless id.match(/i-\w*/)
  end

end

use_in_file_templates!

__END__

@@ index
<!-- NOTE: most of this inline iUI template's markup was eaten by HTML
     escaping when the post was archived. The structure below is a
     plausible reconstruction; the head contents and iui.js path are
     assumptions. -->
<html>
<head>
  <title>EC2 Manager</title>
  <style type="text/css" media="screen">
    @import "/stylesheets/iui.css";
  </style>
  <script type="application/x-javascript" src="/javascripts/iui.js"></script>
</head>
<body>

<div class="toolbar">
<h1 id="pageTitle"></h1>
</div>

<% if @links %>
<ul id="home" selected="true">
<li><a href='/' target='_self'>home</a></li>
<%= @links %>
</ul>
<% end %>

<% if @results %>
<ul id="home" selected="true">
<li><a href='/' target='_self'>home</a></li>
<li><strong>results</strong></li>
<%= @results.gsub("\n", "<br/>") %>
</ul>
<% end %>

</body>
</html>

Written by DanM

March 5, 2009 at 10:03 am