The Devver Blog

A Boulder startup improving the way developers work.

Archive for the ‘Testing’ Category

Speeding up multi-browser Selenium Testing using concurrency

I haven’t used Selenium for a while, so I took some time to dig into the options for getting some mainline tests running against Caliper in multiple browsers. I wanted to be able to test a variety of browsers against our staging server before pushing new releases. Eventually this could be integrated into Continuous Integration (CI) or Continuous Deployment (CD).

The state of Selenium testing for Rails is currently in flux: there are multiple gems and frameworks to choose from. I decided to investigate several options to determine which is the best approach for our tests.

selenium-on-rails

I originally wrote a couple of example tests using the selenium-on-rails plugin. It lets you browse to your local development web server at ‘/selenium’ and run tests in the browser using the Selenium test runner. It is simple and the most basic Selenium mode, but it obviously has limitations: it wasn’t easy to run many different browsers with this plugin, or to use it with Selenium-RC, and the plugin was fairly dated. This led me to try the simplest next thing, selenium-client.

open '/'
assert_title 'Hosted Ruby/Rails metrics - Caliper'
verify_text_present 'Recently Generated Metrics'

click_and_wait "css=#projects a:contains('Projects')"
verify_text_present 'Browse Projects'

click_and_wait "css=#add-project a:contains('Add Project')"
verify_text_present 'Add Project'

type 'repo','git://github.com/sinatra/sinatra.git'
click_and_wait "css=#submit-project"
verify_text_present 'sinatra/sinatra'
wait_for_element_present "css=#hotspots-summary"
verify_text_present 'View full Hot Spots report'

view this gist

selenium-client

I quickly converted my selenium-on-rails tests to selenium-client tests, with some small modifications. To run tests using selenium-client, you need a running Selenium-RC server. I set up Sauce RC on my machine and was ready to go. I configured the tests to run locally on a single browser (Firefox), and once that was working I wanted to run the same tests in multiple browsers. I found that it was easy to dynamically create a test for each browser type and run them using Selenium-RC, but that it was incredibly slow, since the tests run one after another and not concurrently. Also, you need to install each browser (plus multiple versions) on your machine. This led me to Sauce Labs’ OnDemand.

browser.open '/'
assert_equal 'Hosted Ruby/Rails metrics - Caliper', browser.title
assert browser.text?('Recently Generated Metrics')

browser.click "css=#projects a:contains('Projects')", :wait_for => :page
assert browser.text?('Browse Projects')

browser.click "css=#add-project a:contains('Add Project')", :wait_for => :page
assert browser.text?('Add Project')

browser.type 'repo','git://github.com/sinatra/sinatra.git'
browser.click "css=#submit-project", :wait_for => :page
assert browser.text?('sinatra/sinatra')
browser.wait_for_element "css=#hotspots-summary"
assert browser.text?('View full Hot Spots report')

view this gist

Using Selenium-RC and Sauce Labs Concurrently

Running on all the browsers Sauce Labs offers (12) took 910 seconds, which is cool but way too slow. Since I am just running the same tests in different browsers, I decided they should run concurrently. If you are running your own Selenium-RC server, this will slow down a lot, as your machine has to start and run all of the various browsers, so this approach isn’t recommended on your own Selenium-RC setup unless you configure Selenium-Grid. If you are using Sauce Labs, the tests run concurrently with no slowdown. After switching to running my Selenium tests concurrently, the run time went down to 70 seconds.

My main goal was to make it easy to write pretty standard tests a single time, but be able to change the number of browsers I ran them on and the server I targeted. One approach that has been offered explains how to set up Cucumber to run Selenium tests against multiple browsers; it basically runs the Rake task over and over, once per browser environment, roughly as sketched below.
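A sketch of that shape (an assumed reconstruction, not the linked article’s code; the browser names and SELENIUM_BROWSER variable are arbitrary):

# Rakefile sketch: run the suite serially, once per browser
BROWSERS = %w[firefox iexplore safari]

desc "run Selenium features in each browser, one after another"
task :selenium_all_browsers do
  BROWSERS.each do |browser|
    # each run picks up its browser from an environment variable
    sh "SELENIUM_BROWSER=#{browser} rake features"
  end
end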

Although this works, I also wanted to run all my tests concurrently. One option would be to run all of the Rake tasks concurrently and join the results, but joining the results cleanly is difficult; otherwise you end up outputting the full Rake test output once per browser (ugly when running 12 browsers). I took a slightly different approach, which just wraps any Selenium-based test in a run_in_browsers block. Depending on the options set, the code can run a single browser against your locally hosted application, or many browsers against a staging or production server. Then simply create a separate Rake task for each of the configurations you expect to use (against local Selenium-RC and Sauce Labs OnDemand).
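Under the hood, the concurrency comes from the threadify gem, which runs a block over the elements of an enumerable in parallel threads. A tiny illustration (the browser names are arbitrary; threadify’s argument is the number of threads to use):

require 'rubygems'
require 'threadify'

# run the block for each element, using up to 3 threads
%w[firefox safari iexplore].threadify(3) do |browser|
  puts "running suite in #{browser}"
end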

I am pretty happy with the solution I have for now. It is simple and fast, and it gives another layer of assurance that Caliper is running as expected. Adding additional tests is simple, as is integrating the solution into our CI stack. There are likely many ways to solve the concurrent Selenium testing problem, but I was able to go from no Selenium tests to a fast multi-browser solution in about a day, which works for me. There are downsides to the approach: the error output isn’t exactly the same when run concurrently, but it is pretty close. Instead of seeing multiple errors for each test, you get a single error per test which includes details about the browsers the error occurred on.

In the future I would recommend closely watching Webrat and Capybara, either of which I would likely use to drive the Selenium tests. I think the eventual merge will lead to the best solution in terms of flexibility. At the moment Capybara doesn’t support Selenium-RC, and the tests I originally wrote didn’t convert to the Webrat API as easily as directly to selenium-client (although setting up Webrat to use Selenium looks pretty simple). The example code given could likely be adapted to work with existing Webrat tests.

namespace :test do
  namespace :selenium do

    desc "selenium against staging server"
    task :staging do
      exec "bash -c 'SELENIUM_BROWSERS=all SELENIUM_RC_URL=saucelabs.com SELENIUM_URL=http://caliper-staging.heroku.com/  ruby test/acceptance/walkthrough.rb'"
    end

    desc "selenium against local server"
    task :local do
      exec "bash -c 'SELENIUM_BROWSERS=one SELENIUM_RC_URL=localhost SELENIUM_URL=http://localhost:3000/ ruby test/acceptance/walkthrough.rb'"
    end
  end
end

view this gist
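With those tasks defined, a run is just:

rake test:selenium:local     # one browser against localhost
rake test:selenium:staging   # all browsers against staging via Sauce Labs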

require "rubygems"
require "test/unit"
gem "selenium-client", ">=1.2.16"
require "selenium/client"
require 'threadify'

class ExampleTest < Test::Unit::TestCase

  # Browser specs, the Selenium-RC host, and the target URL all come
  # from the environment variables set by the Rake tasks above.
  # Sauce Labs browser specs are JSON strings such as
  # '{"username": "USER", "access-key": "KEY", "os": "Windows 2003",
  #   "browser": "firefox", "browser-version": "3.6"}'
  # -- fill in the full list for your account here (see the gist).
  ALL_SAUCE_BROWSERS = []

  def selenium_rc_url
    ENV['SELENIUM_RC_URL'] || 'localhost'
  end

  def test_url
    ENV['SELENIUM_URL'] || 'http://localhost:3000/'
  end

  def browsers
    ENV['SELENIUM_BROWSERS'] == 'all' ? ALL_SAUCE_BROWSERS : ['*firefox']
  end

  def run_in_all_browsers(&block)
    browsers = self.browsers
    if browsers.length > 1
      errors = []
      browsers.threadify(browsers.length) do |browser_spec|
        begin
          run_browser(browser_spec, block)
        rescue => error
          type = browser_spec.match(/browser\": \"(.*)\", /)[1]
          version = browser_spec.match(/browser-version\": \"(.*)\",/)[1]
          errors << {:browser => type, :version => version, :error => error}
        end
      end
      message = ""
      errors.each_with_index do |error, index|
        message +="\t[#{index+1}]: #{error[:error].message} occurred in #{error[:browser]}, version #{error[:version]}\n"
      end
      assert_equal 0, errors.length, "Expected zero failures or errors, but got #{errors.length}\n #{message}"
    else
      run_browser(browsers[0], block)
    end
  end

  def run_browser(browser_spec, block)
    browser = Selenium::Client::Driver.new(
                                           :host => selenium_rc_url,
                                           :port => 4444,
                                           :browser => browser_spec,
                                           :url => test_url,
                                           :timeout_in_second => 120)
    browser.start_new_browser_session
    begin
      block.call(browser)
    ensure
      browser.close_current_browser_session
    end
  end

  def test_basic_walkthrough
    run_in_all_browsers do |browser|
      browser.open '/'
      assert_equal 'Hosted Ruby/Rails metrics - Caliper', browser.title
      assert browser.text?('Recently Generated Metrics')

      browser.click "css=#projects a:contains('Projects')", :wait_for => :page
      assert browser.text?('Browse Projects')

      browser.click "css=#add-project a:contains('Add Project')", :wait_for => :page
      assert browser.text?('Add Project')

      browser.type 'repo','git://github.com/sinatra/sinatra.git'
      browser.click "css=#submit-project", :wait_for => :page
      assert browser.text?('sinatra/sinatra')
      browser.wait_for_element "css=#hotspots-summary"
      assert browser.text?('View full Hot Spots report')
    end
  end

  def test_generate_new_metrics
    run_in_all_browsers do |browser|
      browser.open '/'
      browser.click "css=#add-project a:contains('Add Project')", :wait_for => :page
      assert browser.text?('Add Project')

      browser.type 'repo','git://github.com/sinatra/sinatra.git'
      browser.click "css=#submit-project", :wait_for => :page
      assert browser.text?('sinatra/sinatra')

      browser.click "css=#fetch"
      browser.wait_for_page
      assert browser.text?('sinatra/sinatra')
    end
  end

end

view this gist

Written by DanM

April 8, 2010 at 10:07 am

Improving Code using Metric_fu

Often, when people see code metrics they think, “that is interesting, but I don’t know what to do with it.” I think metrics are great, but when you can really use them to improve your project’s code, that makes them even more valuable. metric_fu provides a bunch of great metric information, which can be very useful. But if you don’t know which parts of it are actionable, it’s merely interesting instead of useful.

One thing to keep in mind when looking at code metrics is that a single data point may not be that interesting; a metric’s trend over time gives you more meaningful information. Showing this trending information is one of our goals with Caliper. Metrics can watch over your project like a second set of eyes on how the code is progressing, alerting you to problem areas before they get out of control. Working with code over time, it can be hard to keep everything in your head (I know I can’t). As the size of the code base increases, it can be difficult to keep track of all the places where duplication or complexity is building up. Addressing problem areas as they are revealed by code metrics can keep them from getting out of hand, making future additions to the code easier.

I want to show how metrics can drive changes and improve a code base by working on a real project. I figured there was no better place to look than pointing metric_fu at our own devver.net website source and fixing up some of the most notable problem areas. We have had our backend code under metric_fu for a while, but hadn’t been following the metrics on our Merb code. This, along with some spiked features that ended up turning into Caliper, led to some areas getting a little out of control.

Flay Score before cleanup

When going through metric_fu, the first thing I wanted to work on was making the code a bit more DRY. The team and I had been noticing a bit more duplication in the code than we liked. I brought up the Flay results for code duplication and found that four database models shared some of the same methods.

Flay highlighted the duplication. Since we are planning to make some changes to how we handle timestamps soon, it seemed like a good place to start cleaning up. Below are the methods that existed in all four models. A third method, update_time, existed in two of the four models.

def self.pad_num(number, max_digits = 15)
  "%%0%di" % max_digits % number.to_i
end

def get_time
  Time.at(self.time.to_i)
end

Nearly all of our DB tables store time in a way that can be sorted with SimpleDB queries. We wanted to change our time to be stored as UTC in the ISO 8601 format. Before changing to the ISO format, it was easy to pull these methods into a helper module and include it in all the database models.

module TimeHelper

  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    def pad_num(number, max_digits = 15)
      "%%0%di" % max_digits % number.to_i
    end
  end

  def get_time
    Time.at(self.time.to_i)
  end

  def update_time
    self.time = self.class.pad_num(Time.now.to_i)
  end

end

Besides reducing the duplication across the DB models, this also made it much easier to include another time method, update_time, which was in two of the DB models. This consolidated all the DB time logic into one file, so changing the time format to UTC ISO 8601 will be a snap. While this is a trivial example of an obvious refactoring, it is easy to see how helper methods can end up duplicated across classes. Flay can come in really handy at pointing out duplication that creeps in over time.
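To make the usage concrete, including the module looks like this (the model name is just for illustration; the self.included hook is what makes pad_num available at the class level):

class SomeModel
  include TimeHelper
end

SomeModel.pad_num(42)  # => "000000000000042"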

Flog gives a score showing how complex the measured code is. The higher the score, the greater the complexity; the more complex the code, the harder it is to read, and the more likely it is to have a higher defect density. After removing some duplication from the DB models, I found that our worst database model by Flog score was our MetricsData model. It included an incredibly high Flog score of 149 for a single method.

File                      Total score  Methods  Average score  Highest score
/lib/sdb/metrics_data.rb  327          12       27             149

The method in question was extract_data_from_yaml, and after a little refactoring it was easy to take extract_data_from_yaml from a score of 149 down to a series of smaller methods, the largest scoring 33.6 (extract_flog_data!). The method was doing too much work and was frequently being changed: it extracted the data from six different metric tools and created a summary of the data.

The method went from a sprawling 42 lines of code to a cleaner and smaller 10-line method plus a collection of helper methods, which look something like the code below:

  def self.extract_data_from_yaml(yml_metrics_data)
    metrics_data = Hash.new {|hash, key| hash[key] = {}}
    extract_flog_data!(metrics_data, yml_metrics_data)
    extract_flay_data!(metrics_data, yml_metrics_data)
    extract_reek_data!(metrics_data, yml_metrics_data)
    extract_roodi_data!(metrics_data, yml_metrics_data)
    extract_saikuro_data!(metrics_data, yml_metrics_data)
    extract_churn_data!(metrics_data, yml_metrics_data)
    metrics_data
  end

  def self.extract_flog_data!(metrics_data, yml_metrics_data)
    metrics_data[:flog][:description] = 'measures code complexity'
    metrics_data[:flog]["average method score"] = Devver::Maybe(yml_metrics_data)[:flog][:average].value(N_A)
    metrics_data[:flog]["total score"]   = Devver::Maybe(yml_metrics_data)[:flog][:total].value(N_A)
    metrics_data[:flog]["worst file"] = Devver::Maybe(yml_metrics_data)[:flog][:pages].first[:path].fmap {|x| Pathname.new(x)}.value(N_A)
  end

Churn gives you an idea of files that might be in need of refactoring. Often, if a file is changing a lot, it means the code is doing too much and would be more stable and reliable if broken up into smaller components. Looking through our churn results, it looks like we might need another layout to accommodate some of the different styles on the site. Another thing that jumps out is that both the TestStats and Caliper controllers have fairly high churn. The Caliper controller has been growing fairly large, as it has been doing double duty for user-facing features and admin features, which should be split up. TestStats is admin controller code that has also been growing in size and should be split up into more isolated cases.

churn results

Churn gave me an idea of where it might be worth focusing my effort. Diving into the other metrics made it clear that the Caliper controller needed some attention.

The Flog, Reek, and Roodi Scores for Caliper Controller:

File                         Total score  Methods  Average score  Highest score
/app/controllers/caliper.rb  214          14       15             42

reek before cleanup

Roodi Report
app/controllers/caliper.rb:34 - Method name "index" has a cyclomatic complexity is 14.  It should be 8 or less.
app/controllers/caliper.rb:38 - Rescue block should not be empty.
app/controllers/caliper.rb:51 - Rescue block should not be empty.
app/controllers/caliper.rb:77 - Rescue block should not be empty.
app/controllers/caliper.rb:113 - Rescue block should not be empty.
app/controllers/caliper.rb:149 - Rescue block should not be empty.
app/controllers/caliper.rb:34 - Method name "index" has 36 lines.  It should have 20 or less.

Found 7 errors.

Roodi and Reek both tell you about design and readability problems in your code. The screenshot of our Reek ‘code smells’ in the Caliper controller shows how it had gotten out of hand: the code smells filled an entire browser page! Roodi similarly had many complaints about the Caliper controller, and Flog showed the file was getting more complex than it should be. After picking off some of the worst Roodi and Reek complaints and splitting up methods with high Flog scores, the code became easily readable and understandable at a glance. In fact, I nearly cut the Reek complaints in half for the controller.

Reek after cleanup

Refactoring one controller, which had been quickly hacked together and was growing out of control, brought it from a dizzying 203 LOC to 138 LOC. The metrics drove me to refactor long methods (52 LOC => 3 methods, the largest being 23 LOC), rename unclear variable names (s => stat, p => project), and move some helper methods out of the controller into the helper class where they belong. Yes, all of these refactorings and good code designs can be done without metrics, but it is easy to overlook bad code smells when they start small; metrics can give you an early warning that a section of code is becoming unmanageable and likely prone to higher defect rates. The smaller file was a huge improvement in terms of cyclomatic complexity, LOC, code duplication, and, more importantly, readability.

Obviously I think code metrics are cool, and that your projects can be improved by paying attention to them as part of the development lifecycle. I wrote about metric_fu so that anyone can try these metrics out on their own projects. I think metric_fu is awesome, and my interest in Ruby tools is part of what drove us to build Caliper, which is really the easiest way to try out metrics for your project. Currently, you can think of it as hosted metric_fu, but we are hoping to go even further and make the metrics clearly actionable to users.

In the end, yep, this is a bit of a plug for a product I helped build, but that is really because I think code metrics can be a great tool to help anyone with their development. So submit your repo and give Caliper hosted Ruby metrics a shot. We are trying to make metrics more actionable and useful for all Ruby developers out there, so we would love to hear from you with any ideas about how to improve Caliper; please contact us.

Written by DanM

October 27, 2009 at 10:30 pm

Unit Testing Filesystem Interaction

Like most Rubyists, I write unit tests to verify the non-trivial parts of my code. I also try to use mocks and stubs to stub out interactions with systems external to my code, like network services.

For the most part, this works fine. But I’ve always struggled to find a good way to test interaction with the filesystem (which can often be non-trivial and therefore should be tested). On the one hand, the filesystem could be considered “external” and mocked out. But on the other hand, the filesystem is accessible when the tests run. In this way, the filesystem is sort of like a local database – it could be mocked out, but it doesn’t have to be, and there are tradeoffs to both approaches.

Over the past year or so, I’ve tried out a few approaches for testing interactions with the filesystem, each of which I’ll explain below. Since none of the approaches met my needs, Avdi and I built a new testing library, which I’ll also introduce.

Mocking the filesystem

Sometimes, it is simplest to just mock the interaction with the filesystem. This works well for single calls to methods like File#read or File#exist? (these examples use Mocha):

File.stubs(:read).returns("file contents")
File.stubs(:exist?).returns(true)

However, this approach breaks down when you want to test more complex code, which, of course, is the code you’re more likely to want to test thoroughly. For instance, imagine trying to set up mocks/stubs for the following method (which atomically rewrites the contents of a file):

require 'tempfile'

class Rewriter

  def rewrite_file!(target_path)
    backup_path = target_path + '.bak'
    FileUtils.mv(target_path, backup_path)
    Tempfile.open(File.basename(target_path)) do |outfile|
      File.open(backup_path) do |infile|
        infile.each_line do |line|
          outfile.write(yield(line))
        end
      end
      outfile.close
      FileUtils.cp(outfile.path, target_path)
    end
  rescue Exception
    if File.exist?(backup_path)
      FileUtils.mv(backup_path, target_path)
    end
    raise
  end

end

Now imagine setting up those same mocks/stubs for each of the five or so tests you’d want for that method. It gets messy.
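To make the pain concrete, here is roughly what just the happy-path test looks like with everything stubbed out (a sketch using Mocha; the file names and values are made up):

def test_rewrite_file_upcases_each_line
  FileUtils.expects(:mv).with('notes.txt', 'notes.txt.bak')
  # the Tempfile the method writes through, with a fake path
  outfile = stub('outfile', :close => nil, :path => '/tmp/tempfile0')
  outfile.expects(:write).with("HELLO\n")
  Tempfile.expects(:open).with('notes.txt').yields(outfile)
  # the backup file the method reads back in, line by line
  infile = stub('infile')
  infile.expects(:each_line).yields("hello\n")
  File.expects(:open).with('notes.txt.bak').yields(infile)
  FileUtils.expects(:cp).with('/tmp/tempfile0', 'notes.txt')
  Rewriter.new.rewrite_file!('notes.txt') { |line| line.upcase }
end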

Even more importantly, mocking/stubbing out methods ties your tests to a specific implementation. For instance, if you use the above stub (File.stubs(:read).returns("file contents")) in your test and then refactor your implementation to use, say, File.readlines, you’ll have to update your tests. No good.

MockFS

MockFS is a library that mocks out the entire filesystem. It allows you to write test code like this:

require 'test/unit'
require 'mockfs'

class TestMoveLog < Test::Unit::TestCase

  def test_move_log
    # Set MockFS to use the mock file system
    MockFS.mock = true

    # Fill certain directories
    MockFS.fill_path '/var/log/httpd/'
    MockFS.fill_path '/home/francis/logs/'

    # Create the access log
    MockFS.file.open( '/var/log/httpd/access_log', File::CREAT ) do |f|
      f.puts "line 1 of the access log"
    end

    # Run the method under test
    move_log

    # Test that it was moved, along with its contents
    assert( MockFS.file.exist?( '/home/francis/logs/access_log' ) )
    assert( !MockFS.file.exist?( '/var/log/httpd/access_log' ) )
    contents = MockFS.file.open( '/home/francis/logs/access_log' ) do |f|
      f.gets( nil )
    end
    assert_equal( "line 1 of the access log\n", contents )
  end
end

Although I suspect MockFS would be a great fit for some projects, I ended up running into issues.

First of all, it depends on a library (extensions) that can have strange monkey-patching conflicts with other libraries. For example, compare this:

require 'faker'
puts [].respond_to?(:shuffle) # true

to this:

require 'extensions/all'
require 'faker'
puts [].respond_to?(:shuffle) # false

Secondly, as you’ll notice in the above example, using MockFS requires you to use methods like MockFS.file.exist? instead of just File.exist?. This works fine if you’re only testing your own code. However, if your code calls any libraries that use filesystem methods, MockFS won’t work.

(Note: There is a way to mock out the default filesystem methods, but it’s experimental. From the MockFS documentation:

“Reading the testing example above, you may be struck by one thing: Using MockFS requires you to remember to reference it everywhere, making calls such as MockFS.file_utils.mv instead of just FileUtils.mv. As another option, you can use File, FileUtils, and Dir directly, and then in your tests, substitute them by including mockfs/override.rb. I’d recommend using these with caution; substituting these low-level classes can have unpredictable results.”)

All that said, MockFS is probably your best option if you’re only testing your code and you want to mock out files that you can’t actually interact with – for instance, if you need to test that a method reads/writes a file in /etc (although for the sake of testability, it’s generally good to avoid hardcoding fully-qualified paths in your code).

FakeFS is another library that uses this approach. I haven’t used it personally, but it looks quite nice.

Creating temp files and directories (with Construct)

Besides mocking the filesystem, another option is to have tests interact with actual files and directories on disk. The advantages are that the test code can be simpler to write and you don’t have to use any special filesystem methods.

Of course, as always, you want the test itself to contain all the relevant setup and teardown – you don’t want your tests to depend upon some set of files that have no explicit connection to the test itself (or create files that aren’t cleaned up).

To make this easy, we created a new library called Construct. Construct makes test setup simple by providing helpers to create temporary files and directories. It takes care of the cleanup by automatically deleting the directories and files that are created within the test. And because it creates regular files and directories, you can use plain old Ruby filesystem methods in your code and tests.

To install Construct, simply run:

# gem install devver-construct --source http://gems.github.com

Using Construct, you can write code like this:

require 'construct'

class ExampleTest < Test::Unit::TestCase
  include Construct::Helpers

  def test_example
    within_construct do |construct|
      construct.directory 'alice/rabbithole' do |dir|
        dir.file 'white_rabbit.txt', "I'm late!"
        assert_equal "I'm late!", File.read('white_rabbit.txt')
      end
    end
  end

end

Let’s look at each line in more detail.

    within_construct do |construct|

When you call within_construct, a temporary directory is created. All files and directories are, by default, created within that temporary directory, and the temporary directory is always deleted before within_construct completes.

The block argument (construct) is a Pathname object with some additional methods (#directory and #file, which I’ll explain below). You can use this object to get the path to the temporary directory created by Construct and easily create files and directories.

Note that, by default, the working directory is changed to the temp dir within the block provided to within_construct.

      construct.directory 'alice/rabbithole' do |dir|

Here we are using the construct object to create a new directory within the temp directory. As you can see, you can create nested directories like alice/rabbithole in one step. The block argument (dir) is again a Pathname object with the same added functionality noted above.

Just like before, the working directory is changed to the newly created directory (in this case, alice/rabbithole) within the block.

        dir.file 'white_rabbit.txt', "I'm late!"

Here we use the dir object to create a file, passing the contents (“I’m late!”) as the optional second parameter. Called with just a name, dir.file creates an empty file; you can provide contents either through that optional parameter or through the return value of a supplied block:

within_construct do |construct|
  construct.file('foo.txt', 'Here is some content')
  construct.file('bar.txt') do
    <<-EOS
    The block will return this string, which will be used as the content.
    EOS
  end
end

As a more real-world example, here’s how you could use Construct to start testing the #rewrite_file! method we looked at before:

require 'test/unit'
require 'construct'
require 'shoulda'

class RewriterTest < Test::Unit::TestCase
  include Construct::Helpers

  context "#rewrite_file!" do

    should "alter each line in file" do
      within_construct do |c|
        c.file('bar/foo.txt',"a\nb\nc\n")
        Rewriter.new.rewrite_file!('bar/foo.txt') do |line|
          line.upcase
        end
        assert_equal "A\nB\nC\n", File.read('bar/foo.txt')
      end
    end

    should "not alter file if exception is raised" do
      within_construct do |c|
        c.file('foo.txt', "1\n2\nX\n")
        assert_raises ArgumentError do
          Rewriter.new.rewrite_file!('foo.txt') do |line|
            Integer(line)*2
          end
        end
        assert_equal "1\n2\nX\n", File.read('foo.txt')
      end
    end

  end

end

You can learn more at the project page (both the README and the tests have more examples).

(As an aside, since Construct changes the working directory, it doesn’t play nicely with ruby-debug. Specifically, if you place a breakpoint within a block, you’ll see the message “No sourcefile available for test/unit/foo_test.rb” and you won’t be able to view the source. If anyone knows an easy way to make Dir.chdir work with ruby-debug, I’d very much appreciate some help!)

Conclusion

We’ve been moving our filesystem tests over to using Construct and so far have found it to be very useful. How do you test interactions with the filesystem? Do you use one of the above approaches, or something else? Or do you skip testing the filesystem altogether?

Written by Ben

August 25, 2009 at 9:51 am

Posted in Hacking, Testing


Devver adds Postgres and SQLite database support

We are working hard to quickly expand our compatibility with Ruby projects. With that goal driving us, we are happy to announce support for the Postgres and SQLite databases. With the addition of these database options, along with our existing support for MySQL, Devver now supports all of the most popular databases commonly used with Ruby. These three are the default databases ActiveRecord tests against, and we expect they will cover the majority of the Ruby community.

To begin working with Postgres or SQLite on Devver, all you need is a database.yml with the test environment set to the adapter of your choice. If we don’t support your favorite database, you can still request a beta invite and let us know which database you want us to support. And if we just added support for your database, perhaps we can speed up your project on Devver, so request a beta invite.
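For example, a minimal config/database.yml test block for Postgres might look like this (the database name and credentials are illustrative):

test:
  adapter: postgresql   # or sqlite3, or mysql
  database: myapp_test
  username: myapp
  password: secret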

Written by DanM

July 6, 2009 at 12:24 pm

Posted in Development, Devver, Ruby, Testing


Tracking down open files with lsof

The other day I was running into a weird error on Devver. After around twenty test runs on the system, the component that actually runs individual unit tests was crashing due to “Too many open files – (Errno::EMFILE)”.

Unfortunately, I didn’t know much more than that. Which files were being kept open? I knew that this component loaded quite a few files, and that by default, OS X only allows 256 open file descriptors (ulimit -n will tell you the default on your system). If this was a valid case of needing to load more files, I could just up the limit using ulimit -n <bigger_number>.

Fortunately, a quick Google or two pointed the way to lsof. Unfortunately, my Unix-fu is never nearly as good as I wish and I didn’t know much about this handy utility. But I quickly discovered that it’s very useful for tracking down problems like this. I used ps to find the PID of the Devver process, and then a quick lsof -p <PID> displayed all the files that the process had open. So easy!

Sure enough, there were a ton of redundant file handles to the file that we use to store information about the Devver run. Armed with this information, it was easy to find the buggy code where we called File.open but failed to ever close the file.
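The fix for this kind of bug is the usual idiom: use the block form of File.open, which closes the handle even when the block raises. A before-and-after sketch (the file name is made up; our actual code was Devver-internal):

# leaky: the handle returned here is never closed
file = File.open('run_info.log', 'a')
file.puts 'run started'

# safe: the block form closes the file automatically
File.open('run_info.log', 'a') do |file|
  file.puts 'run started'
end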

Unfortunately, I still don’t know how to write a good unit test for this case. I guess I could call system("lsof -p pid | wc -l") before and after calling the code and make sure the number of descriptors stays constant, but that’s really ugly. Is there a way to test this within Ruby? I’m open to ideas.
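One possibility: on Linux you can count a process’s descriptors without shelling out by reading /proc, though that doesn’t exist on OS X (which is why lsof was needed here). A sketch, with do_the_work standing in for the code under test:

def open_fd_count
  # each entry under /proc/<pid>/fd is one open descriptor (Linux only)
  Dir["/proc/#{Process.pid}/fd/*"].size
end

before = open_fd_count
do_the_work
assert_equal before, open_fd_count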

Still, it’s always good to learn more about a powerful Unix tool. I’m constantly amazed by the power and depth of the Unix tool set.

Written by Ben

October 9, 2008 at 12:23 pm

Ruby Code Quality Tools

Update: Devver now offers a hosted metrics service for Ruby developers which can give you useful feedback about your code. Check out Caliper to get started with metrics for your project.

This is the third post in my series of Ruby tools articles. This time I look at Ruby code quality tools. Rubyists like Ruby because the code can look so nice, simple, and sometimes beautiful. Unfortunately, not all code is so great; in fact, often the code I write doesn’t look good. Fortunately, while a great language can help you write great code, great tools can help as well. As code grows, it is easy for code bloat, dead code, or confusing complexity to slip in. The tools I review below can help with all of these problems. I recommend finding the one or two code quality tools you like best and integrating them into your development process.

Roodi


Roodi gives you a bunch of interesting warnings about your Ruby code. We were about to release some code, so I took the opportunity to fix up anything Roodi complained about. It helped identify refactoring opportunities, both with long methods and with overly complex methods. The code and tests became cleaner and more granular after breaking some of the methods down. I even found and fixed one silly performance issue that was easy to see after refactoring, which improved the speed of our code. Spending some time with Roodi could easily improve the quality and readability of most Ruby projects with very little effort. I didn’t solve every problem, because in one case I just didn’t think the method could be simplified any more, but the majority of the suggestions were right on. Below is an example session with Roodi:

dmayer$ sudo gem install roodi
dmayer$ roodi lib/client/syncer.rb
lib/client/syncer.rb:136 - Block cyclomatic complexity is 5.  It should be 4 or less.
lib/client/syncer.rb:61 - Method name "excluded" has a cyclomatic complexity is 10.  It should be 8 or less.
lib/client/syncer.rb:101 - Method name "should_be_excluded?" has a cyclomatic complexity is 9.  It should be 8 or less.
lib/client/syncer.rb:132 - Method name "find_changed_files" has a cyclomatic complexity is 10.  It should be 8 or less.
lib/client/syncer.rb:68 - Rescue block should not be empty.
lib/client/syncer.rb:61 - Method name "excluded" has 25 lines.  It should have 20 or less.
lib/client/syncer.rb:132 - Method name "find_changed_files" has 27 lines.  It should have 20 or less.
Found 7 errors.

After Refactoring:

~/projects/gridtest/trunk dmayer$ roodi lib/client/syncer.rb
lib/client/syncer.rb:148 - Block cyclomatic complexity is 5.  It should be 4 or less.
lib/client/syncer.rb:82 - Rescue block should not be empty.
Found 2 errors.

I did have one problem with Roodi: the errors about rescue blocks seemed to be incorrect. For code like the little example below, it kept throwing the error even though I am obviously doing some work in the rescue block.

Roodi output: lib/client/syncer.rb:68 - Rescue block should not be empty.
begin
  socket = TCPSocket.new(server_ip,server_port)
  socket.close
  return true
rescue Errno::ECONNREFUSED
  return false
end

Dust


Dust detects unused code, like unused variables, branches, and blocks. I look forward to seeing how the project progresses. Right now there doesn’t seem to be much out there on the web, and the README is pretty bare bones. Once you can pass it some files to scan, I think this will be something really useful, but for now there wasn’t much I could actually do besides check it out. Kevin, who also helped create the very cool Heckle, does claim that code scanning is coming soon, so I look forward to doing a more detailed write-up eventually.

Flog


Flog gives feedback about the quality of your code by scoring it with the ABC metric. Using Flog to help guide refactoring, code cleanup, and testing efforts can be highly effective. It is a little easier to understand the reports after reading about how Flog scores your code and what a good Flog score is. Once you get used to working with Flog, you will likely want to run it often against your whole project after making any significant changes. There are two easy ways to do this: a handy Flog Rake task, or MetricFu, which works with both Flog and Saikuro.

Running Flog against any subset of a project is easy; here I am running it against our client libraries:

find ./lib/client/ -name \*.rb | xargs flog -n -m > flog.log

Here is some example Flog output when run against our client code:

Total score = 1364.52395469781

Client#send_tests: (64.3)
    14.3: assignment
    13.9: puts
    10.7: branch
    10.5: send
     4.7: send_quit
     3.4: message
     3.4: now
     2.0: create_queue_test_msg
     1.9: create_run_msg
     1.9: test_files
     1.8: dump
     1.7: each
     1.7: report_start
     1.7: length
     1.7: get_tests
     1.7: -
     1.7: open
     1.7: load_file
     1.6: empty?
     1.6: nil?
     1.6: use_cache
     1.6: exists?
ModClient#send_file: (32.0)
    12.4: branch
     5.4: +
     4.3: assignment
     3.9: send
     3.1: puts
     2.9: ==
     2.9: exists?
     2.9: directory?
     1.9: strftime
     1.8: to_s
     1.5: read
     1.5: create_file_msg
     1.4: info
Syncer#sync: (30.8)
    13.2: assignment
     8.6: branch
     3.6: inspect
     3.2: info
     3.0: puts
     2.8: +
     2.6: empty?
     1.7: map
     1.5: now
     1.5: length
     1.4: send_files
     1.3: max
     1.3: >
     1.3: find_changed_files
     1.3: write_sync_time
Syncer#find_changed_files: (26.2)
    15.6: assignment
     8.7: branch
     3.5: <<
     1.8: to_s
     1.7: get_relative_path
     1.7: >
     1.7: mtime
     1.6: exists?
     1.6: ==
     1.5: prune
     1.4: should_be_excluded?
     1.3: get_removed_files
     1.3: find
... and so on ...

Saikuro


Saikuro is another code complexity tool. It seems to give a little less information than some of the others, but it does generate nice HTML reports. Like other code complexity tools, it can help you discover the most complex parts of your project for refactoring and for focusing your testing. I liked the way Flog broke things down into a bit more detail, but either is a useful tool, and I am sure it is a matter of preference depending on what you are looking for.

Saikuro screenshot

Written by DanM

October 1, 2008 at 10:04 pm

Posted in Development, Ruby, Testing

Ruby Test Quality Tools

Update: Devver now offers a hosted metrics service for Ruby developers which can give you useful feedback about your code. Check out Caliper to get started with metrics for your project.

This is the second post in my series of Ruby tools articles. This time I am focused on Ruby test quality tools. Devver is really interested in testing, and obviously the quality of a project’s tests is important. We are always looking at ways to add even more value to the investment teams put into testing, and simply knowing that you are writing higher quality tests helps increase the value returned on the time invested. I haven’t found many tools that help with test quality, but the tools below are a great help to any Ruby tester.

Heckle


Heckle is an interesting tool for doing mutation testing of your tests. Heckle currently supports Test::Unit and RSpec, but it does have a number of issues. I had to run it on a few different files and methods before I got some useful output that helped me improve my testing. The first problem was that it crashed when I passed it entire files (the majority of the time). I then began passing it single methods I was curious about, which still occasionally caused Heckle to get into an infinite-loop case. This is a noted problem in Heckle, and -T and providing a timeout should solve it. In my case it was actually not an infinite-loop timing error, but an error when attempting to rewrite the code, which led to a continual failure loop that wouldn’t time out. When I found a class and method that Heckle could test, I got some good results: I found one badly written test case, and one case that was never tested. Let’s run through a simple Heckle example.

#install heckle
dmayer$ sudo gem install heckle

#example of the infinite loop Error Heckle run
heckle Syncer should_be_excluded? --tests test/unit/client/syncer_test.rb -v
Setting timeout at 5 seconds.
Initial tests pass. Let's rumble.

**********************************************************************
***  Syncer#should_be_excluded? loaded with 13 possible mutations
**********************************************************************
...
2 mutations remaining...
Replacing Syncer#should_be_excluded? with:

2 mutations remaining...
Replacing Syncer#should_be_excluded? with:
... loops forever ...

#Heckle run against our Client class and the process method
dmayer$ heckle Client process --tests test/unit/client/client_test.rb
Initial tests pass. Let's rumble.

**********************************************************************
***  Client#process loaded with 9 possible mutations
**********************************************************************

9 mutations remaining...
8 mutations remaining...
7 mutations remaining...
6 mutations remaining...
5 mutations remaining...
4 mutations remaining...
3 mutations remaining...
2 mutations remaining...
1 mutations remaining...

The following mutations didn't cause test failures:

--- original
+++ mutation

 def process(command)

   case command
   when @buffer.Ready then
     process_ready
-  when @buffer.SetID then
+  when nil then
     process_set_id(command)
   when @buffer.InitProject then
     process_init_project
   when @buffer.Result then
     process_result(command)
   when @buffer.Goodbye then
     kill_event_loop
   when @buffer.Done then
     process_done
   when @buffer.Error then
     process_error(command)
   else
     @log.error("client ignoring invalid command #{command}") if @log
   end
 end

--- original
+++ mutation
 def process(command)
   case command
   when @buffer.Ready then
     process_ready
   when @buffer.SetID then
     process_set_id(command)
   when @buffer.InitProject then
     process_init_project
   when @buffer.Result then
     process_result(command)
   when @buffer.Goodbye then
     kill_event_loop
   when @buffer.Done then
     process_done
   when @buffer.Error then
     process_error(command)
   else
-    @log.error("client ignoring invalid command #{command}") if @log
+    nil if @log
   end
 end

Heckle Results:

Passed    :   0
Failed    :   1
Thick Skin:   0

Improve the tests and try again.

#Tests added / changed to improve Heckle results
  def test_process_process_loop__random_result
    Client.any_instance.expects(:start_tls).returns(true)
    client = Client.new({})
    client.stubs(:send_data)
    client.log = stub_everything
    client.log.expects(:error).with("client ignoring invalid command this is random")
    client.process("this is random")
  end

  def test_process_process_loop__set_id
    Client.any_instance.expects(:start_tls).returns(true)
    client = Client.new({})
    client.stubs(:send_data)
    client.log = stub_everything
    cmd = DataBuffer.new.create_set_ids_msg("4")
    client.expects(:process_set_id).with(cmd)
    client.process(cmd)
  end

#A final Heckle run, showing successful results
dmayer$ heckle Client process --tests test/unit/client/client_test.rb
Initial tests pass. Let's rumble.

**********************************************************************
*** Client#process loaded with 9 possible mutations
**********************************************************************

9 mutations remaining...
8 mutations remaining...
7 mutations remaining...
6 mutations remaining...
5 mutations remaining...
4 mutations remaining...
3 mutations remaining...
2 mutations remaining...
1 mutations remaining...
No mutants survived. Cool!

Heckle Results:

Passed : 1
Failed : 0
Thick Skin: 0

All heckling was thwarted! YAY!!!

rcov


rcov is a code coverage tool for Ruby. If you are writing tests, you should probably be monitoring your coverage with a code coverage tool, and I don't know of a better one for Ruby than rcov. It is simple to use and generates beautiful, easy-to-read HTML charts showing the current coverage broken down by file. An easy way to make your project more stable is to occasionally spend some time increasing the coverage you have on your project. I have always found it a great way to get back into a project if you have been off of it for a while: you just need to find some weak coverage points and get to work.
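Getting started is about as simple as it gets: point rcov at your tests (paths here are illustrative) and it writes the HTML report to coverage/:

rcov test/unit/*_test.rb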
rcov screenshot

Written by DanM

September 30, 2008 at 9:57 am

Posted in Development, Ruby, Testing
