The Devver Blog

A Boulder startup improving the way developers work.

Archive for the ‘Development’ Category

Speeding up multi-browser Selenium Testing using concurrency

I haven’t used Selenium for a while, so I took some time to dig into the options to get some mainline tests running against Caliper in multiple browsers. I wanted to be able to test a variety of browsers against our staging server before pushing new releases. Eventually this could be integrated into Continuous Integration (CI) or Continuous Deployment (CD).

The state of Selenium testing for Rails is currently in flux: there are multiple gems and frameworks to choose from. I decided to investigate several options to determine which is the best approach for our tests.


I originally wrote a couple of example tests using the selenium-on-rails plugin. This allows you to browse to your local development web server at ‘/selenium’ and run tests in the browser using the Selenium test runner. It is simple and the most basic Selenium mode, but it obviously has limitations. It wasn’t easy to run many different browsers using this plugin, or to use it with Selenium-RC, and the plugin was fairly dated. This led me to try the next simplest thing, selenium-client:

open '/'
assert_title 'Hosted Ruby/Rails metrics - Caliper'
verify_text_present 'Recently Generated Metrics'

click_and_wait "css=#projects a:contains('Projects')"
verify_text_present 'Browse Projects'

click_and_wait "css=#add-project a:contains('Add Project')"
verify_text_present 'Add Project'

type 'repo', 'git://'
click_and_wait "css=#submit-project"
verify_text_present 'sinatra/sinatra'
wait_for_element_present "css=#hotspots-summary"
verify_text_present 'View full Hot Spots report'

view this gist


I quickly converted my selenium-on-rails tests to selenium-client tests, with some small modifications. To run tests using selenium-client, you need to run a Selenium-RC server. I set up Sauce RC on my machine and was ready to go. I configured the tests to run locally on a single browser (Firefox). Once that was working I wanted to run the same tests in multiple browsers. I found that it was easy to dynamically create a test for each browser type and run them using Selenium-RC, but that it was incredibly slow, since tests run one after another and not concurrently. Also, you need to install each browser (plus multiple versions) on your machine. This led me to use Sauce Labs’ OnDemand. '/'
assert_equal 'Hosted Ruby/Rails metrics - Caliper', browser.title
assert browser.text?('Recently Generated Metrics') "css=#projects a:contains('Projects')", :wait_for => :page
assert browser.text?('Browse Projects') "css=#add-project a:contains('Add Project')", :wait_for => :page
assert browser.text?('Add Project')

browser.type 'repo', 'git://' "css=#submit-project", :wait_for => :page
assert browser.text?('sinatra/sinatra')
browser.wait_for_element "css=#hotspots-summary"
assert browser.text?('View full Hot Spots report')

view this gist

Using Selenium-RC and Sauce Labs Concurrently

Running on all the browsers Sauce Labs offers (12) took 910 seconds, which is cool, but way too slow. Since I am just running the same tests over in different browsers, I decided they should run concurrently. If you are running your own Selenium-RC server this will slow down a lot, as your machine has to start and run all of the various browsers, so this approach isn’t recommended on your own Selenium-RC setup unless you configure Selenium-Grid. If you are using Sauce Labs, the tests run concurrently with no slowdown. After switching to concurrently running my Selenium tests, run time went down to 70 seconds.

My main goal was to make it easy to write pretty standard tests a single time, but be able to change the number of browsers I ran them on and the server I targeted. One approach that has been offered explains how to set up Cucumber to run Selenium tests against multiple browsers. This basically runs the rake task over and over for each browser environment, as in the sketch below.
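A minimal sketch of that rake-task-per-browser idea (the task and browser names here are illustrative, not from that post):

BROWSERS = %w[firefox iexplore safari]

desc "Run the Selenium suite once per browser, one after another"
task :selenium_all_browsers do
  BROWSERS.each do |browser|
    # Each run targets a single browser, selected via the environment.
    sh "SELENIUM_BROWSER=#{browser} rake test:selenium"
  end
end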

Although this works, I also wanted to run all my tests concurrently. One option would be to concurrently run all of the Rake tasks and join the results. Joining the results is difficult to do cleanly, or you end up outputting the full rake test output once per browser (ugly when running 12 times). I took a slightly different approach which just wraps any Selenium-based test in a run_in_all_browsers block. Depending on the options set, the code can run a single browser against your locally hosted application, or many browsers against a staging or production server. Then simply create a separate Rake task for each of the configurations you expect to use (against local Selenium-RC and Sauce Labs OnDemand).

I am pretty happy with the solution I have for now. It is simple and fast and gives another layer of assurance that Caliper is running as expected. Adding additional tests is simple, as is integrating the solution into our CI stack. There are likely many ways to solve the concurrent Selenium testing problem, but I was able to go from no Selenium tests to a fast multi-browser solution in about a day, which works for me. There are downsides to the approach: the error output isn’t exactly the same when run concurrently, but it is pretty close. As opposed to seeing multiple errors for each test, you get a single error per test which includes the details about which browsers the error occurred on.

In the future I would recommend closely watching Webrat and Capybara which I would likely use to drive the Selenium tests. I think the eventual merge will lead to the best solution in terms of flexibility. At the moment Capybara doesn’t support selenium-RC, and the tests I originally wrote didn’t convert to the Webrat API as easily as directly to selenium-client (although setting up Webrat to use Selenium looks pretty simple). The example code given could likely be adapted easily to work with existing Webrat tests.

namespace :test do
  namespace :selenium do

    desc "selenium against staging server"
    task :staging do
      exec "bash -c 'SELENIUM_BROWSERS=all SELENIUM_URL= ruby test/acceptance/walkthrough.rb'"
    end

    desc "selenium against local server"
    task :local do
      exec "bash -c 'SELENIUM_BROWSERS=one SELENIUM_RC_URL=localhost SELENIUM_URL=http://localhost:3000/ ruby test/acceptance/walkthrough.rb'"
    end

  end
end

view this gist

require "rubygems"
require "test/unit"
gem "selenium-client", ">=1.2.16"
require "selenium/client"
require 'threadify'

class ExampleTest < Test::Unit::TestCase

  # browsers, selenium_rc_url, and test_url read the SELENIUM_* environment
  # variables set by the rake tasks (full setup is in the gist).
  def run_in_all_browsers(&block)
    if browsers.length > 1
      errors = []
      browsers.threadify(browsers.length) do |browser_spec|
        begin
          run_browser(browser_spec, block)
        rescue => error
          type = browser_spec.match(/browser\": \"(.*)\", /)[1]
          version = browser_spec.match(/browser-version\": \"(.*)\",/)[1]
          errors << {:browser => type, :version => version, :error => error}
        end
      end
      message = ""
      errors.each_with_index do |error, index|
        message += "\t[#{index+1}]: #{error[:error].message} occurred in #{error[:browser]}, version #{error[:version]}\n"
      end
      assert_equal 0, errors.length, "Expected zero failures or errors, but got #{errors.length}\n #{message}"
    else
      run_browser(browsers[0], block)
    end
  end

  def run_browser(browser_spec, block)
    browser = => selenium_rc_url,
                                           :port => 4444,
                                           :browser => browser_spec,
                                           :url => test_url,
                                           :timeout_in_second => 120)
    browser.start_new_browser_session
    begin
    ensure
      browser.close_current_browser_session
    end
  end

  def test_basic_walkthrough
    run_in_all_browsers do |browser| '/'
      assert_equal 'Hosted Ruby/Rails metrics - Caliper', browser.title
      assert browser.text?('Recently Generated Metrics') "css=#projects a:contains('Projects')", :wait_for => :page
      assert browser.text?('Browse Projects') "css=#add-project a:contains('Add Project')", :wait_for => :page
      assert browser.text?('Add Project')

      browser.type 'repo', 'git://' "css=#submit-project", :wait_for => :page
      assert browser.text?('sinatra/sinatra')
      browser.wait_for_element "css=#hotspots-summary"
      assert browser.text?('View full Hot Spots report')
    end
  end

  def test_generate_new_metrics
    run_in_all_browsers do |browser| '/' "css=#add-project a:contains('Add Project')", :wait_for => :page
      assert browser.text?('Add Project')

      browser.type 'repo', 'git://' "css=#submit-project", :wait_for => :page
      assert browser.text?('sinatra/sinatra') "css=#fetch"
      assert browser.text?('sinatra/sinatra')
    end
  end
end

view this gist

Written by DanM

April 8, 2010 at 10:07 am

Announcing Caliper Community Statistics

For the past few months, we’ve been building Caliper to help you easily generate code metrics for your Ruby projects. We’ve recently added another dimension of metrics information: community statistics for all the Ruby projects that are currently in Caliper.

The idea of community statistics is two-fold. From a practical perspective, you can now compare your project’s metrics to the community’s. For example, Flog measures the complexity of methods. Many people wonder what exactly defines a good Flog score for an individual method. In Jake Scruggs’ opinion, a score of 0-10 is “Awesome”, while a score of 11-20 is “Good enough”. That sounds correct, but with Caliper’s community metrics, we can also compare the average Flog scores for entire projects to see what defines a good average score.

To do so, we calculate the average Flog method score for each project and plot those averages on a histogram, like so:


Looking at the data, we see that a lot of projects have an average Flog score between 6 and 12 (the mean is 10.3 and the max is 21.3).

If your project’s average Flog score is 9, does that mean it has only “Awesome” methods, Flog-wise? Well, remember that we’re looking at the average score for each project. I suspect that in most projects, lots of tiny methods are pulling down the average, but there are still plenty of big, nasty methods. It would be interesting to look at the community statistics for maximum Flog score per project or see a histogram of the Flog scores for all methods across all projects (watch this space!).

Since several of the metrics (like Reek, which detects code smells) have scores that grow in proportion to the number of lines of code, we divide the raw score by each project’s lines of code. As a result, we can sensibly compare your project to other projects, no matter what the difference in size.
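In code, that normalization is just a per-line average. A minimal sketch (the method name is mine, not Caliper’s):

# Normalize a raw metric score by project size so projects of
# different sizes can be compared on the same scale.
def normalized_score(raw_score, lines_of_code)
  return 0.0 if
  raw_score.to_f / lines_of_code
end

normalized_score(120, 1080)  # => 0.111..., about 1 smell per 9 lines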

The second reason we’re calculating community statistics is so we can discover trends across the Ruby community. For example, we can compare the ratio of lines of application code to test code. It’s interesting to note that a significant portion of projects in Caliper have no tests, but that, for the projects that do have tests, most of them have a code:test ratio in the neighborhood of 2:1.


Other interesting observations from our initial analysis:
* A lot of projects (mostly small ones) have no Flay duplications.
* Many smaller projects have no Reek smells, but the average project has about 1 smell per 9 lines of code.

Want to do your own analysis? We’ve built a scatter plotter so you can see if any two metrics have any correlation. For instance, note the correlation between code complexity and code smells.

Here’s a scatter plot of that data (zoomed in):


Over the coming weeks, we’ll improve the graphs we have and add new graphs that expose interesting trends. But we need your help! Please let us know if you spot problems, have ideas for new graphs, or have any questions. Additionally, please add your project to Caliper so it can be included in our community statistics. Finally, feel free to grab the raw stats from our alpha API* and play around yourself!

* Quick summary: request the stats endpoint with plain curl for JSON, or pass

curl -H 'Accept:text/x-yaml'

for YAML. The API is under development, so please send us feedback!

Written by Ben

November 19, 2009 at 8:37 am

Posted in Development, Ruby, Tools


Improving Code using Metric_fu

Often, when people see code metrics they think, “that is interesting, I don’t know what to do with it.” I think metrics are great, but when you can really use them to improve your project’s code, that makes them even more valuable. metric_fu provides a bunch of great metric information, which can be very useful. But if you don’t know what parts of it are actionable it’s merely interesting instead of useful.

One thing to keep in mind when looking at code metrics is that a single data point may not be that interesting; looking at a metric’s trend over time gives you more meaningful information. Showing this trending information is one of our goals with Caliper. Metrics can be a friend watching over the project, like having a second set of eyes on how the code is progressing, alerting you to problem areas before they get out of control. Working with code over time, it can be hard to keep everything in your head (I know I can’t). As the size of the code base increases it can be difficult to keep track of all the places where duplication or complexity is building up. Addressing the problem areas as they are revealed by code metrics can keep them from getting out of hand, making future additions to the code easier.

I want to show how metrics can drive changes and improve the code base by working on a real project. I figured there was no better place to look than pointing metric_fu at our own website source and fixing up some of the most notable problem areas. We have had our backend code under metric_fu for a while, but hadn’t been following the metrics on our Merb code. This, along with some spiked features that ended up turning into Caliper, led to some areas getting a little out of control.

Flay Score before cleanup

When going through metric_fu, the first thing I wanted to work on was making the code a bit more DRY. The team and I were starting to notice a bit more duplication in the code than we liked. I brought up the Flay results for code duplication and found that four database models shared some of the same methods.

Flay highlighted the duplication. Since we are planning on making some changes to how we handle timestamps soon, it seemed like a good place to start cleaning up. Below are the methods that existed in all four models. A third method ‘update_time’ existed in two of the four models.

  def self.pad_num(number, max_digits = 15)
    "%%0%di" % max_digits % number.to_i
  end

  def get_time
  end
Nearly all of our DB tables store time in a way that can be sorted with SimpleDB queries. We wanted to change our time to be stored as UTC in the ISO 8601 format. Before changing to the ISO format, it was easy to pull these methods into a helper module and include it in all the database models.

module TimeHelper

  module ClassMethods
    def pad_num(number, max_digits = 15)
      "%%0%di" % max_digits % number.to_i
    end
  end

  # Standard hook so that including TimeHelper also adds the class-level helpers.
  def self.included(base)
    base.extend(ClassMethods)
  end

  def get_time
  end

  def update_time
    self.time = self.class.pad_num(
  end
end

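The double format string in pad_num is the only tricky part: the first % interpolates the digit count into a "%015i" format, and the second applies that format, zero-padding the number to a fixed width so that lexicographic (string) sorting matches numeric order. For example:

"%%0%di" % 15          # => "%015i"
"%015i" % 1234567890   # => "000001234567890"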

Besides reducing the duplication across the DB models, this also made it much easier to include another time method, update_time, which was in two of the DB models. This consolidated all the DB time logic into one file, so changing the time format to UTC ISO 8601 will be a snap. While this is a trivial example of an obvious refactoring, it is easy to see how helper methods can often end up duplicated across classes. Flay comes in really handy at pointing out duplication that can creep in over time.

Flog gives a score showing how complex the measured code is. The higher the score, the greater the complexity. The more complex code is, the harder it is to read, and the more likely it is to contain a higher defect density. After removing some duplication from the DB models, I found that our worst database model based on Flog scores was our MetricsData model. It included an incredibly high Flog score of 149 for a single method.

File                      Total score  Methods  Average score  Highest score
/lib/sdb/metrics_data.rb  327          12       27             149

The method in question was extract_data_from_yaml, and after a little refactoring it was easy to make extract_data_from_yaml drop from a score of 149 to a series of smaller methods, the largest score being extract_flog_data! (33.6). The method was doing too much work and was frequently being changed: it extracted the data from 6 different metric tools and created a summary of the data.

The method went from a sprawling 42 lines of code to a cleaner and smaller method of 10 lines and a collection of helper methods that look something like the below code:

  def self.extract_data_from_yaml(yml_metrics_data)
    metrics_data = {|hash, key| hash[key] = {}}
    extract_flog_data!(metrics_data, yml_metrics_data)
    extract_flay_data!(metrics_data, yml_metrics_data)
    extract_reek_data!(metrics_data, yml_metrics_data)
    extract_roodi_data!(metrics_data, yml_metrics_data)
    extract_saikuro_data!(metrics_data, yml_metrics_data)
    extract_churn_data!(metrics_data, yml_metrics_data)
    metrics_data
  end

  def self.extract_flog_data!(metrics_data, yml_metrics_data)
    metrics_data[:flog][:description] = 'measures code complexity'
    metrics_data[:flog]["average method score"] = Devver::Maybe(yml_metrics_data)[:flog][:average].value(N_A)
    metrics_data[:flog]["total score"]   = Devver::Maybe(yml_metrics_data)[:flog][:total].value(N_A)
    metrics_data[:flog]["worst file"] = Devver::Maybe(yml_metrics_data)[:flog][:pages].first[:path].fmap {|x|}.value(N_A)
  end

Churn gives you an idea of which files might be in need of refactoring. Often, if a file is changing a lot it means the code is doing too much and would be more stable and reliable if broken up into smaller components. Looking through our churn results, it looks like we might need another layout to accommodate some of the different styles on the site. Another thing that jumps out is that both the TestStats and Caliper controllers have fairly high churn. The Caliper controller has been growing fairly large, as it has been doing double duty for user-facing features and admin features, which should be split up. TestStats is admin controller code that also has been growing in size and should be split up into more isolated cases.

churn results

Churn gave me an idea of where it might be worth focusing my effort. Diving into the other metrics made it clear that the Caliper controller needed some attention.

The Flog, Reek, and Roodi Scores for Caliper Controller:

File                         Total score  Methods  Average score  Highest score
/app/controllers/caliper.rb  214          14       15             42

reek before cleanup

Roodi Report
app/controllers/caliper.rb:34 - Method name "index" has a cyclomatic complexity is 14.  It should be 8 or less.
app/controllers/caliper.rb:38 - Rescue block should not be empty.
app/controllers/caliper.rb:51 - Rescue block should not be empty.
app/controllers/caliper.rb:77 - Rescue block should not be empty.
app/controllers/caliper.rb:113 - Rescue block should not be empty.
app/controllers/caliper.rb:149 - Rescue block should not be empty.
app/controllers/caliper.rb:34 - Method name "index" has 36 lines.  It should have 20 or less.

Found 7 errors.

Roodi and Reek both tell you about design and readability problems in your code. The screenshot of our Reek ‘code smells’ in the Caliper controller shows how it had gotten out of hand: the code smells filled an entire browser page! Roodi similarly had many complaints about the Caliper controller, and Flog was also showing the file was getting a bit more complex than it should be. After picking off some of the worst Roodi and Reek complaints and splitting up methods with high Flog scores, the code became easily readable and understandable at a glance. In fact, I nearly cut the Reek complaints in half for the controller.

Reek after cleanup

Refactoring one controller, which had been quickly hacked together and was growing out of control, brought it from a dizzying 203 LOC to 138 LOC. The metrics drove me to refactor long methods (52 LOC => 3 methods, the largest being 23 LOC), rename unclear variables (s => stat, p => project), and move some helper methods out of the controller into the helper class where they belong. Yes, all these refactorings and good code designs can be done without metrics, but it can be easy to overlook bad code smells when they start small. Metrics can give you an early warning that a section of code is becoming unmanageable and likely prone to higher defect rates. The smaller file was a huge improvement in terms of cyclomatic complexity, LOC, code duplication, and, more importantly, readability.

Obviously I think code metrics are cool, and that your projects can be improved by paying attention to them as part of the development lifecycle. I wrote about metric_fu so that anyone can try these metrics out on their projects. I think metric_fu is awesome, and my interest in Ruby tools is part of what drove us to build Caliper, which is really the easiest way to try out metrics for your project. Currently, you can think of it as hosted metric_fu, but we are hoping to go even further and make the metrics clearly actionable to users.

In the end, yep, this is a bit of a plug for a product I helped build, but it is really because I think code metrics can be a great tool to help anyone with their development. So submit your repo and give Caliper hosted Ruby metrics a shot. We are trying to make metrics more actionable and useful for all Ruby developers out there, so we would love to hear from you with any ideas about how to improve Caliper; please contact us.

Written by DanM

October 27, 2009 at 10:30 pm

A Dozen (or so) Ways to Start Subprocesses in Ruby: Part 3

In part 1 and part 2 of this series, we took a look at some of Ruby’s built-in ways to start subprocesses. In this article we’ll branch out a bit, and examine some of the tools available to us in Ruby’s Standard Library. In the process, we’ll demonstrate some lesser-known libraries.


First, though, let’s recap some of our boilerplate code. Here’s the preamble code which is common to all of the demonstrations in this article:

require 'rbconfig'

$stdout.sync = true

def hello(source, expect_input)
  puts "[child] Hello from #{source}"
  if expect_input
    puts "[child] Standard input contains: \"#{$stdin.readline.chomp}\""
  else
    puts "[child] No stdin, or stdin is same as parent's"
  end
  $stderr.puts "[child] Hello, standard error"
  puts "[child] DONE"
end

THIS_FILE = File.expand_path(__FILE__)

RUBY = File.join(Config::CONFIG['bindir'], Config::CONFIG['ruby_install_name'])

#hello is the method which we will be calling in a Ruby subprocess. It reads some text from STDIN and writes to both STDOUT and STDERR.

THIS_FILE and RUBY contain full paths for the demo source file and the Ruby interpreter, respectively.

Method #6: Open3

The Open3 library defines a single method, Open3#popen3(). #popen3() behaves similarly to the Kernel#popen() method we encountered in part 2. If you remember from that article, one drawback to the #popen() method was that it did not give us a way to capture the child process’ STDERR stream. Open3#popen3() addresses this deficiency.

Open3#popen3() is used very similarly to Kernel#popen() (or Kernel#open() with a ‘|’ argument). The difference is that in addition to STDIN and STDOUT handles, popen3() yields a STDERR handle as well.

puts "6. Open3"
require 'open3'
include Open3
popen3(RUBY, '-r', THIS_FILE, '-e', 'hello("Open3", true)') do
  |stdin, stdout, stderr|
  stdin.write("hello from parent")
  stdin.close_write"\n").each do |line|
    puts "[parent] stdout: #{line}"
  end"\n").each do |line|
    puts "[parent] stderr: #{line}"
  end
end
puts "---"

When we execute this code, the result shows that we have captured the subprocess’ STDERR output:

6. Open3
[parent] stdout: [child] Hello from Open3
[parent] stdout: [child] Standard input contains: "hello from parent"
[parent] stdout: [child] DONE
[parent] stderr: [child] Hello, standard error

Method #7: PTY

All of the methods we have considered up to this point have shared a common limitation: they are not very well-suited to interfacing with highly interactive subprocesses. They work well for “filter”-style commands, which read some input, produce some output, and then exit. But when used with interactive subprocesses which wait for input, produce some output, and then wait for more input (etc.), their use can result in deadlocks. In a typical deadlock scenario, the expected output is never produced because input is still stuck in the input buffer, and the program hangs forever as a result. This is why, in previous examples, we have been careful to call #close_write on subprocess input handles before reading any output.
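Here is a minimal illustration of that deadlock, using Kernel#popen from part 2 and the standard sort command, which reads all of its input before producing any output:

IO.popen('sort', 'r+') do |pipe|
  pipe.puts "banana"
  pipe.puts "apple"
  # Without this line, the #read below blocks forever: sort waits for
  # end-of-input while we wait for its output.
  pipe.close_write
  pipe.read.split("\n").each { |line| puts "[parent] #{line}" }
end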

Ruby ships with a little-known and poorly-documented standard library called “pty”. The pty library is an interface to BSD pty devices. What is a pty device? In BSD-influenced UNIXen, such as Linux or OS X, a pty is a “pseudoterminal”. In other words, it’s a terminal device that isn’t attached to a physical terminal. If you’ve used a terminal program in Linux or OS X, you’ve probably used a pty without realizing it. GUI terminal emulators, such as xterm and GNOME Terminal, use a pty device behind the scenes to communicate with the OS.

What does this mean for us? It means if we’re running Ruby on UNIX, we have the ability to start our subprocesses inside a virtual terminal. We can then read from and write to that terminal as if our program were a user sitting in front of a terminal, typing in commands and reading responses.

Here’s how it’s used:

puts "7. PTY"
require 'pty'
PTY.spawn(RUBY, '-r', THIS_FILE, '-e', 'hello("PTY", true)') do
  |output, input, pid|
  input.write("hello from parent\n")
  buffer = ""
  output.readpartial(1024, buffer) until buffer =~ /DONE/
  buffer.split("\n").each do |line|
    puts "[parent] output: #{line}"
  end
end
puts "---"

And here is the output:

7. PTY
[parent] output: [child] Hello from PTY
[parent] output: hello from parent
[parent] output: [child] Standard input contains: "hello from parent"
[parent] output: [child] Hello, standard error
[parent] output: [child] DONE

There are a few points to note about this code. First, we don’t need to call #close_write or #flush on the process input handle. However, the newline at the end of “hello from parent” is essential. By default, UNIX terminal devices buffer input until they see a newline. If we left off the newline, the subprocess would never finish waiting for input.

Second, because the subprocess is running asynchronously and independently from the parent process, we have no way of knowing exactly when it has finished reading input and producing output of its own. We deal with this by buffering output until we see a marker (“DONE”).

Third, you may notice that “hello from parent” appears twice in the output – once as part of the parent process output, and once as part of the child output. That’s because another default behaviour for UNIX terminals is to echo any input they receive back to the user. This is what enables you to see what you’ve just typed when working at the command line.

You can alter these default terminal device behaviours using the Ruby “termios” gem.
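For example, here is a sketch of disabling echo on the pty, assuming the termios gem’s POSIX-style API (I haven’t tested this exact snippet):

require 'pty'
require 'termios'

PTY.spawn('cat') do |output, input, pid|
  # Fetch the pty's attributes, clear the ECHO flag, and write them back.
  attrs = Termios.tcgetattr(output)
  attrs.lflag &= ~Termios::ECHO
  Termios.tcsetattr(output, Termios::TCSANOW, attrs)

  input.puts "this line is not echoed back"
  puts output.readline  # cat's copy is the only copy we see
end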

Note that both STDOUT and STDERR were captured in the subprocess output. From the perspective of the pty user, standard output and standard error streams are indistinguishable – it’s all just output. That means using pty is probably the only way to run a subprocess and capture standard error and standard output interleaved in the same way we would see if we ran the process manually from a terminal window. Depending on the application, this may be a feature or a drawback.

You can execute PTY.spawn() without a block, in which case it returns an array of output, input, and PID. If you choose to experiment with this style of calling PTY.spawn(), be aware that you may need to rescue the PTY::ChildExited exception, which is thrown whenever the child process finally exits.
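A quick sketch of the block-less form (the Errno::EIO rescue is a Linux detail I am adding here, not something from the pty docs):

output, input, pid = PTY.spawn(RUBY, '-r', THIS_FILE, '-e', 'hello("PTY", false)')
begin
  loop { puts "[parent] output: #{output.readline}" }
rescue PTY::ChildExited
  puts "[parent] the child process exited"
rescue Errno::EIO
  # On Linux, reading a pty after the child exits can raise EIO
  # before PTY::ChildExited is ever seen.
  puts "[parent] no more output"
end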

If you’re interested in reading more code which uses the pty library, the Standard Library also includes a library called “expect.rb”. expect.rb is a basic Ruby reimplementation of the classic “expect” utility written using pty.

Method #8: Shell

More obscure even than the pty library is Ruby’s Shell library. Shell is, to my knowledge, totally undocumented and rarely used. Which is a shame, because it implements some interesting ideas.

Shell is an attempt to emulate a basic UNIX-style shell environment as an internal DSL within Ruby. Shell commands become Ruby methods, command-line flags become method parameters, and IO redirection is accomplished via Ruby operators.

Here’s an invocation of our standard example subprocess using Shell:

puts "8. Shell"
require 'shell'
Shell.def_system_command :ruby, RUBY
shell =
input  = 'Hello from parent'
process = shell.transact do
  echo(input) | ruby('-r', THIS_FILE, '-e', 'hello("shell.rb", true)')
end
output = process.to_s
output.split("\n").each do |line|
  puts "[parent] output: #{line}"
end
puts "---"

And here is the output:

8. Shell
[child] Hello, standard error
[parent] output: [child] Hello from shell.rb
[parent] output: [child] Standard input contains: "Hello from parent"
[parent] output: [child] DONE

We start by defining the Ruby executable as a shell command by calling Shell.def_system_command. Then we instantiate a new Shell object. We construct the subprocess within a Shell#transact block. To have the process read a string from the parent process, we set up a pipeline from the echo built-in command to the Ruby invocation. Finally, we ensure the process is finished and collect its output by calling #to_s on the transaction.

Note that the child process’ STDERR stream is shared with the parent, not captured as part of the process output.

There is a lot going on here, and it’s only a very basic example of Shell’s capabilities. The Shell library contains many Ruby-friendly reimplementations of common UNIX userspace commands, and a lot of machinery for coordinating pipelines of concurrent processes. If your interest is piqued I recommend reading over the Shell source code and experimenting within IRB. A word of caution, however: the Shell library isn’t maintained as far as I know, and I ran into a couple of outright bugs in the process of constructing the above example. It may not be suitable for use in production code.


In this article we’ve looked at three Ruby standard libraries for executing subprocesses. In the next and final article we’ll examine some publicly available Rubygems that provide even more powerful tools for starting, stopping, and interacting with subprocesses within Ruby.

Written by avdi

October 12, 2009 at 1:11 pm

Devver adds Postgres and SQLite database support

We are working hard to quickly expand our compatibility with Ruby projects. With that goal driving us, we are happy to announce support for Postgres and SQLite databases. With the addition of these database options, along with our existing support for MySQL, Devver now supports all of the most popular databases commonly used with Ruby. These three are the default databases tested against ActiveRecord, and we expect they will cover the majority of the Ruby community.

To begin working with Postgres or SQLite on Devver, all you need to do is have a database.yml with the test environment set to the adapter of your choice, as sketched below. If we don’t support your favorite database, you can still request a beta invite and let us know which database you want us to support. If we just added support for your database, perhaps we can speed up your project on Devver, so request a beta invite.
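For example, a minimal test entry in database.yml for Postgres might look like this (the database and username are placeholders):

test:
  adapter: postgresql
  database: myapp_test
  username: myapp

or, for SQLite:

test:
  adapter: sqlite3
  database: db/test.sqlite3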

Written by DanM

July 6, 2009 at 12:24 pm

Posted in Development, Devver, Ruby, Testing


SimpleDB DataMapper Adapter: Progress Report

From the beginning of Devver, we decided we wanted to work with some new technologies and we wanted to be able to scale easily. After looking at the options, AWS seemed to have many technologies that could help us build and scale a system like Devver. One of these technologies was SimpleDB. Another new thing we decided to try was DataMapper (DM) rather than the more familiar ActiveRecord. This eventually led me to work on my own SimpleDB DataMapper adapter.

Searching for ways to work with SDB from Ruby, we found a SimpleDB DM adapter by Jeremy Boles. It worked well initially, but as our needs grew (and to make it compatible with the current version of DM) it became necessary to add to and update the features of the adapter. These changes lived hidden in our project’s code for a while, for no other reason than we were too lazy to really commit it all back on GitHub. Recently, though, there has been renewed interest in working with SimpleDB from Ruby. I started pushing the code updates to GitHub, and then I got a couple of requests and suggestions here and there to improve the adapter. One of these suggestions came from Ara Howard, who is doing impressive work of his own on Ruby and AWS, specifically SimpleDB. He suggested moving from the aws_sdb gem to right_aws, which, along with other changes, improved performance significantly (1.6x on writes, and up to 36x on reading large queries over the default limit of 100 objects). Besides performance improvements, we have recently added limit and sorting support to the adapter.

#new right_aws branch using AWS select
$ ruby scripts/simple_benchmark.rb
      user     system      total        real
creating 200 users
 1.020000   0.240000   1.260000 ( 35.715608)
Finding all users age 25 (all of them), 100 Times
 59.280000   8.640000  67.920000 ( 99.727380)

#old aws_sdb using query with attributes
$ ruby scripts/simple_benchmark.rb
      user     system      total        real
creating 200 users
  1.290000   0.530000   1.820000 ( 52.916103)
Finding all users age 25 (all of them), 100 Times
  356.640000  53.090000 409.730000 (3574.260988)

view this gist
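If you want to kick the tires, wiring the adapter into DataMapper looks something like the following sketch; the option names are illustrative guesses, so check the adapter’s README for the real ones:

require 'rubygems'
require 'dm-core'

# Point DataMapper at SimpleDB instead of a relational database.
DataMapper.setup(:default,
  :adapter    => 'simpledb',
  :access_key => ENV['AWS_ACCESS_KEY_ID'],
  :secret_key => ENV['AWS_SECRET_ACCESS_KEY'],
  :domain     => 'my_app_domain'
)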

As I added features, testing the adapter also became slow (over a minute a run), because the functional tests actually connect to and use SimpleDB. Since Devver is all about speeding up Ruby tests, I decided to get the tests running on Devver. It was actually very easy, and it sped up the test suite from 1 minute and 8 seconds down to 28 seconds. You can check out how much Devver speeds up the tests yourself.

We are currently using the SimpleDB adapter to power our website as well as the Devver backend service. It has been working well for us, but we know that it doesn’t cover everyone’s needs. Next time you are creating a simple project, give SimpleDB a look. We would love feedback about the DM adapter, and it would be great to get some other people contributing to the project. If anyone does fork my SDB adapter GitHub repo, feel free to send me pull requests. Also, let me know if you want to try using Devver as you hack on the adapter; it can really speed up testing, and I would be happy to give out a free account.

Lastly, at a recent Boulder Ruby users group meet up, the group did a code review for the adapter. It went well and I should finish cleaning up the code and get the improvements suggested by the group committed to GitHub soon.

Update: The refactorings suggested at the code review are now live on GitHub.

Written by DanM

June 22, 2009 at 11:27 am

Spellcheck your files with Aspell and Rake

We recently redid our website. The new site included a new design and much more content explaining what we do. We wanted a quick way to check over everything and make sure we didn’t miss any spelling errors or typos. First I started looking for a web service that could scan the site for spelling errors. I found one that was nice, but it would only catch errors once they were live. It also couldn’t scan any of the pages which require being logged in.

I was pairing with Avdi, who thought we should just run Aspell, which worked out great. We were originally trying to create a simple Emacs macro to go through all our HTML files and check them, but in the end we created simple Rake tasks, which make it really easy to integrate spellcheck into CI. After Avdi figured out the commands we needed to run on each file to get the information we needed from Aspell, it was easy to wrap the command using Rake’s FileList. To keep everyone on the same setup, we created a local dictionary of words to ignore or accept and keep that checked into source control as well.

The final solution grabs all the files you want to spellcheck, then runs them through Aspell with HTML filtering. We have two tasks: one that runs in interactive mode so the user can fix mistakes, and one for CI that just fails if it finds any errors.

def run_spellcheck(file, interactive=false)
  if interactive
    # "check" mode steps through the file so the user can fix each mistake
    cmd = "aspell -p ./config/devver_dictionary -H check #{file}"
    puts cmd
    system(cmd)
    [true, ""]
  else
    # "list" mode prints any misspelled words found in the file
    cmd = "aspell -p ./config/devver_dictionary -H list < #{file}"
    results = `#{cmd}`
    [results.empty?, results]
  end
end

namespace :spellcheck do
  files = FileList['app/views/**/*.html.erb']

  desc "Spellcheck interactive"
  task :interactive do
    files.each do |file|
      run_spellcheck(file, true)
    end
    puts "spelling check complete"
  end

  desc "Spellcheck for ci"
  task :ci do
    files.each do |file|
      success, results = run_spellcheck(file)
      unless success
        puts results
        exit 1
      end
    end
    puts "no spelling errors"
    exit 0
  end
end

view this gist

Written by DanM

May 26, 2009 at 8:33 am