The Devver Blog

A Boulder startup improving the way developers work.

Archive for the ‘Ruby’ Category

Devver.next

We announced about two weeks ago that Devver and Caliper would be shutting down. The Caliper service will be shut down on April 30th, and Devver will be ceasing operations. We shared some of our thoughts about lessons learned while working on Devver.

Now people have been asking what Ben and I will be doing next. Honestly, at the moment it remains a mystery as much to us as to anyone. Both Ben and I have been working on startups together for over 3 years, something we had talked about doing together since high school. After our experience we both plan to take a bit of time off, to work on open source, personal projects, learn new things, and maybe catch up on some hobbies that have been neglected. Since Devver and the structure around it will be disappearing, we wanted to share our personal contact info in case anyone wants to get in touch with us. We will be looking at new work to get involved with sometime in May. Feel free to contact either of us if there is an opportunity one of us might be interested in.

Ben Brinckerhoff can be found online at bbrinck.com, and his email is ben@bbrinck.com

Dan Mayer can be found online at mayerdan.com, and his email is dan@mayerdan.com

We have learned an amazing amount over the last couple of years. We both feel like this has been an amazing opportunity and one last time want to thank everyone for their support. Thanks to the Ruby community, all the awesome Techstars teams, the startup community, our friends, families, and investors. We never would have made it this far and lasted this long in the startup world without all of you.

Next? Life is a journey, and we are excited to see whatever the future brings us. Thanks for all the good times, knowledge learned, and all the amazing people we met along the way.

Written by DanM

April 30, 2010 at 8:45 am

Posted in Boulder, Devver, Ruby, TechStars


Lessons Learned

As we’ve begun to wrap things up here at Devver, we’ve had the chance to reflect a bit on our experience. Although shutting down is not the outcome we wanted, it’s clear to both Dan and me that doing a startup has been an amazing learning experience. While we still have a lot to learn, we wanted to share some of the most important lessons we’ve learned during this time.

The community

When we started Devver, we were hesitant to ask for feedback and help. We quickly found that people are incredibly helpful and generous with their time. Users were willing to take a chance and use our products while giving us valuable feedback. Fellow Rubyists gave us ideas and helped us with technical problems. Mentors made time for meetings and introduced us to others who could assist us. And other entrepreneurs, both new and seasoned, were happy to share stories, compare experiences, and offer support.

If you are working on a startup, don’t be afraid to ask for help! The vast majority of people want to help you succeed, provided that you respect them and their time. That means you need to prepare adequately (do your research and ask good questions), figure out their preferred methods of communication (e.g. don’t call if they prefer email), show up on time, don’t overburden them, and thank them. And when other people need your help, give back!

You can build awesome relationships with various communities on your own, but we strongly recommend joining a mentorship program like TechStars. The program accelerated the process of connecting with mentors, users, and other entrepreneurs by providing an amazing community during the summer (and to this day). The advice, introductions, and support have been simply incredible.

Founding team

Dan and I are both technical founders. Looking back, it would have been to our advantage to have a third founder who really loved the business aspect of running a startup.

There is a belief (among technical founders) that technical founders are sufficient for a successful startup. Or, put more harshly, that “you can teach a hacker business, but you can’t teach a businessman how to hack”. I don’t want to argue whether that’s true or not. Clearly there are examples of technical founders being sufficient to get a company going, but my point is that having solely technical founders is not optimal. You can teach a hacker business, but you can’t make him or her get excited about it, which means it may not get the time or attention it deserves.

Hackers are passionate about, well, hacking. And so we tend to measure progress in terms of features completed or lines of code written. Clearly, code needs to be written, but ideally a startup would have a founder who is working on important non-technical tasks: talking with customers, measuring key metrics, developing distribution channels, etc. I’m not advocating that only one founder works on these tasks while technical founders ignore customer development – everyone needs to get involved. Rather, I’m pointing out that given a choice, technical founders will tend to solve problems technically and having a founder who has the opposite default is valuable.

Remote teams

We embraced working remotely: we hired Avdi to work in Pennsylvania while Dan and I lived in Boulder and later on, Dan moved to Washington, DC. There are many benefits to having a distributed team, but two stood out in our experience. First, we could hire top talent without having to worry about location (in fact, our flexibility regarding location was very attractive to most candidates we interviewed). Secondly, being in different locations allowed every team member to work with minimal distractions, which is invaluable when it comes to efficiently writing good code.

That said, communication was a challenge. To ensure we were all synced up, we had a daily standup as well as a weekly review. When Dan moved to DC, he and I scheduled another weekly meeting with no set agenda to just bring up all the issues, large and small, that were on our minds. We also all got together in the same location every few months to work in the same room and rekindle our team energy.

Also, pair programming was difficult to do remotely and we never came up with a great solution. As a result, we spent less than a day a week pairing, on average.

The most significant drawback to a remote team is the administrative hassle. It’s a pain to manage payroll, unemployment, insurance, etc. in one state. It’s a freaking nightmare to manage in three states (well, two states and a district), even though we paid a payroll service to take care of it. Apparently, once your startup gets larger, there are companies that will manage this with minimal hassle, but for a small team, it was a major annoyance and distraction.

Product development

Most of the mistakes we made developing our test accelerator and, later, Caliper boiled down to one thing: we should have focused more on customer development and finding a minimum viable product (MVP).

The first thing we worked on was our Ruby test accelerator. At the time, we thought we had found our MVP: we had made encouraging technical progress and we had talked to several potential customers who were excited about the product we were building. Anything simpler seemed “too simple” to be interesting.

Our mistake at that point was to go “heads down” and focus on building the accelerator while minimizing our contact with users and customers (after all, we knew how great it was, and time spent talking to customers was time we could be hacking!). We should have been asking, “Is there an even simpler version of this product that we can deliver sooner to learn more about pricing, market size, and technical challenges?”

If we had done so, we would have discovered:

  • whether the need was great enough (and if the solution was good enough) to convince people to open their wallets
  • that while a few users acutely felt the pain of slow tests, most didn’t care about acceleration. However, many of those users did want a “simpler” application – non-accelerated Ruby cloud testing.
  • the primary technical challenge was not accelerating tests, it was configuring servers for customers’ Rails applications. Not only did we spend time focusing on the wrong technical challenges, we also made architectural decisions that actually made it harder to solve this core problem.

After eventually discovering that setup and configuration was our primary adoption problem (and after trying and failing to implement various strategies to make it simple and easy), we tried to move to the other end of the spectrum. Caliper was designed to provide value with zero setup or configuration – users just provided a link to source code and instantly got valuable data.

Unfortunately, we again made the mistake of focusing on engineering first and customer development second. We released our first version to some moderate success and then proceeded to churn out features without really understanding customer needs. Only later, after finally engaging potential customers, did we realize that the market was too small and the price point too low for Caliper to sustain our company by itself.

Conclusion

This is by no means a comprehensive list, but it is our hope that other startups and founders-to-be can learn from our experiences, both mistakes and successes. Doing a startup has been an incredible learning experience for both Dan and me, and we look forward to learning more in the future – both first-hand and from the amazing group of entrepreneurs and hackers that we’ve been privileged enough to know.

Written by Ben

April 26, 2010 at 11:04 am

Speeding up multi-browser Selenium Testing using concurrency

I haven’t used Selenium for a while, so I took some time to dig into the options for getting some mainline tests running against Caliper in multiple browsers. I wanted to be able to test a variety of browsers against our staging server before pushing new releases. Eventually this could be integrated into Continuous Integration (CI) or Continuous Deployment (CD).

The state of Selenium testing for Rails is currently in flux, with multiple gems and frameworks to choose from. I decided to investigate several options to determine which is the best approach for our tests.

selenium-on-rails

I originally wrote a couple of example tests using the selenium-on-rails plugin. This allows you to browse to your local development web server at ‘/selenium’ and run tests in the browser using the Selenium test runner. It is the simplest and most basic Selenium mode, but it obviously has limitations. It wasn’t easy to run many different browsers using this plugin, or to use it with Selenium-RC, and the plugin was fairly dated. This led me to try the next simplest thing: selenium-client.

open '/'
assert_title 'Hosted Ruby/Rails metrics - Caliper'
verify_text_present 'Recently Generated Metrics'

click_and_wait "css=#projects a:contains('Projects')"
verify_text_present 'Browse Projects'

click_and_wait "css=#add-project a:contains('Add Project')"
verify_text_present 'Add Project'

type 'repo','git://github.com/sinatra/sinatra.git'
click_and_wait "css=#submit-project"
verify_text_present 'sinatra/sinatra'
wait_for_element_present "css=#hotspots-summary"
verify_text_present 'View full Hot Spots report'

view this gist

selenium-client

I quickly converted my selenium-on-rails tests to selenium-client tests, with some small modifications. To run tests using selenium-client, you need a running Selenium-RC server. I set up Sauce RC on my machine and was ready to go. I configured the tests to run locally on a single browser (Firefox). Once that was working I wanted to run the same tests in multiple browsers. I found that it was easy to dynamically create a test for each browser type and run them using Selenium-RC, but that it was incredibly slow, since tests run one after another and not concurrently. Also, you need to install each browser (plus multiple versions) on your machine. This led me to Sauce Labs’ OnDemand.
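The "dynamically create a test for each browser type" idea can be sketched roughly like this; the class name, method names, and browser launcher strings below are hypothetical, not the code we actually used:

```ruby
# Generate one test method per browser launcher at class-definition time.
class BrowserSuite
  BROWSERS = %w[*firefox *safari *iexplore]

  BROWSERS.each do |browser|
    define_method("test_homepage_in_#{browser.delete('*')}") do
      # A real test would start a selenium-client session for `browser`
      # here and exercise the application; this sketch just returns the
      # launcher string so the structure is visible.
      browser
    end
  end
end
```

Each generated method shows up as a separate test, so a failure reports which browser it came from; the downside, as noted above, is that the tests still run one after another.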

browser.open '/'
assert_equal 'Hosted Ruby/Rails metrics - Caliper', browser.title
assert browser.text?('Recently Generated Metrics')

browser.click "css=#projects a:contains('Projects')", :wait_for => :page
assert browser.text?('Browse Projects')

browser.click "css=#add-project a:contains('Add Project')", :wait_for => :page
assert browser.text?('Add Project')

browser.type 'repo','git://github.com/sinatra/sinatra.git'
browser.click "css=#submit-project", :wait_for => :page
assert browser.text?('sinatra/sinatra')
browser.wait_for_element "css=#hotspots-summary"
assert browser.text?('View full Hot Spots report')

view this gist

Using Selenium-RC and Sauce Labs Concurrently

Running on all the browsers Sauce Labs offers (12) took 910 seconds. That is cool, but way too slow, and since I am just running the same tests in different browsers, I decided it should be done concurrently. If you are running your own Selenium-RC server, running concurrently will slow down a lot as your machine has to start and run all of the various browsers, so this approach isn’t recommended on your own Selenium-RC setup unless you configure Selenium Grid. If you are using Sauce Labs, the tests run concurrently with no slowdown. After switching to running my Selenium tests concurrently, run time went down to 70 seconds.

My main goal was to make it easy to write pretty standard tests a single time, but be able to change the number of browsers I ran them on and the server I targeted. One approach that has been offered explains how to set up Cucumber to run Selenium tests against multiple browsers. This basically runs the rake task over and over, once for each browser environment.
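That rake-task-per-browser loop can be sketched as follows; the launcher names, environment variable, and task name here are assumptions for illustration:

```ruby
# Build one shell command per browser; each would then be run in sequence,
# re-invoking the same rake task with a different browser environment.
BROWSERS = %w[*firefox *iexplore *safari]

def selenium_commands(browsers)
  browsers.map { |b| "SELENIUM_BROWSER=#{b} rake test:selenium" }
end

# e.g. selenium_commands(BROWSERS).each { |cmd| system(cmd) }
```

Because each invocation is a fresh rake run, you get one full test-output report per browser, which is part of what motivated the concurrent approach described next.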

Although this works, I also wanted to run all my tests concurrently. One option would be to concurrently run all of the Rake tasks and join the results, but joining the results is difficult to do cleanly, and otherwise you end up outputting the full rake test output once per browser (ugly when running 12 times). I took a slightly different approach, which simply wraps any Selenium-based test in a run_in_all_browsers block. Depending on the options set, the code can run a single browser against your locally hosted application, or many browsers against a staging or production server. Then simply create a separate Rake task for each of the configurations you expect to use (against local Selenium-RC and Sauce Labs OnDemand).

I am pretty happy with the solution I have for now. It is simple and fast, and it gives another layer of assurance that Caliper is running as expected. Adding additional tests is simple, as is integrating the solution into our CI stack. There are likely many ways to solve the concurrent Selenium testing problem, but I was able to go from no Selenium tests to a fast multi-browser solution in about a day, which works for me. There are downsides to the approach: the error output isn’t exactly the same when run concurrently, but it is pretty close. As opposed to seeing multiple errors for each test, you get a single error per test which includes the details about which browsers the error occurred in.

In the future I would recommend closely watching Webrat and Capybara, which I would likely use to drive the Selenium tests. I think the eventual merge will lead to the best solution in terms of flexibility. At the moment Capybara doesn’t support Selenium-RC, and the tests I originally wrote didn’t convert to the Webrat API as easily as directly to selenium-client (although setting up Webrat to use Selenium looks pretty simple). The example code given could likely be adapted easily to work with existing Webrat tests.

namespace :test do
  namespace :selenium do

    desc "selenium against staging server"
    task :staging do
      exec "bash -c 'SELENIUM_BROWSERS=all SELENIUM_RC_URL=saucelabs.com SELENIUM_URL=http://caliper-staging.heroku.com/  ruby test/acceptance/walkthrough.rb'"
    end

    desc "selenium against local server"
    task :local do
      exec "bash -c 'SELENIUM_BROWSERS=one SELENIUM_RC_URL=localhost SELENIUM_URL=http://localhost:3000/ ruby test/acceptance/walkthrough.rb'"
    end
  end
end

view this gist

require "rubygems"
require "test/unit"
gem "selenium-client", ">=1.2.16"
require "selenium/client"
require 'threadify'

class ExampleTest < Test::Unit::TestCase

  # Browser specs are the JSON strings Sauce Labs expects; fill in your own
  # credentials and add one entry per browser/version you want to test.
  BROWSERS = [
    '{"username": "YOUR_SAUCE_USERNAME", "access-key": "YOUR_SAUCE_ACCESS_KEY", "browser": "firefox", "browser-version": "3.6", "os": "Windows 2003"}'
  ]

  def selenium_rc_url
    ENV['SELENIUM_RC_URL'] || 'localhost'
  end

  def test_url
    ENV['SELENIUM_URL'] || 'http://localhost:3000/'
  end

  def run_in_all_browsers(&block)
    browsers = ENV['SELENIUM_BROWSERS'] == 'all' ? BROWSERS : [BROWSERS.first]
    if browsers.length > 1
      errors = []
      browsers.threadify(browsers.length) do |browser_spec|
        begin
          run_browser(browser_spec, block)
        rescue => error
          type = browser_spec.match(/browser\": \"(.*)\", /)[1]
          version = browser_spec.match(/browser-version\": \"(.*)\",/)[1]
          errors << {:browser => type, :version => version, :error => error}
        end
      end
        end
      end
      message = ""
      errors.each_with_index do |error, index|
        message +="\t[#{index+1}]: #{error[:error].message} occurred in #{error[:browser]}, version #{error[:version]}\n"
      end
      assert_equal 0, errors.length, "Expected zero failures or errors, but got #{errors.length}\n #{message}"
    else
      run_browser(browsers[0], block)
    end
  end

  def run_browser(browser_spec, block)
    browser = Selenium::Client::Driver.new(
                                           :host => selenium_rc_url,
                                           :port => 4444,
                                           :browser => browser_spec,
                                           :url => test_url,
                                           :timeout_in_second => 120)
    browser.start_new_browser_session
    begin
      block.call(browser)
    ensure
      browser.close_current_browser_session
    end
  end

  def test_basic_walkthrough
    run_in_all_browsers do |browser|
      browser.open '/'
      assert_equal 'Hosted Ruby/Rails metrics - Caliper', browser.title
      assert browser.text?('Recently Generated Metrics')

      browser.click "css=#projects a:contains('Projects')", :wait_for => :page
      assert browser.text?('Browse Projects')

      browser.click "css=#add-project a:contains('Add Project')", :wait_for => :page
      assert browser.text?('Add Project')

      browser.type 'repo','git://github.com/sinatra/sinatra.git'
      browser.click "css=#submit-project", :wait_for => :page
      assert browser.text?('sinatra/sinatra')
      browser.wait_for_element "css=#hotspots-summary"
      assert browser.text?('View full Hot Spots report')
    end
  end

  def test_generate_new_metrics
    run_in_all_browsers do |browser|
      browser.open '/'
      browser.click "css=#add-project a:contains('Add Project')", :wait_for => :page
      assert browser.text?('Add Project')

      browser.type 'repo','git://github.com/sinatra/sinatra.git'
      browser.click "css=#submit-project", :wait_for => :page
      assert browser.text?('sinatra/sinatra')

      browser.click "css=#fetch"
      browser.wait_for_page
      assert browser.text?('sinatra/sinatra')
    end
  end

end

view this gist

Written by DanM

April 8, 2010 at 10:07 am

Playing with Processing, Making Snow

What is Processing?

“Processing is an open source programming language and environment for people who want to program images, animation, and interactions.”
-Processing.org

I wanted to play around with doing some visual programming and had played with Processing in the past. I had recently been reading about Ruby-Processing and wanted to give it a shot. First, I went looking for some Ruby-Processing tutorials, and I had recently heard about Jeff Casimir’s presentation ‘Art of Code‘ (slides and code), which uses Ruby-Processing. I went through those examples and decided I wanted to modify them to display snowflakes in the spirit of winter. After a bit of searching I found a project that generated a Penrose snowflake using Ruby-Processing. I figured I could modify the programs to get a nice snowflake screen-saver-type app. The result is my app Processing-Snow, shown in the screenshot below.

Processing-Snow

Playing around with Ruby-Processing is a lot of fun; I highly recommend spending a couple of hours making a tiny app. I built my Snow app in about an hour and a half. Then I spent a bit of time using Caliper to improve the metrics. For such a small project there wasn’t a lot to improve, but it still helped me do some refactoring. To get an idea of the code you can view Processing-Snow’s Metrics.

Feel free to fork Processing-Snow on GitHub and read about how to run it in the project’s README.

Written by DanM

December 23, 2009 at 8:16 pm

Posted in Hacking, Misc, Ruby

Announcing Caliper Community Statistics

For the past few months, we’ve been building Caliper to help you easily generate code metrics for your Ruby projects. We’ve recently added another dimension of metrics information: community statistics for all the Ruby projects that are currently in Caliper.

The idea of community statistics is two-fold. From a practical perspective, you can now compare your project’s metrics to the community’s. For example, Flog measures the complexity of methods. Many people wonder what exactly defines a good Flog score for an individual method. In Jake Scruggs’ opinion, a score of 0-10 is “Awesome”, while a score of 11-20 is “Good enough”. That sounds correct, but with Caliper’s community metrics, we can also compare the average Flog scores for entire projects to see what defines a good average score.

To do so, we calculate the average Flog method score for each project and plot those averages on a histogram, like so:

flog_average_method_histogram

Looking at the data, we see that a lot of projects have an average Flog score between 6 and 12 (the mean is 10.3 and the max is 21.3).

If your project’s average Flog score is 9, does that mean it has only “Awesome” methods, Flog-wise? Well, remember that we’re looking at the average score for each project. I suspect that in most projects, lots of tiny methods are pulling down the average, but there are still plenty of big, nasty methods. It would be interesting to look at the community statistics for maximum Flog score per project or see a histogram of the Flog scores for all methods across all projects (watch this space!).
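As a rough sketch of the aggregation behind that histogram, here is one way to bucket per-project averages; the bucket width and data are made up, and this is not Caliper’s actual implementation:

```ruby
# Bucket project-average Flog scores into fixed-width histogram bins,
# returning a hash of bin lower bound => number of projects in that bin.
def flog_histogram(project_averages, bucket_width = 2)
  project_averages
    .group_by { |avg| (avg / bucket_width).floor * bucket_width }
    .transform_values(&:size)
end

flog_histogram([6.2, 7.1, 10.3, 11.0])  # => {6 => 2, 10 => 2}
```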

Since several of the metrics (like Reek, which detects code smells) have scores that grow in proportion to the number of lines of code, we divide the raw score by each project’s lines of code. As a result, we can sensibly compare your project to other projects, no matter what the difference in size.
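The normalization itself is simple division; a minimal sketch, with made-up numbers:

```ruby
# Normalize a raw metric (e.g. a Reek smell count) by project size so
# that projects of very different sizes can be compared sensibly.
def per_loc(raw_score, lines_of_code)
  return 0.0 if lines_of_code.zero?
  raw_score.to_f / lines_of_code
end

per_loc(45, 900)  # 45 smells over 900 LOC => 0.05 smells per line
```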

The second reason we’re calculating community statistics is so we can discover trends across the Ruby community. For example, we can compare the ratio of lines of application code to test code. It’s interesting to note that a significant portion of projects in Caliper have no tests, but that, for the projects that do have tests, most of them have a code:test ratio in the neighborhood of 2:1.

code_to_test_ratio_histogram

Other interesting observations from our initial analysis:
* A lot of projects (mostly small ones) have no Flay duplications.
* Many smaller projects have no Reek smells, but the average project has about 1 smell per 9 lines of code.

Want to do your own analysis? We’ve built a scatter plotter so you can see if any two metrics have any correlation. For instance, note the correlation between code complexity and code smells.

Here’s a scatter plot of that data (zoomed in):

scatter_plot

Over the coming weeks, we’ll improve the graphs we have and add new graphs that expose interesting trends. But we need your help! Please let us know if you spot problems, have ideas for new graphs, or have any questions. Additionally, please add your project to Caliper so it can be included in our community statistics. Finally, feel free to grab the raw stats from our alpha API* and play around yourself!

* Quick summary:

curl http://api.devver.net/metrics

for JSON,

curl -H 'Accept:text/x-yaml' http://api.devver.net/metrics

for YAML. More details. API is under development, so please send us feedback!

Written by Ben

November 19, 2009 at 8:37 am

Posted in Development, Ruby, Tools


Improving Code using Metric_fu

Often, when people see code metrics they think, “that is interesting, but I don’t know what to do with it.” I think metrics are great, but when you can really use them to improve your project’s code, that makes them even more valuable. metric_fu provides a bunch of great metric information, which can be very useful. But if you don’t know which parts of it are actionable, it’s merely interesting instead of useful.

One thing to keep in mind when looking at code metrics is that a single measurement may not be that interesting on its own; a metric’s trend over time gives you more meaningful information. Showing this trending information is one of our goals with Caliper. Metrics can be a friend watching over the project, like a second set of eyes on how the code is progressing, alerting you to problem areas before they get out of control. Working with code over time, it can be hard to keep everything in your head (I know I can’t). As the size of the code base increases, it can be difficult to keep track of all the places where duplication or complexity is building up. Addressing problem areas as they are revealed by code metrics can keep them from getting out of hand, making future additions to the code easier.

I want to show how metrics can drive changes and improve the code base by working on a real project. I figured there was no better place to look than pointing metric_fu at our own devver.net website source and fixing up some of the most notable problem areas. We have had our backend code under metric_fu for a while, but hadn’t been following the metrics on our Merb code. This, along with some spiked features that ended up turning into Caliper, led to some areas getting a little out of control.

Flay Score before cleanup

When going through metric_fu, the first thing I wanted to work on was making the code a bit more DRY. The team and I had been noticing a bit more duplication in the code than we liked. I brought up the Flay results for code duplication and found that four database models shared some of the same methods.

Flay highlighted the duplication. Since we are planning on making some changes to how we handle timestamps soon, it seemed like a good place to start cleaning up. Below are the methods that existed in all four models. A third method ‘update_time’ existed in two of the four models.

  def self.pad_num(number, max_digits = 15)
    "%%0%di" % max_digits % number.to_i
  end

  def get_time
    Time.at(self.time.to_i)
  end
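The nested format string in pad_num is a little cryptic: the first % builds a zero-padded integer format of the desired width, and the second applies that format to the number:

```ruby
fmt = "%%0%di" % 15   # builds the format string "%015i"
fmt % 42              # => "000000000000042" (42 zero-padded to 15 digits)
```

Because the padded strings all have the same width, they sort lexicographically in the same order as the underlying integers, which is what makes them usable in SimpleDB queries.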

Nearly all of our DB tables store time in a way that can be sorted with SimpleDB queries. We wanted to change our time to be stored as UTC in the ISO 8601 format. Before changing to the ISO format, it was easy to pull these methods into a helper module and include it in all the database models.

module TimeHelper

  module ClassMethods
    def pad_num(number, max_digits = 15)
      "%%0%di" % max_digits % number.to_i
    end
  end

  def get_time
    Time.at(self.time.to_i)
  end

  def update_time
    self.time = self.class.pad_num(Time.now.to_i)
  end

end

Besides reducing the duplication across the DB models, this also made it much easier to include another time method, update_time, which was in two of the DB models. This consolidated all the DB time logic into one file, so changing the time format to UTC ISO 8601 will be a snap. While this is a trivial example of an obvious refactoring, it is easy to see how helper methods can end up duplicated across classes over time. Flay comes in really handy for pointing out that kind of duplication.
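Using the module from a model then looks roughly like this. The Job model below is hypothetical, and extending ClassMethods by hand is an assumption on my part, since the module as shown has no self.included hook to do it automatically (the module is repeated here so the sketch is self-contained):

```ruby
module TimeHelper
  module ClassMethods
    def pad_num(number, max_digits = 15)
      "%%0%di" % max_digits % number.to_i
    end
  end

  def get_time
    Time.at(self.time.to_i)
  end

  def update_time
    self.time = self.class.pad_num(Time.now.to_i)
  end
end

# Hypothetical model; the real models are SimpleDB-backed.
class Job
  include TimeHelper
  extend TimeHelper::ClassMethods

  attr_accessor :time
end

Job.pad_num(42)  # => "000000000000042"
job = Job.new
job.update_time  # time is now the current epoch seconds, padded to 15 digits
```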

Flog gives a score showing how complex the measured code is. The higher the score, the greater the complexity; the more complex code is, the harder it is to read, and it likely contains a higher defect density. After removing some duplication from the DB models, I found our worst database model based on Flog scores was our MetricsData model. It included an incredibly high Flog score of 149 for a single method.

File                      Total score   Methods   Average score   Highest score
/lib/sdb/metrics_data.rb  327           12        27              149

The method in question was extract_data_from_yaml, and after a little refactoring it was easy to take it from a score of 149 down to a series of smaller methods, the largest score being extract_flog_data! (33.6). The method was doing too much work and was frequently being changed: it was extracting the data from six different metric tools and creating a summary of the data.

The method went from a sprawling 42 lines of code to a cleaner and smaller method of 10 lines, plus a collection of helper methods that look something like the code below:

  def self.extract_data_from_yaml(yml_metrics_data)
    metrics_data = Hash.new {|hash, key| hash[key] = {}}
    extract_flog_data!(metrics_data, yml_metrics_data)
    extract_flay_data!(metrics_data, yml_metrics_data)
    extract_reek_data!(metrics_data, yml_metrics_data)
    extract_roodi_data!(metrics_data, yml_metrics_data)
    extract_saikuro_data!(metrics_data, yml_metrics_data)
    extract_churn_data!(metrics_data, yml_metrics_data)
    metrics_data
  end

  def self.extract_flog_data!(metrics_data, yml_metrics_data)
    metrics_data[:flog][:description] = 'measures code complexity'
    metrics_data[:flog]["average method score"] = Devver::Maybe(yml_metrics_data)[:flog][:average].value(N_A)
    metrics_data[:flog]["total score"]   = Devver::Maybe(yml_metrics_data)[:flog][:total].value(N_A)
    metrics_data[:flog]["worst file"] = Devver::Maybe(yml_metrics_data)[:flog][:pages].first[:path].fmap {|x| Pathname.new(x)}.value(N_A)
  end

Churn gives you an idea of which files might be in need of refactoring. Often, if a file changes a lot, it means the code is doing too much and would be more stable and reliable if broken up into smaller components. Looking through our churn results, it looks like we might need another layout to accommodate some of the different styles on the site. Another thing that jumps out is that both the TestStats and Caliper controllers have fairly high churn. The Caliper controller has been growing fairly large, as it has been doing double duty for user-facing features and admin features, which should be split up. TestStats is admin controller code that has also been growing in size and should be split into more isolated cases.

churn results

Churn gave me an idea of where it might be worth focusing my effort. Diving into the other metrics made it clear that the Caliper controller needed some attention.

The Flog, Reek, and Roodi Scores for Caliper Controller:

File                         Total score   Methods   Average score   Highest score
/app/controllers/caliper.rb  214           14        15              42

reek before cleanup

Roodi Report
app/controllers/caliper.rb:34 - Method name "index" has a cyclomatic complexity is 14.  It should be 8 or less.
app/controllers/caliper.rb:38 - Rescue block should not be empty.
app/controllers/caliper.rb:51 - Rescue block should not be empty.
app/controllers/caliper.rb:77 - Rescue block should not be empty.
app/controllers/caliper.rb:113 - Rescue block should not be empty.
app/controllers/caliper.rb:149 - Rescue block should not be empty.
app/controllers/caliper.rb:34 - Method name "index" has 36 lines.  It should have 20 or less.

Found 7 errors.

Roodi and Reek both tell you about design and readability problems in your code. The screenshot of our Reek ‘code smells’ in the Caliper controller shows how out of hand it had gotten: the code smells filled an entire browser page! Roodi similarly had many complaints about the Caliper controller, and Flog showed the file was getting more complex than it should be. After picking off some of the worst Roodi and Reek complaints and splitting up methods with high Flog scores, the code became easily readable and understandable at a glance. In fact, I nearly cut the Reek complaints in half for the controller.

Reek after cleanup

Refactoring one controller, which had been quickly hacked together and was growing out of control, brought it from a dizzying 203 LOC down to 138 LOC. The metrics drove me to refactor long methods (52 LOC => 3 methods, the largest being 23 LOC), rename unclear variables (s => stat, p => project), and move some helper methods out of the controller into the helper class where they belong. Yes, all of these refactorings and good code designs can be done without metrics, but it is easy to overlook bad code smells when they start small; metrics can give you an early warning that a section of code is becoming unmanageable and likely prone to higher defect rates. The smaller file was a huge improvement in terms of cyclomatic complexity, LOC, code duplication, and, more importantly, readability.

Obviously I think code metrics are cool, and that your projects can be improved by paying attention to them as part of the development lifecycle. I wrote about metric_fu so that anyone can try these metrics out on their projects. I think metric_fu is awesome, and my interest in Ruby tools is part of what drove us to build Caliper, which is really the easiest way to try out metrics on your project. Currently, you can think of it as hosted metric_fu, but we are hoping to go even further and make the metrics clearly actionable to users.

In the end, yep, this is a bit of a plug for a product I helped build, but that is really because I think code metrics can be a great tool to help anyone with their development. So submit your repo and give Caliper hosted Ruby metrics a shot. We are trying to make metrics more actionable and useful for all Ruby developers out there, so we would love to hear from you with any ideas about how to improve Caliper. Please contact us.

Written by DanM

October 27, 2009 at 10:30 pm

A Dozen (or so) Ways to Start Subprocesses in Ruby: Part 3

In part 1 and part 2 of this series, we took a look at some of Ruby’s built-in ways to start subprocesses. In this article we’ll branch out a bit, and examine some of the tools available to us in Ruby’s Standard Library. In the process, we’ll demonstrate some lesser-known libraries.

Helpers

First, though, let’s recap some of our boilerplate code. Here’s the preamble code which is common to all of the demonstrations in this article:

require 'rbconfig'

$stdout.sync = true

def hello(source, expect_input)
  puts "[child] Hello from #{source}"
  if expect_input
    puts "[child] Standard input contains: \"#{$stdin.readline.chomp}\""
  else
    puts "[child] No stdin, or stdin is same as parent's"
  end
  $stderr.puts "[child] Hello, standard error"
  puts "[child] DONE"
end

THIS_FILE = File.expand_path(__FILE__)

RUBY = File.join(Config::CONFIG['bindir'], Config::CONFIG['ruby_install_name'])

#hello is the method which we will be calling in a Ruby subprocess. It reads some text from STDIN and writes to both STDOUT and STDERR.

THIS_FILE and RUBY contain full paths for the demo source file and the Ruby interpreter, respectively.

Method #6: Open3

The Open3 library defines a single method, Open3#popen3(). #popen3() behaves similarly to the Kernel#popen() method we encountered in part 2. If you remember from that article, one drawback to the #popen() method was that it did not give us a way to capture the child process’ STDERR stream. Open3#popen3() addresses this deficiency.

Open3#popen3() is used very similarly to Kernel#popen() (or Kernel#open() with a ‘|’ argument). The difference is that in addition to STDIN and STDOUT handles, popen3() yields a STDERR handle as well.

puts "6. Open3"
require 'open3'
include Open3
popen3(RUBY, '-r', THIS_FILE, '-e', 'hello("Open3", true)') do
  |stdin, stdout, stderr|
  stdin.write("hello from parent")
  stdin.close_write
  stdout.read.split("\n").each do |line|
    puts "[parent] stdout: #{line}"
  end
  stderr.read.split("\n").each do |line|
    puts "[parent] stderr: #{line}"
  end
end
puts "---"

When we execute this code, the result shows that we have captured the subprocess’ STDERR output:

6. Open3
[parent] stdout: [child] Hello from Open3
[parent] stdout: [child] Standard input contains: "hello from parent"
[parent] stdout: [child] DONE
[parent] stderr: [child] Hello, standard error
---

Method #7: PTY

All of the methods we have considered up to this point have shared a common limitation: they are not very well-suited to interfacing with highly interactive subprocesses. They work well for “filter”-style commands, which read some input, produce some output, and then exit. But when used with interactive subprocesses which wait for input, produce some output, and then wait for more input (etc.), their use can result in deadlocks. In a typical deadlock scenario, the expected output is never produced because input is still stuck in the input buffer, and the program hangs forever as a result. This is why, in previous examples, we have been careful to call #close_write on subprocess input handles before reading any output.
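This pattern can be sketched with a trivial filter subprocess (here `cat`, assuming a UNIX-like system): closing the write end is what lets the read complete instead of deadlocking.

```ruby
# `cat` echoes its input back, but only finishes once it sees EOF on stdin.
# Without the close_write call, pipe.read would block forever waiting for
# output that `cat` will never finish producing.
IO.popen('cat', 'r+') do |pipe|
  pipe.write("hello from parent\n")
  pipe.close_write               # send EOF so the filter can finish
  puts "[parent] got: #{pipe.read.chomp}"
end
```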

Ruby ships with a little-known and poorly-documented standard library called “pty”. The pty library is an interface to BSD pty devices. What is a pty device? In BSD-influenced UNIXen, such as Linux or OS X, a pty is a “pseudoterminal”. In other words, it’s a terminal device that isn’t attached to a physical terminal. If you’ve used a terminal program in Linux or OS X, you’ve probably used a pty without realizing it. GUI Terminal emulators, such as xterm, GNOME Terminal, and Terminal.app often use a pty device behind the scenes to communicate with the OS.

What does this mean for us? It means if we’re running Ruby on UNIX, we have the ability to start our subprocesses inside a virtual terminal. We can then read from and write to that terminal as if our program were a user sitting in front of a terminal, typing in commands and reading responses.

Here’s how it’s used:

puts "7. PTY"
require 'pty'
PTY.spawn(RUBY, '-r', THIS_FILE, '-e', 'hello("PTY", true)') do
  |output, input, pid|
  input.write("hello from parent\n")
  buffer = ""
  buffer << output.readpartial(1024) until buffer =~ /DONE/
  buffer.split("\n").each do |line|
    puts "[parent] output: #{line}"
  end
end
puts "---"

And here is the output:

7. PTY
[parent] output: [child] Hello from PTY
[parent] output: hello from parent
[parent] output: [child] Standard input contains: "hello from parent"
[parent] output: [child] Hello, standard error
[parent] output: [child] DONE
---

There are a few points to note about this code. First, we don’t need to call #close_write or #flush on the process input handle. However, the newline at the end of “hello from parent” is essential. By default, UNIX terminal devices buffer input until they see a newline. If we left off the newline, the subprocess would wait for input forever.

Second, because the subprocess is running asynchronously and independently from the parent process, we have no way of knowing exactly when it has finished reading input and producing output of its own. We deal with this by buffering output until we see a marker (“DONE”).

Third, you may notice that “hello from parent” appears twice in the output – once as part of the parent process output, and once as part of the child output. That’s because another default behaviour for UNIX terminals is to echo any input they receive back to the user. This is what enables you to see what you’ve just typed when working at the command line.

You can alter these default terminal device behaviours using the Ruby “termios” gem.

Note that both STDOUT and STDERR were captured in the subprocess output. From the perspective of the pty user, standard output and standard error streams are indistinguishable – it’s all just output. That means using pty is probably the only way to run a subprocess and capture standard error and standard output interleaved in the same way we would see if we ran the process manually from a terminal window. Depending on the application, this may be a feature or a drawback.
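If interleaved output is all you need, one rough alternative to a pty (a sketch using shell-level redirection, nothing pty-specific) is to merge the child’s standard error into standard output before it reaches the pipe. Note that this only guarantees both streams arrive on one handle; because the child’s stdout is block-buffered when writing to a pipe, the ordering may not match what a real terminal would show.

```ruby
# Ask the shell to redirect the child's stderr (fd 2) into its stdout (fd 1),
# so both streams come back through the single pipe IO.popen gives us.
merged = IO.popen(%{ruby -e 'puts "out"; $stderr.puts "err"' 2>&1}, &:read)
puts merged  # contains both lines, though not necessarily in terminal order
```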

You can execute PTY.spawn() without a block, in which case it returns an array of output, input, and PID. If you choose to experiment with this style of calling PTY.spawn(), be aware that you may need to rescue the PTY::ChildExited exception, which is thrown whenever the child process finally exits.
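A minimal sketch of the block-less style might look like this (assuming a UNIX platform; note that on Linux the output handle can also raise Errno::EIO once the child is gone, so it is worth rescuing alongside PTY::ChildExited):

```ruby
require 'pty'

# Without a block, PTY.spawn returns the output handle, input handle, and pid.
output, _input, pid = PTY.spawn('echo hello from pty')
buffer = ''
begin
  # Drain output until we see the marker we expect; the child may exit while
  # we are still reading, which surfaces as PTY::ChildExited or Errno::EIO.
  buffer << output.readpartial(1024) until buffer.include?('pty')
rescue PTY::ChildExited, Errno::EIO
end
Process.wait(pid) rescue nil  # reap the child if it has not been reaped yet
puts buffer
```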

If you’re interested in reading more code which uses the pty library, the Standard Library also includes a library called “expect.rb”. expect.rb is a basic Ruby reimplementation of the classic “expect” utility written using pty.

Method #8: Shell

More obscure even than the pty library is Ruby’s Shell library. Shell is, to my knowledge, totally undocumented and rarely used. Which is a shame, because it implements some interesting ideas.

Shell is an attempt to emulate a basic UNIX-style shell environment as an internal DSL within Ruby. Shell commands become Ruby methods, command-line flags become method parameters, and IO redirection is accomplished via Ruby operators.

Here’s an invocation of our standard example subprocess using Shell:

puts "8. Shell"
require 'shell'
Shell.def_system_command :ruby, RUBY
shell = Shell.new
input  = 'Hello from parent'
process = shell.transact do
  echo(input) | ruby('-r', THIS_FILE, '-e', 'hello("shell.rb", true)')
end
output = process.to_s
output.split("\n").each do |line|
  puts "[parent] output: #{line}"
end
puts "---"

And here is the output:

8. Shell
[child] Hello, standard error
[parent] output: [child] Hello from shell.rb
[parent] output: [child] Standard input contains: "Hello from parent"
[parent] output: [child] DONE
---

We start by defining the Ruby executable as a shell command by calling Shell.def_system_command. Then we instantiate a new Shell object. We construct the subprocess within a Shell#transact block. To have the process read a string from the parent process, we set up a pipeline from the echo built-in command to the Ruby invocation. Finally, we ensure the process is finished and collect its output by calling #to_s on the transaction.

Note that the child process’ STDERR stream is shared with the parent, not captured as part of the process output.

There is a lot going on here, and it’s only a very basic example of Shell’s capabilities. The Shell library contains many Ruby-friendly reimplementations of common UNIX userspace commands, and a lot of machinery for coordinating pipelines of concurrent processes. If your interest is piqued I recommend reading over the Shell source code and experimenting within IRB. A word of caution, however: the Shell library isn’t maintained as far as I know, and I ran into a couple of outright bugs in the process of constructing the above example. It may not be suitable for use in production code.

Conclusion

In this article we’ve looked at three Ruby standard libraries for executing subprocesses. In the next and final article we’ll examine some publicly available Rubygems that provide even more powerful tools for starting, stopping, and interacting with subprocesses within Ruby.

Written by avdi

October 12, 2009 at 1:11 pm

Lone Star Ruby Conf 2009 Wrapup Review

I recently went to the Lone Star Ruby Conference (LSRC) in Austin, TX. It was great to be able to put faces to many people I had interacted with in the Ruby community via blogs and Twitter. I also got to meet Matz and briefly talk with him, which was cool. Meeting someone who created a language that is such a large part of your day-to-day life is just an interesting experience. I enjoyed LSRC and wanted to give a quick summary of some of the talks that I saw and enjoyed. This is by no means full coverage of the event, but hopefully sharing my experience with others is worth something. If you are interested in seeing any of the talks, keep an eye out for Confreaks; they taped the event, and many of the talks should be coming online soon.

Dave Thomas
Dave was the first speaker for LSRC, and it was a great way to kick off the event. Dave gave a talk about Ruby not being perfect and why that is why he likes it. I have heard Dave speak before, and I always enjoy his talks. It isn’t that you learn anything specific about Ruby development, but you learn about the Ruby community. Actually, Dave would say we are a collection of Ruby communities, and that having a collection of communities is a good thing. It was also interesting to hear Dave speak about the entire Zed “Rails is a Ghetto” incident. Sometimes when you are angrily ranting online, it is easy to forget that there are real people attached to things. Feelings can get hurt, and while Dave agrees there are some valid points in the post, I think it shows that ranting probably isn’t a good way to go about fixing them. Dave really loves Ruby and the weird things you can do with the language, and it shows.

Glenn Vanderburg, Programming Intuition
Glenn talked about physical sensations tied to code, such as a sense of touch or smell. The talk generally evoked memories of Paul Graham’s “Hackers and Painters” in my head; in fact, Glenn talked about PG during his talk. The best programmers talk about code as if they can see it. The talk explored ways to feel the code and react to it. It promoted the idea that it is OK to just have a gut reaction that some code is a bad way to do things, because we should sense the code. Glenn also played a video showing Bobby McFerrin teaching the audience the pentatonic scale, which I really enjoyed.

James Edward Gray II, Module Magic
James visited Japan recently for a Ruby conference, and he really enjoyed it. About half his talk was about why Japan is awesome… He then found little ways to tie this back into his talk about Ruby and modules. It covered some interesting topics that many people just don’t know enough about but use every day, like load order, along with examples of the differences between include and extend. Modules are terrific at limiting scope: limit the scope of scary magic and keep it hidden away. I enjoyed talking with James a good amount throughout the conference. I had never met him before LSRC, but I used to practice Ruby working on Ruby Quiz, which he ran for a long time.

James has his slides up: Module Magic

Fernand Galiana, R-House
Fernand gave a really cool and demo-heavy talk about home automation. He has a web front end that lets him interact with all his technology. His house tweets most of the events that it runs. The web interface has an iPhone front end, so he can, on the go, change the temperature or turn off lights. I have always been a real home automation geek. When I was growing up, my dad loved playing with an X-10 system that we had in the house. I am really interested in playing with some of this stuff when I have my own place, mostly looking at ways I could use it to cut waste in my energy usage.

Mike Subelsky, Ruby for Startups: Battle Scars and Lessons Learned
* You Ain’t Gonna Need It (YAGNI): don’t worry about being super scalable at the beginning…
* In the early days, focus on learning more about what you’re building and what your customers want; concentrate on the first 80% solution.
* Don’t over-build, over-design, or over-engineer.
* Eventually plan to move everything out of your web request; build it so that this will be easy to do in the future, but it isn’t worth doing at first (delayed_job, EM, etc.).
* Make careful use of concurrency; prefer processes communicating via messages (SQS, etc.). If you are doing threading in your software, EM is your friend.
* Avoid touching your RDBMS when you are storing non-critical data:
– store large text blobs in S3, messages across SQS, and tons of logging in SDB
* Don’t test all the time at the beginning; it gets in the way of exploration… Things that are mission critical maybe should be BDD’d, as they will be the most stable and least likely to change parts of your code.

Mike posted his slides on his blog, Ruby for Startups.

Jeremy Hinegardner, Playing nice with others. — Tools for mixed language environments

Jeremy wanted to show how easy it is to use a bit of code to work with a system that uses multiple languages. He brought up that most projects in the room utilize more than one language, and that this will become more common as systems grow in complexity. He looked at a lot of queues, key-value stores, and cache-like layers that can be talked to from a variety of languages. He then showed some code demonstrating how easy it is to work with some of these tools. Extra points because he talked about Beanstalkd, which I personally think is really cool. Nearly everyone is starting to look at work queues, messaging systems, and non-standard databases for their projects, and this was a good overview of the options out there.

Yukihiro Matsumoto (Matz), Keynote and Q&A
Matz gave a talk about why we, as a community, love Ruby. There weren’t really any takeaways specifically about Ruby code; it was more about the community and why Ruby is fun. He spent a good amount of time talking about Quality Without A Name, QWAN. More interesting than the talk was the Q&A session. I thought the most interesting question was why Ruby isn’t on Git yet. He said the team doesn’t have time to convert all the tools they use from SVN to git. He also mentioned that the git mirror tracking SVN stays very close to the SVN master and is a good way to work on the Ruby code.

Evan Light, TDD: More than just “testing”
Evan first covered that the tools we as a community keep getting excited about aren’t really what matters; what matters is TDD technique. After discussing why tools aren’t as important for a while, Evan began live coding with the audience, something I thought was pretty impressive, as it would be difficult to do. It made for a weird pair-programming exercise with the entire audience trying to drive, which sometimes worked well and sometimes led to conflicting ideas and discussion (which made for interesting debate). It was overall a really interesting session, but it is hard to pull out specific tidbits of wisdom from the talk.

Jake Scruggs, What’s the Right Level of Testing?
I have known of Jake for a while from his work on the excellent metric_fu gem. Jake explored what the right level of testing for a project is, drawing on his experience from his last nine projects over the years. He explored what worked, what didn’t, and what sometimes works depending on the people and the project. I think it comes to this conclusion: what works for one project won’t work for all projects. Having some testing and getting the team on a similar testing goal will make things much better. He also stressed the importance of metrics along with testing (really? From the metric_fu guy? Haha). If testing is old and broken, causing team backlash, low morale, and gridlock, it might be better to lessen the testing burden or throw away difficult-to-maintain tests; getting them out of the way might be worth more than the value they were providing. In general he isn’t big on view testing, and he likes to avoid slow tests. He likes to have a small ‘smoke screen’ of integration tests to help verify the system is all working together. In the end, what is the right level of testing for a project? The answer: what level of assurance does the given project really need? In a startup you probably don’t need a huge level of assurance; speed and market feedback matter more. If you’re building parts for a rocket or medical devices, it is entirely different.

I enjoyed this talk quite a bit, and it inspired me to fix our broken metric_fu setup and start tracking our projects’ metrics again. Jake also wrote a good roundup of LSRC.

Corey Donohoe @atmos, think simple
Corey gave quick, interesting little thoughts and ideas about how to stay productive, happy, learn more, do more, fail less, and keep things simple and interesting… Honestly, with something like 120+ slides, I can’t even begin to summarize this talk. I checked around and couldn’t find his slides online, but they couldn’t really do the talk justice anyway. Keep your eyes peeled for the video, as it was a fun talk, which I enjoyed. Until then, here is a post he made about heading to LSRC.

Joseph Wilk, Outside-in development with Cucumber
Cucumber is something I keep hearing and reading about but haven’t really gotten a chance to play with myself. Joseph’s talk was a good split between a quick intro to Cucumber and diving in deeper to actually show testing examples and how it worked. From the talk, it sounded to me like Cucumber is mostly a DSL for conversation between the customer and the developer/tester; I don’t know if that is how others would describe it. I thought Cucumber was an odd mix of English and Ruby, but it helps effectively tell a story. Since returning from LSRC, I have started working on my first Cucumber test.

Yehuda Katz, Bundler
This was just a lightning talk about Bundler, which I had read about briefly online. Seeing the work that was done for this blew me away. I can honestly say I hope it takes over the Ruby world. We have been dealing with so many problems related to gems at Devver, and if Bundler becomes a standard, it will make the Ruby community a better place. I am really excited about this, so go check out the Bundler project now.

Rich Kilmer, Encoding Domains
The final keynote of the event was about encoding domains. I didn’t really know what to expect going into this talk, but I was happily surprised. Rich talked about really encapsulating a domain in Ruby and then being able to make the entire programming logic much simpler. He gave compelling examples of working with knowledge workers in the field and writing code with them to express their domain of knowledge in Ruby. Live coding the domain with experts he jokingly called “syntax driven development”: you write code with them until it doesn’t raise syntax errors. Rich spoke energetically and kept a tiring audience paying attention to his stories about projects he has worked on throughout the years. Just hearing from people who have created successful projects and have been working with Ruby in the industry this long is interesting. There were great little pieces of knowledge shared during the talk, but again, this was a talk where it was too hard to pull out tiny bits of information, so I recommend looking for the video when it is released.

Final Thoughts
LSRC was a good time beyond hearing all the speakers. In fact, like most conferences, some of the best knowledge sharing happened during breaks, at dinner, and in the evenings. It also gave me a chance to get to know some of the community better than just faceless Twitter avatars, and it was fun to talk with Ruby people about things that had nothing to do with Ruby. I am also interested in possibly living in Austin at some point in my life, so it was great to check it out a bit. Friday night after the conference, I went out with a large group of Rubyists to Ruby’s BBQ, which was delicious. We ate outside with good food, good conversation, and live music playing next door. As we were leaving, someone pointed out that the guitarist playing next door was Jimmie Vaughan, brother of the even more famous Stevie Ray Vaughan. We went over to listen to the show and have a beer, which quickly changed into political speeches and cheers. Suddenly I realized we were at a libertarian political rally. I never expected to end up at a Texan political rally with a bunch of Rubyists, but I had a good time.

Hopefully the next Ruby conference I attend will be as enjoyable as LSRC was. Congrats to everyone who helped put the conference together, and to all those who attended the event and made it worthwhile.

Written by DanM

September 3, 2009 at 3:58 pm

Using Devver on OS X 10.6 “Snow Leopard”

If you’re trying to use Devver and you see this error:

/Library/Ruby/Gems/1.8/gems/eventmachine-0.12.8/lib/em/connection.rb:302:in `start_tls': undefined method `set_tls_parms' for EventMachine:Module (NoMethodError)
from /Library/Ruby/Gems/1.8/gems/devver-2.4.1/lib/client/mod_client.rb:32:in `post_init'
from /Library/Ruby/Gems/1.8/gems/eventmachine-0.12.8/lib/em/connection.rb:43:in `new'
from /Library/Ruby/Gems/1.8/gems/eventmachine-0.12.8/lib/em/connection.rb:36:in `instance_eval'
from /Library/Ruby/Gems/1.8/gems/eventmachine-0.12.8/lib/em/connection.rb:36:in `new'
from /Library/Ruby/Gems/1.8/gems/eventmachine-0.12.8/lib/eventmachine.rb:716:in `bind_connect'
from /Library/Ruby/Gems/1.8/gems/eventmachine-0.12.8/lib/eventmachine.rb:723:in `connect'
from /Library/Ruby/Gems/1.8/gems/devver-2.4.1/lib/client/client.rb:125:in `push_tests'
from /Library/Ruby/Gems/1.8/gems/devver-2.4.1/bin/devver:145
from /Library/Ruby/Gems/1.8/gems/eventmachine-0.12.8/lib/eventmachine.rb:1503:in `call'
from /Library/Ruby/Gems/1.8/gems/eventmachine-0.12.8/lib/eventmachine.rb:1503:in `event_callback'
from /Library/Ruby/Gems/1.8/gems/eventmachine-0.12.8/lib/pr_eventmachine.rb:342:in `run_timers'
from (eval):44:in `each'
from (eval):44:in `each'
from /Library/Ruby/Gems/1.8/gems/eventmachine-0.12.8/lib/pr_eventmachine.rb:339:in `run_timers'
from /Library/Ruby/Gems/1.8/gems/eventmachine-0.12.8/lib/pr_eventmachine.rb:322:in `run'
from /Library/Ruby/Gems/1.8/gems/eventmachine-0.12.8/lib/pr_eventmachine.rb:318:in `loop'
from /Library/Ruby/Gems/1.8/gems/eventmachine-0.12.8/lib/pr_eventmachine.rb:318:in `run'
from /Library/Ruby/Gems/1.8/gems/eventmachine-0.12.8/lib/pr_eventmachine.rb:64:in `run_machine'
from /Library/Ruby/Gems/1.8/gems/eventmachine-0.12.8/lib/eventmachine.rb:242:in `run'
from /Library/Ruby/Gems/1.8/gems/devver-2.4.1/bin/devver:144
from /usr/bin/devver:19:in `load'
from /usr/bin/devver:19

… it’s because EventMachine (and other compiled gems) may act a little funky after installing Apple’s latest OS.

In this post, I’ll go through some steps that should help you resolve this error (… assuming you use MacPorts. If you don’t, please tailor these directions accordingly). But first:

  • Warning: The directions below may leave your system in an unknown state. Please, please, please backup everything before going further. Time Machine is your friend!
  • Disclaimer: These steps worked for me, but I can’t guarantee they’ll work for you. If you have any problems, please let me know in the comments or email support@devver.net.
  • Note: If you already installed Devver before upgrading to Snow Leopard, things will likely continue to work fine for you for now – but these directions may be helpful when you upgrade Devver clients.

OK, now that we’ve got that out of the way …

1. Upgrade MacPorts

Try running something like:

port list installed

If you get an error like this:

dlopen(/opt/local/share/macports/Tcl/pextlib1.0/Pextlib.dylib, 10): no suitable image found.  Did find:
/opt/local/share/macports/Tcl/pextlib1.0/Pextlib.dylib: no matching architecture in universal wrapper
while executing
"load /opt/local/share/macports/Tcl/pextlib1.0/Pextlib.dylib"
("package ifneeded Pextlib 1.0" script)
invoked from within
"package require Pextlib 1.0"
(file "/opt/local/bin/port" line 40)

You need to install the latest version of MacPorts. Go to the MacPorts site, download the ‘Snow Leopard’ dmg, and install it.

2. Reinstall openssl and ruby

Now that MacPorts is installed, do this:

sudo port sync
sudo port upgrade openssl
sudo port upgrade ruby

You could also try

sudo port upgrade ruby186

if you want Ruby 1.8.6, but I haven’t tried this myself.

3. Reinstall EventMachine

sudo gem uninstall eventmachine
sudo gem install eventmachine

And you’re done!

Now you should be able to run Devver normally. If that didn’t work, please let me know!

Bonus: Reinstall other gems

Matt Aimonetti has written a handy script (Update: here is my fork, with a new gem path and an important bug fix on line eight) that will find the gems you should reinstall once you’ve done the above steps. The only change you’ll need to make is to replace the gem path:

#Dir.glob('/Library/Ruby/Gems/**/*.bundle').map do |path|
Dir.glob('/opt/local/lib/ruby/gems/**/*.bundle').map do |path|

For more info, you can find a great guide on resolving Ruby-related problems when upgrading to Snow Leopard at rubyonrails.org.

Written by Ben

September 3, 2009 at 1:59 pm

Posted in Ruby

Announcing Devver as a Lone Star Ruby Conference Sponsor

We are very happy to be a sponsor of LSRC. I am especially excited because that means I get to attend the event. I am looking forward to getting a chance to meet another Ruby community, as I have never been to Austin, Texas, and it seems like there are a lot of exciting things going on with the Ruby community there. Find me and come by to talk about Ruby, testing, or Devver. Devver is also currently hiring, so if you are attending the conference and are interested in highly distributed Ruby systems, definitely come talk to us. It is great to get to participate in events like these and spend time with the amazing Ruby community, which is so supportive of new ideas, good code and testing, and startups.

Check out some of the great things that will be going on at Lone Star Ruby Conf this year.

I am particularly excited about:

  • Mike Subelsky: Ruby for Startups: Battle Scars and Lessons Learned
  • Larry Diehl: Dataflow: Declarative concurrency in Ruby
  • Ian Warshak: Rails in the Cloud
  • Jeremy Hinegardner: Playing nice with others. — Tools for mixed language environments.
  • Evan Light: TDD: More than just “testing”
  • Jake Scruggs: What’s the Right Level of Testing?
  • Corey Donohoe, atmos: think simple
  • Pradeep Elankumaran: Fast and Scalable Front/Back-end Services using Ruby and XMPP
  • Danny Blitz: Herding Tigers – Software and the Art of War

Looking forward to meeting everyone in Austin. Shoot me an email at dan@devver.net or message me on twitter @danmayer so we can meet up at the conference in person.

Written by DanM

July 20, 2009 at 9:15 am

Posted in Devver, Ruby

Tagged with
