The Devver Blog

A Boulder startup improving the way developers work.

Archive for July 2008

Useful Gem Shortcuts Tip

When you work in the command line all day, it can be really annoying to have to remember the exact location of everything. And when programming against a gem in Ruby, it is often useful to read over its documentation. Adding a couple of lines to your .bash_profile makes loading the documentation files for your gems much easier. Thanks to Stephen Celis for sharing this great tip.
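As a sketch of the kind of shortcut involved (the function name, and the assumption that gem docs live under `gem env gemdir`/doc, are mine, not necessarily Stephen's exact lines):

```shell
# Hypothetical .bash_profile shortcut: `gemdoc rake` opens the rdoc
# for the newest installed version of the rake gem in your browser.
gemdoc() {
  local gemdir docdir
  gemdir="$(gem env gemdir)"
  # pick the newest version directory matching the gem name
  docdir="$(ls -d "$gemdir"/doc/"$1"* 2>/dev/null | sort | tail -n 1)"
  if [ -n "$docdir" ]; then
    open "$docdir/rdoc/index.html"   # on Linux, use xdg-open instead
  else
    echo "no docs found for $1" >&2
  fi
}
```

Drop something like this in your .bash_profile, `source` it, and the docs for any installed gem are one command away.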

Written by DanM

July 29, 2008 at 9:08 am

Friday fun: What the hell is up with ABC News photos?

ABC News is a respectable news organization, right? I thought so too. That’s why I’m mystified by the abundance of shockingly bad Photoshopped pictures on their website.

It’s been my hobby to collect these visual Frankensteins for the past few months. They range from simply poor in quality:

to tacky:

to just plain strange:

They are also not particularly subtle. If they have a story about money or the economy, they either frame an image with money:

or, better yet, just put a handful of Benjamins in front of said image.

I swear to you that all of these were taken directly from ABC's website. In fact, there was a gem featuring Obama and Hillary in clearly-Photoshopped cowboy hats (I believe it was for the Texas primary), but sadly, I cannot find it.

In fact, I’m not even sure Photoshop was involved. Given the images I’ve seen, it appears an ABC executive gave a poor intern a set of safety scissors, a vat of Elmer’s glue, and an old flatbed scanner and let them go to town.

Finally, there are images that are so weird that I defy you to guess the story they accompanied.

Maybe I can get Flux Capacity to combine a bunch of these images to give us all a terrifying vision of the ‘Alice in Wonderland’/Soundgarden-video world that ABC News lives in.

Written by Ben

July 25, 2008 at 8:35 am

Posted in Uncategorized

Learning RSpec and Merb

WARNING: This post is basically completely out of date; Merb changed very fast before 1.0. Please see the official Merb documentation for current information!

We have been trying to work with some different Ruby technologies lately. We are moving to RSpec from Test::Unit, because we believe it has several advantages. It also seems all the cool projects are moving to RSpec: Rubinius, Typo, Mephisto, and of course Merb.

In learning these two technologies together, I came across a few resources that proved really useful. I thought it would be good to share them for anyone looking to write specs for their Merb projects.

If you are first learning Merb and want to create a basic project and learn to test with RSpec along the way, I can’t recommend enough that you follow the Merb Slapp tutorial. It is a great source for Merb basics that is very up to date, and it gives good examples of RSpec tests.

If you are new to Merb, the newest documentation will be your friend. I also recommend checking out the Merb Wiki. For RSpec, specifically check out these wiki pages: Merb Controller Specs, Merb Model Specs, and Merb View Specs.

There were a couple of things I had to search and stumble around for a bit: session variables and mock objects. I needed to mock the session because the action expects a user to be logged in (verified by a session variable) before it continues. I needed a mock of my ProjectWriter object because it normally makes live calls to a web service. Both are easy to do, but both are done differently than with Test::Unit and Rails. I found out about RSpec mocking and Merb session mocking at the links provided.

Here is some code that demonstrates mocking both sessions and model objects.

# create a mock object named ProjectWriter
project_writer = mock("ProjectWriter")
# the mock expects this call (the message name is illustrative)
project_writer.should_receive(:write_results).and_return(true)

@controller = dispatch_to(Project, :index) do |controller|
  # mock the session hash so the user appears logged in
  controller.stub!(:session).and_return({:logged_in => true})
  # return the mocked object instead of the real ProjectWriter
  # (accessor name illustrative)
  controller.stub!(:project_writer).and_return(project_writer)
  # we aren't testing the view, so don't render this action
  controller.stub!(:render)
end

@controller.should respond_successfully

Written by DanM

July 24, 2008 at 2:22 pm

Crappy email filtering on GoDaddy

People are complaining that GoDaddy censored a security site, bid against customers in domain auctions (more), and totally screwed up .me registrations. There’s even an entire site dedicated to exposing GoDaddy’s problems.

We had been lucky enough to never really have any problems with them. Since everything worked for us (even though we hated their UI with a passion) we have stuck around. Moving 20+ domains to another registrar just seems like a hassle. We run most of our services on our own server so we didn’t think we had much of a problem.

One service we didn’t run was our own mail servers. We didn’t want to use GoDaddy’s webmail, but thought just having GoDaddy forward mail would work fine. We forwarded all mail to our Gmail accounts, and after that it was simple to send and receive from Gmail, essentially taking GoDaddy out of the picture. We had run GoDaddy email forwarding for a few sites for over a year with no problems.

In the last couple months we started having problems with our email. We were getting reports from friends having problems emailing us. People were receiving random bounced emails, some emails when retried never would reach us, others would arrive successfully the second try. We were concerned but the issue seemed to occur rarely and usually simply resending would solve the problem.

Then the problem got worse. After sending out an email and not hearing back, we received messages explaining that every reply to our last email bounced. The bounce message warned that the message was spam or a virus and would not be delivered. Hmmm… not good. We looked into it and found that we couldn’t even send the original email to each other without getting bounce errors. The email included a link to download our presentation video from DropBox, which GoDaddy filtered as spam. We had the same issue receiving responses to emails with RightScale’s developers. After making sure I hadn’t accidentally turned on any spam protection for our forwarded email accounts, I called GoDaddy support.

Convincing GoDaddy that emails were being marked as spam by their servers, as opposed to other email servers, took a while. Finally, after talking to internal GoDaddy tech support, they acknowledged it was their email system. They explained that all email, including mail to forwarding accounts, goes through a GoDaddy-wide spam and virus scanner that won’t let anything flagged through. I explained that I wanted to disable their filtering and trust Google to do my spam filtering after my mail is forwarded. This was not an option, as the shared filter is in place for all GoDaddy email. I then asked about the criteria for flagging emails, which in our case contained no spam links or viruses. I was told that if a single virus or spam message is sent from a domain, GoDaddy blocks all emails linking to that domain. This is clearly a bad policy.

Blocking all of DropBox is an example of why this shared filter is bound to block valid emails. DropBox allows any arbitrary files to be uploaded by users, of course some virus-infected file has ended up on their domain. After some virus-infected file hosted on DropBox was emailed to a GoDaddy user, it was added to the blocked domain list for GoDaddy’s email filter. As a result we can never mail a presentation video file hosted on DropBox.

That’s pretty amazing, but the best part is that there is simply no way to opt out and no way to get domains removed from their blacklist. One of the scariest things is that they only filter incoming mail: we can send out supposedly virus-filled emails, but if anyone hits reply, the reply bounces because our link is quoted in the response message. This leaves the sender blind to the problem. Who knows how many people we have emailed in the last 3 months who tried and failed to ever respond to us.

After learning all of this, I did the only reasonable thing. I switched mail providers so our email wouldn’t touch GoDaddy at all. We are now hosting email accounts with Gmail for your domain. It was easy enough to set up, and we have already seen that we can send supposedly risky emails through our new email servers. I guess I should start listening to everyone’s horror stories about GoDaddy and the next time I purchase a domain, I can slowly start moving away from the terrible beast.

Written by DanM

July 22, 2008 at 3:22 pm

Posted in Misc, Tips & Tricks

Our San Francisco Wrap Up

We had a very exhausting but incredibly useful day last Wednesday. After being invited to give a talk at Pivotal Labs, we planned a quick trip to San Francisco and arranged other talks with various Ruby developers. We arrived in SF a couple hours before our first meeting, so we headed to a coffee shop to grab some breakfast. We booted up 15 servers so we could do a live demo of Devver. Then we headed out to meet up with Aman Gupta from Kickball Labs (no site yet). Aman contributes to various projects including EventMachine and Ramaze. We ended up discussing distributed Ruby messaging systems and Ruby web frameworks, which seemed to be a shared interest.

Next we headed over to Pivotal Labs to prepare to give our demo. Pivotal has an awesome setup, so it was cool just to check out their office. I particularly liked the flat screen displaying the current status of all their projects and the Wii/Rockband setup. Pivotal recorded our entire session, so we will link to the video when they put it up on the web. I think this was the best session we have had yet talking with developers about Devver. The group asked a lot of questions, and really shared the pain points for their teams in terms of Ruby development. We are hoping to get a few things finished and then find some more time to talk with the Pivotal Labs teams. Thanks again to Pivotal for inviting us out in the first place.

While at Pivotal, we got a chance to talk to some people from other Ruby shops around SF and a friend of a friend, Todd Sampson from MyBlogLog. One person we got to talk with was James Lindenbaum from Heroku. Heroku is also working on some awesome things in Ruby, like the Rush Ruby shell. We are also fans of Heroku because they are big proponents of Ruby testing, so it was cool to hear their thoughts on what we were up to.

We grabbed some lunch provided by Pivotal and ran over to Yahoo’s Brickhouse for our next talk. We met up with the FireEagle team, which is working on some cool location-based stuff in Ruby. FireEagle is the largest Ruby team within Yahoo, so it was great to hear their opinions on Ruby development and their team’s process. We sat around talking about current Ruby tools and what kinds of tools and code statistics they would like to see. Seeing more overall statistics on a project seemed like a minor but recurring theme of our SF trip.

The last part of our SF trip was a happy hour with the SF Ruby Meetup Group. Unfortunately, due to some poor planning on my part, it fell apart. A Rubyist from the group had recommended meeting at Thirsty Bear, but the bar happened to have a private party that night. I arrived about 10 minutes before we were supposed to meet, with no real access to email. So I left information at Thirsty Bear about heading to the nearest bar, hoping some Ruby users would find me… Only one person from the Ruby Meetup managed to find me, John Mount of Venue Software. I guess I might have to try this again next time we visit.

Thanks to everyone we got to chat with; we had a great and fast visit. We learned so much that we have actually had to take a step back and really think about the best direction to pursue. I think the information we gathered during our brief trip will help shape our decisions for some time to come.

Written by DanM

July 20, 2008 at 3:05 pm

Posted in Devver, Misc, Ruby

Devver in the Boulder County Business Report

Devver, along with all the other TechStars 2008 teams, was featured in the Boulder County Business Report. It is a nice write-up giving a little introduction to each of this year's companies.

Devver in the BCBR

Written by DanM

July 18, 2008 at 2:14 pm

Posted in Uncategorized

Product Management with Niel Robertson

Update: Changed "Project" to "Product"; I didn’t realize the difference. Niel pointed out in another thread that they aren’t the same.
Niel sent the actual PowerPoint, which you can check out for yourself: product-management-v1.

We have met with Niel Robertson before, but this was the first time we got to see him present a topic. He gave a talk at TechStars on why Product Management (PM) is important for startups. In fact, he went as far as saying the number one thing that goes wrong at startups might be PM. Niel described PM as a process for delivering the right features at the right time. He went on to discuss why PM can become stale, be ignored, and is often hated because it is associated with excessive documentation, which it shouldn’t be. He mentioned more than once that the best feature requests, requirements, and specs are often just one sentence.

Niel presented PM as a general, evolving process and deliberately avoided particular PM systems or methodologies. He said he is totally document agnostic and that all systems have positives and negatives in terms of PM. It came across that what system is used isn’t important, but that everyone at every stage of the project must write things down. Ideally, he said, every test case should be traceable all the way back through spec and requirement to the original feature. One thing Niel did say was specifically worth doing is completing a feature review before QA: have an engineer walk through the project and show each of the requirements being fulfilled, with the product manager and QA lead attending the review.

His basic points about why every product can benefit from PM are:

  • 30 seconds to write a requirement
  • 2 minutes to clarify it in discussion
  • 5 minutes for an engineer to spec it
  • Hours or days to prototype, write code, integrate, or deploy it

Because of these points, he doesn’t want to hear that a team is too small and agile to follow a PM process without being slowed down. In the long run, following a process will be faster. That point really hit home with a concrete example he gave: “If a project is UI intensive absolutely do not start coding, go to a white board.” This resonated with me, because we made a huge mistake with my last startup by getting into the prototype without spending enough time thinking about UI. He said, “Don’t tell me that product management is something you do at a bigger company that has more complexity.”

As my notes are often just a list of key points that caught my attention, I will do as I often do and share some favorites.

  • How to write a good requirement… “The user should be able to…”
  • How to respond with a good spec… “The user can do that by… doing X… List the exceptions”
  • A spec is well written when QA can figure out how to test a feature based on the spec.
  • He doesn’t encourage people to shotgun tons of things to market. “When I make spaghetti, I try not to throw all of it against the wall.”
  • On gathering data: go out and talk to people, collecting data points about the problem you are solving, until you start hearing the same things and can’t learn anything new. Then go work on it, armed with all that data.
  • Niel doesn’t recommend a developer also taking on the role of PM, as there needs to be a tension between who represents the user and who implements the product.
  • “The PM should be the most empowered employee in your company… Yes, even more than the CEO”

There were a ton of other good thoughts on what exactly a PM does, and how to select a good PM, but as always I can’t really do a presentation justice. Thanks Niel for the great talk.

Update: There is a great discussion going on at Hacker News.
Niel sent his PowerPoint slides after a few people asked for them: product-management-v1.

Written by DanM

July 10, 2008 at 8:12 am

Posted in TechStars

What's your CPD?

A little under two weeks ago, we had the opportunity to give a quick presentation/demo of Devver to our fellow TechStars and a few investors.

We worked on the presentation/demo a fair amount. We tweaked the slides to convey both our current focus as well as our broader vision. We created an entire fake website to show how Devver would look to our users. We crunched some numbers so we could impress everyone with the time savings of using Devver. And we practiced, over and over again, what we would say so our message would come across clearly and powerfully.

And what did people take away from the whole thing? Dots. Freaking dots.

“These must have been some nice-looking dots!” you say. Oh, no. We’re not talking Apple-esque, shiny, colorful, Web 2.0 dots. We’re talking plain-old-ASCII-period-character-displayed-on-the-command-line dots.

So what made these dots so memorable?

You see, one part of our demo was showing how Devver can significantly speed up unit testing. For those of you who don’t know, developers write unit tests to automatically check their code for bugs. Just think of them as a custom diagnostic suite that programmers write and run – a lot.
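To make that concrete, here is a minimal sketch of a unit test (written with Minitest, which ships with modern Ruby; the Test::Unit style of 2008 looked nearly identical). The class under test is invented for illustration:

```ruby
require 'minitest/autorun'

# A tiny piece of "production" code to diagnose.
class Cart
  def initialize(prices)
    @prices = prices
  end

  def total
    @prices.sum
  end
end

# The custom diagnostic suite: each passing test method prints one dot.
class CartTest < Minitest::Test
  def test_total_sums_item_prices
    assert_equal 5.75, Cart.new([3.50, 2.25]).total
  end

  def test_empty_cart_totals_zero
    assert_equal 0, Cart.new([]).total
  end
end
```

Run the file and the test runner prints a dot per passing test, which is exactly the output our demo revolved around.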

According to our survey, most Ruby developers run their unit tests using a command-line tool called Rake, which (and this is key) prints out a dot every time a test is completed. Here is Rake running a very small test suite for Flexmock:

$ rake
(in /Users/ben/code/gridtest/trunk/oss/flexmock/trunk)
Loaded suite
Started
...........................
Finished in 0.405401 seconds.

325 tests, 526 assertions, 0 failures, 0 errors

Our demo was simply two terminal windows. In one, we showed how things work today: we ran Rake on a single laptop on a big, slow suite of tests. Because it was running on just one machine, each test took awhile, so the dots printed … very … slowly.

In the second terminal window, we ran the exact same tests on Devver. Since Devver runs the tests on a ton of machines simultaneously, the tests execute very quickly and therefore the dots printed very, very quickly (in fact, Devver ran the test suite around 75% faster).

Afterward, I talked to a lot of people who said things like “I finally get what you guys are doing” or “Now I see why your system could save time” or my personal favorite, “I don’t understand unit testing, but I could tell something good was happening – those dots were moving really fast!”

People liked the dots so much that there is now a running joke that Devver is just a company that makes dots print on the terminal – we exist to decrease companies’ “cost per dot”, or CPD.

The lesson for us is that while it’s important to explain our vision to users and investors, sometimes simply showing them the most basic thing is what really leaves an impression. As we refine our presentation, we’ll make sure to keep those dots.

Written by Ben

July 9, 2008 at 11:33 am

Posted in Devver

Devver is on Twitter

After being asked by just about everyone what our company’s Twitter account was, we finally created one. We now have our very own Devver Twitter account. You can follow us to see what we are up to, and comment about Devver on Twitter with the all-powerful @devver symbol. Hopefully we will have some interesting thoughts and updates to broadcast out to the world. If you are into Ruby or development tools, here is one more way to keep tabs on what we are up to.

Written by DanM

July 8, 2008 at 6:19 pm

Posted in Devver, Misc

Tips for Unit Testing

For the past few weeks, I’ve been doing a series of posts on my thoughts on unit testing. Although I originally published them in little, bite-sized posts, I wanted to collect them all here in one massive post for those of you with bigger reading appetites.

I also wanted to add one thought to sort of tie all these tips together. Unit testing is all about improving productivity. It’s important to realize that the ROI for testing looks something like this:

[Graph: ROI vs. time invested in testing ("this graph is very exact")]

A very professional-looking graph. I guess ROI should really be ‘benefit’, but whatever, you get it.

If you are just getting started with unit testing, you’re at the bottom of the curve, so you’re going to sink a lot of time into testing without much benefit. Similarly, once you’ve done a lot of testing on a project, trying to test that last little bit may require more time than it’s worth. The goal of these tips is to help you maximize the benefit-to-time ratio, wherever you may be in this curve.

We’re big on automated testing here at Devver, but I know a lot of companies aren’t as into it. There’s been plenty written about all the reasons you should be writing tests, but over the next week or so, I’ll give you some tips on how to get started (and if you’ve already got some tests, how to improve and expand your test suite).

I can’t claim to have come up with these best practices, so I’ll litter this post with links to those resources that have taught me something.

A quick word about terminology. When I say “tests” I mean any type of automated tests, which may include unit, functional, integration or any other types of tests. When I say “production code” I simply mean the code that goes into the actual product – i.e. the code being tested.

Tip 1: You’ll probably suck at testing
Writing tests can be frustrating at first. It is usually a lot harder and more time consuming than you’d expect. Unfortunately, some developers assume that the cost of writing tests is fixed and conclude that the benefits can’t possibly justify the time spent – so they quit writing tests.

Writing test code is an art unto itself. There are a whole new set of tricks and skills to learn and it’s difficult to do correctly right away. Stick with it. The better you get, the faster you’ll write tests, and the more your tests will pay off.

Tip 2: Most code is not written to be tested
Another surprising thing you’ll find when you start testing is that your production code is not very testable. This makes sense – if there were no tests previously, there was no reason to design for testability. It will make your first tests much harder to write and less valuable (i.e. they are less likely to catch real bugs).

There are a few tricks to get around this. First, try testing only new code or just test a smaller side project to start to get the hang of it. When you’re ready to start testing your legacy application, try the following.

1. Write a few very high-level tests. These tests will likely exercise almost the whole system and will interact with the application at its highest-level interface.
2. Refactor out one component of the application so it is more decoupled and testable.
3. Continually run your high-level tests to make sure you haven’t broken anything major.
4. Write more focused tests for the component you pulled out in step #2.
5. Go back to step #2.
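Step 1 might look like the following sketch: one coarse test that drives the app’s top-level interface and checks only that the overall output is sane (the app and its behavior are invented for illustration, using Minitest):

```ruby
require 'minitest/autorun'

# Stand-in for a legacy application's highest-level entry point.
class LegacyApp
  # Parse a comma-separated string into clean, non-empty fields.
  def run(input)
    input.split(',').map(&:strip).reject(&:empty?)
  end
end

# A high-level "safety net" test: it exercises the whole pipeline, so
# the refactoring in step 2 can proceed without silently breaking things.
class HighLevelTest < Minitest::Test
  def test_whole_pipeline_produces_expected_output
    assert_equal %w[a b c], LegacyApp.new.run('a, b, , c')
  end
end
```

A handful of tests like this won’t catch subtle bugs, but they will scream loudly if a refactoring breaks the system end to end.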

If you need more help with this, pick up a copy of Working Effectively with Legacy Code. There is also some additional information here.

Again, stick with it. As you write more tests, your application will become more testable (bonus: it will likely be easier to understand, more loosely coupled, easier to refactor, and more DRY as well!). As it becomes more testable, it’ll be easier to write additional tests. This creates a positive loop where things get better and easier as you go.

Tip 3: Test code isn’t production code
Another common mistake is to treat test code just like production code. For instance, you’d like your production code to be as DRY as possible. But in test code, it’s actually more important for tests to be readable and independent than to be DRY. As a result, you’ll want your tests to be more “moist” than DRY. Specifically, you’ll want to use literals a lot more in test code than you would in production.

In general, the most important properties of good tests are:

Independent – No test should affect the outcome of any other test. Put another way, you should be able to run your tests in any order and always get the same outcome. A corollary of this is that setup/teardown methods are evil (both because they increase dependence and because they decrease readability).
Readable – The intent of each test should be immediately obvious (both from its name and from its code).
Fast – Each test should run as quickly as possible, so the entire suite is also fast. The faster the suite, the more you’ll run the tests, and the greater benefit you’ll get (because you’ll catch regressions quickly).
Precise – Each test should focus on testing one thing (and only one thing) well*. Ideally, if a test fails, you should know exactly what part of your production code broke just by glancing at the name of the test. Also, if your tests are precise, it’s less likely that a change in your code will require you to change many different tests. In practice, precise tests are short and have only one assertion or expectation per test.

*Note: this doesn’t apply to integration tests, which should make sure all components play nicely together.
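As a sketch of these properties in practice (the Stack class and test names are invented), each test below builds its own data (independent), has a descriptive name (readable), and makes a single assertion (precise):

```ruby
require 'minitest/autorun'

# Minimal class under test.
class Stack
  def initialize
    @items = []
  end

  def push(item)
    @items.push(item)
  end

  def pop
    @items.pop
  end

  def empty?
    @items.empty?
  end
end

class StackTest < Minitest::Test
  # No shared setup: each test constructs exactly the state it needs,
  # so the tests can run in any order with the same outcome.
  def test_new_stack_is_empty
    assert Stack.new.empty?
  end

  def test_pop_returns_last_pushed_item
    stack = Stack.new
    stack.push(1)
    stack.push(2)
    assert_equal 2, stack.pop
  end
end
```

If `test_pop_returns_last_pushed_item` fails, the name alone tells you which behavior broke, which is the point of precision.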

Tip 4: Always write one test

When writing new code, it’s easy to avoid testing because it seems so daunting to test all the functionality. Rather than thinking of testing as an all-or-nothing proposition, try to write just one good test for the new functionality.

You’ll find that having just one test is much, much better than having no tests at all. Why? First of all, it’ll catch catastrophic errors, even if it doesn’t catch bugs in edge cases. Secondly, writing even one test may force you to refactor your production code slightly to make it more testable (which in turn, makes future tests easier to write). Finally, it gives you “test momentum”. If you have no tests, you’ll be inclined to delay testing, since there is more overhead to get started. But if you already have just one test in place, it’ll be much easier to add tests as you think of them (and to write regression tests as you find bugs).

By the way, don’t worry about testing at exactly the right level. Having one functional test is way better than having no tests at all. You can always come back and break the “bigger” test down into more targeted, precise tests.

Tip 5: Improve your tests over time

Here’s a terrible idea – decide you are going to spend a whole week building a test suite for your project. First of all, you’ll likely just get frustrated and burn out on testing. Secondly, you’ll probably write bad tests at first, so even if you get a bunch of tests written, you’re going to need to go back and rewrite them once you figure out how slow, brittle, or unreadable they are.

As they say, the best writing is rewriting. You should try out new techniques and rewrite old test code. But it’s OK to have patchwork tests.

You just found out fixtures suck? (they do). Or that those ‘setup’ methods make your tests less readable? Are you excited about using mocks? Great, apply your new technique to some new tests, rewrite a few old tests, and call it a day. Don’t try to rewrite your whole suite, because you’ll be kicking yourself when you rewrite your suite again after you decide technique X isn’t perfect in all cases.

Just like in production code, good practices take a while to bake and prove themselves. See how maintainable, understandable, and readable a new technique turns out to be. You can always move more tests over later.

Tip 6: Don’t be dogmatic

There are a lot of best practices for testing that may or may not apply to your situation. Should you have one assertion per test? Should you use mocks and stubs? Should you use Test Driven Development? Or Behavior Driven Development? Should you do interaction or state-based testing? While all of these practices have real benefits, remember that their applicability and value depends largely on your project, schedule, and team.

Don’t be afraid to play, but don’t feel like you need to convert everything to the one, true way to test. It’s fine to have a suite that mixes and matches these best practices. In other words, context is king.

Tip 7: Be reasonable

There are lots of reasons why tests are great, but if your practices aren’t ultimately making your code better and you more productive, it’s not worth it. You have to always think about the return on your time investment.

There are domains in which automated testing is very difficult and doesn’t provide a lot of value, like GUI testing. I would recommend writing tests for the interface that the GUI calls, but actually testing that things show up correctly is quite tricky and error prone.

Also, 100% code coverage shouldn’t necessarily be your goal. As you get better at writing tests, I think you’ll find they provide a lot of value, but at some point, covering that last small percentage of code may require way more effort than it’s worth.

Tip 8: Keep learning!

Just like learning new programming languages makes you a better developer, learning about new testing approaches, libraries, and tools will make you a better tester. The state of the art of testing is changing very rapidly these days – new frameworks and techniques are released almost every month. Keep looking at example code and trying out new stuff.

For instance, here’s a few tools that you may not be using but are very cool: Heckle and RushCheck

Finally, if you want to learn more, subscribe to Jay Fields’ blog – he has lots of good (if sometimes controversial) thoughts about testing.

And with that, I’ll wrap up this series on testing. If you have your own testing tips, please share them!

At Devver, we’re building some awesome, cloud-based tools for Ruby hackers. If you’re interested, sign up for our mailing list.

Written by Ben

July 7, 2008 at 4:31 pm

Posted in Testing
