
Simulating API Responses

I've been working on a project recently which has involved writing an API client for an API I don't actually have access to. This has mostly meant implementing to the documentation provided, but it has also required the development of a simulation service to emulate the responses from the real API while the application is under development.

The client library maps the API responses to domain objects defined with Virtus, so my first thought was to use factory_girl to build the data for the simulator responses. When I started looking into this, I realized that factory_girl is heavily geared towards the testing use case, and as such its definitions are stored in the global FactoryGirl context.

Since the simulation data might be used in an app that has its own factory_girl factories, I didn't want to risk any naming conflicts. It also just seemed wrong to define the simulator factories in that manner. What I wanted was a way to obtain a one-off factory definition without polluting that global namespace.

After poking around in the factory_girl internals a bit, I found that you can get a Factory instance by doing something like this:

factory = FactoryGirl::Factory.new(MyModel)

The Factory doesn't provide a mechanism for defining it, however. The definition is stored in an attribute on the factory, and internally FactoryGirl uses a proxy object to provide a #method_missing DSL that maps method names to attributes on the model.

proxy = FactoryGirl::DefinitionProxy.new(factory.definition)
proxy.instance_eval(&block)

Once this has been done, the factory can be built using a particular strategy. Since all I'm concerned about here is having the attributes populated, I went with the Build strategy:

factory.run(FactoryGirl::Strategy::Build, overrides, &block)

I ended up encapsulating all this into my own Simulator object, which can be used to define a separate simulator for each domain model.

require 'factory_girl'

class Simulator
  def initialize(model, options={}, &block)
    self.model = model
    self.strategy = options.fetch(:strategy) { FactoryGirl::Strategy::Build }
    self.factory = define(&block)
  end

  def build(overrides={}, &block)
    factory.run(strategy, overrides, &block)
  end

private

  attr_accessor :model
  attr_accessor :strategy
  attr_accessor :factory

  def define(&block)
    FactoryGirl::Factory.new(model).tap do |factory|
      proxy = FactoryGirl::DefinitionProxy.new(factory.definition)
      proxy.instance_eval(&block)
    end
  end
end

A Simulator can be defined like so:

simulator = Simulator.new(MyModel) do
  value { rand(100) }
end

my_model = simulator.build # => #<MyModel:0x007fcdbc8d40c8 @value=42>
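
Overrides passed to #build are handed straight to the factory, so specific attributes can be pinned down when a random value won't do. A quick example, reusing the simulator above:

my_model = simulator.build(value: 7) # => #<MyModel:0x007fcdbc911e10 @value=7>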

This can then be wrapped up in an object of its own:

class MyModelSimulator < Simulator
  def initialize
    super(MyModel) do
      value { rand(100) }
    end
  end
end

This becomes useful in all sorts of scenarios, including cases where you need to test the consumption of the data returned by the API client but don't want to hit the actual API to do so.

allow_any_instance_of(MyClient)
  .to receive(:my_model)
  .and_return(MyModelSimulator.new.build)

You could also use this to build a simulated version of the API.

# GET /simulator/my_model
def show
  render json: MyModelSimulator.new.build, serializer: MyModelSerializer
end

This approach shows potential, and I believe it merits further investigation. It isn't as reliable as using VCR to record actual server responses, but the ease of setup and the speed and flexibility it affords might be worth the trade-off, since ultimately the responsibility of testing the client-server contract falls on the client library, not the application using it.


Global .gitignore

Git is really good at keeping a perfect history of the changes to your files. That's a great quality to have when those files are code, but not so great when a bit of application configuration slips in.

Here are some things you probably don't want in version control:

  • API keys and secret tokens for external services
  • environment configuration unique to a developer's particular computer
  • editor-specific files (e.g. *.swp Vim swap files)
  • OS-specific files (e.g. .DS_Store on OS X)

It's common to add these sorts of things to the project's .gitignore file, but I often find I'm ignoring the same files over and over again as I start on new projects. This is where the global .gitignore comes in. It acts just like the per-project .gitignore files, but it's automatically picked up by every repository.

Do this right now:

touch ~/.gitignore
git config --global core.excludesfile ~/.gitignore

No really… I'll wait.

Now, the .gitignore file in your home directory acts just like the .gitignore files you're used to in the repositories, but it applies equally to all your repos. I wish Git enabled this out of the box, because I think the feature would get a lot more use if that were the case.

My ~/.gitignore

These are some of the entries in my global .gitignore that you might want to add to yours:

# Mac OS X hidden files
.DS_Store

# Vim swap files
.*.sw?

# Pow and Powder config
/.pow*

# RVM and rbenv
/.rvmrc
/.rbenv-version

# Bundler binstubs
/bin/

Additional Resources

  • The gitignore(5) man page has some details on the pattern format and globbing used in .gitignore files.
  • GitHub has a nice writeup on ignoring files, which also includes a useful bit about excluding files from individual repos.

More to Come

I want to start documenting the development practices that I find useful, in the hope that they might also be useful to you. It's something I've been thinking about for a while now, and what better time to start than the beginning of a new year? This is the first in what I hope will be a series of similar posts with the tips and tricks that have become habits for me.

Local Environment Variables

Applications today rely on an ever-increasing number of external services, each requiring their own set of API keys and configuration parameters. It makes a lot of sense to store these settings in environment variables, so your configuration logic can be written once but still allow for tweaks among multiple environments.

If you've ever used Heroku, you're no doubt already familiar with this concept, whether you know it or not. The heroku config commands are effectively just manipulating environment variables on your dynos. Most of Heroku's add-on services also inject environment variables for their own configuration.

Using environment variables is also a great idea for local development. By doing so, you can avoid adding sensitive API credentials into the code repository. Environment variables also allow all the developers on a project to have their own local settings without affecting the rest of the team.

Airbrake.configure do |config|
  config.api_key = ENV['AIRBRAKE_API_KEY']
end

If you're using Pow for local development, you may be familiar with the .powenv file, which can be used to expose environment variables to your local app. Unfortunately, this only gets us halfway to where we need to be. It sets up the environment for the app server, but what about the command line tools? It would be nice if we could use those same environment variables when running tests, Rake tasks, and the Rails console.

Making Environment Variables Accessible to Binstubs

Note: If you haven't read my post on Implicit Gemsets with rbenv yet, go read that first, as the material below builds on the concepts outlined there.

Well, as it turns out, Bundler has the ability to modify the shebang line used by the binstubs it outputs. We're going to use that to create a custom wrapper around the ruby executable that loads our environment variables before handing off execution to the Ruby interpreter.

Here's what the wrapper looks like:

#!/bin/bash

for file in .powrc .powenv; do
  if [ -f "$file" ]; then
    source "$file"
  fi
done

/usr/bin/env ruby "$@"

This script is responsible for loading the .powrc and .powenv files from our local application into the environment using the source command, which makes all the environment variables defined in those files available to the currently running shell. It then passes all the given arguments ($@) to the ruby command.

I've placed the script into the file ~/.bundle/powenv. You'll also want to make sure it's executable:

chmod +x ~/.bundle/powenv

Once you've done that, run the following command from a directory inside your app:

bundle config --local shebang "$HOME/.bundle/powenv # ruby"

There are a few things you should note here.

  1. The "--local" flag dictates that this setting will only be applied to the Bundler configuration for this particular app. If you want to use it in all your apps, you can use --global instead.

  2. The "$HOME" variable will be replaced with the path to your home directory before the shebang line is written to the configuration file. This is important because the shebang line is not evaluated at runtime, so it won't be able to interpolate variables like $HOME or even the ~ home directory expansion.

  3. The "# ruby" bit at the end is a trick to let Ruby know that our binstubs are, indeed, Ruby scripts. Without it, Ruby will complain with an error like this:

     ruby: no Ruby script found in input (LoadError)

    Basically, it just needs to see the string "ruby" somewhere in the shebang line, so we add it as a comment at the end.

Now, run bundle to regenerate the binstubs with the new shebang, and you should be all set.

bundle
./bin/rake

From now on, all the commands you execute via binstubs will have access to the same environment variables as your app running under Pow.


Configuring Rails Apps

Applications today rely on an increasing number of external services, each requiring their own set of API keys and configuration parameters. There are several different approaches you can take to organizing these settings in a Rails app. Here's the process I went through in figuring out how to best store those configuration values so they stay organized and easy to manage.

config/

Since Rails is all about convention over configuration, the config directory is probably a good place to start. It's obviously where the configuration is supposed to go, but there are a handful of places under that directory to choose from.

config/application.rb

The config/application.rb file is the starting point of the Rails configuration process. There are a bunch of framework-level parameters in here that are generated along with the rest of the Rails app, but it's probably best to leave this for those types of settings.

class Application < Rails::Application
  config.encoding = 'utf-8'
end

If you can't set it via the #config method, I'd argue that it doesn't belong here.

config/environments/*.rb

There are files under config/environments for each of the default environments in Rails (development, test, production). This breakdown provides us with a natural segregation among the various environments for settings that differ among them. For instance, in our staging environment we could use a service's sandbox credentials, so we don't end up polluting the production system or inadvertently emailing users while testing.

Like config/application.rb, though, these files are already full of Rails configuration parameters. While they may be a somewhat logical place to put other environment-specific settings, they've always felt like a better place for framework-level configuration to me. Using them for things like external service configuration is a bit awkward.

config/initializers

Creating separate files under config/initializers for each library and service is a good alternative location. Segregating by service keeps the configuration well-organized and easy to maintain, but for settings that change among environments, you'll end up with a lot of conditional blocks like this:

if Rails.env.production?
  # production configuration
elsif Rails.env.staging?
  # staging configuration
else
  # local configuration
end

One common way around this is to use a YAML file under config. The initializer can then load and parse this YAML file, using the values under the key that matches the environment name.

config = YAML.load_file(Rails.root.join('config/redis.yml'))[Rails.env]
Redis.current = Redis.new(host: config['host'], port: config['port'])

This brings with it its own set of issues, though, including the fact that you still have to either store your credentials in the repository or do the symlink shuffle during deployment.

Environment Variables

This is where environment variables come into play. They're great because they let you write your configuration logic once, but also allow different values to be used depending on the environment or even a specific server instance. If you've ever used Heroku, you may have noticed that their platform makes heavy use of environment variables for configuring both the platform itself and most of the add-ons.

Airbrake.configure do |config|
  config.api_key = ENV['AIRBRAKE_API_KEY']
end

Another handy side effect I've realized is that, especially with Heroku, you can easily change the value of these variables and restart the server to change all sorts of settings without having to deploy a new version of the app. I've found myself extending this concept even beyond service configuration and into business rules. For instance, storing constant values as environment variables lets you tweak your business rules after the app has been deployed.

Say I'm launching a beta, and there's a fixed waiting period before users are sent their invitation to join.

class Invite
  WAITING_PERIOD = ENV['WAITING_PERIOD'].to_f.days
end

I might start out with a value of zero to let users in right away, but if the beta starts picking up too many users, I can quickly dial back the load by increasing the wait time with a simple Heroku command:

heroku config:set WAITING_PERIOD=2

Unifying the Settings

This gets us pretty far. We now have a good place to store our configuration files and a nice method for storing values that may change, but I found myself with a growing number of these environment variables scattered throughout the config directory and, now, the app directory as well. I began documenting them in the README at the root of the app, but this quickly got out of hand, and I felt like I was constantly at risk of leaving out something that the next developer to come along would be stuck scratching his head over.

So, I started thinking about how to centralize these configuration parameters in a way that not only increased their proximity to one another, but also acted as self-documentation of sorts. Initially I thought a library like Redis::Settings might be helpful for this, but the more I thought about it, the less sense it made to rely on a server for storing these settings.

This led me to just try doing the simplest thing that came to mind: a module full of module methods, with each method corresponding to a single configuration parameter.

module Settings
  extend self

  def app_name
    ENV['APP_NAME']
  end
end

Now, wherever I had previously referenced ENV['APP_NAME'], it was just a matter of replacing it with Settings.app_name. Not only does it read better, but it's now a method so I can easily swap out the source of that bit of configuration data with anything I like (even Redis!) if the need should arise.
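
For instance, here's a minimal sketch of what a Redis-backed version might look like (assuming the redis gem and a reachable server; falling back to ENV is just one possible policy):

require 'redis'

module Settings
  extend self

  def app_name
    # Prefer the value stored in Redis, falling back to the environment.
    redis.get('app_name') || ENV['APP_NAME']
  end

  def redis
    @redis ||= Redis.new
  end
  private :redis
end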

The WAITING_PERIOD example above can even encapsulate the type conversion:

module Settings
  extend self

  def waiting_period
    ENV['WAITING_PERIOD'].to_f.days
  end
end

But I still needed to be able to handle a couple other requirements:

  • default values for when the environment variable isn't defined
  • an indication of which values are required in specific environments

The first is easy enough in Ruby. I chose to accomplish it through the use of a method argument default value.

module Settings
  extend self

  def waiting_period(default=0)
    (ENV['WAITING_PERIOD'] || default).to_f.days
  end
end
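
With the variable unset, the default kicks in, and callers can supply their own fallback:

Settings.waiting_period    # 0 days when WAITING_PERIOD is unset
Settings.waiting_period(2) # a two-day fallback instead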

The second, conditionally requiring some values to be defined based on the current environment, is simple enough, as well.

module Settings
  extend self

  RequiredValue = Class.new(StandardError)

  def app_name
    ENV['APP_NAME'] || required_in_production
  end

  def required_in_production
    raise RequiredValue if Rails.env.production?
  end
  private :required_in_production
end

With this in place, Settings.app_name will raise in production whenever the APP_NAME environment variable is missing.
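
To surface a missing variable at boot time rather than on first use, an initializer can simply touch each required setting (a sketch; the file name is arbitrary):

# config/initializers/settings.rb
# Reference the required settings so a missing variable fails fast at startup.
Settings.app_name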

The Result

I've been using this technique for all sorts of magic values, and I'm happy with the results. It has satisfied all my initial requirements, and surprised me with some added benefits along the way, like the ability to encapsulate additional logic around the values.

I'm still using a single, monolithic module to encapsulate all the application's configuration, though, and at times this feels messy. I've resorted to using method prefixes like app_ and smtp_ to keep things tidy, but this might be a good indication that additional modules would be useful.
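
A sketch of what that refactoring might look like, with the prefixed methods split into nested modules (the names here are purely illustrative):

module Settings
  module SMTP
    extend self

    def address
      ENV['SMTP_ADDRESS']
    end

    def port
      (ENV['SMTP_PORT'] || 25).to_i
    end
  end
end

Settings::SMTP.address # replaces Settings.smtp_address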


Asset Pipeline Error Pages

Customizing the error templates in a Rails application is nothing new, but it's not something I've had to do very many times. When the need for customized error pages came up recently, I thought to myself, "Self, it would be really nice if I could use Haml and the asset pipeline for this." Here's how I was able to accomplish it.

Adding HTML to the Asset Pipeline

As our first order of business, there are two things that need to be accomplished within the asset pipeline:

  1. making it aware of HTML templates
  2. adding an engine for Haml template compilation

The first can be accomplished by adding the following lines to config/application.rb.

module RailsApp
  class Application < Rails::Application
    # HTML Assets
    config.assets.paths << Rails.root.join('app/assets/html')
    config.assets.register_mime_type('text/html', '.html')
  end
end

This adds a new asset search path, app/assets/html, where we can keep all our HTML templates, and it also adds the appropriate MIME type for the .html extension.

Teaching the asset pipeline to compile Haml templates is pretty straightforward, too. I already had a config/initializers/haml.rb in place, so I just added this to the end.

# Allow Haml assets in the asset pipeline.
Rails.application.assets.register_engine('.haml', Tilt::HamlTemplate)

Then, create the requisite error pages under app/assets/html. Here's what a 404.html.haml might look like.

!!!
%html{ html_attrs('en_us') }
  %head
    %title RailsApp — 404 Not Found
    %meta{ charset: 'UTF-8' }/
    = stylesheet_link_tag stylesheet_path('error')

  %body
    %header= image_tag('logo.png', alt: 'RailsApp')

    %section
      %header
        %h1
          = image_tag('alert.png', alt: '✗')
          Oops, we couldn’t find that!

      %p The page you’re looking for doesn’t exist.

    %footer
      %small 404 Not Found

There's a small weirdness here, in that we have to use the #stylesheet_path helper to ensure the asset host configuration is used and not just a relative path.

Next, you'll probably want to make sure the HTML templates are precompiled along with the rest of the assets. Adding this line to config/environments/production.rb will pick up all the HTML assets.

RailsApp::Application.configure do
  config.assets.precompile += %w(*.html)
end

Using the Asset Pipeline for Exceptions

Now that the asset pipeline is compiling our HTML assets, the only task left is hooking them into the standard exception handler in Rails.

The default exception handler serves the static HTML files found under public, but our error pages have found a new home under public/assets now. We can reconfigure the ActionDispatch::ShowExceptions middleware by swapping it out in config/initializers/exceptions.rb and giving it the new path.

RailsApp::Application.configure do
  config.middleware.swap(
    ActionDispatch::ShowExceptions,
    ActionDispatch::ShowExceptions,
    ActionDispatch::PublicExceptions.new(
      Rails.root.join('public/assets')
    )
  )
end

In this particular app, the assets are also being pushed to S3 with Asset Sync, so I've also created custom error and maintenance pages for Heroku using the same technique, but referencing the S3 URL instead (serving the assets from the app won't work when the app is down).

So far, this arrangement has worked out great. My only real complaint is that I can't share a layout template among all the different error pages. But the fact that I can use Haml and rely on other asset pipeline resources, like stylesheets and images, is enough of a gain to make the additional configuration effort worth it.


Implicit Gemsets with rbenv

Getting a new machine is always an exciting time. It's a chance to start fresh, free of the cruft that inevitably accumulates over years of use. When that opportunity recently presented itself to me, I decided to take full advantage of it by not copying over anything until I needed it, carefully evaluating each component along the way to make sure it was still the best choice.

The biggest change so far has been switching from compiling everything myself to using Homebrew. The other big change I decided to make, inspired by a recent talk at PDX.rb, was to switch from RVM to rbenv and ruby-build.

Concluding RVM

RVM has served me well over the years as I've used it to develop dozens of applications and gems. I don't really have any complaints about it from a usability perspective, but I have noted a few annoyances of late.

  • RVM is a fairly heavyweight solution to the problem of needing to run multiple versions of Ruby. It overrides the cd command, for instance, automatically evaluating the trusted .rvmrc shell scripts scattered throughout your system as you navigate the directory hierarchy.

  • While it's nice to be able to specify an application's required Ruby version in its repository, it isn't quite as nice to do the same with the gemset name, which seems to be the modus operandi. Everyone has their own gemset naming and segregation schemes, and having to resort to hacks like git update-index --assume-unchanged .rvmrc to override the settings in the repo's .rvmrc is a pain.

  • The ~/.rvm directory can grow considerably in size over time as new Ruby versions are installed and the same gems are installed over and over again across gemsets. At last check, mine had grown to over 9 GB in size.

All of these are non-issues in the grand scheme of things, and, if anything, enumerating them just goes to show how great RVM is at doing what it's supposed to do.

Introducing rbenv

After installing Homebrew, I used brew install rbenv ruby-build to install rbenv and ruby-build. I opted to try to forgo gemsets completely, despite the fact that there's an rbenv-gemset plugin to provide that support for rbenv.

While it has evolved a bit throughout the experiment, my current strategy is to configure Bundler with the following settings:

bundle config --global bin bin
bundle config --global path .bundle

The first command configures Bundler to generate binstubs in the bin directory of the application. This part isn't really new to me. I had already been using this setting under RVM, and have gotten into the habit of always running commands as ./bin/rspec instead of bundle exec rspec.

The second tells Bundler to store the installed gems in the app's .bundle directory. This effectively causes each project with a Gemfile to be treated as an implicit gemset. To make sure this directory never finds its way into source control, I've added .bundle to my global ~/.gitignore (you may need to run git config --global core.excludesfile ~/.gitignore, too).

Issues

While this approach has worked remarkably well on the whole, there have been a few pain points during the transition.

  • I used to use RVM's rvm wrapper command to create isolated gemsets for global commands like heroku. With rbenv, I've had to resort to just using gem install and letting them live in the global GEM_HOME. In some ways, I actually prefer this to RVM's gem wrappers.

  • Because the gems are all tied to the bundle, the core Ruby commands like ruby and irb don't have access to them without using bundle exec. Through a lot of experimentation, I've found that using the rbenv-vars plugin and the following ~/.rbenv/vars file to be the best solution:

    GEM_PATH=.bundle

  • Switching Ruby versions in a project may require the binary gems to be recompiled. This probably isn't really an issue in most day-to-day development, but it's something to be aware of.

Benefits

There have been a few beneficial side effects, too.

  • I appreciate not having to deal with .rvmrc files any more. It's nice not seeing the trust prompts when changing directories, or having to resort to tricks like the cd shuffle (cd .. ; cd -) and running rvm gemset list to make sure RVM is pointing to the right place.

  • The .rbenv-version file is an effective replacement for the .rvmrc file's Ruby version specification, without the cruft of including a gemset name, too.

  • I like that all the code for an app, including its dependencies, is contained within the application root. This makes installing, moving, and removing applications simpler, in the same way that .app bundles are convenient in OS X.

  • Having the gem source code contained within the project also means it can be more easily accessed. I can now use .bundle as the path, where I previously had to use rvm gemset dir.

Summary

In making this switch, I feel a bit like I've cheated on Wayne Seguin. He's one of the nicest guys you'll ever meet, and I truly respect the work that he does. I still think RVM is a valuable tool, and I hope it continues to be used by lots of developers.

My primary motivation for giving rbenv a try was to see if I could get away with less. I'm not sure the experiment could be deemed a success on that metric alone. Each project requires a number of compromises, some of which have been detailed here. On the whole, though, I'm enjoying this rbenv approach, and I don't feel a pressing need to switch back to RVM any time soon.


Flexible Client Configuration

When authoring web service clients, Ruby Gem developers have a bad habit of forcing users to configure the client at the class level. This limits the flexibility of the client to a single set of configuration parameters at a time. Since most clients require authentication credentials, forcing the user to set them at the class level means only one account may be accessed with the client per application.

I mentioned this issue briefly in Raising the Bar, but I want to expand on it further here. I started work on a new client library this weekend, and decided to try to accommodate both of these scenarios the best I could:

  • class-level configuration to cover the most common use cases
  • instance-level configuration to allow multiple instances to be used when needed

The approach I settled on uses a top-level module to store a global instance for the former scenario and a client class that can be used directly to satisfy the latter.

Here's what a stripped-down client class might look like with a single configuration parameter for an API key.

module Service
  class Client
    attr_accessor :api_key

    def configure
      yield(self)
      self
    end
  end
end

This arrangement easily allows for our second use case:

client = Service::Client.new

client.configure do |config|
  config.api_key = API_KEY
end
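
And because #configure returns self, standing up two independent clients with separate credentials is straightforward (PRIMARY_KEY and SECONDARY_KEY are placeholder constants):

primary   = Service::Client.new.configure { |c| c.api_key = PRIMARY_KEY }
secondary = Service::Client.new.configure { |c| c.api_key = SECONDARY_KEY }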

Now, about that class-level configuration. Here's the definition for the top-level module. The extend self line effectively means we want to treat all the methods defined in this module as module methods.

module Service
  extend self

  def respond_to?(method, include_private=false)
    client.respond_to?(method) || super
  end

  def method_missing(method, *args, &block)
    if client.respond_to?(method)
      client.send(method, *args, &block)
    else
      super
    end
  end

  def client
    @client ||= Client.new
  end
  private :client
end

There's a decent bit of code here, but most of it should look familiar to anyone who has used Ruby's method_missing before. The basic idea is to delegate the method calls on the Service module to a global instance of Service::Client stored as an instance variable on the Service module. In effect, this behaves a bit like a class variable, and it gives us the concept of a global instance of the client.

We can configure this global instance like so:

Service.configure do |config|
  config.api_key = API_KEY
end

Any other methods implemented on the client will be available for the user to call directly on the Service module.
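
For example, given a hypothetical Client#account method, both styles work against the same code:

Service.account  # delegates to the global client instance

client = Service::Client.new
client.account   # uses this instance's configuration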

This approach gives us the best of both worlds by allowing both class-level and instance-level configuration of the client. Please take the time to think about how to implement an approach like this in your own Gems, and someday it will pay off by bringing happiness to the lives of developers the world over.

Priorities Anew

I set out at the beginning of last year to produce a thing that would make money. That didn't end up happening, but I'm still happy with the outcome of this past year. Even amidst the planning and execution of a cross-country move, I still made time to develop one of the apps that I've been wanting for my own personal use, gave my first talk at a conference, released some new code, blogged in a more substantive manner, and have been reading and thinking a lot about the kinds of projects that I would find fulfilling. I didn't make a single dollar online (unless you count Craigslist), and I didn't read as many books as I had hoped to, but the desire to make something significant has not waned.

Software development has always been a hobby of mine, but I'm going to think about this year's continued goal of a profitable endeavor more in keeping with Neven Mrgan's concept of "Focused Dabbling". I plan to work on apps that fill the needs that I have, because I think other people will find them useful, too, but also just because I enjoy working on them. Effectively, this means my projects are immediately profitable in two different ways: the enjoyment found in the building of the app and the utility of using it after it has been built.

In light of these positive side-effects, it wouldn't be the worst thing in the world if I ended up paying money out-of-pocket to fund them, but wouldn't it be great if they at least broke even because other people enjoyed them, too? Some moderate level of profitability would ensure that I'm not losing money on these joy endeavors. That's partly why another goal of mine is to only produce apps that have a clear revenue model up front.

With more and more apps being shut down due to lack of revenue (cf. "Don't Be a Free User"), I don't want to further that trend by building something that's unsustainable. Part of this is acting out of self-preservation, to avoid producing a popular service that I have to pay to run myself, but I'm hoping it also means the people using these apps will have a clear need for them and are willing to put money on the line to prove it. This should make for better customers than your typical free users, who never contribute any value back into the products they use but have high expectations for them just the same. Money is a great equalizer, right? Forcing users to pay puts them all on equal footing from the perspectives of cost and support.

Now that I have a few apps in the pipeline, the challenge is going to be effectively executing on those concepts by fleshing out their revenue models and determining the best way to market them. Marketing is a completely new field for me, so I have a lot of learning still to do, but I'm enjoying every minute of it.

I do want to say a big thank you to people like Neven Mrgan and Maciej Cegłowski for continuing to inspire me to pursue my own passions. Hopefully I can repay them one day by inspiring someone else to do the same.

Postmarkdown

After building the initial version of Snail Drop, which I'm hoping to write about here soon, I had thrown up a blog for it on Tumblr. This was quick and easy, but I still had more work to do in theming it out to match the Snail Drop site, and it felt like I was doubling up on a lot of maintenance work by having the blog separate from the main app. What I really wanted was a simple way to bolt a blog onto a Rails app. In an ideal world, I'd be able to just throw a Markdown file into a directory à la Jekyll.

Well, there's at least one other person in the world who thinks the way I do, because I found just that in Postmarkdown. I tweeted about it the other day, but I'm so happy with it, I wanted to write more about it here.

The installation generator creates the directory app/posts for you, and the Gem also provides a generator that can be used to create new posts. It then ties those static Markdown files into the Rails app through a flexible routing helper and a controller, model, and views that are hidden away within the engine. It even provides a feed and a means of adding the feed link to your application layout.

I'd love to give a rundown on how it all works here, but there's nothing I could say that isn't already covered well in the Postmarkdown readme. I will say that it's not only easy to install, but it also provides configuration flexibility in all the right places, including generators for overriding individual parts of the engine. More projects like this, please! And thanks to the folks at Ennova for raising the bar and releasing such a fabulous piece of code.


Raising the Bar

Earlier this year, I submitted three talk proposals to RubyConf 2011. One of them was accepted, which I was thrilled about until I realized the conference was scheduled for the same day as our cross-country move to Portland. I somehow failed to check the dates for the conference, and had instead been working under the assumption that it would be in November again, as it had been for the past few years.

Needless to say, preparing for a talk in the midst of packing and preparing for such a move was a challenge, and having to drive to New Orleans with all our possessions in tow further contributed to the craziness, but in the end it all worked out. It was nice spending a few final days with some of the Envy Labs crew before parting ways for the opposite coast, and the talk was a great learning experience on several levels.

I spoke on writing better RubyGems by following a series of conventions designed to work around issues that have been problematic in the past. Check out the abstract and the slides if you're interested, and I'll post a link to the video of the talk once it's up on the Confreaks site.
