Ryan Lanciaux

Random programming blog.

Miscellaneous Jest Issues/Workarounds

I’ve been using Jest a bit lately and wanted to document some issues I’ve run into for future reference.

Debugging Jest Tests

Recently I had a test that was failing, and it wasn't really clear why from the stack trace. I followed some advice I had seen about running jest in-band and then running node-inspector. Every time I tried to run node-inspector, however, it failed. Similar to the issue I encountered in my previous post, it appears that there is a forthcoming fix.

Until the fix makes its way into the release version, following the steps in this stackoverflow post should allow test debugging. Like the solution author, I'm not super thrilled about modifying the jest.js file outright, but it's nice to be able to debug the tests.

Mocking Third-Party Libraries (that aren’t CommonJS modules)

Another issue I've encountered is testing components that wrap third-party libraries that are not CommonJS modules. I tried a couple of different hacks to shim the library in question into something that would load as a CommonJS module, but was ultimately unsuccessful in the time I was willing to spend on it.

Thankfully, a post in the React Google Group led me to use Manual mocks as a way to work around these third-party libraries.

Creating a manual mock is pretty simple. Just create a new folder called __mocks__ at the same level as the __tests__ directory and create a CommonJS module with the same name/properties as the third-party module that is being mocked. Adding var someModule = require('moduleName') will cause someModule to get replaced with the mock when running through jest. Finally, set the third-party library as an external module in webpack.config.js and everything should be good-to-go for both the test and the “compiled” version of the code.
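As a rough illustration, here is what such a manual mock might look like for a hypothetical third-party library called chartingLib (the module name and its properties are made up – mirror whatever your code actually calls on the real library):

// __mocks__/chartingLib.js – hypothetical manual mock
module.exports = {
  render: function() {},
  destroy: function() {}
};

The matching entry in webpack.config.js would then be something like externals: { "chartingLib": "ChartingLib" }, so the real library is still pulled in from the page at runtime rather than bundled.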

React + Jest Testing on Windows II

Last time I wrote about running Jest on Windows there was one thing I left out. The test output would show whether the tests passed or failed, but displayed nothing about why they failed. This is due to an issue with stdio on Windows; however, there appears to be a fix in the works.

Until that fix makes its way into a released version of Jest, you can simply copy bin/jest.js from Connor Malone’s branch on GitHub as a workaround. If using the file outright doesn’t sound desirable, any line that has process.exit(0) can be wrapped in a process.on('exit') block:

process.on('exit', function(){
  process.exit(0);
});

See the diff of the modified code here.

Azure Active Directory Authentication in Existing Project

Recently, I needed to add Azure Active Directory authentication to an existing web project. There was an automated tool for Visual Studio 2012, but there does not seem to be a similar component for 2013. A lot of the advice I found suggested creating a new project and importing a bit of the code / config from the other application – that's what I did here. What follows is not a how-to but rather a log of the steps I took to use AAD authentication (mostly for future reference).

References

First off, some references were missing from the project. I needed to add:

  1. System.IdentityModel
  2. System.IdentityModel.Services

In addition to the system references, the Microsoft Token Validation Extension should be installed from NuGet.

Code

  1. Copy over DatabaseIssuerNameRegistry.cs (I added this under utils)
  2. IssuingAuthorityKey.cs (model\tenant)
  3. Tenant.cs
  4. TenantDbContext.cs
  5. IdentityConfig.cs (This needs to be in the app_start directory)

Azure

In your Azure Active Directory settings, you will need to add an application. Click Applications -> Add, use Localhost:Port (or the real URL) for the URL, and give it the ID of the site you are developing.

Config

Copy over the following config sections replacing any reference to ID / URL with the settings that were applied to the Application added in the Active Directory settings.

  1. configuration\configSections\system.identityModel
  2. configuration\configSections\system.identityModel.services
  3. configuration\location
  4. configuration\system.identityModel - The issuerNameRegistry entry should reference the fully qualified name of the DatabaseIssuerNameRegistry class.
  5. configuration\system.web\authentication
  6. configuration\system.web\authorization
  7. configuration\system.identityModel.services
  8. configuration\appSettings
    1. ida:FederationMetadataLocation - Use your active directory path
    2. ida:Realm
    3. ida:AudienceUri
  9. configuration\system.webServer

Again this is not an exhaustive guide but rather a checklist for making sure the correct code/configuration is included in the existing project.

Test React Components Using Jest (on Windows)

I'm currently going through the process of creating unit tests for Griddle and thought it would be good to document the steps I took to get Jest up and running. I did not find it as simple as typing npm install -g jest-cli; however, it was not too bad.

My primary machine is running Windows 8 – these steps may be a bit different if you’re on Mac / Linux.

  1. Install Python - Install version 2.7 of Python and add it to your path or create a PYTHONPATH environment variable.
  2. Install Visual Studio (Express Edition is Fine) - Thankfully, this step was not required for me as I already use Visual Studio. We will need this for some of the modules that are compiled when we are installing Jest. (Express editions available here – get one of the versions that has C++)
  3. Set Visual Studio Version Flags - this step tripped me up a bit at first. We need to tell node-gyp (something that is used for compiling addons) what version of Visual Studio we want to compile with. You can do this either through an environment variable GYP_MSVS_VERSION or the command line option --msvs_version. My environment variable looks a bit like this: GYP_MSVS_VERSION=2013, but if you are using Express, I think you have to say GYP_MSVS_VERSION=2013e
  4. Install Jest-CLI - Now you can run the command from the Jest docs site: npm install jest-cli --save-dev

At this point you should be ready to run Jest; however, I experienced some further trouble on Windows against React components. In the react example, the package.json contains "unmockedModulePathPatterns": ["<rootDir>/node_modules/react"], which basically states that we don't want to mock React when running our tests. Unfortunately, it seemed like we needed to change this path to just "unmockedModulePathPatterns": ["react"] in order for the test to run successfully (again, on Windows – it seems fine on other OSes). See this GitHub issue for more on that.
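For reference, the relevant pieces of package.json end up looking roughly like this (the jest-cli version and surrounding fields are placeholders):

{
  "devDependencies": {
    "jest-cli": "~0.1.0"
  },
  "jest": {
    "unmockedModulePathPatterns": ["react"]
  }
}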

For more reading on installing Jest’s requirements see:

Introducing Griddle: A React.js Grid

Many of the websites I have worked on have required a grid component. As I had been exploring React.js more, it became apparent that I was going to need a grid component for it to be a viable option for my projects. There are many great solutions for displaying grid data with React, but many seem to rely on writing wrappers for components using jQuery or other libraries. While these solutions work well, I was hoping to render entirely with React. Additionally, I wanted to avoid a dependency on libraries like jQuery / Angular if I could help it.

I decided to try my hand at writing a grid to fit my requirements – the outcome is Griddle - a simple React.js grid.

What it is

Griddle is a configurable grid component for React.js. The main philosophy is that the grid should render quickly, contain a lot of the expected functionality and be simple to use without dictating how the rest of the code is structured. While Griddle is far from perfect I’m pretty happy with the initial outcome.

Where it’s going

As stated above Griddle is far from finished. There are a lot of things that need to be cleaned up and a good deal of functionality that needs to be added. The high-level road map is as follows:

  1. Tests - The initial version of this grid was mostly a coding session or two followed by some basic clean-up. Griddle should be sustainable and tests are a big part of that.
  2. Metadata - Griddle should allow more advanced column options: column order, locked columns, column width, etc. Currently, for example, an initial column order can be set, but hiding and then showing a column displays it at the end of the list.
  3. Additional User-configuration - The user should be able to drag columns around.
  4. Better sub-grid support - Currently sub-grids are constrained to have the same columns as the parent and are only one-level deep. Sub-grids should be able to have entirely different columns than the parent and should be able to be nested. Finally, sub-grids should be able to be loaded from the server.
  5. More responsive options - Columns should have an optional priority. When the grid gets below a certain size, some columns should drop off depending on the priority. Additionally there should be the option to stack certain columns when a grid gets below a specific size.
  6. Streaming Data - Similar to one result page per request, there should be an option to allow the grid to get the initial page and stream the rest of the data behind the scenes.

Conclusion

So that is basically Griddle. The priority of the road-map items could change but that is the current order. Please check it out and submit issues for anything you run into :)

Trying Out ReactJS With the Marvel API

I've recently started looking into ReactJS (Facebook's front-end JavaScript library) for building web UIs. React has an interesting philosophy about how the UI should function and be defined. First off, while many frameworks have an entire system for interacting with the server, routing, etc., React is just the View portion (in an MV* application). Second, React does not employ 2-way data binding. Instead, it uses a one-way data flow where data is maintained in the parent items and is explicitly passed down to child components. Finally, React uses a Virtual DOM which they say helps with performance (I cannot speak to this first-hand but it seems logical – see here for more on React's performance from someone who can speak more authoritatively on this).

One other thing that jumped out at me about React is how they recommend you build your UI. According to the documentation, you should start out with a design/mock-up and build a static version of the application. Once the static version is complete, figure out which components are available and how data should flow. Finally, toss your real data into your UI. See Thinking in React for more information on this.

The App

I generally like to have a goal in mind when learning a new language or framework (this goal doesn’t necessarily have to be useful). It was determined that working with the Marvel API would be a good way to test the framework since I wouldn’t have to write a fake API first – plus it seemed fun :)

The application should let a user search through the Marvel characters API and allow for the assembling of a team. The team members can later be removed from the list. We’re keeping it pretty simple for this example (wire-frame below).

Disclaimer: This was my first quick foray into using React. There is likely a better way to do some of the things I will be walking through here. Additionally, I know almost nothing about comic books so please don’t laugh that you can build a “Hero Team” out of heroes and villains, etc. (worst example ever).

Setup

Before we really get going, we need to perform some initial setup tasks. As a side note, if you want to skip all this and head right to the code – it's available here.

  1. Obtain a Marvel API key at http://developer.marvel.com/
  2. Add some version of Localhost to the referrers section on the Marvel website (we will need this for testing).
  3. Create some jQuery methods for interacting with the Marvel Characters API (see developer.marvel.com for more on the specifics of the API) – a rough sketch of such a helper appears after the script includes below.
  4. Add your public key as a JavaScript field named key. Something like window.key = "___________"; //this is your public key
  5. Create an HTML page and load the required scripts/styles
<link href="//netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css" rel="stylesheet">
<link href="styles/site.css" rel="stylesheet">
<script src="http://fb.me/react-0.10.0.js"></script>
<script src="http://fb.me/JSXTransformer-0.10.0.js"></script>
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js"></script>

Notice we are including the React files. Also of note, for this example we're simply loading everything from the CDNs without a local fallback.
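For reference, the jQuery helper from step 3 might look roughly like the following – the function name getCharacters matches what the HeroBox component calls later, but the real helper in the GitHub repo may build its requests differently:

// A minimal sketch of the Marvel helper from step 3. The query argument is
// either empty or something like "?nameStartsWith=Spider".
function getCharacters(query) {
  var url = "https://gateway.marvel.com/v1/public/characters" + (query || "?");
  return $.ajax({
    // window.key is the public key from step 4; auth relies on the referrer
    // whitelist configured in step 2.
    url: url + "&apikey=" + window.key,
    dataType: "json"
  });
}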

Determine Component Architecture

Taking a look at the wire-frame included above, we want to come up with what React components we will need. Each component should be responsible for its own content so there should ideally be little overlap. Additionally, as mentioned above, we are using a one-way data flow – we want to design our components as children of a main component.

  • HeroBox: HeroBox is the container for everything we will be creating with React (the Search / Search Results / Current team). If we take a look at our wire-frame, it consists of pretty much everything but the header section.
  • Hero: This is the individual Hero item.
  • HeroList: A list of the possible Hero items (this is the left side of the HeroBox).
  • HeroForm: The search form.
  • CurrentTeam: The container for all of the Heroes / Villains in our current team.
  • CurrentTeamItem: An individual Hero/Villain partial that will be displayed in our CurrentTeam container.

Since HeroBox is the parent of all the other components, it will be the component that owns the state of our application. That is, everything will get its data from HeroBox and will write back to HeroBox if it needs to change the data.

React Components

We will need to start out by creating an initial React component. To do that we can simply say var someComponent = React.createClass({ ... });. This React class can contain custom methods / properties or override some of the default React methods. One of these default methods is the render() method, which will build the DOM elements for the component. In our example we will be using JSX as the output of our render method. JSX is simply a JavaScript XML syntax transform – what that means for us is that we can practically write HTML as the output of a render method. For example:

var someComponent = React.createClass({
  render: function(){
  return(
      <h1>Hello</h1>
  )
  }
});

When someComponent is rendered, it would unsurprisingly write out <h1>Hello</h1> to the document. This is a bit basic for our example, but the concept is necessary.
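Under the hood, the JSXTransformer we included earlier turns that markup into plain JavaScript before it runs. With the 0.10-era tooling, the component above compiles to roughly the following:

var someComponent = React.createClass({
  render: function(){
    // DOM tags become React.DOM factory calls – this is roughly what the
    // transform emits for <h1>Hello</h1>.
    return React.DOM.h1(null, "Hello");
  }
});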

HeroBox

The HeroBox will be the first component we create because all of the other components will obtain their data through it. We will be spending the most time on this component because most of the React-specific stuff is occurring here (the code for this component is posted in its entirety, while we will just highlight the interesting parts of the other components).

var HeroBox = React.createClass({
  loadHeroes: function(){
      getCharacters().then(function(data){
          this.setState({data:data.data.results});
      }.bind(this));
  },
  loadHeroByName: function(name){
      getCharacters("?nameStartsWith=" + name).then(function(data){
          this.setState({data: data.data.results, currentTeam: this.state.currentTeam});
      }.bind(this));
  },
  addToTeam: function(item){
      this.state.currentTeam.push(item);
      this.setState({data:this.state.data, currentTeam: this.state.currentTeam});
  },
  getInitialState: function(){
      return{ data: [], currentTeam: []};
  },
  delete: function(item){
      this.state.currentTeam.splice(item, 1);
      this.setState({data: this.state.data, currentTeam: this.state.currentTeam})
  },
  componentWillMount: function(){
      this.loadHeroes();
      //this.loadHeroByName("Ajaxis");
  },
  render: function(){
      return(
          <div className="heroBox row">
              <div className="col-md-8">
                  <HeroForm onSearchSubmit={this.loadHeroByName} onCancel={this.loadHeroes}/> 
                  <HeroList data={this.state.data} addToTeam={this.addToTeam} /> 
              </div>
              <div className="col-md-4 teamWrapper">                
                  <CurrentTeam data={this.state.currentTeam} delete={this.delete} />
              </div>
          </div>
      )
  }
});
  • loadHeroes: method for obtaining a list of heroes starting at the first location in the Marvel API (if we were including pagination, this call would be used for browse functionality). Take special note of the setState method. We are using this method to trigger the UI updates (see React documentation on setState for more information)
  • loadHeroByName: Calls our jQuery method for interacting with the Marvel data with a given hero name
  • addToTeam: Adds a record to the current team State and calls setState (see description on setState).
  • getInitialState: Define the initial state of the component – be careful with this method on non-root components
  • delete: Remove a given item (by index) from the current team and re-render the component.
  • componentWillMount: This is a method that is invoked immediately before the rendering occurs. This is one of the methods I was a little iffy about as far as how I'm using it, but it seems okay based on the demos.
  • render: The render method is simply the JSX representation of how we want to render this component. You may notice we’re using some elements that are not valid DOM elements, such as HeroForm / HeroList / CurrentTeam. These are elements we will be defining below. The attributes on the elements are how we are passing the properties from the HeroBox to the rest of the components.

HeroList

With this component we want to parse through the list of data from HeroBox and create a Hero component for each item. Additionally, this component should serve as the middle man between events on the Hero component and the HeroBox component.

var HeroList = React.createClass({
  addToTeam: function(item){
      //basically a passthru
      this.props.addToTeam(item);
  },
  render: function(){
      var that = this; 
      var nodes = this.props.data.map(function(hero, index){
          return <Hero key={index} name={hero.name} thumbnail={hero.thumbnail} description={hero.description} addToTeam={that.addToTeam}></Hero>;
      });

      return <div className="heroList">{nodes}</div>
  }
});

In this component we are using this.props.____ to access properties that were passed in from the render method on HeroBox. The render method of HeroBox contains <HeroList data={this.state.data} addToTeam={this.addToTeam} /> – this means we have this.props.addToTeam and this.props.data as available options here. The render function may look a little strange but it is basically iterating through our list of items and returning a Hero component for each one.

Hero

As we saw above, the parent component of this item defines what properties we have available. Since the Hero item is rendered as <Hero key={index} name={hero.name} thumbnail={hero.thumbnail} description={hero.description} addToTeam={that.addToTeam}></Hero>, we have key, name, thumbnail, description and an addToTeam method available on the object’s props. The Hero component is mostly just rendering out the properties, however, it is also handling button clicks.

var Hero = React.createClass({
  ...
  handleClick: function(){
      var image = this.getImage();
      this.props.addToTeam({name: this.props.name, image: image })
  },
  render: function(){
      return (
          <div className="hero col-md-3">
              ...
                  <button type="button" className="addToTeam btn btn-primary" onClick={this.handleClick}>Add To Team</button>
              ...
          </div>
      );
  }
});

When a user clicks the "Add to Team" button, the onClick handler, handleClick, is called. From there, the handleClick method calls the addToTeam method from the HeroList, which calls the addToTeam method on the HeroBox. The HeroBox method runs the setState function so our UI is kept up-to-date. This may seem like a lot of work to update the UI, but it's nice how clear and non-magical this is.

HeroForm

Similar to Hero, we're mostly calling functions back on the HeroBox from this component. We will call loadHeroByName (which is what performs our search against the API) when the user submits the form, and loadHeroes when the user presses cancel (for the sake of example – there is not a lot of the standard logic that should go into resetting form state, etc.).

var HeroForm = React.createClass({
  handleSubmit: function(){
      var name = this.refs.name.getDOMNode().value.trim();
      this.props.onSearchSubmit(name);
      this.refs.name.getDOMNode().value = '';
      return false;
  },
  handleCancel: function(){
      this.props.onCancel();
  },
  render: function(){
      return (
          <form className="searchForm row form-inline" onSubmit={this.handleSubmit}>
                  <input type="text" className="form-control" placeholder="Enter a Hero name" ref="name" />

                  <input type="submit" value="Search" className="btn btn-primary" />

                  <button type="button" className="btn" onClick={this.handleCancel}>Clear Results</button>
          </form>
      );
  }
});

This is all pretty standard compared to what we've seen so far, except for the getDOMNode() and this.refs in the handleSubmit function. These allow us to interact with the data in the form. For more on this, see React's documentation on the subject.

CurrentTeam / CurrentTeamItem

We are not going to go into detail on the Team Components – they are simply using the same techniques we’ve already encountered on the other Components. Please check out the project on GitHub for the code.
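To give a rough idea without reproducing the repo, CurrentTeam can follow the same shape as HeroList – map over this.props.data and hand each item (plus the delete callback from HeroBox) to a CurrentTeamItem. A sketch, with the real markup left to the components on GitHub:

var CurrentTeam = React.createClass({
  render: function(){
    var that = this;
    var members = this.props.data.map(function(member, index){
      // The index doubles as the key and as the value passed back to delete.
      return <CurrentTeamItem key={index} index={index} name={member.name}
          image={member.image} delete={that.props.delete} />;
    });
    return <div className="currentTeam">{members}</div>
  }
});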

Finishing Up

Now that the components are created we need to write out our HeroBox component to the page.

index.html

<body>
  ...
  <div id="content" class="container-fluid"></div>
  <script type="text/jsx" src="scripts/heroes.js"></script>
</body>

heroes.js

React.renderComponent(
  <HeroBox />, document.getElementById('content')
);

Be sure to take a look at code for this project on GitHub.

Debugging Express Applications

Coming from the .NET world, I've grown accustomed to great debugging tools. My initial Node setup didn't have a very good way to debug an application (outside of using DEBUG=express:* node ./bin/www) and I wanted to resolve that. I had heard about node-inspector in several places and decided to give that a shot.

Node-inspector is a visual interface for the Node debugger that looks just like the Chrome Developer Tools in Chrome / Opera. I use the Developer Tools quite frequently for debugging front-end code so it is a natural fit for my work-flow.

Setup

The guide on the github page for node-inspector is pretty good but I wanted to run through how I’m using it on my Express 4 application.

First, like the guide suggests, I ran npm install -g node-inspector. From there, I tried running the application (node --debug ./bin/www) and then running node-debug. Unfortunately, I mixed up node-debug and node-inspector a little bit and the inspector was throwing an EADDRINUSE error. Thankfully, Peter Lyons quickly answered a question I put on StackOverflow which straightened out the issue I was encountering. Apparently, you either want to use node --debug ___ and node-inspector or just node-debug ____ – using node --debug _____.js with the inspector's node-debug option was causing conflicts as both were starting node's debugger.

Starting the application with node --debug ./bin/www followed by node-inspector (in another terminal) worked painlessly. I could open up the inspector website (generally localhost:8080/debug?port=5858) and set breakpoints. When navigating through my node application, the code execution was stopping at the breakpoint and I could debug from there using the standard Chrome Developer tools.
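As a trivial, made-up illustration, a route like the following will pause once node-inspector is attached – either at a breakpoint set in the inspector UI or at an explicit debugger statement:

// Hypothetical Express 4 route used only to illustrate debugging. Start the app
// with node --debug ./bin/www, run node-inspector, then hit this route.
var express = require('express');
var router = express.Router();

router.get('/debug-me', function(req, res) {
  debugger; // execution pauses here while the inspector is attached
  res.send('made it past the breakpoint');
});

module.exports = router;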

Forever

One final thing I wanted to do was get all this working with Forever, as it would be nice to be able to make changes to my code without needing to restart the node server. I have encountered some weirdness with forever and node-inspector but it does seem to work okay. Starting forever generally doesn't fire up the debugger. After some searching, I came across this StackOverflow post that suggests you have to run a custom command to start forever in debug mode: forever -w -c 'node --debug' ./bin/www. From there, I could navigate to both the site I was trying to debug and the inspector page and all seemed to work.

Running Ssh-agent on Windows

There was one thing I didn’t mention in my previous post about running Octopress on a Vagrant machine – in the machine’s current state (with Windows as a host machine), we cannot deploy the site with a rake deploy command. The reason for this is we don’t have an ssh key available to the Vagrant box.

While we could create new keys on the Vagrant machine, this kind of seems to defeat part of the purpose of using Vagrant (setting up a development environment with little manual interaction). Additionally, we could simply share our host machine’s ~/.ssh folder with our vagrant machine but this also seems kind of messy.

Thankfully, there is a pretty simple way to get everything working to where we can use the host machine’s ssh key and that is through an ssh-agent. In the Vagrantfile we setup as part of the previous post, we are already giving our machine access to the ssh-agent with the following command config.ssh.forward_agent = true. The only problem with this forward_agent property is that you may not have an ssh-agent running (especially if you are on Windows). There are a couple things we can do to get around that…

  1. Install msysgit and manually say eval `ssh-agent` followed by ssh-add (assuming your keys are id_rsa/id_rsa.pub) – You’d connect to your Vagrant machine after running this command and would be able to deploy, however, there are a couple of problems with this method. First off, this is a manual process you’d have to remember every time you wish to deploy. Another issue is that you have an ssh-agent process that you need to remember to get rid of down the road.
  2. Use msysgit and .profile – Adding the eval `ssh-agent` and ssh-add to the .profile would allow us to automate the process of starting the agent when loading the terminal. That being said, using the eval script would be bad – it would create a new ssh-agent each time a new shell is loaded. Thankfully, GitHub has shared a solution to this problem.
  3. Use posh-git with PowerShell – Posh-git is a series of PowerShell scripts for git integration. Upon installing posh-git and running PowerShell, I was presented with my ssh key’s password prompt. After entering the password, it started an ssh-agent and everything was good-to-go.

I generally stick with option 2, as I am not much of a PowerShell user. It's definitely nice to have the PowerShell option available as a backup, however. One thing I would really like to explore a bit more is making this work with cmder. I could not get the agent to run when using cmder (without having it launch PowerShell) but I have not spent much time on that yet.

Testing it out

If you want to test to make sure that your ssh-agent is running and getting shared to your vagrant machine…

  1. Fire up your terminal (either PowerShell with posh-git or msysgit with the github agent code added to your .profile)
  2. Navigate to the directory where your Vagrantfile is and vagrant up followed by vagrant ssh
  3. Once ssh’d into your vagrant machine type ssh -T git@github.com

If everything is working you should see:

Hi _______! You've successfully authenticated, but GitHub does not provide shell access.

Vagrantfile for Octopress

I’ve recently started using Vagrant for managing lightweight virtual machines for various projects. Vagrant is awesome because it allows you to:

  1. Configure an environment for a specific project / application – For instance, if you want to install Ruby / Rails and a mongo database, you can set up an environment specifically for your project. You don't need to worry about messing up another project's requirements because each project can have its own!
  2. Save system resources – Vagrant starts Virtual Machines in headless mode (no UI) – the VM I'm using for my blog (which we'll see more of in a second) is only using 512 MB of RAM and it runs without any hiccups. Additionally, these VMs take virtually no hard-drive space when you are not using them. When you're done using a machine, you can remove it, keeping only the Vagrantfile and provision scripts. Your scripts can be run later on and your environment will be set up exactly as it was the last time it was configured.
  3. Edit all your code from your host machine – Often times with development VMs, I would treat the machine as if it was a standalone computer (installing vim / sublime, etc. etc). Using Vagrant, however, you can edit the code on the host machine and simply run/serve the application with the VM (it should be noted you definitely could do this with standard VMs – it’s just a bit easier with Vagrant). As a developer who is pretty OCD about IDE configuration, this is a fantastic feature.
  4. Easily share machines with other developers – Vagrant cuts down on the need for sharing giant virtual machines between different computers / developers. You can simply share your Vagrantfile and provision scripts and you have the same environment on any machine (assuming that machine can run Vagrant, etc.).

Vagrant File

We are going to walk through the Vagrantfile and provisioning script I’m using for my blog. First off, the Vagrantfile:

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "precise64"
  config.vm.box_url = "http://files.vagrantup.com/precise64.box"

  config.vm.provision :shell, :path => "bootstrap.sh"
  config.vm.network :private_network, ip: '10.0.33.36'
  config.ssh.forward_agent = true

  config.vm.synced_folder "../octopress", "/home/vagrant/octopress", create: false

  config.vm.provider :virtualbox do |vb|
    vb.customize ["setextradata", :id, "VBoxInternal2/SharedFoldersEnableSymlinksCreate/v-root", "1"]
    vb.customize ["modifyvm", :id, "--memory", "512"]
  end
end

Vagrant files are written in Ruby; however, you don't need to know Ruby to use Vagrant – the configuration code is nothing too crazy. Let's walk through some of the more interesting parts of the Vagrantfile…

Box settings

The first thing we are doing in the configuration block is defining the type of machine to use. Precise64 is a 64-bit Ubuntu 12.04 machine. I generally use this one but there are quite a few to choose from in the Vagrant Cloud. With box_url we are describing where this box can be downloaded if it is not currently available on the host machine.

Provision settings

Next, we are telling Vagrant to run this bootstrap.sh as part of its provisioning process. Provisioning is where we will define what the environment should look like so it's not just a base Ubuntu machine. You can provision a Vagrant box with Chef, Puppet, etc. but for this post I'm just using a shell script (still learning Chef). We will take a look at this shell script in a little bit.

Network / Sync settings

Following the vm configuration, we are setting up the networking and folder options for our box. The vm.network property is stating that when there is a webserver running on this machine, we can access it from our host browser at '10.0.33.36'. The synced_folder property is stating that the octopress folder, which lives in a sibling directory to the one containing the Vagrantfile, should be accessible within the virtual machine as ~/octopress. The octopress directory already exists (and has its own GitHub repo) so we do not want to recreate it.

Additional settings

Finally, in the provider block toward the bottom of this script we are adjusting the memory used and setting a property that allows us to use symbolic links.

Provisioning Script

As we talked about earlier, the provisioning script is what differentiates our box from a base Ubuntu machine. In the case of this example it’s basically just a shell script.

#!/usr/bin/env bash
HOME="/home/vagrant"
PROV_FILE=.vagrant_provision.lock

#inspired by https://github.com/junwatu/nodejs-vagrant 
if [ -f $PROV_FILE ];
then
    echo "Already Provisioned"
else
    touch $PROV_FILE

    sudo apt-get install -y git make

    git clone https://github.com/sstephenson/rbenv.git $HOME/.rbenv

    # Install ruby-build
    git clone https://github.com/sstephenson/ruby-build.git $HOME/.rbenv/plugins/ruby-build

    $HOME/.rbenv/bin/rbenv install 1.9.3-p194
    $HOME/.rbenv/bin/rbenv global 1.9.3-p194

    #Add rbenv to PATH
    echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> $HOME/.profile
    echo 'eval "$(rbenv init -)"' >> $HOME/.profile

    #own rbenv as the vagrant user
    sudo chown -Rf vagrant $HOME/.rbenv

    #don't like doing this
    sudo su - vagrant -c "rbenv rehash && cd /home/vagrant/octopress/ && gem install bundler"
    sudo su - vagrant -c "cd /home/vagrant/octopress/ && bundle install"
fi

I’m not going to spend as much time on this as it’s not too interesting if you know shell scripting (and there is probably a better way to do a lot of this).

  1. Check to see if the provision lock exists. If it does, it means our box is already set up and we shouldn't configure our environment again.
  2. If the lock file does not exist, we create it
  3. Get git and make
  4. Install rbenv and Ruby 1.9.3-p194 (that was the version I was using when my blog was on an actual machine so I'll stick with that for now)
  5. Modify the PATH so it contains the Ruby defined in rbenv
  6. Change the ownership of the .rbenv directory from the privileged user (sudo) to vagrant – if you don't do this, you will not be able to use the gem files when you ssh into the box later on.
  7. Rehash rbenv so it uses the right Ruby version and install the bundler gem as the vagrant user
  8. Install the files required to run octopress (as it says in the comment, I really don’t like the sudo su - vagrant commands)

Running the machine

Once everything is set up, you can simply say vagrant up. Vagrant will then run through the Vagrantfile and the script to configure the environment. Once the configuration is complete, you can say vagrant ssh. Once you are ssh'd into the box, you can cd octopress, rake generate, rake preview, etc. (see Octopress docs for more information). When finished, vagrant halt will shut down the VM. If you need to destroy the box, you can simply type vagrant destroy. Removing the machine does not remove the code in the synced folders or the Vagrant scripts. Running vagrant up will configure the machine all over again and your code will still be intact where you left off.

Finishing up

I have tossed this Octopress Vagrantfile and provision script on GitHub. For more information on Vagrant, check out the Vagrant site. Of further note, I referenced junwatu's Vagrant script when writing the Octopress script. Please feel free to submit pull requests for any corrections that you may have to this content.

Thoughts on Microsoft Surface

Last fall, I won a Microsoft Surface 2 as part of the Surface Remix Project contest. I always love to win gadgets but this was a bit more exciting to me as I am a hobby music producer (shameless link to some of my music). I was initially planning on using the device for the music app/remix blade, however, after I had used the device a little over a week, I realized that there was a lot more to the Surface than just another device trying to make waves in the tablet market. I have since purchased a Surface Pro (1) and am really liking it.

I want to be very clear here: I'm stepping into territory that could make me sound very fanboy-ish. While I am generally a bit more fond of Microsoft technology than some (.NET developer by trade), I try to avoid using a gadget / language / whatever simply based on the brand. To put it another way, I am more of a fan of technology than of any particular company – I like the advances that each competitor brings because overall it helps the consumer.

Now that I said all that, I want to discuss my initial thoughts on what I think Microsoft is bringing to the table with the Surface and where I hope that’s going…

Hybrid OS

Initially, upgrading to Windows 8 at home had resulted in me switching to Ubuntu until 8.1 came out. My reaction may have been a bit extreme, but I really was not a fan of many aspects of the OS. While 8.1 is a ton better, seeing the operating system on a tablet really made Windows feel a bit more like it was intended. On my desktop I found myself using the Windows UI (or the UI style formerly known as Metro) as a task launcher and using mostly desktop apps. On the Surface, however, I kind of wish I could turn desktop mode off entirely. That wouldn't work out so well on the Pro, but it would be cool if it could be kind of a combination of the two – just the Windows UI when no keyboard/dock is attached and more like the desktop when docked.

I had always hoped that there would be a day when I would have one device that could function as my computer and phone (I guess kind of like the Ubuntu phone concept). While the Surface is not entirely where I would like this type of technology to end up, it is definitely a step in the right direction. As I said before, if it were entirely up to me, there would be some changes I would make to Windows 8 but it seems a step closer to making this a reality (still Windows phones would need to run the same OS – not just same kernel).

Niche Markets

As stated above, I won the Surface as part of a contest that Microsoft was having to promote their yet-to-be-released Remix Cover. I think it's fantastic that music producers are given a first-class experience in the Surface world. The remix blade feels like a natural part of the Surface – not an add-on. I would love to see more things like this for the device.

Mobility

The weight of the Surface pales in comparison to any laptop I’ve ever owned – it’s almost an after-thought to pack it up and bring it when traveling. The type cover feels more natural to me than any iPad keyboard I’ve used and works well to protect the screen.

Combined with a dock, such as the Plugable UD-3900, I can run multiple monitors and hook up to a real keyboard / mouse. When I need to head out, I simply can unplug the dock from the USB port and use it as a tablet or laptop.

Processor

The Surface 2 felt pretty zippy but the fact that it ran Windows RT was a bit of a negative for me as a developer. The Pro has been fast enough so far for most web development tasks I've thrown at it. I wouldn't necessarily play VM Inception with it but it's worked out okay for me so far. I imagine the 2 with 8 GB of RAM would fare even better.

Wrapping Up

I started this post in November, left it for a couple months, and finally decided to finish it. My feelings toward the Surface are still the same. The Pro seems like a fantastic developer machine (if you are in the Windows realm) and the ability to have a specialized experience for niche applications makes it a great little device.