Ryan Lanciaux

Random programming blog.

Test React Components Using Jest (on Windows)

I’m currently going through the process of creating unit tests for Griddle and thought it would be good to document the steps I took to get Jest up and running. It was not as simple as typing npm install -g jest-cli, but it was not too bad either.

My primary machine is running Windows 8 – these steps may be a bit different if you’re on Mac / Linux.

  1. Install Python - Install version 2.7 of Python and add it to your path or create a PYTHONPATH environment variable.
  2. Install Visual Studio (Express Edition is Fine) - Thankfully, this step was not required for me as I already use Visual Studio. We will need it for some of the modules that are compiled when installing Jest. (Express editions are available here – get one of the versions that has C++.)
  3. Set Visual Studio Version Flags - This step tripped me up a bit at first. We need to tell node-gyp (a tool used for compiling addons) which version of Visual Studio we want to compile with. You can do this either through an environment variable, GYP_MSVS_VERSION, or the command line option --msvs_version. My environment variable looks like GYP_MSVS_VERSION=2013, but if you are using Express, I believe you have to say GYP_MSVS_VERSION=2013e
  4. Install Jest-CLI - Now you can run the command from the Jest docs site: npm install jest-cli --save-dev
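Pulling steps 3 and 4 together, the whole thing is only a couple of commands. This is a sketch for a Git Bash / msysgit prompt (on plain cmd.exe, use set instead of export), and the version value is an example, not a prescription:

```shell
# Tell node-gyp which Visual Studio to compile with (use 2013e for Express),
# then install Jest as a dev dependency.
export GYP_MSVS_VERSION=2013
npm install jest-cli --save-dev
```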

At this point you should be ready to run Jest; however, I experienced some further trouble on Windows with React components. In the react example, the package.json contains "unmockedModulePathPatterns": ["<rootDir>/node_modules/react"], which states that we don’t want to mock React when running our tests. Unfortunately, we needed to change this path to just "unmockedModulePathPatterns": ["react"] in order for the tests to run successfully (again, on Windows – it seems fine on other OSes). See this GitHub issue for more on that.
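For reference, the Windows-friendly fix boils down to a one-line change. Sketched here as a plain JavaScript object standing in for the relevant piece of package.json (the surrounding fields are omitted):

```javascript
// The Jest configuration block, as a JS object. On Windows, the bare
// module name worked where the <rootDir>-prefixed path did not.
var jestConfig = {
  unmockedModulePathPatterns: ["react"] // was: ["<rootDir>/node_modules/react"]
};
```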

For more reading on installing Jest’s requirements see:

Introducing Griddle: A React.js Grid

Many of the websites I have worked on have required a grid component. As I explored React.js further, it became apparent that I would need a grid component for it to be a viable option for my projects. There are many great solutions for displaying grid data with React, but many seem to rely on writing wrappers for components using jQuery or other libraries. While these solutions work well, I was hoping to render entirely with React. Additionally, I wanted to avoid a dependency on libraries like jQuery / Angular if I could help it.

I decided to try my hand at writing a grid to fit my requirements – the outcome is Griddle - a simple React.js grid.

What it is

Griddle is a configurable grid component for React.js. The main philosophy is that the grid should render quickly, contain a lot of the expected functionality, and be simple to use without dictating how the rest of the code is structured. While Griddle is far from perfect, I’m pretty happy with the initial outcome.

Where it’s going

As stated above, Griddle is far from finished. There are a lot of things that need to be cleaned up and a good deal of functionality that needs to be added. The high-level road map is as follows:

  1. Tests - The initial version of this grid was mostly a coding session or two followed by some basic clean-up. Griddle should be sustainable, and tests are a big part of that.
  2. Metadata - Griddle should allow more advanced column options: column order, locked columns, column width, etc. Currently, for example, an initial column order can be set, but hiding and then showing a column will display it at the end of the list.
  3. Additional User-configuration - The user should be able to drag columns around.
  4. Better sub-grid support - Currently sub-grids are constrained to have the same columns as the parent and are only one-level deep. Sub-grids should be able to have entirely different columns than the parent and should be able to be nested. Finally, sub-grids should be able to be loaded from the server.
  5. More responsive options - Columns should have an optional priority. When the grid gets below a certain size, some columns should drop off depending on the priority. Additionally there should be the option to stack certain columns when a grid gets below a specific size.
  6. Streaming Data - Similar to one result page per request, there should be an option to allow the grid to get the initial page and stream the rest of the data behind the scenes.

Conclusion

So that is basically Griddle. The priority of the road-map items could change but that is the current order. Please check it out and submit issues for anything you run into :)

Trying Out ReactJS With the Marvel API

I’ve recently started looking into ReactJS (Facebook’s front-end JavaScript library) for building web UIs. React has an interesting philosophy about how the UI should function and be defined. First off, while many frameworks have an entire system for interacting with the server, routing, etc., React is just the View portion (in an MV* application). Second, React does not employ 2-way data binding. Instead, it uses a one-way data flow where data is maintained in the parent items and passed down to its child components. Finally, React uses a Virtual DOM, which they say helps with performance (I cannot speak to this first-hand but it seems logical – see here for more on React’s performance from someone who can speak more authoritatively on this).

One other thing that jumped out at me about React is how they recommend you build your UI. According to the documentation, you should start out with a design/mock-up and build a static version of the application. Once the static version is complete, figure out which components are available and how data should flow. Finally, toss your real data into your UI. See Thinking in React for more information on this.

The App

I generally like to have a goal in mind when learning a new language or framework (the goal doesn’t necessarily have to be useful). I decided that working with the Marvel API would be a good way to test the framework since I wouldn’t have to write a fake API first – plus it seemed fun :)

The application should let a user search through the Marvel characters API and allow for the assembling of a team. The team members can later be removed from the list. We’re keeping it pretty simple for this example (wire-frame below).

Disclaimer: This was my first quick foray into using React. There is likely a better way to do some of the things I will be walking through here. Additionally, I know almost nothing about comic books so please don’t laugh that you can build a “Hero Team” out of heroes and villains, etc. (worst example ever).

Setup

Before we really get going, we need to perform some initial setup tasks. As a side note, if you want to skip all this and head right to the code – it’s available here.

  1. Obtain a Marvel API key at http://developer.marvel.com/
  2. Add some version of Localhost to the referrers section on the Marvel website (we will need this for testing).
  3. Create some jQuery methods for interacting with the Marvel Character’s API (see developer.marvel.com for more on the specifics of the API).
  4. Add your public key as a JavaScript field named key. Something like window.key = "___________"; //this is your public key
  5. Create an HTML page and load the required scripts/styles
<link href="//netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css" rel="stylesheet">
<link href="styles/site.css" rel="stylesheet">
<script src="http://fb.me/react-0.10.0.js"></script>
<script src="http://fb.me/JSXTransformer-0.10.0.js"></script>
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js"></script>

Notice we are including the React files. Also of note: for this example we’re simply loading everything from the CDNs without a local fallback.

Determine Component Architecture

Taking a look at the wire-frame included above, we want to come up with the React components we will need. Each component should be responsible for its own content, so there should ideally be little overlap. Additionally, as mentioned above, we are using a one-way data flow – we want to design our components as children of a main component.

  • HeroBox: The container for everything we will be creating with React (the Search / Search Results / Current Team). If we take a look at our wire-frame, it consists of pretty much everything but the header section.
  • Hero: This is the individual Hero item.
  • HeroList: A list of the possible Hero items (this is the left side of the HeroBox).
  • HeroForm: The search form.
  • CurrentTeam: The container for all of the Heroes / Villains in our current team.
  • CurrentTeamItem: An individual Hero/Villain partial that will be displayed in our CurrentTeam container.

Since HeroBox is the parent of all the other components, it will be the component that owns the state of our application. That is, everything will get its data from HeroBox and will write back to HeroBox if it needs to change the data.

React Components

We will start out by creating an initial React component. To do that we can simply say var someComponent = React.createClass({ ... });. This React class can contain custom methods / properties or override some of the default React methods. One of these default methods is render(), which builds the DOM elements for the component. In our example we will be using JSX as the output of our render method. JSX is simply a JavaScript XML syntax transform – what that means for us is that we can practically write HTML as the output of a render method. For example:

var someComponent = React.createClass({
  render: function(){
    return (
      <h1>Hello</h1>
    );
  }
});

When someComponent is rendered, it would unsurprisingly write out <h1>Hello</h1> to the document. This is a bit basic for our example, but the concept is necessary.

HeroBox

The HeroBox will be the first component we create because all of the other components will obtain their data through it. We will be spending the most time on this component because most of the React-specific stuff is occurring here (the code for this component is posted in its entirety, while we will just highlight the interesting parts of the other components).

var HeroBox = React.createClass({
  loadHeroes: function(){
      getCharacters().then(function(data){
          this.setState({data:data.data.results});
      }.bind(this));
  },
  loadHeroByName: function(name){
      getCharacters("?nameStartsWith=" + name).then(function(data){
          this.setState({data: data.data.results, currentTeam: this.state.currentTeam});
      }.bind(this));
  },
  addToTeam: function(item){
      this.state.currentTeam.push(item);
      this.setState({data:this.state.data, currentTeam: this.state.currentTeam});
  },
  getInitialState: function(){
      return{ data: [], currentTeam: []};
  },
  delete: function(item){
      this.state.currentTeam.splice(item, 1);
      this.setState({data: this.state.data, currentTeam: this.state.currentTeam})
  },
  componentWillMount: function(){
      this.loadHeroes();
      //this.loadHeroByName("Ajaxis");
  },
  render: function(){
      return(
          <div className="heroBox row">
              <div className="col-md-8">
                  <HeroForm onSearchSubmit={this.loadHeroByName} onCancel={this.loadHeroes}/> 
                  <HeroList data={this.state.data} addToTeam={this.addToTeam} /> 
              </div>
              <div className="col-md-4 teamWrapper">                
                  <CurrentTeam data={this.state.currentTeam} delete={this.delete} />
              </div>
          </div>
      )
  }
});
  • loadHeroes: Method for obtaining a list of heroes starting at the first location in the Marvel API (if we were including pagination, this call would be used for browse functionality). Take special note of the setState method – we are using it to trigger the UI updates (see the React documentation on setState for more information).
  • loadHeroByName: Calls our jQuery method for querying the Marvel data with a given hero name.
  • addToTeam: Adds a record to the current team state and calls setState (see the description on setState above).
  • getInitialState: Defines the initial state of the component – be careful with this method on non-root components.
  • delete: Removes a given item (by index) from the current team and re-renders the component.
  • componentWillMount: A method that is invoked immediately before rendering occurs. This is one of the methods I was a little iffy about as far as how I’m using it, but it seems okay based on the demos.
  • render: The render method is simply the JSX representation of how we want to render this component. You may notice we’re using some elements that are not valid DOM elements, such as HeroForm / HeroList / CurrentTeam. These are components we will be defining below. The attributes on the elements are how we pass properties from the HeroBox to the rest of the components.
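The add/delete pair above is really just array bookkeeping followed by a setState call. Stripped of React, the state updates look something like this sketch:

```javascript
// Plain-JS sketch of HeroBox's team updates (React's setState omitted):
// addToTeam pushes a new member, removeFromTeam drops one by index.
var currentTeam = [];

function addToTeam(item) {
  currentTeam.push(item);
  // ...in the component, this is followed by this.setState(...)
}

function removeFromTeam(index) {
  currentTeam.splice(index, 1);
  // ...likewise followed by this.setState(...)
}

addToTeam({ name: "Thor" });
addToTeam({ name: "Loki" });
removeFromTeam(1); // drops Loki
// currentTeam is now [{ name: "Thor" }]
```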

HeroList

With this component we want to iterate through the list of data from HeroBox and create a Hero component for each item. Additionally, this component serves as the middleman between events on the Hero component and the HeroBox component.

var HeroList = React.createClass({
  addToTeam: function(item){
      //basically a passthru
      this.props.addToTeam(item);
  },
  render: function(){
      var that = this; 
      var nodes = this.props.data.map(function(hero, index){
          return <Hero key={index} name={hero.name} thumbnail={hero.thumbnail} description={hero.description} addToTeam={that.addToTeam}></Hero>;
      });

      return <div className="heroList">{nodes}</div>
  }
});

In this component we are using this.props.____ to access properties that were passed in from the render method on HeroBox. The render method of HeroBox contains <HeroList data={this.state.data} addToTeam={this.addToTeam} /> – this means we have this.props.data and this.props.addToTeam available here. The render function may look a little strange, but it is basically iterating through our list of items and returning a Hero component for each one.
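That map call is worth a second look — it is plain Array.prototype.map producing one child description per data row. A React-free sketch of the same idea (the data here is made up for illustration):

```javascript
// What HeroList's render is doing, minus JSX: map each hero record
// to the props that the corresponding Hero child will receive.
var data = [
  { name: "Wolverine", description: "..." },
  { name: "Storm", description: "..." }
];

var nodes = data.map(function (hero, index) {
  // in the real component this returns <Hero key={index} ... />
  return { key: index, name: hero.name, description: hero.description };
});
// nodes[0] → { key: 0, name: "Wolverine", description: "..." }
```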

Hero

As we saw above, the parent component of this item defines what properties we have available. Since the Hero item is rendered as <Hero key={index} name={hero.name} thumbnail={hero.thumbnail} description={hero.description} addToTeam={that.addToTeam}></Hero>, we have key, name, thumbnail, description and an addToTeam method available on the object’s props. The Hero component is mostly just rendering out the properties, however, it is also handling button clicks.

var Hero = React.createClass({
  ...
  handleClick: function(){
      var image = this.getImage();
      this.props.addToTeam({name: this.props.name, image: image })
  },
  render: function(){
      return (
          <div className="hero col-md-3">
              ...
                  <button type="button" className="addToTeam btn btn-primary" onClick={this.handleClick}>Add To Team</button>
              ...
          </div>
      );
  }
});

When a user clicks the “Add to Team” button, the onClick handler, handleClick, is called. From there, handleClick calls the addToTeam method from the HeroList, which calls the addToTeam method on the HeroBox. The HeroBox method runs the setState function, so our UI is kept up-to-date. This may seem like a lot of work to update the UI, but it’s nice how clear and non-magical this is.
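That chain of callbacks can be sketched without React at all — the parent owns the data, and each layer just hands its function one level down. The names match the components above, but this is a plain-JS illustration, not the actual component code:

```javascript
// Hero's click handler calls HeroList's passthrough, which calls
// HeroBox's addToTeam — the data only ever changes at the top.
var heroBox = {
  currentTeam: [],
  addToTeam: function (item) {
    this.currentTeam.push(item);
    // ...the real component calls this.setState here to re-render
  }
};

var heroList = {
  // "basically a passthru", as the HeroList comment says
  addToTeam: function (item) { heroBox.addToTeam(item); }
};

var hero = {
  name: "Hulk",
  handleClick: function () {
    heroList.addToTeam({ name: this.name });
  }
};

hero.handleClick(); // simulate the button click
// heroBox.currentTeam → [{ name: "Hulk" }]
```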

HeroForm

Similar to Hero, we’re mostly calling functions back on the HeroBox from this component. We will call loadHeroByName (which performs our search against the API) when the user submits the form and loadHeroes when the user presses cancel (for the sake of example – there is not a lot of the standard logic that should go on in resetting form states, etc.).

var HeroForm = React.createClass({
  handleSubmit: function(){
      var name = this.refs.name.getDOMNode().value.trim();
      this.props.onSearchSubmit(name);
      this.refs.name.getDOMNode().value = '';
      return false;
  },
  handleCancel: function(){
      this.props.onCancel();
  },
  render: function(){
      return (
          <form className="searchForm row form-inline" onSubmit={this.handleSubmit}>
                  <input type="text" className="form-control" placeholder="Enter a Hero name" ref="name" />

                  <input type="submit" value="Search" className="btn btn-primary" />

                  <button type="button" className="btn" onClick={this.handleCancel}>Clear Results</button>
          </form>
      );
  }
});

This is all pretty standard compared to what we’ve seen so far, except for getDOMNode() and this.refs in the handleSubmit function. These are allowing us to interact with the data in the form. For more on this, see React’s documentation on the subject.

CurrentTeam / CurrentTeamItem

We are not going to go into detail on the Team Components – they are simply using the same techniques we’ve already encountered on the other Components. Please check out the project on GitHub for the code.

Finishing Up

Now that the components are created we need to write out our HeroBox component to the page.

index.html

<body>
  ...
  <div id="content" class="container-fluid"></div>
  <script type="text/jsx" src="scripts/heroes.js"></script>
</body>

heroes.js

React.renderComponent(
  <HeroBox />, document.getElementById('content')
);

Be sure to take a look at code for this project on GitHub.

Debugging Express Applications

Coming from the .NET world, I’ve grown accustomed to great debugging tools. My initial Node setup didn’t have a very good way to debug an application (outside of using DEBUG=express:* node ./bin/www) and I wanted to resolve that. I had heard about node-inspector in several places and decided to give it a shot.

Node-inspector is a visual interface for the Node debugger that looks just like the Chrome Developer Tools (and runs in Chrome / Opera). I use the Developer Tools quite frequently for debugging front-end code, so it is a natural fit for my work-flow.

Setup

The guide on the GitHub page for node-inspector is pretty good, but I wanted to run through how I’m using it with my Express 4 application.

First, as the guide suggests, I ran npm install -g node-inspector. From there, I tried running the application (node --debug ./bin/www) and then running node-debug. Unfortunately, I mixed up node-debug and node-inspector a little bit and the inspector was throwing an EADDRINUSE error. Thankfully, Peter Lyons quickly answered a question I put on StackOverflow which straightened out the issue I was encountering. Apparently, you either want to use node --debug ___ with node-inspector, or just node-debug ____ – using node --debug _____.js together with the inspector’s node-debug command was causing conflicts, as both were starting node’s debugger.

Starting the application with node --debug ./bin/www followed by node-inspector (in another terminal) worked painlessly. I could open up the inspector website (generally localhost:8080/debug?port=5858) and set breakpoints. When navigating through my node application, the code execution was stopping at the breakpoint and I could debug from there using the standard Chrome Developer tools.
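So the working combination, in two separate terminals, looks like this (8080 and 5858 are node-inspector’s defaults):

```shell
# Terminal 1: start the Express app with node's debugger listening on 5858
node --debug ./bin/www

# Terminal 2: start the inspector UI
node-inspector

# Then browse to http://localhost:8080/debug?port=5858 and set breakpoints
```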

Forever

One final thing I wanted to do was get all this working with Forever, as it would be nice to be able to make changes to my code without needing to restart the node server. I have encountered some weirdness with forever and node-inspector, but it does seem to work okay. Starting forever normally doesn’t fire up the debugger. After some searching, I came across this StackOverflow post that suggests you have to run a custom command to start forever in debug mode: forever -w -c 'node --debug' ./bin/www. From there, I could navigate to both the site I was trying to debug and the inspector page, and all seemed to work.

Running Ssh-agent on Windows

There was one thing I didn’t mention in my previous post about running Octopress on a Vagrant machine – in the machine’s current state (with Windows as a host machine), we cannot deploy the site with a rake deploy command. The reason is that we don’t have an ssh key available to the Vagrant box.

While we could create new keys on the Vagrant machine, this kind of seems to defeat part of the purpose of using Vagrant (setting up a development environment with little manual interaction). Additionally, we could simply share our host machine’s ~/.ssh folder with our vagrant machine but this also seems kind of messy.

Thankfully, there is a pretty simple way to get everything working so we can use the host machine’s ssh key, and that is through an ssh-agent. In the Vagrantfile we set up as part of the previous post, we are already giving our machine access to the ssh-agent with the following setting: config.ssh.forward_agent = true. The only problem with the forward_agent property is that you may not have an ssh-agent running (especially if you are on Windows). There are a couple of things we can do to get around that…

  1. Install msysgit and manually say eval `ssh-agent` followed by ssh-add (assuming your keys are id_rsa/id_rsa.pub) – You’d connect to your Vagrant machine after running this command and would be able to deploy, however, there are a couple of problems with this method. First off, this is a manual process you’d have to remember every time you wish to deploy. Another issue is that you have an ssh-agent process that you need to remember to get rid of down the road.
  2. Use msysgit and .profile – Adding the eval `ssh-agent` and ssh-add to the .profile would allow us to automate the process of starting the agent when loading the terminal. That being said, using the eval script would be bad – it would create a new ssh-agent each time a new shell is loaded. Thankfully, GitHub has shared a solution to this problem.
  3. Use posh-git with PowerShell – Posh-git is a series of PowerShell scripts for git integration. Upon installing posh-git and running PowerShell, I was presented with my ssh key’s password prompt. After entering the password, it started an ssh-agent and everything was good-to-go.
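For option 2, the .profile addition is roughly the following sketch, adapted from the idea GitHub shares: reuse a saved agent if its process is still alive, and only start a new one when it isn’t (the $HOME/.ssh/environment path is just a convention):

```shell
# Start ssh-agent once and persist its environment between shells.
SSH_ENV="$HOME/.ssh/environment"

start_agent() {
    ssh-agent > "$SSH_ENV"
    chmod 600 "$SSH_ENV"
    . "$SSH_ENV" > /dev/null
    ssh-add
}

if [ -f "$SSH_ENV" ]; then
    . "$SSH_ENV" > /dev/null
    # reuse the saved agent only if that process is still running
    kill -0 "$SSH_AGENT_PID" 2>/dev/null || start_agent
else
    start_agent
fi
```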

I generally stick with option 2, as I am not much of a PowerShell user. It’s definitely nice to have the PowerShell option available as a backup, however. One thing I would really like to explore a bit more is making this work with cmder. I could not get the agent to run when using cmder (without having it launch PowerShell), but I have not spent much time on that yet.

Testing it out

If you want to test to make sure that your ssh-agent is running and getting shared to your vagrant machine…

  1. Fire up your terminal (either PowerShell with posh-git or msysgit with the github agent code added to your .profile)
  2. Navigate to the directory where your Vagrantfile is and vagrant up followed by vagrant ssh
  3. Once ssh’d into your vagrant machine type ssh -T git@github.com

If everything is working you should see:

Hi _______! You've successfully authenticated, but GitHub does not provide shell access.

Vagrantfile for Octopress

I’ve recently started using Vagrant for managing lightweight virtual machines for various projects. Vagrant is awesome because it allows you to:

  1. Configure an environment for a specific project / application – For instance, if you want to install Ruby / Rails and a mongo database, you can set up an environment specifically for your project. You don’t need to worry about messing up another project’s requirements because each project can have its own!
  2. Save system resources – Vagrant starts virtual machines in headless mode (no UI) – the VM I’m using for my blog (which we’ll see more of in a second) is only using 512MB of RAM and it runs without any hiccups. Additionally, these VMs take virtually no hard-drive space when you are not using them. When you’re done with a machine, you can remove it, keeping only the Vagrantfile and provision scripts. The scripts can be run again later and your environment will be set up exactly as it was the last time it was configured.
  3. Edit all your code from your host machine – Oftentimes with development VMs, I would treat the machine as if it were a standalone computer (installing vim / sublime, etc.). Using Vagrant, however, you can edit the code on the host machine and simply run/serve the application with the VM (it should be noted you could definitely do this with standard VMs – it’s just a bit easier with Vagrant). As a developer who is pretty OCD about IDE configuration, this is a fantastic feature.
  4. Easily share machines with other developers – Vagrant cuts down on the need for sharing giant virtual machines between different computers / developers. You can simply share your Vagrantfile and provision scripts and you have the same environment on any machine (assuming that machine can run Vagrant, etc.).

Vagrant File

We are going to walk through the Vagrantfile and provisioning script I’m using for my blog. First off, the Vagrantfile:

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "precise64"
  config.vm.box_url = "http://files.vagrantup.com/precise64.box"

  config.vm.provision :shell, :path => "bootstrap.sh"
  config.vm.network :private_network, ip: '10.0.33.36'
  config.ssh.forward_agent = true

  config.vm.synced_folder "../octopress", "/home/vagrant/octopress", create: false

  config.vm.provider :virtualbox do |vb|
    vb.customize ["setextradata", :id, "VBoxInternal2/SharedFoldersEnableSymlinksCreate/v-root", "1"]
    vb.customize ["modifyvm", :id, "--memory", "512"]
  end
end

Vagrantfiles are written in Ruby; however, you don’t need to know Ruby to use Vagrant – the configuration code is nothing too crazy. Let’s walk through some of the more interesting parts of the Vagrantfile…

Box settings

The first thing we are doing in the configuration block is defining the type of machine to use. precise64 is a 64-bit Ubuntu 12.04 machine. I generally use this one, but there are quite a few to choose from in the Vagrant Cloud. With box_url we are describing where this box can be downloaded if it is not currently available on the host machine.

Provision settings

Next, we are telling Vagrant to run bootstrap.sh as part of its provisioning process. Provisioning is where we define what the environment should look like so it’s not just a base Ubuntu machine. You can provision a Vagrant box with Chef, Puppet, etc., but for this post I’m just using a shell script (still learning Chef). We will take a look at this shell script in a little bit.

Network / Sync settings

Following the vm configuration, we are setting up the networking and folder options for our box. The vm.network property states that when there is a webserver running on this machine, we can access it in our host browser at 10.0.33.36. The synced_folder property states that the octopress folder, living in a sibling folder to the one containing the Vagrantfile, should be accessible within the virtual machine as ~/octopress. The octopress directory already exists (and has its own GitHub repo), so we do not want to recreate it.

Additional settings

Finally, in the provider block toward the bottom of this script we are adjusting the memory used and setting a property that allows us to use symbolic links.

Provisioning Script

As we talked about earlier, the provisioning script is what differentiates our box from a base Ubuntu machine. In the case of this example it’s basically just a shell script.

#!/usr/bin/env bash
HOME="/home/vagrant"
PROV_FILE=.vagrant_provision.lock

#inspired by https://github.com/junwatu/nodejs-vagrant 
if [ -f $PROV_FILE ];
then
    echo "Already Provisioned"
else
    touch $PROV_FILE

    sudo apt-get install -y git make

    git clone https://github.com/sstephenson/rbenv.git $HOME/.rbenv

    # Install ruby-build
    git clone https://github.com/sstephenson/ruby-build.git $HOME/.rbenv/plugins/ruby-build

    $HOME/.rbenv/bin/rbenv install 1.9.3-p194
    $HOME/.rbenv/bin/rbenv global 1.9.3-p194

    #Add rbenv to PATH
    echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> $HOME/.profile
    echo 'eval "$(rbenv init -)"' >> $HOME/.profile

    #own rbenv as the vagrant user
    sudo chown -Rf vagrant $HOME/.rbenv

    #don't like doing this
    sudo su - vagrant -c "rbenv rehash && cd /home/vagrant/octopress/ && gem install bundler"
    sudo su - vagrant -c "cd /home/vagrant/octopress/ && bundle install"
fi

I’m not going to spend as much time on this as it’s not too interesting if you know shell scripting (and there is probably a better way to do a lot of this).

  1. Check to see if the provision lock exists. If it does, it means our box is already set up and we shouldn’t configure the environment again.
  2. If the lock file does not exist, create it.
  3. Get git and make.
  4. Install rbenv and Ruby 1.9.3-p194 (that was the version I was using when my blog was on an actual machine, so I’ll stick with that for now).
  5. Modify the path so it contains the Ruby defined in rbenv.
  6. Change the ownership of the .rbenv directory from the privileged user (sudo) to vagrant – if you don’t do this, you will not be able to use the gem files when you ssh into the box later on.
  7. Rehash rbenv so it uses the right Ruby version and install the bundler gem as the vagrant user.
  8. Install the files required to run Octopress (as it says in the comment, I really don’t like the sudo su - vagrant commands).

Running the machine

Once everything is set up, you can simply say vagrant up. Vagrant will then run through the Vagrantfile and the script to configure the environment. Once the configuration is complete, you can say vagrant ssh. Once you are ssh’d into the box, you can cd octopress, rake generate, rake preview, etc. (see the Octopress docs for more information). When finished, vagrant halt will shut down the VM. If you need to destroy the box, you can simply type vagrant destroy. Removing the machine does not remove the code in the synced folders or the Vagrant scripts. Running vagrant up will configure the machine all over again and your code will still be intact where you left off.
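The day-to-day workflow described above, as a quick reference:

```shell
vagrant up        # boot (and, on the first run, provision) the VM
vagrant ssh       # shell into the box
cd octopress
rake generate     # build the site
rake preview      # serve it (reachable from the host at 10.0.33.36)
exit
vagrant halt      # shut the VM down when finished
```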

Finishing up

I have tossed this Octopress Vagrantfile and provision script on github. For more information on Vagrant, check out the Vagrant site. Of further note, I referenced junwatu’s Vagrant script when writing the Octopress script. Please feel free to submit pull requests for any corrections that you may have to this content.

Thoughts on Microsoft Surface

Last fall, I won a Microsoft Surface 2 as part of the Surface Remix Project contest. I always love to win gadgets, but this was a bit more exciting to me as I am a hobby music producer (shameless link to some of my music). I was initially planning on using the device for the music app/remix blade; however, after I had used the device for a little over a week, I realized that there was a lot more to the Surface than just another device trying to make waves in the tablet market. I have since purchased a Surface Pro (1) and am really liking it.

I want to be very clear here: I’m stepping into territory that could make me sound very fanboy-ish. While I am generally a bit more fond of Microsoft technology than some (.NET developer by trade), I try to avoid using a gadget / language / whatever based simply on the brand. To put it another way, I am more a fan of technology than of any particular company – I like the advances that each competitor brings because overall it helps the consumer.

Now that I said all that, I want to discuss my initial thoughts on what I think Microsoft is bringing to the table with the Surface and where I hope that’s going…

Hybrid OS

Upgrading to Windows 8 at home had initially resulted in me switching to Ubuntu until 8.1 came out. My reaction may have been a bit extreme, but I really was not a fan of many aspects of the OS. While 8.1 is a ton better, seeing the operating system on the tablet really made Windows feel more like it was likely intended. On my desktop I found myself using the Windows UI (or the UI style formerly known as Metro) as a task launcher and mostly using desktop apps. On the Surface, however, I kind of wish I could turn desktop mode off entirely. That wouldn’t work out so well on the Pro, but it would be cool if it could be a combination of the two – just the Windows UI when no keyboard/dock is attached and more like the desktop when docked.

I had always hoped there would be a day when I would have one device that could function as both my computer and my phone (kind of like the Ubuntu phone concept, I guess). While the Surface is not entirely where I would like this type of technology to end up, it is definitely a step in the right direction. As I said before, if it were entirely up to me, there would be some changes I would make to Windows 8, but it seems a step closer to making this a reality (though Windows phones would need to run the same OS – not just the same kernel).

Niche Markets

As stated above, I won the Surface as part of a contest that Microsoft was having to promote their yet-to-be-released Remix Cover. I think it’s fantastic that music producers are given a first-class experience in the Surface world. The remix blade feels like it’s a natural part of the Surface – not an add-on. I would love to see more things like this for the device.

Mobility

The weight of the Surface pales in comparison to any laptop I’ve ever owned – it’s almost an afterthought to pack it up and bring it when traveling. The Type Cover feels more natural to me than any iPad keyboard I’ve used and works well to protect the screen.

Combined with a dock, such as the Plugable UD-3900, I can run multiple monitors and hook up to a real keyboard / mouse. When I need to head out, I simply can unplug the dock from the USB port and use it as a tablet or laptop.

Processor

The Surface 2 felt pretty zippy, but the fact that it ran Windows RT was a bit of a negative for me as a developer. The Pro has been fast enough so far for most web development tasks I’ve thrown at it. I wouldn’t necessarily play VM Inception with it, but it has worked out okay for me so far. I imagine the Pro 2 with 8GB of RAM would fare even better.

Wrapping Up

I started this post in November, left it for a couple months and finally decided to finish it. My feelings toward the Surface are still the same. The Pro seems like a fantastic developer machine (if you are in the Windows realm) and the ability to have a specialized experience for niche applications makes it a great little device.

Fake Popovers for Angular-xeditable

I was recently working on a project with AngularJS and xeditable (if you’re not familiar, xeditable is an awesome library for inline editing). There is an Angular version of xeditable, but the popover editing functionality is not implemented yet (it’s in the roadmap). Instead of using the original version of xeditable and implementing custom directives, or trying to add the popover functionality to the project, I decided to see if I could make the popover using just CSS – this happened to be more in line with my timeframe.

Take a look at the original (non-Angular) popover:

Starting out, I noticed that clicking the link of an xeditable element showed an input element (and buttons) in a form and hid the link. To mimic the popover, the link and the form should both be visible when the form is activated; however, the form should be positioned a bit higher than the link. Working with a forked version of vitalets’ jsfiddle example, I wrapped the initial links in <span class='item-wrapper'></span> – from there, I edited the link’s and the form’s CSS as follows

.item-wrapper a{
    /* make the link always show up */
    display: inline !important;
}

.item-wrapper{
    /* make absolutely positioned children constrained to this box*/
    position: relative;
}

.item-wrapper form {
    background: #FFF;
    border: 1px solid #AAA;
    border-radius: 5px;
    display: inline-block;
    left: 50%;

    /* half the width */
    margin-left: -110px;
    padding: 7px;
    position: absolute;
    top: -55px;
    width: 220px;
    z-index: 101;
}

It’s a step in the right direction; however, it doesn’t really look exactly like we want. To get the triangle to show up below the pop-up, I thought it would be good to use the technique for creating a triangle on css-tricks as an :after pseudo-element (please check the link for more info, because how it works is a bit outside the scope of this post).

This works, but it looks funny because the popover has a border while the triangle is just a solid color. Additionally, we cannot just toss a border on the :after pseudo-element since we’re using the border to create the triangle. What I ended up doing is using a :before pseudo-element with 10px borders and the same color as the popover’s border, followed by an :after pseudo-element with borders 1px narrower and the same color as the popover’s background.

.item-wrapper form:before {
    content: "";
    width: 0;
    height: 0;
    border-left: 10px solid transparent;
    border-right: 10px solid transparent;
    border-top: 10px solid #AAA;
    position: absolute;
    bottom: -10px;
    left: 100px;
}

.item-wrapper form:after {
    content: "";
    width: 0;
    height: 0;
    border-left: 9px solid transparent;
    border-right: 9px solid transparent;
    border-top: 9px solid #FFF;
    position: absolute;
    bottom: -9px;
    left: 101px;
}

There is a jsfiddle of the example available here. A few things to note… I am only using this with the Angular-xeditable dropdowns and text boxes, so the other controls may or may not work. Additionally, I added some JavaScript (not in the examples) to hide any visible popovers when displaying a new one, since I was running into some issues displaying multiple popovers (or displaying the same one multiple times).
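The hide-any-open-popover JavaScript isn’t in the fiddle, but the idea can be sketched in plain JavaScript. This is an illustrative assumption, not xeditable’s API – the PopoverManager name and the visible flag are stand-ins for whatever show/hide mechanism you actually use:

```javascript
// Illustrative sketch only: track whichever popover is currently open and
// hide it before showing a new one. 'visible' stands in for the real
// show/hide mechanism (a CSS class, jQuery .hide(), etc.).
function PopoverManager() {
  this.current = null; // the popover that is open right now, if any
}

PopoverManager.prototype.show = function (popover) {
  // Close the previously open popover so only one shows at a time.
  if (this.current && this.current !== popover) {
    this.current.visible = false;
  }
  popover.visible = true;
  this.current = popover;
};

PopoverManager.prototype.hideAll = function () {
  if (this.current) {
    this.current.visible = false;
    this.current = null;
  }
};
```

Routing every “open a popover” action through a single manager like this also makes the same-popover-twice case harmless, since showing the already-current popover is a no-op on the rest.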

Learning AngularJS III: Routes

So far we’ve covered the basics of using AngularJS to interact with RESTful services and Filtering / Ordering views in AngularJS. Using AngularJS Routes, we are going to add a bit of structure to this example app.

If you have not already, please take a look at Part 1 and Part 2 as we will be working with the app we have started there…

First off, let’s open our index.ejs file. As you may notice, this file is an unstructured mess. We want to break the controllers and templates apart into their own files so the architecture of our demo app is a bit clearer. When we’re done, we will have the following files:

  • app.js under /assets/js/angular/
  • controllers.js under /assets/js/angular/
  • list.html under /public/templates/ – there is a better way to use Angular with Sails, however, for the sake of example this is okay
  • detail.html under /public/templates/
  • edit.html under /public/templates/

app.js

app.js is where we’re storing our module definition (that we added in Part 1), our factory definition and our routes. The factory is exactly the same as before except we’ve added an update endpoint.

Resource

foodApp.factory('Food', ['$resource', function($resource){
    return $resource('/food/:id', {id:'@id'}, { update: {method:'PUT' } } );
}]);

By default, the Angular resource module has get/save/query/remove/delete methods but no update. What’s more, we want to make sure we are using a PUT method for storing our modified food items so Sails knows that we’re trying to modify an existing record. Thankfully, we can add custom actions (as you may have noticed above) by simply adding a hash after our route parameters object in the resource definition, like so: { update: {method:'PUT' } }. Since this is just a hash, you can add as many definitions as you would like (e.g. { update: {method: 'PUT' }, somethingelse: {method: 'DELETE'} }).

Routing

In Part 1 we were showing/hiding a form based on a $scope variable on our controller. While this works, it may be a bit cleaner to use routing and separate our views by their function. Routing in Angular is pretty straightforward – especially if you have routing experience in other frameworks.

foodApp.config(['$routeProvider', function($routeProvider) {
  $routeProvider
    .when('/food', {templateUrl: '/templates/list.html', controller: FoodController})
    .when('/food/edit/:id', {templateUrl: '/templates/edit.html', controller: FoodController})
    .when('/food/create', {templateUrl: '/templates/edit.html', controller: FoodController})
    .when('/food/:id', {templateUrl: '/templates/detail.html', controller: FoodController})
    .otherwise({redirectTo: '/food'});
}]);

When the URL matches one of the route values, the visitor will be directed to the given template and controller (you will notice that we’re using the same controller for all our routes). Additionally, the routes that have :id will have a route parameter of id available in the controller (more on this later). If none of the routes are matched, we default to /food. We won’t focus too much on the views because they are mostly the same as our old index.ejs; however, they are available in the gist created for this post.

controllers.js

Our controller is mostly the same as before except we’re no longer maintaining which page we’re showing. The whole controller is available as a gist; however, some of the more interesting parts are as follows:

if ($routeParams.id) {
  $scope.currentFood = Food.get({id: $routeParams.id});
} else {
  $scope.currentFood = new Food();
  $scope.food = Food.query();
}

This is checking for the route parameter that we are setting in our route – if it’s there, we get the individual food item with that ID. When the parameter is not there, we get all the food items to be displayed in a list (and initialize a Food item for creates).

$scope.addFood = function(){
  if ($scope.currentFood.id && $scope.currentFood.id != 0) {
    Food.get({id: $scope.currentFood.id}, function(food){
      food.type = $scope.currentFood.type;
      food.name = $scope.currentFood.name;
      food.percentRemaining = $scope.currentFood.percentRemaining;
      food.quantity = $scope.currentFood.quantity;

      food.$update({}, function(){
        $location.path("/");
      });
    });
  } else {
    $scope.currentFood.$save();
    $location.path("/");
  }
};

In this method we are adding a food item or updating an existing one. We start by checking the food item’s id. If it has an id, we get the server version and update its properties with the form values. If it doesn’t have an id, we save the food item and redirect to the list view. Food.$save calls the built-in resource action, whereas Food.$update calls the custom resource action we created above – both of these actions then interact with the Sails API on the server.

Wrapping Up

So there we have it. While this is still an example app – it’s way more organized than the previous iterations. The code files are available in this gist.

Learning AngularJS II : Filtering / Ordering

Last time I wrote about some basic AngularJS functionality for interacting with a RESTful API. We’re going to continue where we left off with our food inventory app and add some filtering / sorting. Check out the first post if you missed it, as we will be depending heavily on what is covered there.

Filtering

Let’s say we want to search through our food inventory for something specific, like oranges. We first need to open the index.ejs (that we created in Part 1) and add the following right before our table definition.

<div class="filter">
  <label for="filter">filter:</label>
  <input type="text" name="filter" ng-model="filter" />
</div>

The div isn’t entirely necessary; however, it could be useful for applying styling (it’s pretty ugly as it sits). Now that the filter definition is complete, we need to go back to our repeater definition and pipe the results through the filter like so:

<tr class="row" ng-repeat="f in food | filter:filter">

In a console in your project directory, fire off a sails lift command, navigate to http://localhost:1337 in your browser of choice and start typing in the filter input box. You’ll notice that all of the model-bound columns are available to be filtered (e.g. entering fruit displays only food items classified as fruit, typing orange shows only records with orange in the name, and so on). Also, you may notice that the filter is not case sensitive.

Ordering

Now let’s add the ability to sort the data in our table. If we followed the basic example on the AngularJS docs site, we could simply create a sort variable that we modify in the table headers and reference in the orderBy of our repeater. The value of the sort property should be the name of one of our columns.

<th><a ng-click="sort='name'">Name</a></th>
...
<tr class="row" ng-repeat="f in food | filter:filter | orderBy:sort">

To handle ascending / descending we could do something like this (however, as we’ll see in a minute this may not be an ideal solution):

<th><a ng-click="sort='name'; reverse=!reverse">Name</a></th>
...
<tr class="row" ng-repeat="f in food | filter:filter | orderBy:sort:reverse">

Unfortunately, the reverse value would be shared across all columns. That means clicking a new column header toggles reverse again instead of resetting it, so the new column can start out sorted in the wrong direction. The problem is that the shared reverse variable never gets reset when sorting by a different column.

To get around this, let’s move our sorting functionality to the controller so we’re not duplicating a lot of code:

$scope.sort = "name";
$scope.reverse = false;

$scope.changeSort = function(value){
    if ($scope.sort == value){
      $scope.reverse = !$scope.reverse;
      return;
    }

    $scope.sort = value;
    $scope.reverse = false;
}

We’re creating the sort and reverse properties that are referenced in the orderBy of the repeater (orderBy:sort:reverse), as well as a function to manage whether to change the sort column or simply flip the value of reverse. If you click the ‘Name’ column several times, the sort will not change; however, the reverse value will (which will toggle ascending / descending order).
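Stripped of the $scope wiring, the toggle behavior can be exercised in plain JavaScript – this is a sketch for illustration, with a state object standing in for $scope:

```javascript
// Plain-object stand-in for $scope, to show how changeSort behaves.
var state = { sort: 'name', reverse: false };

function changeSort(value) {
  if (state.sort === value) {
    // Same column clicked again: flip ascending/descending.
    state.reverse = !state.reverse;
    return;
  }
  // A different column: switch to it and reset to ascending.
  state.sort = value;
  state.reverse = false;
}

changeSort('name'); // same column -> state.reverse is now true
changeSort('type'); // new column  -> state.sort is 'type', state.reverse reset to false
```

The second call is the important one: because reverse is explicitly reset when the column changes, switching columns always starts in ascending order, which is exactly what the inline reverse=!reverse version got wrong.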

Next we need to change our table headers so they call this function when clicked. As before, the column’s property name will be passed as a parameter to this function:

<th><a ng-click="changeSort('name')">Name</a></th>
<th><a ng-click="changeSort('type')">Type</a></th>
<th><a ng-click="changeSort('expiration')">Expiration</a></th>

At this point fire up the page and see how everything is looking. The sorting / filtering is all working as intended. I have created a gist of the newly created index.ejs file that you can view here. In the next part of this series we will look at routes and editing our data.