Justin Beckwith Just another technology blog. http://jbeckwith.com Dependency management and Go <p>I find dependency management and package managers interesting. Each language has its own package manager, and each one has characteristics that are specific to that community. NuGet for .NET has great tooling and Visual Studio support, since that’s important to the .NET developer audience. NPM has a super flexible model, and great command line tools.</p> <p>In a lot of ways, golang is a little <a href="https://golang.org/doc/faq#Why_doesnt_Go_have_feature_X">quirky</a>. And that’s awesome. However - I’ve really struggled to wrap my head around dependency management in Go. </p> <p><img src="/images/2015/dependency-management-go/package.png" alt="&quot;Dependency management and golang&quot;" /></p> <p>When dealing with dependency management, I expect a few things:</p> <h4 id="repeatable-builds">1. Repeatable builds</h4> <p>Given the same source code, I expect to be able to reproduce the same set of binaries. Every. Time. Every bit of information needed to complete a build, whether it be on my local dev box or on a build server, should be explicitly called out in my source code. No surprises.</p> <h4 id="isolated-environments">2. Isolated environments</h4> <p>I am likely to be working on multiple projects at a time. Each project may require different compilers, and different versions of the same dependency. At no point should changing a dependency in one project have an effect on the dependencies of a completely separate project. </p> <h4 id="consensus">3. Consensus</h4> <p>Having a package management story is awesome. What’s even better is making sure everyone uses the same one :) As long as developers are inventive and curious, there will always be alternatives. But there needs to be consensus on a community accepted standard for how a package manager will work. If 5 projects use 5 different models of dependency management, we’re all out of luck. 
</p> <h2 id="how-nodejs-does-it">How node.js does it</h2> <p><a href="http://jbeckwith.com/2015/01/04/comparing-go-and-dotnet/">As I’ve talked about before</a>, I like to use my experience with other languages as a way to learn about a new language (just like most people, I’d assume). Let’s take a look at how NPM for node.js solves these problems. </p> <p>Similar to the <code>go get</code> command, there is an <code>npm install</code> command. It looks like this:</p> <pre><code class="language-bash"> npm install --save yelp </code></pre> <p>The big difference you’ll see is <code>--save</code>. This tells NPM to save the dependency, and the version I’m using, into the <code>package.json</code> for my project:</p> <pre><code class="language-javascript"> { "name": "pollster", "version": "2.0.0", "private": true, "scripts": { "start": "node server" }, "dependencies": { "express": "~3.1.0", ... "nconf": "~0.6.7", "socket.io": "~0.9.13" } } </code></pre> <p><code>package.json</code> is stored in the top level directory of my app. It provides my <code>isolation</code>. If I start another project - that means another <code>package.json</code>, another set of dependencies. The environments are entirely isolated. The list of dependencies and their versions provides my <code>repeatability</code>. Every time someone clones my repository and runs <code>npm install</code>, they will get the same list of dependencies from a centralized source. The fact that most people use NPM provides my <code>consensus</code>. </p> <p>Version pinning is accomplished using <a href="http://semver.org/">semver</a>. The <code>~</code> relaxes the rules on version matching, meaning I’m ok with bringing down a different version of my dependency, as long as it is only a <code>PATCH</code> - which means no API breaking changes, only bug fixes. If you’re being super picky (on production stuff I am), you can pin an exact version by omitting the <code>~</code>. 
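</p> <p>To make the <code>~</code> rule concrete, here is a rough sketch of that patch-only matching logic in Go (a simplified illustration: it assumes plain <code>x.y.z</code> versions and ignores prerelease tags and the rest of the semver range grammar):</p> <pre><code class="language-go">package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse splits a plain "x.y.z" version into its numeric parts.
func parse(v string) (major, minor, patch int) {
	p := strings.SplitN(v, ".", 3)
	major, _ = strconv.Atoi(p[0])
	minor, _ = strconv.Atoi(p[1])
	patch, _ = strconv.Atoi(p[2])
	return
}

// tildeSatisfies reports whether version is acceptable under a
// "~x.y.z" constraint: MAJOR and MINOR must match exactly, and
// PATCH may only move forward.
func tildeSatisfies(constraint, version string) bool {
	cMaj, cMin, cPat := parse(strings.TrimPrefix(constraint, "~"))
	vMaj, vMin, vPat := parse(version)
	if vMaj != cMaj || vMin != cMin {
		return false
	}
	return vPat >= cPat
}

func main() {
	fmt.Println(tildeSatisfies("~3.1.0", "3.1.4")) // bug fix only: true
	fmt.Println(tildeSatisfies("~3.1.0", "3.2.0")) // minor bump: false
}</code></pre> <p>A real implementation would use a full semver library, but the intent of <code>~</code> is exactly this narrow. 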
For downstream dependencies (dependencies of your dependencies) you can lock those in as well using <a href="https://docs.npmjs.com/cli/shrinkwrap">npm-shrinkwrap</a>. On one of my <a href="http://azure.microsoft.com/en-us/marketplace/partners/microsoft/nodejsstartersite/">projects</a>, I got bit by the lack of shrink-wrapping when a misbehaved package author used a wildcard import for a downstream dependency that actually broke us in production. </p> <p>The typical workflow is to check in your <code>package.json</code>, and then .gitignore your <code>node_modules</code> directory that contains the actual source code of 3rd party packages.</p> <p>It’s all pretty awesome. </p> <h2 id="go-out-of-the-box">Go out of the box</h2> <p>With the out of the box behavior, Go is less than ideal in repeatability, isolation, and consensus. If you follow the setup guide for golang, you’ll find yourself with a single directory where you’re supposed to keep all of your code. Inside of there, you create a /src directory, and a new directory for each project you’re going to work on. When you install a dependency using <code>go get</code>, it will essentially drop the source code from that repository into <code>$GOPATH/src</code>. In your source code, you just tell the compiler where it needs to go to grab the latest sources:</p> <pre><code class="language-go">import "github.com/JustinBeckwith/go-yelp/yelp" ... client := yelp.New(options) result, err := client.DoSimpleSearch("coffee", "seattle")</code></pre> <p>So this is <em>really</em> bad. The <a href="https://github.com/JustinBeckwith/go-yelp">go-yelp</a> library I’m importing from GitHub is pulled down at compile time (if not already available from a <code>go get</code> command), and built into my project. That is pointing to the <em>master</em> branch of my GitHub repository. Who’s to say I won’t change my API tomorrow, breaking everyone who has imported the library in this way? 
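</p> <p>It is worth pausing on how little information that import path carries. A toy resolver (a hypothetical helper, far simpler than what <code>go get</code> really does - it skips the HTML meta-tag lookup used for custom import domains) makes the gap obvious: the path names a repository, never a version:</p> <pre><code class="language-go">package main

import (
	"fmt"
	"strings"
)

// repoFor sketches how a GitHub-style import path maps to source:
// the first three segments pick the repository, deeper segments are
// just directories inside it, and the ref is always the default
// branch, because no version appears anywhere in the path.
func repoFor(importPath string) (cloneURL, ref string) {
	parts := strings.Split(importPath, "/")
	cloneURL = "https://" + strings.Join(parts[:3], "/")
	return cloneURL, "master"
}

func main() {
	url, ref := repoFor("github.com/JustinBeckwith/go-yelp/yelp")
	fmt.Println(url, ref)
	// Prints: https://github.com/JustinBeckwith/go-yelp master
}</code></pre> <p>Whatever commit happens to be sitting at the tip of that branch is what you get. 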
As a library author, I’m left with 3 options:</p> <ol> <li>Never make breaking changes.</li> <li>Make a completely new repository on GitHub for a new version of my API that has breaking changes.</li> <li>Make breaking changes, and assume / hope developers are using a dependency management tool. </li> </ol> <p>Without using an external tool (or one of the methods I’ll talk about below), there is no concept of version pinning in Go. You point towards a namespace, and that path is used to find your code during the build. For most open source projects - the out of the box behavior is broken.</p> <p>My problem is that the default workflow on a go project leads you down a path of sadness. You start with a magical <code>go get</code> command that installs the latest and greatest version of a dependency - but doesn’t ask you which specific version or hash of that dependency you should be using. Most web developers have been conditioned not to check their dependencies into source control if they’re managed by a package manager (see: gem, NuGet, NPM, bower, etc). The end result is that I could easily break someone else, and I can easily be broken.</p> <h2 id="vendoring-import-rewrites-and-the-gopath">Vendoring, import rewrites, and the GOPATH</h2> <p>There is currently no agreed upon package manager for Go. Recently the Go team kicked up <a href="https://groups.google.com/forum/#!msg/golang-dev/nMWoEAG55v8/iJGgur7W_SEJ">a great thread</a> asking the community for their thoughts on a package management system. There are a few high level concepts that are helpful to understand.</p> <h4 id="vendoring">Vendoring</h4> <p>At Google, the source code for a dependency is copied into the source tree, and checked into source control. This provides <code>repeatability</code>. There is never a question on where the source is downloaded from, because it is always available in the source tree. Copying the source from a dependency into your own source is referred to as “vendoring”. 
</p> <h4 id="import-rewriting">Import rewriting</h4> <p>After you copy the code into your source tree, you need to change your import path to not point at the original source, but rather to point at a path in your tree. This is called “Import rewriting”.</p> <p>After copying a library into your tree, instead of this:</p> <pre><code class="language-go">import "github.com/JustinBeckwith/go-yelp/yelp" ... client := yelp.New(options)</code></pre> <p>you would do this:</p> <pre><code class="language-go">import "yourtree/third_party/github.com/JustinBeckwith/go-yelp/yelp" ... client := yelp.New(options)</code></pre> <h4 id="gopath-rewriting">GOPATH rewriting</h4> <p>Vendoring and import rewriting provide our repeatable builds. But what about isolation? If project (x) relies on go-yelp#v1.0, project (y) should be able to rely on go-yelp#v2.0. They should be isolated. If you follow <a href="https://golang.org/doc/code.html">How to write go code</a>, you’re led down a path of a single workspace, which is driven by <code>$GOPATH</code>. <code>$GOPATH</code> is where libraries installed via <code>go get</code> will be installed. It controls where your own binaries are generated. It’s generally the defining variable for the root of your workspace. If you try to run multiple projects out of the same directory - it completely blows up <code>isolation</code>. If you want to be able to reference different versions of the same dependency, you need to change the $GOPATH variable for the current project. The act of changing the $GOPATH environment variable when switching projects is “GOPATH rewriting”. </p> <h2 id="package-managers--tools">Package managers &amp; tools</h2> <p>Given the lack of prescriptive guidance on how to deal with dependency management, quite a few tools have popped up. 
In no particular order, here are a few I found:</p> <ul> <li><a href="https://github.com/tools/godep">https://github.com/tools/godep</a></li> <li><a href="https://github.com/gpmgo/gopm">https://github.com/gpmgo/gopm</a></li> <li><a href="https://github.com/pote/gpm">https://github.com/pote/gpm</a></li> <li><a href="https://github.com/nitrous-io/goop">https://github.com/nitrous-io/goop</a></li> <li><a href="https://github.com/alouche/rodent">https://github.com/alouche/rodent</a></li> <li><a href="https://github.com/jingweno/nut">https://github.com/jingweno/nut</a></li> <li><a href="https://github.com/niemeyer/gopkg">https://github.com/niemeyer/gopkg</a></li> <li><a href="https://github.com/mjibson/party">https://github.com/mjibson/party</a></li> <li><a href="https://github.com/kardianos/vendor">https://github.com/kardianos/vendor</a></li> <li><a href="https://github.com/kisielk/vendorize">https://github.com/kisielk/vendorize</a></li> <li><a href="https://github.com/mattn/gom">https://github.com/mattn/gom</a></li> <li><a href="https://github.com/dkulchenko/bunch">https://github.com/dkulchenko/bunch</a></li> <li><a href="https://github.com/skelterjohn/wgo">https://github.com/skelterjohn/wgo</a></li> <li><a href="https://github.com/Masterminds/glide">https://github.com/Masterminds/glide</a></li> <li><a href="https://github.com/robfig/glock">https://github.com/robfig/glock</a></li> <li><a href="https://bitbucket.org/vegansk/gobs">https://bitbucket.org/vegansk/gobs</a></li> <li><a href="https://launchpad.net/godeps">https://launchpad.net/godeps</a></li> <li><a href="https://github.com/d2fn/gopack">https://github.com/d2fn/gopack</a></li> <li><a href="https://github.com/laher/gopin">https://github.com/laher/gopin</a></li> <li><a href="https://github.com/LyricalSecurity/gigo">https://github.com/LyricalSecurity/gigo</a></li> <li><a href="https://github.com/VividCortex/johnny-deps">https://github.com/VividCortex/johnny-deps</a></li> </ul> <p>Given my big 3 requirements above, 
I checked out the most popular of the repos above, and settled on godep. The alternatives all fell into at least one of these traps:</p> <ul> <li>Forced rewriting the url, making it harder to manage dependency paths</li> <li>Relied on a centralized service</li> <li>Only works on a single platform</li> <li>Doesn’t provide isolation in the $GOPATH</li> </ul> <h3 id="godep">godep</h3> <p><a href="https://github.com/tools/godep">Godep</a> matched most of my requirements for a package manager, and is the most popular solution in the community. It solves the repeatability and isolation issues above. The workflow:</p> <p>Run <code>go get</code> to install a dependency (nothing new here):</p> <pre><code class="language-bash"> go get github.com/JustinBeckwith/go-yelp/yelp </code></pre> <p>When you’re done installing dependencies, use the <code>godep save</code> command. This will copy all of the referenced code imported into the project from the current $GOPATH into the ./Godeps directory in your project. Make sure to check this into source control. </p> <pre><code class="language-bash"> godep save </code></pre> <p>It also will walk the graph of dependencies and create a ./Godeps/Godeps.json file:</p> <pre><code class="language-javascript"> { "ImportPath": "github.com/JustinBeckwith/coffee", "GoVersion": "go1.4.2", "Deps": [ { "ImportPath": "github.com/JustinBeckwith/go-yelp/yelp", "Rev": "e0e1b550d545d9be0446ce324babcb16f09270f5" }, { "ImportPath": "github.com/JustinBeckwith/oauth", "Rev": "a1577bd3870218dc30725a7cf4655e9917e3751b" }, .... </code></pre> <p>When it’s time to build, use the godep tool instead of the standard go toolchain:</p> <pre><code class="language-bash"> godep go build </code></pre> <p>The <code>$GOPATH</code> is automatically rewritten to use the local copy of dependencies, ensuring you have isolation for your project. 
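</p> <p>Because <code>Godeps.json</code> is plain JSON, it is also easy to inspect from code - for example, listing which revision each dependency is pinned to (the struct fields below mirror the file shown above; the inlined document is abbreviated and purely illustrative):</p> <pre><code class="language-go">package main

import (
	"encoding/json"
	"fmt"
)

// Godeps mirrors the fields of the Godeps/Godeps.json file above.
type Godeps struct {
	ImportPath string
	GoVersion  string
	Deps       []struct {
		ImportPath string
		Rev        string
	}
}

// pinnedRevs maps each dependency's import path to its pinned revision.
func pinnedRevs(data []byte) (map[string]string, error) {
	var g Godeps
	if err := json.Unmarshal(data, &g); err != nil {
		return nil, err
	}
	revs := make(map[string]string)
	for _, d := range g.Deps {
		revs[d.ImportPath] = d.Rev
	}
	return revs, nil
}

func main() {
	doc := []byte(`{
	  "ImportPath": "github.com/JustinBeckwith/coffee",
	  "GoVersion": "go1.4.2",
	  "Deps": [
	    {"ImportPath": "github.com/JustinBeckwith/go-yelp/yelp",
	     "Rev": "e0e1b550d545d9be0446ce324babcb16f09270f5"}
	  ]
	}`)
	revs, err := pinnedRevs(doc)
	if err != nil {
		panic(err)
	}
	fmt.Println(revs["github.com/JustinBeckwith/go-yelp/yelp"])
}</code></pre> <p>Those pinned <code>Rev</code> hashes are the heart of the repeatability story. 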
This approach is great for a few reasons:</p> <ol> <li><em>Repeatable builds</em> - When someone clones the repository and runs it, everything you need to build is present. There are no floating versions.</li> <li><em>No external repository needed for dependencies</em> - with all dependencies checked into the local repository, there’s no need to worry about a centralized service. <a href="http://blog.npmjs.org/post/76918947811/registry-downtime-2014-02-16">NPM</a> will occasionally go down, as does <a href="http://blog.nuget.org/20140403/nuget-2.8.1-april-2nd-downtime.html">NuGet</a>.</li> <li><em>Isolated environment</em> - With $GOPATH being rewritten at build time, you have complete isolation from one project to the next. </li> <li><em>No import rewriting</em> - A few other tools operate by changing the import url from the origin repository to a rewritten local repository. This makes installing dependencies a little painful, and makes the import statement somewhat unsightly. </li> </ol> <p>There are a few negatives as well:</p> <ol> <li>Checking in your dependencies is inconvenient. It’s a pain to check in thousands of source files I won’t really edit. Without a centralized repository, this is not likely to be solved. </li> <li>You need to use a wrapped toolchain with the <code>godep</code> commands. There is still no real consensus.</li> </ol> <p>For an example of a project that uses godep, check out <a href="https://github.com/JustinBeckwith/coffee">coffee</a>.</p> <h2 id="wrapping-up">Wrapping up</h2> <p>While using godep is great - I’d really love to see consensus. It’s way too easy for newcomers to fall into the trap of floating dependencies, and without much official guidance, it’s hard to come to any sort of consensus on the right approach. At this stage - it’s really up to each team to pick what they value in their dependency management story and choose one of the (many) options out there. 
Until proven otherwise, I’m sticking with godep.</p> <h2 id="great-posts-on-this-subject">Great posts on this subject</h2> <p>There have been a lot of great posts by others on this subject; check these out as well:</p> <ul> <li><a href="http://dave.cheney.net/2013/10/10/why-i-think-go-package-management-is-important">http://dave.cheney.net/2013/10/10/why-i-think-go-package-management-is-important</a></li> <li><a href="http://dave.cheney.net/2014/03/22/thoughts-on-go-package-management-six-months-on">http://dave.cheney.net/2014/03/22/thoughts-on-go-package-management-six-months-on</a></li> <li><a href="http://nathany.com/go-packages/">http://nathany.com/go-packages/</a></li> <li><a href="http://blog.gopheracademy.com/advent-2014/deps/">http://blog.gopheracademy.com/advent-2014/deps/</a></li> <li><a href="http://blog.gopheracademy.com/advent-2014/case-against-3pl/">http://blog.gopheracademy.com/advent-2014/case-against-3pl/</a></li> <li><a href="http://kylelemons.net/blog/2012/04/22-rx-for-go-headaches.article">http://kylelemons.net/blog/2012/04/22-rx-for-go-headaches.article</a></li> <li><a href="http://dev.af83.com/2013/09/14/a-journey-in-golang-package-manager.html">http://dev.af83.com/2013/09/14/a-journey-in-golang-package-manager.html</a></li> </ul> Fri, 29 May 2015 00:00:00 +0000 http://jbeckwith.com/2015/05/29/dependency-management-go/ Docker, Revel, and AppEngine <p><img src="/images/2015/docker-revel-appengine/revel.png" alt="&quot;Revel running on Google AppEngine with Docker&quot;" /></p> <p>I’ve spent some time recently using <a href="http://jbeckwith.com/2015/01/04/comparing-go-and-dotnet/">go</a> for my side web projects. The Go standard libraries are minimal by design - meaning Go doesn’t come with a prescriptive web framework out of the box. 
The good news is that there are a ton of options:</p> <ul> <li><a href="https://revel.github.io/">Revel</a></li> <li><a href="https://github.com/gin-gonic/gin">Gin</a></li> <li><a href="http://martini.codegangsta.io/">Martini</a></li> <li><a href="http://beego.me/">Beego</a></li> <li><a href="http://www.gorillatoolkit.org/">Gorilla</a></li> </ul> <p>Of course, you could decide to just <a href="https://news.ycombinator.com/item?id=8772760">not use a web framework at all</a>. Comparing these is a topic of great debate - but that topic is for another post :) I decided to try out Revel first, as it was the closest to a full featured rails-esque framework at a glance. I’ll likely give all of these a shot at some point.</p> <p>After building an app on Revel, I wanted to get a feel for deploying my app to see if it posed any unique challenges. I recently started a new gig working on <a href="http://cloud.google.com">Google Cloud</a>, and decided to try out <a href="https://cloud.google.com/appengine/docs">AppEngine</a>. The default runtime environment for Go in AppEngine is <a href="https://cloud.google.com/appengine/docs/go/#Go_The_sandbox">sandboxed</a>. This comes with some benefits, and a few challenges. You get a lot of stuff for free, but you also are restricted in terms of file system access, network access, and library usage. Given the restrictions, I decided to use the new <a href="https://cloud.google.com/appengine/docs/go/managed-vms/">managed VM</a> service. Managed VMs let you deploy your application in a docker container, while still having access to the other AppEngine features like <a href="https://cloud.google.com/appengine/features/#datastore">datastore</a>, <a href="https://cloud.google.com/appengine/features/#logs">logging</a>, <a href="https://cloud.google.com/appengine/features/#memcache">caching</a>, etc. The advantage of using docker here is that I don’t need to write any AppEngine specific code. 
I can write a standard Go/Revel app, and just deploy to docker.</p> <h2 id="starting-with-revel">Starting with Revel</h2> <p>There’s a pretty great <a href="https://revel.github.io/tutorial/gettingstarted.html">getting started tutorial for Revel</a>. After getting the libraries installed, scaffold a new app with the <a href="https://revel.github.io/tutorial/createapp.html"><code>revel new</code></a> command:</p> <pre><code class="language-bash">go get github.com/revel/revel go get github.com/revel/cmd/revel revel new myapp </code></pre> <h2 id="using-docker">Using Docker</h2> <p>Before touching managed VMs in AppEngine, the first step is to get it working with docker. It took a little time and effort, but once docker is <a href="https://docs.docker.com/installation/">completely set up on your machine</a>, writing the docker file is straight forward.</p> <p>Here’s the docker file I’m using right now:</p> <pre><code class="language-docker"> # Use the official go docker image built on debian. FROM golang:1.4.2 # Grab the source code and add it to the workspace. ADD . /go/src/github.com/JustinBeckwith/revel-appengine # Install revel and the revel CLI. RUN go get github.com/revel/revel RUN go get github.com/revel/cmd/revel # Use the revel CLI to start up our application. ENTRYPOINT revel run github.com/JustinBeckwith/revel-appengine dev 8080 # Open up the port where the app is running. EXPOSE 8080 </code></pre> <p>There are a few things to call out with this Dockerfile:</p> <ol> <li> <p>I chose to use the <a href="https://registry.hub.docker.com/_/golang/">golang docker image</a> as my base. You could replicate the steps needed to install and configure go with a base debian/ubuntu image, but I found this easier. 
I could have also used the <a href="https://cloud.google.com/appengine/docs/managed-vms/custom-runtimes#base_images">pre-configured AppEngine golang image</a>, but I did not need the additional service account support.</p> </li> <li> <p>The <code>ENTRYPOINT</code> command tells Docker (and AppEngine) which process to run when the container is started. I’m using the CLI included with revel.</p> </li> <li> <p>For the <code>ENTRYPOINT</code> and <code>EXPOSE</code> directives, make sure to use port 8080 - this is a hard coded port for AppEngine.</p> </li> </ol> <p>To start using docker with your existing revel app, you need to <a href="https://docs.docker.com/installation/">install docker</a> and copy the <a href="https://github.com/JustinBeckwith/revel-appengine/blob/master/Dockerfile">dockerfile</a> into the root of your app. Update the dockerfile to change the path in the <code>ADD</code> and <code>ENTRYPOINT</code> instructions to use the local path to your revel app instead of mine.</p> <p>After you have docker set up, build your image and try running the app:</p> <pre><code class="language-bash"> # make sure docker is running (I'm in OSX) boot2docker up $(boot2docker shellinit) # build and run the image docker build -t revel-appengine . docker run -it -p 8080:8080 revel-appengine </code></pre> <p>This will run docker, build the image locally, and then run it. Try hitting <a href="http://localhost:8080"><code>http://localhost:8080</code></a> in your browser. You should see the revel startup page:</p> <p><img src="/images/2015/docker-revel-appengine/docker.png" alt="&quot;Running revel in docker&quot;" /></p> <p>Now we’re running revel inside of docker.</p> <h2 id="appengine-managed-vms">AppEngine Managed VMs</h2> <p>The original version of AppEngine had a bit of a funny way of managing application runtimes. There is a limited set of stacks available, and you’re left using a locked down version of an approved runtime. 
Managed VMs get rid of this restriction by letting you run pretty much anything inside of a container. You just need to define a little bit of extra config in an <code>app.yaml</code> file that tells AppEngine how to treat your container:</p> <pre><code class="language-yaml"> runtime: custom vm: true api_version: go1 health_check: enable_health_check: False </code></pre> <p>This config lets me use AppEngine, with a custom docker image as my runtime, running on a managed virtual machine. You can copy my <a href="https://github.com/JustinBeckwith/revel-appengine/blob/master/app.yaml">app.yaml</a> into your app directory, alongside the <a href="https://github.com/JustinBeckwith/revel-appengine/blob/master/Dockerfile">Dockerfile</a>. Next, make sure you’ve signed up for a <a href="https://cloud.google.com/">Google Cloud</a> account, and download the <a href="https://cloud.google.com/sdk/">Google Cloud SDK</a>. After getting all of that set up, you’ll need to create a new project in the <a href="https://console.developers.google.com/">developer console</a>.</p> <pre><code class="language-bash"> # Install the Google Cloud SDK curl https://sdk.cloud.google.com | bash # Log into your account gcloud auth login # Install the preview components gcloud components update app # Set the project gcloud config set project &lt;project-id&gt; </code></pre> <p>That covers the initial setup. After you have a project created, you can try running the app locally. 
This is essentially going to start up your app using the Dockerfile we defined earlier:</p> <pre><code class="language-bash"> # Run the revel application locally gcloud preview app run ./app.yaml # Deploy the application gcloud preview app deploy ./app.yaml </code></pre> <p>After deploying, you can visit your site here: <a href="http://revel-gae.appspot.com"><code>http://revel-gae.appspot.com</code></a></p> <p><img src="/images/2015/docker-revel-appengine/appengine.png" alt="Revel running on AppEngine" /></p> <h2 id="wrapping-up">Wrapping up</h2> <p>So that’s it. I decided to use revel for this one, but the whole idea behind using docker for AppEngine is that you can bring pretty much any stack. If you have any questions, feel free to <a href="http://github.com/JustinBeckwith/revel-appengine">check out the source</a>, or find me <a href="https://twitter.com/JustinBeckwith">@JustinBeckwith</a>.</p> Fri, 08 May 2015 00:00:00 +0000 http://jbeckwith.com/2015/05/08/docker-revel-appengine/ Realtime services with io.js, redis and Azure <p><a href="http://wazstagram.azurewebsites.net"><img src="/images/2013/01/waz-screenshot.png" alt="&quot;View the demo&quot;" /></a></p> <p>A few years ago, I put together a <a href="http://jbeckwith.com/2013/01/30/building-scalable-realtime-services-with-node-js-socket-io-and-windows-azure/">fun little app</a> that used node.js, service bus, cloud services, and the Instagram realtime API to build a realtime visualization of images posted to Instagram. In 2 years’ time, a lot has changed on the Azure platform. I decided to go back into that code, and retool it to take advantage of some new technology and platform features. And for fun. 
</p> <ul> <li><a href="http://jbeckwith.com/2013/01/30/building-scalable-realtime-services-with-node-js-socket-io-and-windows-azure/">Original blog post</a></li> <li><a href="http://wazstagram.azurewebsites.net/">View the demo on Azure</a></li> <li><a href="https://github.com/JustinBeckwith/wazstagram">View the code on GitHub</a></li> </ul> <p>Let’s take a look through the updates!</p> <h2 id="resource-groups">Resource groups</h2> <p>I’m using <a href="http://azure.microsoft.com/en-us/documentation/articles/azure-preview-portal-using-resource-groups/">resource groups</a> to organize the various services. Resource groups provide a nice way to visualize and manage the services that make up an app. RBAC and aggregated monitoring are two of the biggest features that make this useful.</p> <p><img src="/images/2015/wazstagram/resource-group.png" alt="&quot;Using a resource group makes it easier to organize services&quot;" /></p> <h2 id="websites--websockets">Websites &amp; Websockets</h2> <p>In the original version of this app, I chose to use <a href="http://azure.microsoft.com/en-us/services/cloud-services/">cloud services</a> instead of <a href="http://azure.microsoft.com/en-us/documentation/services/websites/">Azure web sites</a>. One of the biggest reasons for this choice was websocket support with socket.io. At the time, Azure websites did not support websockets. Well… now it does. There are a lot of reasons to choose websites over cloud services:</p> <ul> <li>Fast continuous deployment via Github</li> <li>Low concept count, no special tooling needed</li> <li>Now supports deployment slots, ssl, enterprise features</li> </ul> <p>When you create your site, make sure to turn on websockets:</p> <p><img src="/images/2015/wazstagram/websockets.png" alt="&quot;setting up websockets&quot;" /></p> <h2 id="iojs">io.js</h2> <p><a href="https://iojs.org/">io.js</a> is a fork of <a href="http://nodejs.org/">node.js</a> that provides a faster release cycle and es6 support. 
It’s pretty easy to get it running on Azure, thanks to <a href="https://github.com/felixrieseberg/iojs-azure">iojs-azure</a>. Just to prove I’m running io.js instead of node.js, I added this little bit in my server.js:</p> <pre><code class="language-javascript">logger.info(`Started wazstagram running on ${process.title} ${process.version}`);</code></pre> <p>The results:</p> <p><img src="/images/2015/wazstagram/iojs.png" alt="&quot;Console says it's io.js&quot;" /></p> <h2 id="redis">redis</h2> <p>In the previous version of this app, I used service bus for publishing messages from the back end process to the scaled out front end nodes. This worked great, but I’m more comfortable with redis. There are a lot of options for redis on Azure, but we recently rolled out a first class redis cache service, so I decided to give that a try. I’m really looking to use two features from redis:</p> <ul> <li>Pub / Sub - Messages received by Instagram are published to the scaled out front end</li> <li>Caching - I keep a cache of 100 messages around to auto-fill the page on the initial visit</li> </ul> <p>You can create a new redis cache from the Gallery:</p> <p><img src="/images/2015/wazstagram/redis-create.png" alt="&quot;Create a new redis cache&quot;" /></p> <p>After creating the cache, you have a good ol standard redis database. Nothing special/fancy/funky. You can connect to it using the standard redis-cli from the command line:</p> <p><img src="/images/2015/wazstagram/redis-cli.png" alt="&quot;I can connect using standard redis tools&quot;" /></p> <p>Note the password I’m using is actually one of the management keys provided in the portal. I also chose to disable SSL, as nothing I’m storing is sensitive data:</p> <p><img src="/images/2015/wazstagram/redis-ssl.png" alt="&quot;Set up non-SSL connections&quot;" /></p> <p>I used <a href="https://github.com/mranney/node_redis">node-redis</a> to talk to the database, both for pub/sub and cache. 
First, create a new redis client:</p> <pre><code class="language-javascript">function createRedisClient() { return redis.createClient( 6379, nconf.get('redisHost'), { auth_pass: nconf.get('redisKey'), return_buffers: true } ).on("error", function (err) { logger.error("ERR:REDIS: " + err); }); } // create redis clients for the publisher and the subscriber var redisSubClient = createRedisClient(); var redisPubClient = createRedisClient();</code></pre> <p><strong>PROTIP</strong>: Use <a href="https://github.com/flatiron/nconf">nconf</a> to store secrets in json locally, and read from <a href="http://azure.microsoft.com/blog/2013/07/17/windows-azure-web-sites-how-application-strings-and-connection-strings-work/">app settings</a> in Azure. </p> <p>When the Instagram API sends a new image, it’s published to a channel, and centrally cached:</p> <pre><code class="language-javascript">logger.verbose('new pic published from: ' + message.city); logger.verbose(message.pic); redisPubClient.publish('pics', JSON.stringify(message)); // cache results to ensure users get an initial blast of (n) images per city redisPubClient.lpush(message.city, message.pic); redisPubClient.ltrim(message.city, 0, 100); redisPubClient.lpush(universe, message.pic); redisPubClient.ltrim(universe, 0, 100);</code></pre> <p>The centralized cache is great, since I don’t need to use up memory in each io.js process used in my site (keep scale out in mind). Each client also connects to the pub/sub channel, ensuring every instance gets new messages:</p> <pre><code class="language-javascript">// listen to new images from redis pub/sub redisSubClient.on('message', function(channel, message) { logger.verbose('channel: ' + channel + " ; message: " + message); var m = JSON.parse(message.toString()); io.sockets.in (m.city).emit('newPic', m.pic); io.sockets.in (universe).emit('newPic', m.pic); }).subscribe('pics');</code></pre> <p>After setting up the service, I was using the redis-cli to do a lot of debugging. 
There’s also some great monitoring/metrics/alerts available in the portal:</p> <p><img src="/images/2015/wazstagram/redis-mon.png" alt="&quot;monitoring and metrics&quot;" /></p> <h2 id="wrapping-up">Wrapping up</h2> <p>If you have any questions, feel free to <a href="http://github.com/JustinBeckwith/wazstagram">check out the source</a>, or find me <a href="https://twitter.com/JustinBeckwith">@JustinBeckwith</a>.</p> Sun, 15 Feb 2015 00:00:00 +0000 http://jbeckwith.com/2015/02/15/iojs-redis-azure/ http://jbeckwith.com/2015/02/15/iojs-redis-azure/ Please take this personally <p>A few weeks ago I got pulled into a meeting. There’s another team at Microsoft that’s using our SDK to build their UI, and they had a few questions. Their devs had a chance to get their hands on our SDK, and like most product guys, I was interested in getting some unfiltered feedback from an internal team. Before getting into details, someone dropped the phrase <strong>“Please don’t take this personally, but…“</strong></p> <p>The feedback that comes after that sentence is really important. We write it down. We share it with the team. We stack it up against other priorities, compare it with feedback from teams who have similar pain points, and use it to find a way to make our product better. Customer feedback (from an internal team or external customer) is so incredibly critical, that any product team has a similar pattern/process for dealing with it. So what’s the problem?</p> <p>Any feedback that starts with <strong>“Don’t take this personally”</strong> really pisses me off. When you say this to someone, you’re making one of two judgments about this person:</p> <ol> <li> <p><em>They are not personally invested in their work.</em> They go to their job, they do whatever work is put in front of them, and then they go home. If what they’ve made is not good, it doesn’t bother them. </p> </li> <li> <p>They are personally invested in their work. 
They want to create something amazing, and will go to great lengths to do so. Whatever you’re about to say - despite your warning - <em>They’re going to take it personally.</em></p> </li> </ol> <p>For me, that expression elicits a sort of Marty McFly “nobody calls me chicken” response.</p> <div class="embed-container"><iframe src="https://www.youtube.com/embed/gKosmXx1gkc?start=26" frameborder="0" allowfullscreen=""></iframe></div> <p>What’s more personal to me than my product!? I work at Microsoft because I genuinely believe it’s the best place for me to build stuff that has a real tangible impact. I went through years of school so I could do *<strong>this</strong>*. I moved my family across the country. I work 50-60 hours a week (probably more than I should) because I wanted to build *<strong>this</strong>*. My product is in many ways a reflection of me. What could possibly be more personal? </p> <p>Does this mean I don’t want criticism? Of course I do! Objective criticism from an informed customer who has used your product is the greatest gift a product manager can receive. It’s how we get better. Just expect me to take it personally. </p> Sun, 01 Feb 2015 00:00:00 +0000 http://jbeckwith.com/2015/02/01/please-take-this-personally/ http://jbeckwith.com/2015/02/01/please-take-this-personally/ Comparing Go and .NET <p><img src="/images/2015/comparing-go-and-dotnet/gopher.png" alt="&quot;The gopher image is Creative Commons Attributions 3.0 licensed. Credit Renee French.&quot;" title="The gopher image is Creative Commons Attributions 3.0 licensed. Credit Renee French." /></p> <p>2014 was a crazy year. I spent most of the year thinking about client side code while working on <a href="http://jbeckwith.com/2014/09/20/how-the-azure-portal-works/">the new Azure Portal</a>. I like to use the holiday break as a time to hang out with the family, disconnect from work, and learn about something new. 
I figured a programming language used for distributed systems was about as far from client side JavaScript as I could get for a few weeks, so I decided to check out <a href="https://golang.org/">golang</a>. </p> <p>Coming at this from 0 - I used a few resources to get started:</p> <ul> <li><a href="https://tour.golang.org/welcome/1">Tour of Go</a> - this was a great getting started guide that walks step by step through the language.</li> <li><a href="https://gobyexample.com/">Go by Example</a> - I actually learned more from go by example than the tour. It was great.</li> <li><a href="https://github.com/google/go-github">The github API wrapper in Go</a> - I figured I should start with some practical code samples written by folks at Google. When in doubt, I used this to make sure I was doing things ‘the go way’.</li> </ul> <p>I’m a learn-by-doing kind of guy - so I decided to learn by building something I’ve built in the past - an API wrapper for the Yelp API. A few years ago, I was working with the ineffable <a href="https://twitter.com/howard_dierking">Howard Dierking</a> on a side project to compare RoR to ASP.NET. The project we picked needed to work with the Yelp API - and we noticed the lack of a NuGet package that fit the bill (for the record, there was a gem). To get that project kickstarted, I wrote a C# wrapper over the Yelp API - so I figured, why not do the same for Go? You can see the results of these projects here:</p> <ul> <li><a href="https://github.com/JustinBeckwith/YelpSharp">YelpSharp</a> - C# / .NET Yelp wrapper API</li> <li><a href="https://github.com/JustinBeckwith/go-yelp">go-yelp</a> - Go Yelp wrapper API</li> </ul> <p>To get a feel for the differences, it’s useful to poke around the two repositories and compare apples to apples. This isn’t a “Why I’m rage quitting .NET and moving to Go” post, or a “Why Go sucks and is doing it wrong” post. Each stack has its strengths and weaknesses. 
Each is better suited for different teams, projects and development cultures. I found myself wanting to understand Go in terms of what I already know about .NET and nodejs, and wishing there was a guide to bridge those gaps. So I guess my goal is to make it easier for those coming from a .NET background to understand Go, and get a feel for how it relates to similar concepts in the .NET world. Here we go!</p> <p><em><strong>disclaimer:</strong> I’ve been writing C# for 14 years, and Go for 14 days. Please take all judgments of Go with a grain of salt.</em></p> <h3 id="environment-setup">Environment Setup</h3> <p>In the .NET world, after you install Visual Studio - you’re really free to set up a project wherever you like. By default, projects are created in ~/Documents, but that’s just a default. When we need to reference another project - you create a project reference in Visual Studio, or you can install a NuGet package. Either way - there really aren’t any restrictions on where the project lives.</p> <p>Go takes a different approach. If you’re getting started, it’s really important to read/follow this guide:</p> <p><a href="https://golang.org/doc/code.html">How to write go code</a></p> <p>All go code you write goes into a single root directory, which has a directory for each project / namespace. You tell the go tool chain where to find that directory via the $GOPATH environment variable. You can see on my box, I have all of the pieces I’ve written or played around with here:</p> <p><img src="/images/2015/comparing-go-and-dotnet/gopath.png" alt="$GOPATH" /></p> <p>For a single $GOPATH, you have one set of dependencies, and you keep a single version of the go toolchain. It felt a little uncomfortable, and at times it made me think of the <a href="http://msdn.microsoft.com/en-us/library/yf1d93sz%28v=vs.110%29.aspx">GAC</a>. It’s also pretty similar to Ruby. 
Ruby has <a href="https://github.com/wayneeseguin/rvm">RVM</a> to solve this problem, and Go has <a href="https://github.com/moovweb/gvm">GVM</a>. If you’re working on different Go projects that have different requirements for runtime version / dependencies - I’d imagine you want to use GVM. Today, I only have one project - so it is less of a concern. </p> <h3 id="build--runtime">Build &amp; Runtime</h3> <p>In .NET land we use msbuild for compilation. On my team we use Visual Studio at dev time, and run msbuild via jenkins on our CI server. This is a pretty typical setup. At compile time, C# code is compiled down into IL, which is then just-in-time compiled at runtime to the native instruction set of the host system. </p> <p>Go is a little different, as it compiles directly to native code on the current platform. If I compile my yelp library on OSX, it creates an executable that will run on my current machine. This binary is not interpreted, or IL - it is a good ol’ native executable. When linking occurs, the full native go runtime is embedded in your binary. It has much more of a C++ / gcc kind of feel. I’m sure this doesn’t hurt in the performance department - which is one of the reasons folks move from Python/Ruby/Node to Go. </p> <p>Compilation of *.go files is done by running the <code>go build</code> command from within the directory that contains your go files. I haven’t really come across many projects using complex builds - it’s usually sufficient to just use <code>go build</code>. Outside of that - it seems like most folks are using <a href="http://blog.snowfrog.net/2013/06/18/golang-building-with-makefile-and-jenkins/">makefiles to perform build automation</a>. There’s really no equivalent of a *.csproj file, or *.sln file in go - so there’s no baked in file you would run through an msbuild equivalent. There are just *.go files in a directory, that you run the build tool against. At first I found all of this alarming. 
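</p> <p>In practice, the loop is tiny. A minimal sketch of an entire “project” - a single throwaway package, with the build commands as comments (the greeting function is purely illustrative):</p>

```go
// A buildable "project" is nothing but a directory of *.go files.
// From inside that directory:
//
//   go build     // produces a native executable named after the directory
//   go install   // builds it, and drops the binary under $GOPATH/bin
package main

import "fmt"

// greeting exists only so there is something beyond main to exercise.
func greeting() string {
	return "built with nothing but *.go files"
}

func main() {
	fmt.Println(greeting())
}
```

<p>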
After a while - I realized that it mostly “just worked”. It feels very similar to the csproj-less build system of <a href="http://jbeckwith.com/2014/11/09/aspnet-vnext-oredev/">ASP.NET vNext</a>.</p> <h3 id="package-management">Package management</h3> <p>In the .NET world, package management is pretty well known: it’s all about <a href="http://nuget.org">NuGet</a>. You create a project, add a nuspec, compile a nupkg with your binaries, and publish it to <a href="http://nuget.org">nuget.org</a>. There are a lot of NuGet packages that fall under the “must have” category - things like <a href="http://www.nuget.org/packages/Newtonsoft.Json/">JSON.NET</a>, <a href="http://www.nuget.org/packages/elmah/">ELMAH</a> and even <a href="http://www.nuget.org/packages/Microsoft.AspNet.Mvc/6.0.0-beta1">ASP.NET MVC</a>. It’s not uncommon to have 30+ packages referenced in your project. </p> <p>In .NET, we have a <code>packages.config</code> that contains a list of dependencies. This is nice, because it explicitly lays out what we depend upon, and the specific version we want to use:</p> <pre><code class="language-markup">&lt;packages&gt; &lt;package id=&quot;Newtonsoft.Json&quot; version=&quot;4.5.11&quot; targetFramework=&quot;net40&quot; /&gt; &lt;package id=&quot;RestSharp&quot; version=&quot;104.1&quot; targetFramework=&quot;net40&quot; /&gt; &lt;/packages&gt;</code></pre> <p>Go takes a bit of a different approach. The general philosophy of Go seems to trend towards avoiding external dependencies. I’ve found Blake Mizerany’s talk to be pretty representative of the sentiment in the community:</p> <div class="embed-container"><iframe src="http://www.youtube.com/embed/yi5A3cK1LNA" frameborder="0" allowfullscreen=""></iframe></div> <p>In Go - there’s no equivalent of <code>packages.config</code>. It just doesn’t exist. Instead - dependency installation is driven from Git/Hg repositories or local paths. 
The dependency is installed into your go path with the <code>go get</code> command:</p> <pre><code class="language-clike">&gt; go get github.com/JustinBeckwith/go-yelp/yelp</code></pre> <p>This command pulls down the relevant sources from the Git/Hg repository, and builds the binaries specific to your OS. To use the dependency, you don’t reference a namespace or dll - you just import the library using the same url used to acquire the library:</p> <pre><code class="language-go">import "github.com/JustinBeckwith/go-yelp/yelp" ... client := yelp.New(options) result, err := client.DoSimpleSearch("coffee", "seattle")</code></pre> <p>When you run <code>go get</code>, the tool walks through each *.go file, finds the list of external (non BCL) imports, and fetches each of them recursively if needed. This is both awesome and frightening at the same time. It’s great that go doesn’t require explicit dependency lists. It’s great that I don’t need to think of the package and the namespace as different entities. It’s <strong>not</strong> cool that I cannot choose a specific version of a package. No wonder the go community is skeptical of external dependencies - I wouldn’t reference the tip of the master branch of any project and expect it to keep working for the long haul.</p> <p>To get around this limitation, a few package managers started to pop up in the community. <a href="https://code.google.com/p/go-wiki/wiki/PackageManagementTools">There are a lot of them.</a> Given the lack of a single winner in this space, I chose to write my package ‘the go way’ and not attempt to use a package manager.</p> <h3 id="tooling">Tooling</h3> <p>In the .NET world - <a href="http://msdn.microsoft.com/en-us/vstudio/aa718325.aspx">Visual Studio</a> is king. I know a lot of folks that use things like JetBrains or SublimeText for code editing (I’m one of those SublimeText folks), but really it’s all about VS. 
Visual Studio gives us project templates, IntelliSense, builds, tests, refactoring, code outlines - you get it. It’s all in the box. A giant box.</p> <p>With Go, most developers tend to use a more stripped down code editor. There are a lot of folks using vim, SublimeText, or Notepad++. Here are some of the more popular options:</p> <ul> <li><a href="https://github.com/DisposaBoy/GoSublime">GoSublime</a></li> <li><a href="https://github.com/fatih/vim-go">vim-go</a></li> <li><a href="https://github.com/visualfc/liteide">LiteIDE</a></li> </ul> <p>You can find a good conversation about the topic <a href="http://www.reddit.com/r/golang/comments/2739gp/golang_ides/">on this reddit thread</a>. Personally - I’m comfortable with SublimeText, so I went with that + the GoSublime plugin. It gave me syntax highlighting, auto-format on save, and some lightweight IntelliSense for core packages. That having been said, <a href="https://github.com/visualfc/liteide">LiteIDE</a> feels a little closer to a full-featured IDE:</p> <p><img src="/images/2015/comparing-go-and-dotnet/liteide.png" alt="LiteIDE" /></p> <p>There are a lot of options out there - which is a good thing :) Go comes with a variety of other command line tools that make working with the framework easier:</p> <ul> <li><em>go build</em> - builds your code</li> <li><em>go install</em> - builds the code, and installs it in the $GOPATH</li> <li><em>go test</em> - runs all tests in the project</li> <li><em>gofmt</em> - formats your source code matching go coding standards</li> <li><em>gocov</em> - performs code coverage analysis</li> </ul> <p>I used all of these while working on my library. </p> <h3 id="testing">Testing</h3> <p>In <a href="https://github.com/JustinBeckwith/YelpSharp">YelpSharp</a>, I have the typical unit test project included with my package. I have several test files created, each of which has several test functions. I can then run my tests through Visual Studio or the test runner. 
A typical test would look like this:</p> <pre><code class="language-csharp">[TestMethod] public void VerifyGeneralOptions() { var y = new Yelp(Config.Options); var searchOptions = new SearchOptions(); searchOptions.GeneralOptions = new GeneralOptions() { term = "coffee" }; searchOptions.LocationOptions = new LocationOptions() { location = "seattle" }; var results = y.Search(searchOptions).Result; Assert.IsTrue(results.businesses != null); Assert.IsTrue(results.businesses.Count &gt; 0); }</code></pre> <p>The accepted pattern in Go for tests is to write a corresponding <code>&lt;filename&gt;_test.go</code> for each Go file. Every function whose name starts with <code>Test&lt;RestOfFunctionName&gt;</code> is executed as part of the test suite. By running <code>go test</code>, you run every test in the current project. It’s pretty convenient, though I found myself wishing for something that auto-compiled my code and auto-ran tests (similar to the grunt/mocha/concurrent setup I like to use in node). A typical test function in go would look like this:</p> <pre><code class="language-go">// TestGeneralOptions will verify search with location and search term. func TestGeneralOptions(t *testing.T) { client := getClient(t) options := SearchOptions{ GeneralOptions: &amp;GeneralOptions{ Term: "coffee", }, LocationOptions: &amp;LocationOptions{ Location: "seattle", }, } result, err := client.DoSearch(options) check(t, err) assert(t, len(result.Businesses) &gt; 0, containsResults) }</code></pre> <p>The assert I used here is not baked in - there are <a href="http://golang.org/doc/faq#assertions">no asserts in Go</a>. For code coverage reports, the <code>gocov</code> tool does a nice job. To automatically run tests against my GitHub repository, and auto-generate code coverage reports - I’ve been using <a href="https://travis-ci.org/">Travis CI</a> and <a href="https://coveralls.io/">Coveralls.io</a>. 
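</p> <p>Since there are no asserts, a lot of Go test code ends up table-driven instead: each case is a row in a slice of structs. A minimal, hedged sketch of the pattern - localeOptions here is a pared-down stand-in, not the real go-yelp type:</p>

```go
package main

import "fmt"

// localeOptions is a hypothetical, pared-down stand-in for an options struct.
type localeOptions struct {
	cc, lang string
}

// getParameters emits only the fields that were actually set.
func (o localeOptions) getParameters() map[string]string {
	params := make(map[string]string)
	if o.cc != "" {
		params["cc"] = o.cc
	}
	if o.lang != "" {
		params["lang"] = o.lang
	}
	return params
}

func main() {
	// In a real foo_test.go file, this loop would live inside
	// func TestGetParameters(t *testing.T) and call t.Errorf instead of panic.
	cases := []struct {
		in   localeOptions
		want int // expected number of emitted parameters
	}{
		{localeOptions{}, 0},
		{localeOptions{cc: "US"}, 1},
		{localeOptions{cc: "CA", lang: "en"}, 2},
	}
	for _, c := range cases {
		if got := len(c.in.getParameters()); got != c.want {
			panic(fmt.Sprintf("%+v: got %d params, want %d", c.in, got, c.want))
		}
	}
	fmt.Println("all cases passed")
}
```

<p>Adding a new case is just adding a row, which covers a lot of what an assert library would otherwise do.</p> <p>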
I’m planning on writing up another post on the tools you can use to build an effective open source Go library - so more on that later :) </p> <h3 id="programming-language">Programming language</h3> <p>Finally, let’s take a look at some code. C# is amazing. It’s been around now for 15 years or so, and it’s grown methodically (in a good way). In terms of basic syntax, it’s your standard C derivative language:</p> <pre><code class="language-csharp">using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace YelpSharp.Data.Options { /// &lt;summary&gt; /// options for locale /// &lt;/summary&gt; public class LocaleOptions : BaseOptions { /// &lt;summary&gt; /// ISO 3166-1 alpha-2 country code. Default country to use when parsing the location field. /// United States = US, Canada = CA, United Kingdom = GB (not UK). /// &lt;/summary&gt; public string cc { get; set; } /// &lt;summary&gt; /// ISO 639 language code (default=en). Reviews written in the specified language will be shown. /// &lt;/summary&gt; public string lang { get; set; } /// &lt;summary&gt; /// format the properties for the querystring - bounds is a single querystring parameter /// &lt;/summary&gt; /// &lt;returns&gt;&lt;/returns&gt; public override Dictionary&lt;string, string&gt; GetParameters() { var ps = new Dictionary&lt;string, string&gt;(); if (!String.IsNullOrEmpty(cc)) ps.Add(&quot;cc&quot;, this.cc); if (!String.IsNullOrEmpty(lang)) ps.Add(&quot;lang&quot;, this.lang); return ps; } } }</code></pre> <p>C# supports both static and dynamic typing, but generally trends towards a static type style. I’ve always appreciated the way C# can appeal to both new and experienced developers. The best part about the language in my opinion has been the steady, thoughtful introduction of new features. 
Some of the things that happened between C# 1.0 and C# 5.0 (the current version) include:</p> <ul> <li><a href="http://msdn.microsoft.com/en-us/library/hh191443.aspx">Async / Await</a></li> <li><a href="http://msdn.microsoft.com/en-us/library/512aeb7t.aspx">LINQ</a></li> <li><a href="http://msdn.microsoft.com/en-us/library/wa80x488.aspx">Partial Classes &amp; Methods</a></li> <li><a href="http://msdn.microsoft.com/en-us/library/dscyy5s0.aspx">Iterators</a></li> <li><a href="http://msdn.microsoft.com/en-us/library/bb384062.aspx">Object initializers</a></li> <li><a href="http://msdn.microsoft.com/en-us/library/dd264739.aspx">Optional parameters</a></li> <li><a href="http://msdn.microsoft.com/en-us/library/bb384061.aspx">Variable type inference</a></li> <li><a href="http://msdn.microsoft.com/en-us/library/bb383977.aspx">Extension methods</a></li> <li><a href="http://msdn.microsoft.com/en-us/library/1t3y8s4s.aspx">Nullable Types</a></li> <li>many things I’m forgetting…</li> </ul> <p>There’s more great stuff coming in <a href="http://msdn.microsoft.com/en-us/magazine/dn802602.aspx">C# 6.0</a>. </p> <p>With Go - I found most language features to be… well… missing. It’s a really basic language - you get interfaces (not the way we know them), maps, slices, arrays, and some primitives. It’s very minimal - and this is by design. I was surprised when I started using go and found a few things missing (in no order):</p> <ul> <li>Generics</li> <li>Exception handling</li> <li>Method overloading</li> <li>Optional parameters</li> <li>Nullable types</li> <li>Implementing an interface explicitly</li> <li>foreach, while, yield, etc</li> </ul> <p>To understand why Go doesn’t have these features - you have to understand the roots of the language. 
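</p> <p>As an aside - the missing method overloading and optional parameters are usually both handled with a single options struct, the same pattern go-yelp’s SearchOptions uses. A hedged sketch (these names are illustrative, not from the library):</p>

```go
package main

import "fmt"

// searchOptions plays the role overloads would play in C#:
// zero values mean "not specified".
type searchOptions struct {
	term     string
	location string
	limit    int
}

// search takes one options struct instead of several overloaded signatures.
func search(o searchOptions) string {
	if o.limit == 0 {
		o.limit = 20 // the default an optional parameter would have supplied
	}
	return fmt.Sprintf("searching %q in %q (limit %d)", o.term, o.location, o.limit)
}

func main() {
	// Callers set only the fields they care about; the rest default.
	fmt.Println(search(searchOptions{term: "coffee", location: "seattle"}))
	fmt.Println(search(searchOptions{term: "coffee", location: "seattle", limit: 5}))
}
```

<p>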
Rob Pike (one of the designers of Go) explains the problems they were trying to solve, and the design decisions behind the language, in his post <a href="http://commandcenter.blogspot.com/2012/06/less-is-exponentially-more.html">“Less is exponentially more”</a>. The idea is to provide a set of base primitive constructs: only the ones that are absolutely required. </p> <h4 id="interfaces-methods-and-not-classes">Interfaces, methods, and not-classes</h4> <p>Having written a lot of C# - I got used to having all of these features at my disposal. I got used to traditional OOP features like classes, inheritance, method overloading, and explicit interfaces. I also write a lot of JavaScript - so I understand (and love) dynamic typing. What’s weird about Go is that it’s both statically typed - and doesn’t provide many of the OOP features I’ve grown to lean upon. </p> <p>To see how this plays out - let’s look at the same structure written in Go:</p> <pre><code class="language-go">package yelp // LocaleOptions provide additional search options that enable returning results // based on a given country or locale. type LocaleOptions struct { // ISO 3166-1 alpha-2 country code. Default country to use when parsing the location field. // United States = US, Canada = CA, United Kingdom = GB (not UK). cc string // ISO 639 language code (default=en). Reviews written in the specified language will be shown. lang string } // getParameters will reflect over the values of the given struct, and provide a type appropriate // set of querystring parameters that match the defined values. 
func (o *LocaleOptions) getParameters() (params map[string]string, err error) { params = make(map[string]string) if o.cc != "" { params["cc"] = o.cc } if o.lang != "" { params["lang"] = o.lang } return params, nil }</code></pre> <p>There are a few interesting things to call out from these two samples:</p> <ul> <li>The Go sample and C# are close to the same size - XMLDoc in this case really makes C# seem longer.</li> <li>I’m not using a class - but rather a struct. Types can support methods via the syntax above: if you define a func with a receiver that’s a pointer to the struct type, that func becomes a method on the struct. </li> <li>In my C# sample, this structure implements an interface. In Go, you write an interface, and then structures implement them ambiently - there is no <code>implements</code> keyword. </li> <li>Pointers are an important concept in Go. I haven’t had to think about pointers since 2001 (the last time I wrote C++). It’s not a big deal, but not something I expected to run into. </li> <li>Notice that the <code>getParameters()</code> function returns multiple results - that’s new (and kind of cool). </li> <li>The <code>getParameters()</code> method returns an error as one of the potential return values. You need to do that since <em>there is no concept of an exception in Go</em>.</li> </ul> <h4 id="error-handling">Error handling</h4> <p>Let that one sink in for a moment. Go takes a strange (but effective) approach to error handling. Instead of tossing an exception and expecting the caller to catch and react, many (if not most) functions will return an error. It’s on the caller to check the value of that error, and choose how to react. You can learn more about <a href="http://blog.golang.org/error-handling-and-go">error handling in Go here</a>. The net result is that I wrote a lot of code like this:</p> <pre><code class="language-go">// DoSearch performs a complex search with full search options. 
func (client *Client) DoSearch(options SearchOptions) (result SearchResult, err error) { // get the options from the search provider params, err := options.getParameters() if err != nil { return SearchResult{}, err } // perform the search request rawResult, _, err := client.makeRequest(searchArea, &quot;&quot;, params) if err != nil { return SearchResult{}, err } // convert the result from json err = json.Unmarshal(rawResult, &amp;result) if err != nil { return SearchResult{}, err } return result, nil }</code></pre> <p>In this example, the <code>DoSearch</code> method returns multiple values (get used to this), one of which is an error. There are 3 different method calls made in this function - all of which may return an error. For each of them, you need to check the err value, and choose how to react - oftentimes, just bubbling the error back up through the callstack by hand. I haven’t quite learned to love this aspect of the language yet. </p> <h4 id="writing-async-code">Writing async code</h4> <p>In the previous sample, you may have noticed something fishy. On the following line, I’m making an HTTP request, checking for an error, and then moving forward:</p> <pre><code class="language-go">rawResult, _, err := client.makeRequest(searchArea, &quot;&quot;, params) if err != nil { return SearchResult{}, err } ...</code></pre> <p>That code is <em>synchronous</em>. When I first wrote this code - I was fairly certain I was making a mistake. Years of callbacks or promises in node, and years of tasks and async/await in C# had taught me something really clear - synchronous methods that block the thread are bad. But here’s Go - just doing its thing. I thought I was making a mistake, until I started poking around and found a <a href="http://stackoverflow.com/questions/23709118/does-golang-have-callback-concept">few people with the same misunderstanding</a>. 
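</p> <p>It turns out the blocking style is workable because the caller can always push a call onto a goroutine and take the result back over a channel. A hedged sketch - doSearch here is a stand-in for the blocking DoSearch above, not the real method:</p>

```go
package main

import (
	"fmt"
	"strings"
)

// doSearch is a hypothetical stand-in for a blocking search call.
func doSearch(term string) (string, error) {
	return strings.ToUpper(term) + " RESULTS", nil
}

// searchOutcome pairs a result with an error so both travel over one channel.
type searchOutcome struct {
	result string
	err    error
}

func main() {
	// The caller opts in to concurrency: run the blocking call on a
	// goroutine, and receive the outcome over a channel.
	ch := make(chan searchOutcome)
	go func() {
		r, err := doSearch("coffee")
		ch <- searchOutcome{r, err}
	}()

	// Other work could happen here; receiving blocks until the result lands.
	out := <-ch
	if out.err != nil {
		fmt.Println("error:", out.err)
		return
	}
	fmt.Println(out.result) // prints "COFFEE RESULTS"
}
```

<p>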
To make a call asynchronously in Go, it’s largely up to the caller, using a <a href="https://gobyexample.com/goroutines">goroutine</a>. Goroutines are kind of cool. You essentially point at a function and say ‘run this asynchronously’:</p> <pre><code class="language-go">func Announce(message string, delay time.Duration) { go func() { time.Sleep(delay) fmt.Println(message) }() // Note the parentheses - must call the function. }</code></pre> <p>Running <code>go &lt;func&gt;</code> in this manner will run the function concurrently in the same process. This does not spin up a dedicated system thread or fork the process - goroutines are scheduled entirely by the go runtime. Like most things with Go - I was confused and scared at first, as I tried to apply what I know about C# and JavaScript to their model. I haven’t written enough of this style of asynchronous code to have a great feel for the subject, but I plan to spend a lot of time here in the coming weeks. </p> <h3 id="whats-next">What’s next</h3> <p>My experience so far with Go has been at times frustrating, but certainly not boring. The best advice I can give for those new to the language is to let go of your preconceived notion of how [insert concept or task] works - at times it’s almost like the designers of Go tried to do things differently just for the sake of it. And that’s ok :) I was cursing far less at the end of the project than the beginning, and I’ve now started to move towards understanding why it’s different (except for the lack of generics - that’s just weird). It’s helping me question some of the design decisions I’ve made on my own APIs at work, and helped me better appreciate the niceties of C#. </p> <p>I’ve really only scratched the surface of what’s out there for Go. Now that I’ve put together a library, I’m going to take the next step and start playing around with <a href="http://revel.github.io/">revel</a>, which provides an ASP.NET style web framework on top of Go. 
From there, I’m going to keep on building, and see where this goes. Happy coding!</p> <p><em>The gopher image is Creative Commons Attributions 3.0 licensed. Credit Renee French.</em></p> Sun, 04 Jan 2015 00:00:00 +0000 http://jbeckwith.com/2015/01/04/comparing-go-and-dotnet/ http://jbeckwith.com/2015/01/04/comparing-go-and-dotnet/ es6: Getting ready for the next version of JavaScript <p><img src="/images/2014/es6-oredev/es6-oredev.png" alt="Rockin' the big stage at Øredev" /></p> <p>While attending the developer conference <a href="http://oredev.org" target="_blank">Øredev</a> last week, I had the pleasure of giving a talk on <a href="http://vimeo.com/111289052" target="_blank">es6</a>. Many of the new features I covered touch on challenges we’ve faced building the <a href="/2014/09/20/how-the-azure-portal-works/" target="_blank">Azure Portal</a>. Instead of covering each new feature one by one (<a href="https://github.com/lukehoban/es6features" target="_blank">Luke Hoban already does a nice job of that</a>) I decided to cover a few high level features that fundamentally affect the way teams build large scale JavaScript applications. 
</p> <h3 id="timeline">Timeline</h3> <ul> <li><strong>00:00</strong> - Intro</li> <li><strong>01:45</strong> - History of JavaScript</li> <li><strong>05:00</strong> - Challenges with large scale applications</li> <li><strong>08:45</strong> - Modules</li> <li><strong>13:20</strong> - Classes</li> <li><strong>15:15</strong> - Using the Traceur compiler</li> <li><strong>17:30</strong> - TypeScript &amp; AMD</li> <li><strong>20:15</strong> - Scope / What is ‘this’</li> <li><strong>21:30</strong> - Arrow functions</li> <li><strong>22:30</strong> - Let vs var</li> <li><strong>25:00</strong> - Browser support</li> <li><strong>28:45</strong> - Promises</li> <li><strong>34:00</strong> - Node.js support</li> <li><strong>36:50</strong> - More features</li> <li><strong>37:13</strong> - es7</li> <li><strong>38:50</strong> - Closing</li> </ul> <h3 id="watch-the-video">Watch the video</h3> <div class="embed-container"><iframe src="//player.vimeo.com/video/111289052?portrait=0" width="750" height="450" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe></div> <p><a href="http://vimeo.com/111289052">ES6: GETTING READY FOR JAVASCRIPT VNEXT</a> from <a href="http://vimeo.com/user4280938">Øredev Conference</a> on <a href="https://vimeo.com">Vimeo</a>.</p> <p>Thanks!</p> Tue, 11 Nov 2014 00:00:00 +0000 http://jbeckwith.com/2014/11/11/es6-oredev/ http://jbeckwith.com/2014/11/11/es6-oredev/ Building web applications with ASP.NET vNext <p><img src="/images/2014/aspnet-vnext-oredev/aspnetvnext-featured.png" alt="Rockin' the big stage at Øredev" /></p> <p>Last week I had the amazing opportunity to give a talk on <a href="http://asp.net/vNext" target="_blank">ASP.NET vNext</a> at the <a href="http://oredev.org" target="_blank">Øredev</a> developer conference. I had a blast - especially the part where I got to show off the new bits to a few hundred people on the big stage. 
It’s especially fun showing off the new features that open up ASP.NET development with Mono on OSX. There’s a lot of great stuff in this release - the new request pipeline, bin deployable CLR, command line tools, configuration APIs, SublimeText support, fewer dependencies on Visual Studio - and lots of open source. </p> <h3 id="timeline">Timeline</h3> <ul> <li><strong>00:00</strong> - Intro</li> <li><strong>04:20</strong> - Challenges with the current stack</li> <li><strong>06:25</strong> - Intro to ASP.NET vNext</li> <li><strong>07:50</strong> - ASP.NET vNext project templates</li> <li><strong>10:05</strong> - Controllers, Models, Views </li> <li><strong>12:30</strong> - Reference model</li> <li><strong>14:45</strong> - project.json</li> <li><strong>16:30</strong> - Commands</li> <li><strong>18:20</strong> - KVM, KPM, &amp; K</li> <li><strong>22:45</strong> - Startup.cs</li> <li><strong>23:50</strong> - Configuration</li> <li><strong>26:15</strong> - Services</li> <li><strong>27:00</strong> - Module registration</li> <li><strong>31:00</strong> - Open source ASP.NET</li> <li><strong>34:15</strong> - Publishing with CoreCLR &amp; MVC</li> <li><strong>36:00</strong> - OSX, Mono, SublimeText</li> <li><strong>40:15</strong> - Timeline</li> <li><strong>41:30</strong> - Closing</li> </ul> <h3 id="watch-the-video">Watch the video</h3> <div class="embed-container"><iframe src="//player.vimeo.com/video/111004374?portrait=0&amp;color=c9ff23" width="750" height="450" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe></div> <p><a href="http://vimeo.com/111004374">Building web applications with ASP.NET</a> from <a href="http://vimeo.com/user4280938">&Oslash;redev Conference</a> on <a href="https://vimeo.com">Vimeo</a>.</p> <p>Thanks!</p> Sun, 09 Nov 2014 00:00:00 +0000 http://jbeckwith.com/2014/11/09/aspnet-vnext-oredev/ http://jbeckwith.com/2014/11/09/aspnet-vnext-oredev/ Under the hood of the new Azure Portal <p><img 
src="/images/2014/how-the-azure-portal-works/portal.png" alt="Damn, we look good." /></p> <p>So - I haven’t been doing much blogging or speaking on WebMatrix or node recently. For the last year and a half, I’ve been part of the team that’s building the new <a href="http://portal.azure.com" target="_blank">Azure portal</a> - and it’s been quite an experience. A lot has been said about the <a href="http://channel9.msdn.com/Blogs/Windows-Azure/Azure-Preview-portal" target="_blank">end to end experience</a>, the <a href="http://blogs.msdn.com/b/bharry/archive/2014/04/03/visual-studio-online-integration-in-the-azure-portal.aspx" target="_blank">integration of Visual Studio Online</a>, and even some of the <a href="http://weblogs.asp.net/scottgu/azure-new-documentdb-nosql-service-new-search-service-new-sql-alwayson-vm-template-and-more" target="_blank">new services that have been released lately</a>. All of that’s awesome, but it’s not what I want to talk about today. As much as those things are great (and I mean, who doesn’t like the design), the real interesting piece is the underlying architecture. Let’s take a look under the hood of the new Azure portal.</p> <h3 id="a-little-history">A little history</h3> <p>To understand how the new portal works, you need to know a little about the <a href="http://manage.windowsazure.com" target="_blank">current management portal</a>. When the current portal was started, there were only a handful of services in Azure. Off of the top of my head, I think they were:</p> <ul> <li>Cloud Services</li> <li>Web sites</li> <li>Storage</li> <li>Cache</li> <li>CDN </li> </ul> <p>Out of the gate - this was pretty easy to manage. Most of those teams were all in the same organization at Microsoft, so coordinating releases was feasible. The portal team was a single group that was responsible for delivering the majority of the UI. 
There was little need to hand off responsibility for the individual experiences to the teams which wrote the services, as it was easier to keep everything in house. There is a single ASP.NET MVC application, which contains all of the CSS, JavaScript, and shared widgets used throughout the app. </p> <p><img src="/images/2014/how-the-azure-portal-works/vcurrent.png" alt="The current Azure portal, in all of its blue glory" /></p> <p>The team shipped every 3 weeks, tightly coordinating the schedule with each service team. It works … pretty much as one would expect a web application to work. </p> <p><strong><em>And then everything went crazy.</em></strong></p> <p>As we started ramping up the number of services in Azure, it became infeasible for one team to write all of the UI. The teams which owned the service were now responsible (mostly) for writing their own UI, inside of the portal source repository. This had the benefit of allowing individual teams to control their own destiny. However - it now meant that we had hundreds of developers all writing code in the same repository. A change made to the SQL Server management experience could break the Azure Web Sites experience. A change to a CSS file by a developer working on virtual machines could break the experience in storage. Coordinating the 3 week ship schedule became really hard. The team was tracking dependencies across multiple organizations, the underlying REST APIs that powered the experiences, and the release cadence of ~40 teams across the company that were delivering cloud services. </p> <h3 id="scaling-to-infin-services">Scaling to ∞ services</h3> <p>Given the difficulties of the engineering and ship processes with the current portal, scaling to 200 different services didn’t seem like a great idea with the current infrastructure. The next time around, we took a different approach.</p> <p>The new portal is designed like an operating system.
It provides a set of UI widgets, a navigation framework, data management APIs, and other various services one would expect to find with any UI framework. The portal team is responsible for building the operating system (or the shell, as we like to call it), and for the overall health of the portal. </p> <h4 id="sandboxing-in-the-browser">Sandboxing in the browser</h4> <p>To claim we’re an OS, we had to build a sandboxing model. One badly behaving application shouldn’t have the ability to bring down the whole OS. In addition to that - an application shouldn’t be able to grab data from another, unless by an approved mechanism. JavaScript by default doesn’t really lend itself well to this kind of isolation - most web developers are used to picking up something like jQuery, and directly working against the DOM. This wasn’t going to work if we wanted to protect the OS against badly behaving (or even malicious) code. </p> <p>To get around this, each new service in Azure builds what we call an ‘extension’. It’s pretty much an application to our operating system. It runs in isolation, inside of an IFRAME. When the portal loads, we inject some bootstrapping scripts into each IFRAME at runtime. Those scripts provide the structured API extensions use to communicate with the shell. This API includes things like:</p> <ul> <li>Defining parts, blades, and commands</li> <li>Customizing the UI of parts</li> <li>Binding data into UI elements</li> <li>Sending notifications</li> </ul> <p>The most important aspect is that the extension developer doesn’t get to run arbitrary JavaScript in the portal’s window. They can only run script in their IFRAME - which does not project UI. If an extension starts to fault - we can shut it down before it damages the broader system. We spent some time looking into web workers - but found some reliability problems when using &gt; 20 of them at the same time. 
We’ll probably end up back there at some point.</p> <h4 id="distributed-continuous-deployment">Distributed continuous deployment</h4> <p>In this model, each extension is essentially its own web application. Each service hosts its own extension, which is pulled into the shell at runtime. The various UI services of Azure aren’t composed until they are loaded in the browser. This lets us do some really cool stuff. At any given point, a separate experience in the portal (for example, Azure Websites) can choose to deploy an extension that affects only their UI - completely independent of the rest of the portal. </p> <p><strong><em>IFRAMEs are not used to render the UI - that’s all done in the core frame. The IFRAME is only used to automate the JavaScript APIs that communicate over window.postMessage().</em></strong></p> <p><img src="/images/2014/how-the-azure-portal-works/extensions.png" alt="Each extension is loaded into the shell at runtime from their own back end" /></p> <p>This architecture allows us to scale to ∞ deployments in a given day. If the media services team wants to roll out a new feature on a Tuesday, but the storage team isn’t ready with updates they’re planning - that’s fine. They can each deploy their own changes as needed, without affecting the rest of the portal.</p> <h3 id="stuff-were-using">Stuff we’re using</h3> <p>Once you start poking around, you’ll notice the portal is a big single page application. That came with a lot of challenges - here are some of the technologies we’re using to solve them.</p> <h4 id="typescript">TypeScript</h4> <p>Like any single page app, the portal runs a lot of JavaScript. We have a ton of APIs that run internal to the shell, and APIs that are exposed for extension authors across Microsoft. To support our enormous codebase, and the many teams using our SDK to build portal experiences, we chose to use <a href="http://www.typescriptlang.org/" target="_blank">TypeScript</a>.
</p> <ul> <li><strong>TypeScript compiles into JavaScript.</strong> There’s no runtime VM, or plug-ins required.</li> <li><strong>The tooling is awesome.</strong> Visual Studio gives us (and partner teams) IntelliSense and compile time validation.</li> <li><strong>Generating interfaces for partners is really easy.</strong> We distribute d.ts files which partners use to program against our APIs. </li> <li><strong>There’s great integration for using AMD module loading.</strong> This is critical to us for productivity and performance reasons. (more on this in another post).</li> <li><strong>JavaScript is valid TypeScript - so the learning curve isn’t so high.</strong> The syntax is also largely forward looking to ES6, so we’re actually getting a jump on some new concepts.</li> </ul> <h4 id="less">Less</h4> <p>Visually, there’s a lot going on inside of the portal. To help organize our CSS, and promote usability, we’ve adopted <a href="http://lesscss.org/" target="_blank">{LESS}</a>. Less does a couple of cool things for us:</p> <ul> <li><strong>We can create variables for colors.</strong> We have a pre-defined color palette - less makes it easy to define those up front, and re-use the same colors throughout our style sheets.</li> <li><strong>The tooling is awesome.</strong> Similar to TypeScript, Visual Studio has great Less support with full IntelliSense and validation.</li> <li><strong>It made theming easier.</strong></li> </ul> <p><img src="/images/2014/how-the-azure-portal-works/portaldark.png" alt="The dark theme of the portal was much easier to make using less" /></p> <h4 id="knockout">Knockout</h4> <p>With the new design, we were really going for a ‘live tile’ feel. As new websites are added, or new log entries are available, we wanted to make sure it was easy for developers to update that information. 
Given that goal, along with the quirks of our design (extension authors can’t write JavaScript that runs in the main window), <a href="http://knockoutjs.com/" target="_blank">Knockout</a> turned out to be a fine choice. There are a few reasons we love Knockout:</p> <ul> <li><strong>Automatic refreshing of the UI</strong> - The data binding aspect of Knockout is pretty incredible. We make changes to underlying model objects in TypeScript, and the UI is updated for us.</li> <li><strong>The tooling is great.</strong> This is starting to be a recurring theme :) Visual Studio has some great tooling for Knockout data binding expressions (thanks <a href="http://madskristensen.net/" target="_blank">Mads</a>).</li> <li><strong>The binding syntax is pure</strong> - We’re not stuck putting invalid HTML in our code to support the specifics of the binding library. Everything is driven off of data-* attributes.</li> </ul> <p>I’m sure there are 100 other reasons our dev team could come up with on why we love Knockout. Especially the ineffable <a href="http://blog.stevensanderson.com/" target="_blank">Steve Sanderson</a>, who joined our dev team to work on the project. He even gave an awesome talk on the subject at NDC:</p> <div class="embed-container"> <iframe style="margin-left: auto; margin-right: auto" src="//player.vimeo.com/video/97519516" width="100%" height="400" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe> <p><a href="http://vimeo.com/97519516">Steve Sanderson - Architecting large Single Page Applications with Knockout.js</a> from <a href="http://vimeo.com/ndcoslo">NDC Conferences</a> on <a href="https://vimeo.com">Vimeo</a>.</p> </div> <h3 id="whats-next">What’s next</h3> <p>I’m really excited about the future of the portal. 
Since our first release at //build, we’ve been working on new features, and responding to a lot of the <a href="http://feedback.azure.com/forums/223579-azure-preview-portal" target="_blank">customer feedback</a>. Either way - we really want to know what you think. </p> Sat, 20 Sep 2014 00:00:00 +0000 http://jbeckwith.com/2014/09/20/how-the-azure-portal-works/ http://jbeckwith.com/2014/09/20/how-the-azure-portal-works/ Switching from Wordpress to Jekyll <img src="/images/posts/wordpress-to-jekyll/jekyll.png" alt="jekyll is fun" /> <p>Over the last few weeks, I've been slowly moving my blog from Wordpress to <a href="http://jekyllrb.com/" target="_blank">Jekyll</a>. The change has been a long time coming, and so far I couldn't be happier with the results. I thought it may be interesting to make the ultimate meta post, and write a blog post about my blog. You can take a look at the <a href="https://github.com/JustinBeckwith/justinbeckwith.github.io" target="_blank">source code on GitHub</a>.</p> <h3>What's wrong with Wordpress?</h3> <p>In short? Absolutely nothing. I love Wordpress. I've been using it across multiple sites for years, I worked on a <a href="http://webmatrix.com" target="_blank">product that supported Wordpress development</a>, I've even blogged here about <a href="http://jbeckwith.com/2012/06/09/wordpress-and-webmatrix/">speaking at WordCamp</a>. The problem is that for me, the costs of a full featured blog engine outweigh the benefits.</p> <img src="/images/posts/wordpress-to-jekyll/update.png" alt="Every damn time." /> <p>Let me give you an example. My post rate on this blog is atrocious. Part of the reason is that like most people I'm freakishly busy, but there's another nagging reason - every time I sit down to write a post, I'm burdened with maintenance costs. On the few evenings I have the time or content to write a post, it would usually go like this:</p> <pre> <em>9:00 PM</em> - Kids are in bed. Time to sit down and write that blog post. 
<em>9:05 PM</em> - I'm logged into the Wordpress admin site. Looks like I need an update. Better install it. <em>9:15 PM</em> - Oh, I have some permissions error when I try to download. I'll do it manually. <em>9:35 PM</em> - Alright, I backed up my database, downloaded the new Wordpress version and did a manual upgrade. <em>9:40 PM</em> - My plugins are broken. Dammit. <em>9:45 PM</em> - Updating my plugins causes another access denied error. <em>9:50 PM</em> - I had to use putty and remember the flags for chmod. F-me. <em>10:00 PM</em> - That was fun. I'm going to bed. </pre> <p>Running a Wordpress blog comes with a cost. You need to keep it updated. You need to find the right plugins, and keep those updated. You need to back up databases. You need to have a strategy for backing up changes to the theme. For someone that's posting every week, these costs may be worth it. It just isn't worth it to me.</p> <h3>Enter Jekyll</h3> Jekyll takes a bit of a different approach to serving up a blog. Instead of the traditional model of hosting an active web application with PHP/Ruby/.NET/whatevs and a database, you simply post static pages. You write your posts in one of the supported markup languages (I use good ol' HTML), and then run the jekyll build tool to generate your static HTML pages. There are around 100 posts on setting up jekyll, <a href="http://jekyllrb.com/docs/home/" target="_blank">none better than the official documentation</a> - so I won't go too deep into how jekyll works. I'll just share my setup. <h4>Importing Wordpress</h4> <p>After playing around with the <a href="http://jekyllrb.com/docs/quickstart/" target="_blank">quick start guide</a>, I got started by importing the Wordpress data to script out the first version of the site. The jekyll site has a great section on <a href="http://jekyllrb.com/docs/migrations/" target="_blank">migrating from other blogs</a>, so I mostly followed their steps.
</p> First, I downloaded my wordpress.xml file from the Wordpress admin: <img src="/images/posts/wordpress-to-jekyll/export.png" /> Next I ran the import tool: <pre><code class="language-clike">gem install hpricot ruby -rubygems -e 'require "jekyll/jekyll-import/wordpressdotcom"; JekyllImport::WordpressDotCom.process({ :source => "wordpress.xml" })' </code></pre> This downloaded all of my existing posts, and created new posts with metadata in jekyll format (woo!). What it didn't do was download all of my images. To get around that, I just connected with my FTP client and downloaded my images directory into the root of my jekyll site. <h4>Syntax Highlighting</h4> One of the plugins I had installed on my Wordpress site was <a href="http://wordpress.org/plugins/syntaxhighlighter/" target="_blank">SyntaxHighlighter Evolved</a>. Jekyll comes with a built in syntax highlighting system using Pygments and Liquid: <pre><code class="language-javascript">{% highlight javascript %} var logger = new (winston.Logger)({ transports: [ new (winston.transports.Console)(), new (winston.transports.Skywriter)({ account: stName, key: stKey, partition: require('os').hostname() + ':' + process.pid }) ] }); logger.info('Started wazstagram backend'); {% endhighlight %} </code></pre> That's all well and good but - the syntax highlighter wasn't quite as nice as I would like. I also didn't feel the need to lock myself into liquid for something that can be handled on the client. I chose to use <a href="http://prismjs.com/" target="_blank">PrismJS</a>, largely because I've used it in the past with success. Someone even wrote a fancy jekyll plugin to <a href="http://gmurphey.com/2012/08/09/jekyll-plugin-syntax-highlighting-with-prism.html" target="_blank">generate your highlighted markup at compile time</a>, if that's your thing. <h4>--watch and livereload</h4> <p>As I worked on the site, I was making a lot of changes, rebuilding, waiting for the build to finish, and reloading the browser.
To make some of this easier, I did a few things. Instead of saving my file, building, and running the server every time, you can just use the built in watch command:</p> <pre><code class="language-clike">jekyll serve --watch</code></pre> This will run the server, watch for changes, and perform a build anytime something is modified on disk. The other side to this is refreshing the browser automatically. To accomplish that, I used <a href="http://livereload.com/" target="_blank">LiveReload</a> with the Chrome browser plugin: <img src="/images/posts/wordpress-to-jekyll/livereload.png" alt="LiveReload refreshes the browser after a change" /> The OSX version of LiveReload lets you set a delay between noticing the change on the filesystem and refreshing the browser. You really want to set that to a second or two just to give jekyll enough time to compile the full site after the first change hits the disk. <h4>RSS Feed</h4> One of the pieces that isn't baked into jekyll is the construction of an RSS feed. The good news is that <a href="https://github.com/snaptortoise/jekyll-rss-feeds" target="_blank">someone already solved this problem</a>. This repository has a few great examples. <h4>Archive by Category</h4> One of the pieces I wanted to add was a post archive page. Building this was relatively straight forward - you create a list of categories used across all of the posts in your site. 
Next you render an excerpt for each post: <pre><code class="language-markup">&lt;div class="container"&gt; &lt;div id="home"&gt; &lt;h1&gt;The Archive&lt;/h1&gt; &lt;div class="hrbar"&gt;&nbsp;&lt;/div&gt; &lt;div class="categories"&gt; {% for category in site.categories %} &lt;span&gt;&lt;a href="#{{ category[0] }}"&gt;{{ category[0] }} ({{ category[1].size }})&lt;/a&gt;&lt;/span&gt; &lt;span class="dot"&gt;&nbsp;&lt;/span&gt; {% endfor %} &lt;/div&gt; &lt;div class="hrbar"&gt;&nbsp;&lt;/div&gt; &lt;div class="all-posts"&gt; {% for category in site.categories %} &lt;div&gt; &lt;a name="{{category[0]}}"&gt;&lt;/a&gt; &lt;h3&gt;{{ category[0] }}&lt;/h3&gt; &lt;ul class="posts"&gt; {% for post in category[1] %} &lt;li&gt;&lt;span&gt;{{ post.date | date_to_string }}&lt;/span&gt; &raquo; &lt;a href="{{ post.url }}"&gt;{{ post.title }}&lt;/a&gt;&lt;/li&gt; {% endfor %} &lt;/ul&gt; &lt;/div&gt; {% endfor %} &lt;/div&gt; &lt;/div&gt; &lt;/div&gt;</code></pre> For the full example, <a href="https://github.com/JustinBeckwith/justinbeckwith.github.io/blob/master/archive.html" target="_blank">check it out on GitHub</a>. <h4>Disqus</h4> I used <a href="http://disqus.com/" target="_blank">Disqus</a> for my commenting and discussion engine. This probably isn't news to anyone, but disqus is pretty awesome. Without a backend database to power user sign ups and comments, it's easier to just hand this over to a third party service (and it's free!). One tip though - disqus has a 'discovery' feature turned on by default. It shows a bunch of links I don't want, and muddied up the comments. Here's where you can turn it off: <img src="/images/posts/wordpress-to-jekyll/disqus.png" alt="turn off discovery under settings->discovery->Just comments" /> <h4>Backups</h4> With no database, backing up means just backing up the files. Good news everyone! 
I'm just using good ol' <a href="https://github.com/JustinBeckwith/justinbeckwith.github.io" target="_blank">GitHub and a git repository</a> to track changes and store my files. I keep local files in Dropbox just in case. <h4>Hosting the bits</h4> <p>The coolest part of using Jekyll is that you can <a href="https://help.github.com/articles/using-jekyll-with-pages" target="_blank">host your site on GitHub - for free</a>. They build the site when you push changes, and even let you set up a <a href="https://help.github.com/articles/setting-up-a-custom-domain-with-pages" target="_blank">custom domain</a>.</p> <h4>What's Next?</h4> <p>Now that I've got the basic workflow for the site rolling (hopefully with lower maintenance costs), the next piece I'll probably tackle is performance. Between Bootstrap, jQuery, and Prism I'm pushing a lot of JavaScript and CSS that should be bundled and minified. Until then, I'm just going to keep enjoying writing my posts in SublimeText and publishing with a git push. Let me know what you think!</p> Wed, 17 Jul 2013 00:00:00 +0000 http://jbeckwith.com/2013/07/17/wordpress-to-jekyll/ Scalable realtime services with Node.js, Socket.IO and Windows Azure <p><a href="http://wazstagram.azurewebsites.net/"><img alt="WAZSTAGRAM" src="/images/2013/01/waz-screenshot.png" title="View the Demo"/></a> </p> <p><a href="http://wazstagram.azurewebsites.net/">Wazstagram</a> is a fun experiment with node.js on <a href="http://www.windowsazure.com/en-us/develop/nodejs/">Windows Azure</a> and the <a href="http://instagram.com/developer/realtime/">Instagram Realtime API</a>. The project uses various services in Windows Azure to create a scalable window into Instagram traffic across multiple cities.
</p> <ul> <li><a href="http://wazstagram.azurewebsites.net/">View the demo on Windows Azure</a></li> <li><a href="https://github.com/JustinBeckwith/wazstagram/">View the code on GitHub</a></li> </ul> The code I used to build <a href="https://github.com/JustinBeckwith/wazstagram/" target="_blank">WAZSTAGRAM</a> is under an <a href="https://github.com/JustinBeckwith/wazstagram/blob/master/LICENSE.md" target="_blank">MIT license</a>, so feel free to learn and re-use the code. <h3>How does it work</h3> <p>The application is written in node.js, using cloud services in Windows Azure. A scalable set of backend nodes receive messages from the Instagram Realtime API. Those messages are sent to the front end nodes using <a href="http://msdn.microsoft.com/en-us/library/hh690929.aspx">Windows Azure Service Bus</a>. The front end nodes are running node.js with <a href="http://expressjs.com/">express</a> and <a href="http://socket.io/">socket.io</a>. </p> <p> <a href="/images/2013/01/architecture.png"> <img alt="WAZSTAGRAM Architecture" title="WAZSTAGRAM Architecture" src="/images/2013/01/architecture.png"/> </a> </p> <h3>Websites, and Virtual Machines, and Cloud Services, Oh My!</h3> <p>One of the first things you need to grok when using Windows Azure is the different options you have for your runtimes. Windows Azure supports three distinct models, which can be mixed and matched depending on what you&#39;re trying to accomplish: </p> <h5>Websites</h5> <p><a href="http://www.windowsazure.com/en-us/home/scenarios/web-sites/">Websites</a> in Windows Azure match a traditional PaaS model, when compared to something like Heroku or AppHarbor. They work with node.js, asp.net, and php. There is a free tier. You can use git to deploy, and they offer various scaling options. For an example of a real time node.js site that works well in the Website model, check out my <a href="https://github.com/JustinBeckwith/TwitterMap">TwitterMap</a> example. 
I chose not to use Websites for this project because a.) websockets are currently not supported in our Website model, and b.) I want to be able to scale my back end processes independently of the front end processes. If you don&#39;t have crazy enterprise architecture or scaling needs, Websites work great. </p> <h5>Virtual Machines</h5> <p>The <a href="http://www.windowsazure.com/en-us/home/scenarios/virtual-machines/">Virtual Machine</a> story in Windows Azure is pretty consistent with IaaS offerings in other clouds. You stand up a VM, you install an OS you like (yes, <a href="http://www.windowsazure.com/en-us/manage/linux/">we support linux</a>), and you take on the management of the host. This didn&#39;t sound like a lot of fun to me because I can&#39;t be trusted to install patches on my OS, and do other maintenance things. </p> <h5>Cloud Services</h5> <p><a href="http://www.windowsazure.com/en-us/manage/services/cloud-services/">Cloud Services</a> in Windows Azure are kind of a different animal. They provide a full Virtual Machine that is stateless - that means you never know when the VM is going to go away, and a new one will appear in its place. It&#39;s interesting because it means you have to architect your app to not depend on stateful system resources pretty much from the start. It&#39;s great for new apps that you&#39;re writing to be scalable. The best part is that the OS is patched automagically, so there&#39;s no OS maintenance. I chose this model because a.) we have some large scale needs, b.) we want separation of concerns with our worker nodes and web nodes, and c.) I can&#39;t be bothered to maintain my own VMs. </p> <h3>Getting Started</h3> <p>After picking your runtime model, the next thing you&#39;ll need is some tools. Before we move ahead, you&#39;ll need to <a href="http://www.windowsazure.com/en-us/pricing/free-trial/">sign up for an account</a>. Next, get the command line tools.
Windows Azure is a little different because we support two types of command line tools: </p> <ul><li><a href="http://www.windowsazure.com/en-us/develop/nodejs/how-to-guides/powershell-cmdlets/">PowerShell Cmdlets</a>: these are great if you&#39;re on Windows and dig the PowerShell thing. </li><li><a href="http://www.windowsazure.com/en-us/manage/linux/other-resources/command-line-tools/">X-Platform CLI</a>: this tool is interesting because it&#39;s written in node, and is available as a node module. You can actually just <code>npm install -g azure-cli</code> and start using this right away. It looks awesome, though I wish they had kept the flames that were in the first version. </li></ul> <p> <a href="/images/2013/01/cli.png"> <img alt="X-Plat CLI" title="X-Plat CLI" src="/images/2013/01/cli.png" /> </a> </p> <p>For this project, I chose to use the PowerShell cmdlets. I went down this path because the Cloud Services stuff is not currently supported by the X-Platform CLI (I&#39;m hoping this changes). If you&#39;re on MacOS and want to use Cloud Services, you should check out <a href="https://github.com/tjanczuk/git-azure">git-azure</a>. To bootstrap the project, I pretty much followed the <a href="http://www.windowsazure.com/en-us/develop/nodejs/tutorials/app-using-socketio/">&#39;Build a Node.js Chat Application with Socket.IO on a Windows Azure Cloud Service&#39; tutorial</a>. This will get all of your scaffolding set up. </p> <h3>My node.js editor - WebMatrix 2</h3> <p>After using the PowerShell cmdlets to scaffold my site, I used <a href="http://www.microsoft.com/web/webmatrix/">Microsoft WebMatrix</a> to do the majority of the work. I am very biased towards WebMatrix, as I helped <a href="http://jbeckwith.com/2012/06/07/node-js-meet-webmatrix-2/">build the node.js experience</a> in it last year. In a nutshell, it&#39;s rad because it has a lot of good editors, and just works. 
Oh, and it has IntelliSense for everything: </p> <p> <a href="/images/2013/01/webmatrix.png"> <img alt="I &lt;3 WebMatrix" title="WebMatrix FTW" src="/images/2013/01/webmatrix.png" /> </a> </p> <h4>Install the Windows Azure NPM module</h4> <p>The <a href="https://npmjs.org/package/azure">azure npm module</a> provides the basis for all of the Windows Azure stuff we&#39;re going to do with node.js. It includes all of the support for using blobs, tables, service bus, and service management. It&#39;s even <a href="https://github.com/WindowsAzure/azure-sdk-for-node/">open source</a>. To get it, you just need to cd into the directory you&#39;re using and run this command: </p> <p><code>npm install azure</code> </p> <p>After you have the azure module, you&#39;re ready to rock. </p> </li> </ol> <h3>The Backend</h3> <p>The <a href="https://github.com/JustinBeckwith/wazstagram/tree/master/backend">backend</a> part of this project is a worker role that accepts HTTP POST messages from the Instagram API. The idea is that their API batches messages, and sends them to an endpoint you define. Here&#39;s <a href="http://instagram.com/developer/realtime/">some details</a> on how their API works. I chose to use <a href="http://expressjs.com/">express</a> to build out the backend routes, because it&#39;s convenient. There are a few pieces to the backend that are interesting: </p> <ol> <li><h5>Use <a href="https://github.com/flatiron/nconf">nconf</a> to store secrets. Look at the .gitignore.</h5> If you&#39;re going to build a site like this, you are going to need to store a few secrets. The backend includes things like the Instagram API key, my Windows Azure Storage account key, and my Service Bus keys. I create a keys.json file to store this, though you could add it to the environment. I include an example of this file with the project. <strong>DO NOT CHECK THIS FILE INTO GITHUB!</strong> Seriously, <a href="https://github.com/blog/1390-secrets-in-the-code" target="_blank">don&#39;t do that</a>.
Also, pay <strong>close attention</strong> to my <a href="https://github.com/JustinBeckwith/wazstagram/blob/master/.gitignore" target="_blank">.gitignore file</a>. You don&#39;t want to check in any *.cspkg or *.csx files, as they contain archived versions of your site that are generated while running the emulator and deploying. Those archives contain your keys.json file. That having been said - nconf does make it really easy to read stuff from your config: <pre><code class="language-javascript"> // read in keys and secrets nconf.argv().env().file('keys.json'); var sbNamespace = nconf.get('AZURE_SERVICEBUS_NAMESPACE'); var sbKey = nconf.get('AZURE_SERVICEBUS_ACCESS_KEY'); var stName = nconf.get('AZURE_STORAGE_NAME'); var stKey = nconf.get('AZURE_STORAGE_KEY'); </code></pre> </li> <li><h5>Use <a href="https://github.com/flatiron/winston">winston</a> and <a href="https://github.com/pofallon/winston-skywriter">winston-skywriter</a> for logging.</h5> The cloud presents some challenges at times. Like <em>how do I get console output</em> when something goes wrong. Every node.js project I start these days, I just use winston from the get go. It&#39;s awesome because it lets you pick where your console output and logging gets stored. I like to just pipe the output to console at dev time, and write to <a href="http://www.windowsazure.com/en-us/develop/nodejs/how-to-guides/table-services/" target="_blank">Table Storage</a> in production.
Here&#39;s how you set it up: <pre><code class="language-javascript"> // set up a single instance of a winston logger, writing to azure table storage var logger = new (winston.Logger)({ transports: [ new (winston.transports.Console)(), new (winston.transports.Skywriter)({ account: stName, key: stKey, partition: require('os').hostname() + ':' + process.pid }) ] }); logger.info('Started wazstagram backend'); </code></pre> </li> <li><h5>Use <a href="http://msdn.microsoft.com/en-us/library/ee732537.aspx">Service Bus</a> - it&#39;s pub/sub (+) a basket of kittens.</h5> <p> <a href="http://msdn.microsoft.com/en-us/library/ee732537.aspx" target="_blank">Service Bus</a> is Windows Azure's swiss army knife of messaging. I usually use it in the places where I would otherwise use the PubSub features of Redis. It does all kinds of neat things like <a href="http://www.windowsazure.com/en-us/develop/net/how-to-guides/service-bus-topics/" target="_blank">PubSub</a>, <a href="http://msdn.microsoft.com/en-us/library/windowsazure/hh767287.aspx" target="_blank">Durable Queues</a>, and more recently <a href="https://channel9.msdn.com/Blogs/Subscribe/Service-Bus-Notification-Hubs-Code-Walkthrough-Windows-8-Edition" target="_blank">Notification Hubs</a>. I use the topic subscription model to create a single channel for messages. Each worker node publishes messages to a single topic. Each web node creates a subscription to that topic, and polls for messages. There's great <a href="http://www.windowsazure.com/en-us/develop/nodejs/how-to-guides/service-bus-topics/" target="_blank">support for Service Bus</a> in the <a href="https://github.com/WindowsAzure/azure-sdk-for-node" target="_blank">Windows Azure Node.js SDK</a>. </p> <p> To get the basic implementation set up, just follow the <a href="http://www.windowsazure.com/en-us/develop/nodejs/how-to-guides/service-bus-topics/" target="_blank">Service Bus Node.js guide</a>. 
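On each web node, that setup boils down to creating a subscription on the shared topic and then polling it for messages. Here's a minimal sketch of the pattern (the function names are mine, not from the repo; `serviceBusService` is assumed to be the object returned by the azure module's `createServiceBusService()`):

```javascript
// Sketch of the per-node subscription pattern (illustrative names).
// serviceBusService is assumed to come from the azure npm module,
// e.g. azure.createServiceBusService(), with keys read via nconf.

// Each web node creates its own subscription on the shared topic,
// then polls it; every received message is handed to onMessage.
function startListening(serviceBusService, topicName, subscriptionName, onMessage) {
  serviceBusService.createSubscription(topicName, subscriptionName, function (error) {
    if (error) {
      console.error('could not create subscription', error);
      return;
    }
    pollForMessages(serviceBusService, topicName, subscriptionName, onMessage);
  });
}

function pollForMessages(sb, topic, sub, onMessage) {
  // receiveSubscriptionMessage pops one message per call,
  // so the receive callback re-issues the poll to form a loop
  sb.receiveSubscriptionMessage(topic, sub, function (error, message) {
    if (!error && message) {
      onMessage(message);
    }
    pollForMessages(sb, topic, sub, onMessage);
  });
}
```

Because the SDK's receive call pops a single message at a time, re-issuing the poll from the callback gives each web node a simple message loop to feed socket.io from.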
The interesting part of my use of Service Bus is the subscription cleanup. Each new front end node that connects to the topic creates its own subscription. As we scale out and add a new front end node, it creates another subscription. This is a durable object in Service Bus that hangs around after the connection from one end goes away (this is a feature). To make sure you don&#39;t leave random subscriptions lying around, you need to do a little cleanup: </p> <pre><code class="language-javascript"> function cleanUpSubscriptions() { logger.info('cleaning up subscriptions...'); serviceBusService.listSubscriptions(topicName, function (error, subs, response) { if (!error) { logger.info('found ' + subs.length + ' subscriptions'); for (var i = 0; i &lt; subs.length; i++) { // if there are more than 100 messages on the subscription, assume the edge node is down if (subs[i].MessageCount &gt; 100) { logger.info('deleting subscription ' + subs[i].SubscriptionName); serviceBusService.deleteSubscription(topicName, subs[i].SubscriptionName, function (error, response) { if (error) { logger.error('error deleting subscription', error); } }); } } } else { logger.error('error getting topic subscriptions', error); } setTimeout(cleanUpSubscriptions, 60000); }); } </code></pre> </li> <li><h5>The <a href="https://github.com/JustinBeckwith/wazstagram/blob/master/backend/routes/home.js">NewImage endpoint</a></h5> All of the stuff above is great, but it doesn't cover what happens when the Instagram API actually hits our endpoint.
The route that accepts this request gets metadata for each image, and pushes it through the Service Bus topic: <pre><code class="language-javascript"> serviceBusService.sendTopicMessage('wazages', message, function (error) { if (error) { logger.error('error sending message to topic!', error); } else { logger.info('message sent!'); } }); </code></pre> </li> </ol> <h3>The Frontend</h3> <p>The <a href="https://github.com/JustinBeckwith/wazstagram/tree/master/frontend">frontend</a> part of this project is (despite my &#39;web node&#39; reference) a worker role that accepts the incoming traffic from end users on the site. I chose to use worker roles because I wanted to take advantage of Web Sockets. At the moment, Cloud Services Web Roles do not provide that functionality. I could stand up a VM with Windows Server 8 and IIS 8, but see my aforementioned anxiety about managing my own VMs. The worker roles use <a href="http://socket.io/">socket.io</a> and <a href="http://expressjs.com">express</a> to provide the web site experience. The front end uses the same NPM modules as the backend: <a href="https://github.com/visionmedia/express/">express</a>, <a href="https://github.com/flatiron/winston">winston</a>, <a href="https://github.com/pofallon/winston-skywriter">winston-skywriter</a>, <a href="https://github.com/flatiron/nconf">nconf</a>, and <a href="https://github.com/WindowsAzure/azure-sdk-for-node">azure</a>. In addition to that, it uses <a href="http://socket.io/">socket.io</a> and <a href="https://github.com/visionmedia/ejs">ejs</a> to handle the client stuff. There are a few pieces to the frontend that are interesting: </p> <ol> <li><h5>Setting up socket.io</h5> Socket.io provides the web socket (or xhr) interface that we&#39;re going to use to stream images to the client.
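Part of that streaming experience is an initial replay of recent images from a small server-side cache when a client connects. The post never shows the cache implementation, so here is a minimal bounded sketch (the `picCache`/`cachePic` names appear in the surrounding code; the 50-item cap is my assumption):

```javascript
// Hypothetical bounded per-city cache; new clients get these pics replayed.
var picCache = {};
var CACHE_SIZE = 50; // assumed cap - just enough for an initial blast of images

function cachePic(pic, city) {
  if (!picCache[city]) {
    picCache[city] = [];
  }
  picCache[city].push(pic);
  // evict the oldest pics so memory stays bounded per city
  while (picCache[city].length > CACHE_SIZE) {
    picCache[city].shift();
  }
}
```

Keeping the cache keyed per city means the connection handler can replay only the images a client actually asked for.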
When a user initially visits the page, they are going to send a `setCity` call that lets us know the city to which they want to subscribe (by default all <a href="https://github.com/JustinBeckwith/wazstagram/blob/master/backend/cities.json" target="_blank">cities in the system</a> are returned). From there, the user will be sent an initial blast of images that are cached on the server. Otherwise, you wouldn&#39;t see images right away: <pre><code class="language-javascript"> // set up socket.io to establish a new connection with each client var io = require('socket.io').listen(server); io.sockets.on('connection', function (socket) { socket.on('setCity', function (data) { logger.info('new connection: ' + data.city); if (picCache[data.city]) { for (var i = 0; i &lt; picCache[data.city].length; i++) { socket.emit('newPic', picCache[data.city][i]); } } socket.join(data.city); }); }); </code></pre> </li> <li><h5>Creating a Service Bus Subscription</h5> To receive messages from the worker nodes, we need to create a single subscription for each front end node process. This creates the subscription, and starts listening for messages: <pre><code class="language-javascript"> // create the initial subscription to get events from service bus serviceBusService.createSubscription(topicName, subscriptionId, function (error) { if (error) { logger.error('error creating subscription', error); throw error; } else { getFromTheBus(); } }); </code></pre> </li><li><h5>Moving data between Service Bus and Socket.IO</h5> As data comes in through the service bus subscription, you need to pipe it up to the appropriate connected clients. Pay special attention to `io.sockets.in(body.city)` - when the user joined the page, they selected a city. This call grabs all users subscribed to that city. The other **important thing to notice** here is the way `getFromTheBus` calls itself in a loop.
There&#39;s currently no way to say &quot;just raise an event when there&#39;s data&quot; with the Service Bus Node.js implementation, so you need to use this model. <pre><code class="language-javascript"> function getFromTheBus() { try { serviceBusService.receiveSubscriptionMessage(topicName, subscriptionId, { timeoutIntervalInS: 5 }, function (error, message) { if (error) { if (error == &quot;No messages to receive&quot;) { logger.info('no messages...'); } else { logger.error('error receiving subscription message', error); } } else { var body = JSON.parse(message.body); logger.info('new pic published from: ' + body.city); cachePic(body.pic, body.city); io.sockets.in(body.city).emit('newPic', body.pic); io.sockets.in(universe).emit('newPic', body.pic); } getFromTheBus(); }); } catch (e) { // if something goes wrong, wait a little and reconnect logger.error('error getting data from service bus: ' + e); setTimeout(getFromTheBus, 1000); } } </code></pre> </li></ol> <h3>Learning</h3> <p>The whole point of writing this code for me was to explore building performant apps that used a rate-limited API for data. Hopefully this model can be used to responsibly accept data from any API, and scale it out to a number of clients connected to a single service. If you have any ideas on how to make this app better, please let me know, or submit a PR! </p> <h3>Questions?</h3> <p>If you have any questions, feel free to submit an issue here, or find me <a href="https://twitter.com/JustinBeckwith" target="_blank">@JustinBeckwith</a> </p> Wed, 30 Jan 2013 00:00:00 +0000 http://jbeckwith.com/2013/01/30/building-scalable-realtime-services-with-node-js-socket-io-and-windows-azure/ http://jbeckwith.com/2013/01/30/building-scalable-realtime-services-with-node-js-socket-io-and-windows-azure/ 5 steps to a better Windows command line <a href="/images/2012/11/header.png"> <img src="/images/2012/11/header.png"> </a> I spend a lot of time at the command line.
As someone who likes to code on OSX and Windows, I've always been annoyed by the Windows command line experience. Do I use cmd, or PowerShell? Where are my tabs? What about package management? What about little frivolous things like <em>being able to resize the window</em>? I've finally got my Windows command line experience running smoothly, and wanted to share my setup. Here are my 5 steps to a Windows command line that doesn't suck. <h3>1. Use Console2 or ConEmu</h3> The first place to start is the actual console application. Scott Hanselman wrote an <a href="http://www.hanselman.com/blog/Console2ABetterWindowsCommandPrompt.aspx" target="_blank">excellent blog post</a> on setting up <a href="http://sourceforge.net/projects/console/" target="_blank">Console2</a>, and I've been using it ever since. It adds tabs, a resizable window, transparency, and the ability to run multiple shells. I choose to run PowerShell (you should too, keep listening). There are <a href="http://www.hanselman.com/blog/ConEmuTheWindowsTerminalConsolePromptWeveBeenWaitingFor.aspx" target="_blank">other options</a> out there, but I've really grown to love Console2. <a href="/images/2012/11/console2.png"> <img src="/images/2012/11/console2.png" alt="Console2"> </a> <h3>2. Use PowerShell</h3> I won't spend a ton of time evangelizing PowerShell. There are a few good reasons to dump cmd.exe and move over: <ul> <li><b>Most of the things you do in cmd will just work.</b> There are obviously some exceptions, but for the most part, the things I want to do in cmd are easily done in PowerShell. </li> <li><b><a href="http://blogs.msdn.com/b/powershell/archive/2008/01/31/tab-completion.aspx" target="_blank">Tab Completion</a> and <a href="http://technet.microsoft.com/en-us/library/ee176848.aspx" target="_blank">Get-Help</a> are awesome.</b> PowerShell does a great job of making things discoverable as you learn.
<li><b>It's a sane scripting tool.</b> If you've ever tried to do anything significant in a batch script, I'm sorry. You can even create your <a href="http://community.bartdesmet.net/blogs/bart/archive/2008/02/03/easy-windows-powershell-cmdlet-development-and-debugging.aspx" target="_blank">own modules and cmdlets</a> using managed code, if that's your thing.</li> <li><b>Microsoft is releasing a lot of stuff built on PowerShell.</b> Most of the new stuff we release is going to have great PowerShell support, including <a href="http://msdn.microsoft.com/en-us/library/windowsazure/jj156055.aspx" target="_blank">Windows Azure</a>. </li> <li><b>It's a growing community.</b> Sites like <a href="http://powershell.org/" target="_blank">PowerShell.org</a> and <a href="http://psget.net/" target="_blank">PsGet</a> provide a great place to ask questions and look at work others have done. </li> </ul> Now that I've sold you, there are a few things throughout this post that make using PowerShell a bit easier. To use this stuff, you're going to want to set an execution policy in PowerShell that lets you run custom scripts. By default, the execution of PS scripts is disabled, but it's kind of necessary to do anything interesting. I lead a wild and dangerous life, so I use an unrestricted policy. To set your policy, first run Console2 (or PowerShell) as an administrator: <a href="/images/2012/11/console2-as-administrator.png"> <img src="/images/2012/11/console2-as-administrator.png"> </a> Next, use the Set-ExecutionPolicy command. Note: this means any unsigned script can run on your system, so many people choose to use RemoteSigned instead. Here is the <a href="" target="_blank">official doc on Set-ExecutionPolicy</a>. <pre><code class="language-clike"> Set-ExecutionPolicy Unrestricted </code></pre> <a href="/images/2012/11/set-executionpolicy.png"> <img src="/images/2012/11/set-executionpolicy.png"> </a> Now you're ready to start doing something interesting. <h3>3.
Use the Chocolatey package manager</h3> Spending a lot of time in Ubuntu and OSX, I got really used to `sudo apt-get install <package>` and `<a href="http://mxcl.github.com/homebrew/" target="_blank">brew</a> install <package>`. The closest I've found to that experience on Windows is the <a href="http://chocolatey.org/" target="_blank">Chocolatey package manager</a>. Chocolatey has all of the packages you would expect to find on a developer's machine: <a href="/images/2012/11/choc-list.png"> <img src="/images/2012/11/choc-list.png" alt="list packages"> </a> To install Chocolatey, just run cmd.exe and run the following command (minus the c:\> part): <pre><code class="language-clike"> C:\&gt; @powershell -NoProfile -ExecutionPolicy unrestricted -Command &quot;iex ((new-object net.webclient).DownloadString('http://bit.ly/psChocInstall'))&quot; &amp;&amp; SET PATH=%PATH%;%systemdrive%\chocolatey\bin </code></pre> And you're ready to rock. If you want to install something like 7zip, you can use the cinst command: <pre><code class="language-clike"> cinst 7zip </code></pre> <a href="/images/2012/11/7zip-install.png"> <img src="/images/2012/11/7zip-install.png" alt="install 7zip"> </a> <h3>4. Use an alias for SublimeText</h3> This seems kind of trivial, but one of the things I've really missed on Windows is the default shortcut to launch <a href="http://www.sublimetext.com/" target="_blank">SublimeText</a>, <a href="http://www.sublimetext.com/docs/2/osx_command_line.html" target="_blank">subl</a>. I use my PowerShell profile to create an alias to SublimeText.exe, which allows me to `subl file.txt` or `subl .` just like I would from OSX. <a href="http://www.howtogeek.com/50236/customizing-your-powershell-profile/" target="_blank">This article</a> gives a basic overview on how to customize your PowerShell Profile; it's really easy to follow, so I won't go into re-creating the steps. 
<a href="/images/2012/11/create-profile.png"> <img src="/images/2012/11/create-profile.png"> </a> After you've got your PowerShell profile created, edit the script, and add this line: <pre><code class="language-clike"> Set-Alias subl 'C:\Program Files\Sublime Text 2\sublime_text.exe' </code></pre> Save your profile, and spin up a new PowerShell tab in Console2 to reload the session. Go to a directory that contains some code, and try to open it: <pre><code class="language-clike"> subl . </code></pre> This will load the current directory as a project in SublimeText from the command line. Small thing, but a nice thing. <h3>5. Use PsGet and Posh-Git</h3> One of the nice things about using PowerShell over cmd is the community that's starting to emerge. There are a ton of really useful tools and cmdlets that others have already written, and the easiest way to get at most of these is to use <a href="http://psget.net/" target="_blank">PsGet</a>. PsGet provides a super easy way to install PowerShell modules that extend the basic functionality of the shell, and provide other useful libraries. To install PsGet, run the following command from a PowerShell console: <pre><code class="language-clike"> (new-object Net.WebClient).DownloadString(&quot;http://psget.net/GetPsGet.ps1&quot;) | iex </code></pre> If you get an error complaining about executing scripts, you need to go back to #2. Immediately, we can start using the `Install-Module` command to start adding functionality to our console. <a href="/images/2012/11/psget.png"> <img src="/images/2012/11/psget.png" alt="Install PsGet"> </a> The first module that led me to PsGet is a package that adds status and tab completion to git. 
Phil Haack did a <a href="http://haacked.com/archive/2011/12/13/better-git-with-powershell.aspx" target="_blank">great write up</a> on setting up <a href="https://github.com/dahlbyk/posh-git/" target="_blank">posh-git</a>, and I've since discovered a few other <a href="http://pscx.codeplex.com" target="_blank">cool things</a> in the PsGet gallery. Installing Posh-Git is pretty straightforward: <a href="/images/2012/11/install-posh-git.png"> <img src="/images/2012/11/install-posh-git.png" alt="Install Posh-Git"> </a> The first nice thing here is that I now have command completion. As I type `git sta` and hit <tab>, it will be completed to `git status`. Some tools like <a href="https://github.com/MSOpenTech/posh-npm" target="_blank">posh-npm</a> will even search the npm registry for packages using tab completion. The other cool thing you get with this module is the status of your repository right in the prompt: <a href="/images/2012/11/posh-git-status.png"> <img src="/images/2012/11/posh-git-status.png" alt="posh git"> </a> <h4>Wrapping up</h4> These are just the ways I know how to make the command line experience better. If anyone else has some tips, I'd love to hear them! Wed, 28 Nov 2012 00:00:00 +0000 http://jbeckwith.com/2012/11/28/5-steps-to-a-better-windows-command-line/ http://jbeckwith.com/2012/11/28/5-steps-to-a-better-windows-command-line/ WebMatrix and Node Package Manager <img src="/images/2012/09/node_128.png" alt="NPM and WebMatrix" /> A few months ago, we introduced the new <a href="http://jbeckwith.com/2012/06/07/node-js-meet-webmatrix-2/" target="_blank">node.js features we've added to WebMatrix 2</a>. One of the missing pieces from that experience was a way to manage <a href="https://npmjs.org/" target="_blank">NPM</a> (Node Package Manager) from within the IDE. This week we shipped the final release of WebMatrix 2, and one of the fun things that comes with it is a new extension for managing NPM.
For a more complete overview of WebMatrix 2, check out <a href="http://vishaljoshi.blogspot.com/2012/06/announcing-webmatrix-2-rc.html" target="_blank">Vishal Joshi's blog post</a>. If you want to skip all of this and just download the bits, here you go: <p><a href="http://go.microsoft.com/?linkid=9809776" target="_blank"><img style="display: inline" title="image" alt="image" src="http://lh5.ggpht.com/-lm1GuUL20p8/T9HReoCZk7I/AAAAAAAABU4/uO7oVvNCGPQ/image%25255B4%25255D.png?imgmax=800" width="170" height="45"></a></p> <h3>Installing the Extension</h3> The NPM extension can be installed using the extension gallery inside of WebMatrix. To get started, go ahead and create a new node site with express using the built-in template: <a href="/images/2012/09/template.png"> <img src="/images/2012/09/template.png" alt="Create a new express site" /> </a> After you create the site, click on the 'Extensions' button in the ribbon: <a href="/images/2012/09/extension-gallery-icon.png"> <img src="/images/2012/09/extension-gallery-icon.png" alt="WebMatrix Extension Gallery" /> </a> Search for 'NPM', and click through the wizard to finish installing the extension: <a href="/images/2012/09/npm-extension.png"> <img src="/images/2012/09/npm-extension.png" alt="Install the NPM Gallery Extension" /> </a> Now when you navigate to the files workspace, you should see the new NPM icon in the ribbon. <h3>Managing Packages</h3> While you're working with node.js sites, the icon should always show up. To get started, click on the new icon in the ribbon: <a href="/images/2012/09/npm-icon.png"> <img src="/images/2012/09/npm-icon.png" alt="NPM Icon in the ribbon" /> </a> This will load a window very similar to the other galleries in WebMatrix. From here you can search for packages, install, uninstall, update - any of the basic tasks you're likely to do day to day with npm.
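Those operations all revolve around the project's package.json. Detecting missing packages, for instance, is conceptually just a diff between the declared dependencies and the modules actually installed - a simplified illustration of the idea (my sketch, not WebMatrix's actual code):

```javascript
// Simplified idea: compare declared dependencies against installed modules.
function findMissingDependencies(packageJson, installedModules) {
  var declared = Object.keys(packageJson.dependencies || {});
  return declared.filter(function (name) {
    return installedModules.indexOf(name) === -1;
  });
}
```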
<a href="/images/2012/09/npm-dialog.png"> <img src="/images/2012/09/npm-dialog.png" alt="NPM Gallery" class="alignnone" /> </a> When you open up a new site, we also check your package.json to see if you're missing any dependencies: <a href="/images/2012/09/missing-packages.png"> <img src="/images/2012/09/missing-packages.png" alt="Missing NPM packages" /> </a> We're just getting started with the node tools inside of WebMatrix, so if you have anything else you would like to see added please hit us up over at <a href="https://webmatrix.uservoice.com" target="_blank">UserVoice</a>. <h3>More Information</h3> If you would like some more information to help you get started, check out some of these links: <ul> <li><a href="http://bit.ly/LG7gs8" target="_blank">WebMatrix on Microsoft.com</a></li> <li><a href="https://twitter.com/#!/webmatrix" target="_blank">WebMatrix on Twitter</a></li> <li><a href="https://github.com/MicrosoftWebMatrix" target="_blank">WebMatrix on GitHub</a></li> <li><a href="http://webmatrix.uservoice.com" target="_blank">WebMatrix on UserVoice</a></li> <li><a href="http://www.microsoft.com/Web/webmatrix/optimize.aspx" target="_blank">WebMatrix and Open Source Applications</a></li> <li><a href="http://vishaljoshi.blogspot.com/2012/06/announcing-webmatrix-2-rc.html" target="_blank">Vishal Joshi's blog post</a></li> </ul> <br /> <br /> <h4>Happy Coding!</h4> Fri, 07 Sep 2012 00:00:00 +0000 http://jbeckwith.com/2012/09/07/webmatrix-and-node-package-manager/ http://jbeckwith.com/2012/09/07/webmatrix-and-node-package-manager/ WordPress and WebMatrix <img src="/images/2012/06/wp_title_header.png" alt="WordPress and WebMatrix" /> After releasing WebMatrix 2 RC this week, I'm excited to head out to NYC for WordCamp 2012. While I get ready to present tomorrow, I figured I would share some of the amazing work the WebMatrix team has done to create a great experience for WordPress developers. 
For a more complete overview of the WebMatrix 2 RC, check out <a href="http://vishaljoshi.blogspot.com/2012/06/announcing-webmatrix-2-rc.html" target="_blank">Vishal Joshi's blog post</a>. If you want to skip all of this and just download the bits, here you go: <p><a href="http://bit.ly/L77V6w" target="_blank"><img style="display: inline" title="image" alt="image" src="http://lh5.ggpht.com/-lm1GuUL20p8/T9HReoCZk7I/AAAAAAAABU4/uO7oVvNCGPQ/image%25255B4%25255D.png?imgmax=800" width="170" height="45"></a></p> <h3>Welcome to WebMatrix</h3> WebMatrix gives you a couple of ways to get started with your application. Anything we do is going to be focused on building web applications, with as few steps as possible. WebMatrix supports opening remote sites, opening local sites, creating new sites with PHP, or creating an application by starting with the Application Gallery. <a href="/images/2012/06/wp_start_screen.png"> <img src="/images/2012/06/wp_start_screen.png" alt="Welcome to WebMatrix" /> </a> <h3>The Application Gallery</h3> We work with the community to maintain a list of open source applications that just work with WebMatrix on the Windows platform. This includes installing the application locally, and deploying to Windows Server or Windows Azure: <a href="/images/2012/06/wp_app_gallery.png"> <img src="/images/2012/06/wp_app_gallery.png" alt="WebMatrix application gallery" /> </a> <h3>Install PHP and MySQL Automatically</h3> When you pick the application you want to install, WebMatrix knows what dependencies need to be installed on your machine. This means you don't need to set up a web server, install and configure MySQL, mess around with the MySQL command line - none of that. It all just happens auto-magically. 
<a href="/images/2012/06/wp_dependencies.png"> <img src="/images/2012/06/wp_dependencies.png" alt="Install and setup automatically" /> </a> <h3>The Dashboard</h3> After installing WordPress and all of its dependencies, WebMatrix provides you with a dashboard that's been customized for WordPress. We open up an extensibility model that makes it easier for open source communities to plug into WebMatrix, and we've been working with several groups to make sure we provide this kind of experience: <a href="/images/2012/06/wp_dashboard.png"> <img src="/images/2012/06/wp_dashboard_clipped.png" alt="WordPress Dashboard" /> </a> <h3>Protected Files</h3> When you move into the files workspace, you'll notice a lock next to many of the files in the root. We worked with the WordPress community to define a list of files that are protected in WordPress. These are files that power the core of WordPress, and probably shouldn't be changed: <a href="/images/2012/06/wp_locked_files.png"> <img src="/images/2012/06/wp_locked_files.png" alt="Locked system files" /> </a> We won't stop you from editing the file, but hopefully this prevents people from making mistakes: <a href="/images/2012/06/wp_lock_warning.png"> <img src="/images/2012/06/wp_lock_warning.png" alt="WebMatrix saves you from yourself" /> </a> <h3>HTML5 & CSS3 Tools</h3> The HTML editor in WebMatrix has code completion, validation, and formatting for HTML5. The editor is really, really good. The CSS editor includes code completion, validation, and formatting for CSS3, including the latest and greatest CSS3 modules. We also include support for CSS preprocessors like LESS and Sass. I think my favorite part about the CSS editor is the way it makes dealing with color easier. If you start off a color property, WebMatrix will look at the current CSS file, and provide a palette built from the other colors used throughout your site.
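Building that palette boils down to scanning the stylesheet for the color values already in use. A toy version of the idea, limited to hex literals (my sketch, not WebMatrix's implementation):

```javascript
// Toy palette builder: collect the distinct hex colors used in a stylesheet.
function buildPalette(css) {
  var matches = css.match(/#([0-9a-fA-F]{6}|[0-9a-fA-F]{3})\b/g) || [];
  var seen = {};
  return matches.filter(function (color) {
    var key = color.toLowerCase(); // treat #FF0000 and #ff0000 as the same swatch
    if (seen[key]) { return false; }
    seen[key] = true;
    return true;
  });
}
```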
This prevents you from having 17 shades of mostly the same blue: <a href="/images/2012/06/wp_color_pallette.png"> <img src="/images/2012/06/wp_color_pallette.png" alt="The CSS Color Palette" /> </a> If you want to add a new color, we also have a full color picker. This thing is awesome - my favorite part is the eye dropper that lets you choose colors in other applications. <a href="/images/2012/06/wp_color_picker.png"> <img src="/images/2012/06/wp_color_picker.png" alt="The CSS Color Picker" /> </a> <h3>PHP Code Completion</h3> When you're ready to start diving into PHP, we include a fancy new PHP editor. It provides code completion with documentation from php.net, and a lot of other little niceties that make writing PHP easier: <a href="/images/2012/06/wp_php_intellisense.png"> <img src="/images/2012/06/wp_php_intellisense.png" alt="PHP Code Completion" /> </a> <h3>WordPress Code Completion</h3> So you've written some PHP, but now you want to start using the built-in functions available in WordPress. We worked with the WordPress community to come up with a list of supported functions, along with documentation on how they work. Any open source application in the gallery can provide this kind of experience: <a href="/images/2012/06/wp_intellisense.png"> <img src="/images/2012/06/wp_intellisense.png" alt="WordPress specific Code Completion" /> </a> <h3>MySQL Database Editor</h3> If you need to make changes directly to the database, WebMatrix has a full featured MySQL editor built right into the product. You can create tables, manage keys, or add data right through the UI. No command line needed. <a href="/images/2012/06/wp_mysql.png"> <img src="/images/2012/06/wp_mysql.png" alt="MySQL Database Manager" /> </a> <h3>Remote Editing</h3> If you need to make edits to a live running site, we can do that too.
Just enter your connection information (FTP or Web Deploy), and you can start editing your files without dealing with an FTP client: <a href="/images/2012/06/wp_start_remote.png"> <img src="/images/2012/06/wp_start_remote.png" alt="Open a remote site" /> </a> After you make your changes, just save the file to automatically upload it to your server: <a href="/images/2012/06/wp_remote_code.png"> <img src="/images/2012/06/wp_remote_code.png" alt="Edit files remotely" /> </a> <h3>Easy Publishing</h3> When you're ready to publish your application, you have the choice of using FTP or Web Deploy. If you use Web Deploy, we can even publish your database automatically along with the files in your WordPress site. When you make subsequent publish calls, only the changed files are published: <a href="/images/2012/06/wp_publish.png"> <img src="/images/2012/06/wp_publish.png" alt="Easy Publishing" /> </a> <h3>More Information</h3> If you would like some more information to help you get started, check out some of these links: <ul> <li><a href="http://bit.ly/LG7gs8" target="_blank">WebMatrix on Microsoft.com</a></li> <li><a href="https://twitter.com/#!/webmatrix" target="_blank">WebMatrix on Twitter</a></li> <li><a href="https://github.com/MicrosoftWebMatrix" target="_blank">WebMatrix on GitHub</a></li> <li><a href="http://webmatrix.uservoice.com" target="_blank">WebMatrix on UserVoice</a></li> <li><a href="http://www.microsoft.com/Web/webmatrix/optimize.aspx" target="_blank">WebMatrix and Open Source Applications</a></li> <li><a href="http://vishaljoshi.blogspot.com/2012/06/announcing-webmatrix-2-rc.html" target="_blank">Vishal Joshi's blog post</a></li> </ul> <br /> <br /> <h2>Happy Coding!</h2> Sat, 09 Jun 2012 00:00:00 +0000 http://jbeckwith.com/2012/06/09/wordpress-and-webmatrix/ http://jbeckwith.com/2012/06/09/wordpress-and-webmatrix/ Node.js meet WebMatrix 2 <img src="/images/2012/06/title-header.png" alt="WebMatrix 2 + Node.js = love" /> After months of hard work by the
WebMatrix team, it's exciting to introduce the release candidate of WebMatrix 2. WebMatrix 2 includes tons of new features, but today I want to give an overview of the work we've done to enable building applications with Node.js. If you want to skip all of this and just get a download link (it's free!), <a href="http://bit.ly/LG7gs8" target="_blank">here you go</a>. <h3>How far we have come</h3> <p> Less than a year ago, I was working at Carnegie Mellon University, trying to use Node.js with ASP.NET for real-time components of our online learning environment. Running Linux inside of our customers' data centers was a non-starter, and running a production system in cygwin was even less ideal. Developing node on Windows wasn't exactly easy either - if you managed to get node running, getting NPM to work was near impossible. Using node in an environment favorable to Windows was more than an uphill battle. </p> <p> In the last 12 months since I've joined Microsoft, we've seen various partnerships between Joyent and Microsoft, resulting in new releases of node and npm to support Windows, and a <a href="https://www.windowsazure.com/en-us/develop/nodejs/" target="_blank">commitment to Node on Windows Azure</a>. We've worked together to build a better experience for developers, IT administrators, and ultimately, the users who use our systems. </p> <p> One of the results of that work is a vastly improved experience for building applications with Node.js on Windows Azure. Glenn Block on the SDK team has done a <a href="http://codebetter.com/glennblock/2012/06/07/windowsazure-just-got-a-lot-friendlier-to-node-js-developers/" target="_blank">fabulous write up</a> on the ways Microsoft is making Azure a great place for Node.js developers. As our favorite VP Scott Guthrie says on his blog, <a href="http://weblogs.asp.net/scottgu/archive/2012/06/07/meet-the-new-windows-azure.aspx" target="_blank">meet the new Windows Azure</a>.
</p> <br /> <br /> <h3>Enter WebMatrix 2</h3> Today, getting started with node.js is a relatively simple task. You install node, npm (which is now bundled with the node installers), and get started with your favorite text editor. There are infinite possibilities, and limitless configurations for managing projects, compiling CoffeeScript & LESS, configuring your production settings, and deploying your apps. WebMatrix 2 sets out to provide another way to build node.js apps: everything you need to build great apps is in one place. <a href="/images/2012/06/splash.png"> <img src="/images/2012/06/splash.png" alt="Welcome to WebMatrix" /> </a> WebMatrix 2 is first and foremost designed for building web applications. From the start screen, you can create applications using pre-built templates, or install common open source applications from the Web Gallery. The current set of templates support creating applications with <a href="http://nodejs.org/" target="_blank">Node.js</a>, <a href="http://php.net/" target="_blank">PHP</a>, and (of course) <a href="http://www.asp.net/web-pages" target="_blank">ASP.NET Web Pages</a>. Out of the box, WebMatrix 2 includes three templates for Node.js: <ul> <li>Empty Node.js Site</li> <li>Express Site</li> <li>Express Starter Site</li> </ul> <p> The empty site provides a very basic example of using an http server - the same sample that's available on <a href="http://nodejs.org" target="_blank">nodejs.org</a>. The Express Site is a basic application generated using the scaffolding tool in the Node.js framework <a href="http://expressjs.com/" target="_blank">express</a>. The Node Starter Site is where things start to get interesting. This boilerplate is <a href="https://github.com/MicrosoftWebMatrix/ExpressStarter" target="_blank">hosted on GitHub</a>, and shows how to implement sites that include parent/child layouts with jade, LESS CSS, logins with Twitter and Facebook, mobile layouts, and captcha.
When you create a new application using any of these templates, WebMatrix 2 is going to ensure node, npm, and IISNode are installed on your system. If not, it will automatically install any missing dependencies. This feature is also particularly useful if you are building PHP/MySQL applications on Windows. </p> <a href="/images/2012/06/dependencies.png"> <img src="/images/2012/06/dependencies.png" alt="WebMatrix installs node, npm, and iisnode" /> </a> <p>The end result of the Node Starter Site is a fully functional application that includes Express, Jade, LESS, chat with socket.io, logins with EveryAuth, and mobile support with jQuery Mobile:</p> <a href="/images/2012/06/template.png"> <img src="/images/2012/06/template.png" alt="The node starter template" /> </a> <br /> <br /> <h3>IntelliSense for Node.js</h3> <p> One of the goals of WebMatrix 2 is to reduce the barrier to entry for developers getting started with Node.js. One of the ways to do that is to provide IntelliSense for the core modules on which all applications are built. The documentation we use is actually built from the docs on the <a href="http://nodejs.org/api/" target="_blank">node.js docs site</a>. </p> <a href="/images/2012/06/moduleIntelliSense.png"> <img src="/images/2012/06/moduleIntelliSense.png" alt="WebMatrix provides IntelliSense that makes it easier to get started" /> </a> <p> In addition to providing IntelliSense for core Node.js modules, WebMatrix 2 also provides code completion for your own JavaScript code, and third party modules installed through NPM. There are infinite ways to build your application, and the NPM gallery recently <a href="https://twitter.com/JavaScriptDaily/status/203878468205817857" target="_blank">surpassed 10,000 entries</a>. As developers start building more complex applications, it can be difficult (or even intimidating) to get started.
WebMatrix 2 is making it easier to deal with open source packages: </p> <a href="/images/2012/06/thirdpartyintellisense.png"> <img src="/images/2012/06/thirdpartyintellisense.png" alt="Use third party modules with code completion" /> </a> <br /> <br /> <h3>Support for Jade & EJS</h3> <p> To build a truly useful tool for creating Node.js web applications, we decided to provide first class editors for <a href="http://jade-lang.com/" target="_blank">Jade</a> and <a href="http://embeddedjs.com/" target="_blank">EJS</a>. WebMatrix 2 provides syntax highlighting, HTML validation, code outlining, and auto-completion for Jade and EJS. </p> <a href="/images/2012/06/jade.png"> <img src="/images/2012/06/jade.png" alt="WebMatrix has syntax highlighting for Jade" /> </a> <p> If you're into the whole angle bracket thing, the experience in EJS is even better, since it's based on our advanced HTML editor: </p> <a href="/images/2012/06/ejs.png"> <img src="/images/2012/06/ejs.png" alt="WebMatrix has IntelliSense for EJS" /> </a> <h3>The best {LESS} editor on the planet</h3> <p>So I'll admit it - I'm a bit of a CSS pre-processor geek. I don't write CSS because I love it, but because I need to get stuff done, and I want to write as little of it as possible. Tools like <a href="http://lesscss.org/" target="_blank">LESS</a> and <a href="http://sass-lang.com/" target="_blank">Sass</a> bring features programmers miss in CSS, like variables, mixins, nesting, and built-in common functions. <a href="/images/2012/06/less.png"> <img src="/images/2012/06/less.png" alt="Write LESS with validation, formatting, and IntelliSense" /> </a> The LESS editor in WebMatrix not only provides syntax highlighting, but also LESS-specific validation, IntelliSense for variables and mixins, and LESS-specific formatting.
Most node developers are going to process their LESS on the server using an npm module, but if you want to compile LESS locally, you can use the <a href="http://extensions.webmatrix.com/packages/OrangeBits/" target="_blank">Orange Bits compiler</a> to compile your LESS into CSS at design time. <a href="/images/2012/06/sass.png"> <img src="/images/2012/06/sass.png" alt="WebMatrix provides syntax highlighting for Sass" /> </a> <h3>CoffeeScript Editor</h3> <p> In the same way LESS and Sass make it easier to write CSS, <a href="http://coffeescript.org/" target="_blank">CoffeeScript</a> simplifies the way you write JavaScript. WebMatrix 2 provides syntax highlighting, code outlining, and completion that simplify the editing experience. If you want to use CoffeeScript without compiling it on the server, you can use the <a href="http://extensions.webmatrix.com/packages/OrangeBits/" target="_blank">Orange Bits compiler</a> to compile your CoffeeScript into JavaScript at design time. </p> <a href="/images/2012/06/coffeescript.png"> <img src="/images/2012/06/coffeescript.png" alt="WebMatrix and CoffeeScript" /> </a> <h3>Mobile Emulators</h3> <p> Designing applications for mobile can't be an afterthought. WebMatrix 2 is trying to make this easier in a couple of ways. First, the visual templates (in this case the Express Starter Template) are designed to take advantage of responsive layouts in the main stylesheet: <ul><li><a href="https://github.com/MicrosoftWebMatrix/ExpressStarter/blob/master/public/stylesheets/style.less" target="_blank">styles.less</a></li></ul> This is great if you don't need to change the content of your site, but falls short for more complex scenarios. To get around that, the Express Starter template uses a piece of Connect middleware to detect if the user is coming from a mobile device, and sends them to a mobile layout based on jQuery Mobile (more on this in another post).
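A minimal sketch of that kind of middleware, in the Connect/Express style; the user-agent regex and the layout names here are illustrative, not the starter template's actual code:

```javascript
// Sketch of user-agent sniffing middleware (Connect/Express style).
// The regex is deliberately naive; real detection logic is more thorough.
var MOBILE_UA = /iphone|ipod|android|blackberry|windows phone/i;

function mobileLayout(req, res, next) {
  var ua = req.headers['user-agent'] || '';
  // Stash a flag the view layer can use to pick the mobile layout.
  req.isMobile = MOBILE_UA.test(ua);
  res.locals = res.locals || {};
  res.locals.layout = req.isMobile ? 'layout_mobile' : 'layout';
  next();
}
```

With Express you would register something like this early, e.g. `app.use(mobileLayout)`, before any routes that render views.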
For individual views, there is a convention-based system that allows you to create {viewName}_mobile.jade views which are only loaded on mobile devices. </p> <p> It gets even better. What if you need to see what your site will look like in various browsers and mobile devices? WebMatrix 2 provides an extensibility model that allows you to add mobile and desktop browsers to the run menu: </p> <a href="/images/2012/06/emulators.png"> <img src="/images/2012/06/emulators.png" alt="WebMatrix shows all of the browsers and emulators on your system" /> </a> <p>Today, we offer a Windows Phone emulator, and iPhone / iPad simulators. In the future we're looking for people to build support for other emulators *coughs* Android *coughs*, and even build bridges to online browser testing applications:</p> <a href="/images/2012/06/iphone.png"> <img src="/images/2012/06/iphone.png" alt="Test your websites on the iPhone simulator" /> </a> <h3>Extensions & Open Source</h3> <p> A code editing tool is only as valuable as the developers that commit to the platform. We want to achieve success with everyone, and grow together. As part of that goal, we've opened up an extensibility model that allows developers to build custom extensions and share them with other developers. The extension gallery is available online (more on this to come) at <a href="http://extensions.webmatrix.com" target="_blank">http://extensions.webmatrix.com</a>. We're planning to move a bunch of these extensions to GitHub, and the NodePowerTools extension is the first one to go open source: <ul> <li><a href="https://github.com/MicrosoftWebMatrix/NodePowerTools" target="_blank">Node Power Tools</a></li> <li><a href="https://github.com/JustinBeckwith/OrangeBits" target="_blank">OrangeBits Compiler</a></li> </ul> In the coming months you'll start to see more extensions from Microsoft, and more open source.
</p> <a href="/images/2012/06/extension-gallery.png"> <img src="/images/2012/06/extension-gallery.png" alt="Build extensions and share them on the extension gallery" /> </a> <h3>Everyone worked together</h3> I want to make sure I thank everyone who helped make this release happen, including the WebMatrix team, Glenn Block, Claudio Caldato, our Node Advisory board, Isaac Schlueter, and everyone at Joyent. For more information, please visit: <ul> <li><a href="http://bit.ly/LG7gs8" target="_blank">WebMatrix on Microsoft.com</a></li> <li><a href="https://twitter.com/#!/webmatrix" target="_blank">WebMatrix on Twitter</a></li> <li><a href="https://github.com/MicrosoftWebMatrix" target="_blank">WebMatrix on GitHub</a></li> <li><a href="http://webmatrix.uservoice.com" target="_blank">WebMatrix on UserVoice</a></li> <li><a href="http://www.microsoft.com/web/post/how-to-use-the-nodejs-starter-template-in-webmatrix" target="_blank">WebMatrix and Node on Microsoft.com</a></li> <li><a href="http://codebetter.com/glennblock/2012/06/07/windowsazure-just-got-a-lot-friendlier-to-node-js-developers/" target="_blank">Windows Azure just got a lot friendlier to node.js developers</a></li> <li><a href="http://vishaljoshi.blogspot.com/2012/06/announcing-webmatrix-2-rc.html" target="_blank">Vishal Joshi's blog post</a></li> </ul> <br /> <br /> <h4>Enjoy!</h4> Thu, 07 Jun 2012 00:00:00 +0000 http://jbeckwith.com/2012/06/07/node-js-meet-webmatrix-2/ Building a user map with SignalR and Bing <a href="http://signalrmap.apphb.com" target="_blank"><img src="/images/2011/10/signalrheader.png" alt="" title="Building a user map with SignalR and Bing" width="430" height="290"/></a> Building asynchronous real-time apps with bidirectional communication has traditionally been a very difficult thing to do.
HTTP was originally designed to speak in terms of requests and responses, long before concepts of rich media, social integration, and real-time communication were considered staples of modern web development. Over the years, various solutions have been hacked together to solve this problem. You can use plugins like Flash or Silverlight to make a true socket connection on your behalf - but not all clients support plugins. You can use long polling to manage multiple connections via HTTP - but this can be tricky to implement, and can eat up system resources. The <a href="http://dev.w3.org/html5/websockets/" target="_blank">Web Socket standard</a> promises to give web developers a first-class socket connection, but browser support is spotty and inconsistent. Various tools across multiple stacks have been released to solve this problem, but in this post I would like to talk about the first real asynchronous client/server package for ASP.NET: <a href="https://github.com/SignalR/SignalR" target="_blank">SignalR</a>. SignalR allows .NET developers to change the way we think about client/server messaging: instead of worrying about implementation details of web sockets, we can focus on the way communication flows across the various components of our applications. <h3>This sounds familiar: socket.io with node.js</h3> Over the last year or so, <a href="http://nodejs.org/" target="_blank">node.js</a> has burst onto the scene as a popular stack for building highly asynchronous applications. The event-driven model of JavaScript, paired with a community of inventive developers, led to a platform well suited for these needs. The package <a href="http://socket.io/" target="_blank">socket.io</a> provides what I have found to be the missing piece in the comet puzzle: a front- and back-end framework that just makes sockets over the web work. No more building Flash applications to attempt opening connections over various ports. No more poorly implemented long polling solutions.
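To see why hand-rolled long polling gets tricky, here is a minimal sketch of the client-side loop; `poll` stands in for an HTTP request the server holds open until it has data, and all names here are illustrative rather than any real library's API:

```javascript
// Minimal long-polling loop (sketch). Each poll() call represents an HTTP
// request the server holds open until it has data, or times out and
// resolves with null. Real implementations also need error handling,
// backoff, and reconnect logic - which is exactly the fiddly part.
function longPoll(poll, onMessage, shouldContinue) {
  return poll().then(function (data) {
    if (data !== null) {
      onMessage(data);
    }
    if (shouldContinue()) {
      return longPoll(poll, onMessage, shouldContinue);
    }
  });
}
```

In a browser, `poll` would typically wrap an XMLHttpRequest against an endpoint like `/messages?since=lastId` (endpoint name hypothetical).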
Most importantly, socket.io made web sockets just plain easy to use: <pre><code class="language-markup"> &lt;script src=&quot;/socket.io/socket.io.js&quot;&gt;&lt;/script&gt; &lt;script&gt; var socket = io.connect('http://localhost'); socket.on('news', function (data) { console.log(data); socket.emit('my other event', { my: 'data' }); }); &lt;/script&gt; </code></pre> Node.js and socket.io paved the way for a series of new tools and frameworks across multiple stacks that enable developers to have a first-class client/server messaging experience. Node.js and socket.io are wonderful tools - but let's get back to focusing on SignalR. <h3>Two ways to build apps with SignalR</h3> There are two ways you can go about setting up the server for SignalR. If you want a low-level experience, you can add a 'PersistentConnection' class along with a custom route. This will give you basic messaging capabilities, suitable for many apps. Straight from the <a href="https://github.com/SignalR/SignalR" target="_blank">SignalR GitHub</a>, here is an example: <pre><code class="language-csharp"> using SignalR; public class MyConnection : PersistentConnection { protected override Task OnReceivedAsync(string clientId, string data) { // Broadcast data to all clients return Connection.Broadcast(data); } } </code></pre> This works well if you're dealing with simple messaging - the other model SignalR supports is the 'hub' model. This is where things start to get interesting. Using hubs, you can invoke client-side functions from the server, and server-side functions from the client.
Here's another example from the documentation. First, the server: <pre><code class="language-csharp"> public class Chat : Hub { public void Send(string message) { // Call the addMessage method on all clients Clients.addMessage(message); } } </code></pre> And the client: <pre><code class="language-markup"> &lt;script type=&quot;text/javascript&quot;&gt; $(function () { // Proxy created on the fly var chat = $.connection.chat; // Declare a function on the chat hub so the server can invoke it chat.addMessage = function(message) { $('#messages').append('&lt;li&gt;' + message + '&lt;/li&gt;'); }; $(&quot;#broadcast&quot;).click(function () { // Call the chat method on the server chat.send($('#msg').val()) .fail(function(e) { alert(e); }) // Supports jQuery deferred }); // Start the connection $.connection.hub.start(); }); &lt;/script&gt; &lt;input type=&quot;text&quot; id=&quot;msg&quot; /&gt; &lt;input type=&quot;button&quot; id=&quot;broadcast&quot; /&gt; &lt;ul id=&quot;messages&quot;&gt; &lt;/ul&gt; </code></pre> I chose the high-level API, because well... it's just cool. For a wonderful breakdown of the differences between these two methods, check out <a href="http://www.hanselman.com/blog/AsynchronousScalableWebApplicationsWithRealtimePersistentLongrunningConnectionsWithSignalR.aspx" target="_blank">Scott Hanselman's post on the topic</a>. <h3>Let's build something!</h3> One of the common examples of using these frameworks is a chat room: it has all of the touch points that are otherwise difficult to implement. How do we know when someone joins the room? What about sending a message? What if I want to send a message to multiple people? This is a perfect example of how client/server messaging over the web can make our lives easier. The SignalR folks have a live sample of this application running on their <a href="http://chatapp.apphb.com/" target="_blank">demo site</a>. With the chat idea already taken, I decided to combine two tools into one project: a user map.
I want to maintain a map that uses a pushpin for every user on the page. As users arrive, a new pushpin will be added in their location in real time. As they leave, the pushpin will be removed. Before we dive into the code, check out the demo at <a href="http://signalrmap.apphb.com/" target="_blank">http://signalrmap.apphb.com/</a>. If no one is in the room, you can slightly randomize your position by using the "random flag" at <a href="http://signalrmap.apphb.com/?random=true" target="_blank">http://signalrmap.apphb.com/?random=true</a>. This will allow you to use multiple browser windows and watch the system add location pushpins. <h3>Building the client</h3> The client of SignalRMap includes a Bing map, and some JavaScript to interact with the back end. I used <a href="http://www.asp.net/mvc/mvc3" target="_blank">ASP.NET MVC 3</a> for this example, but this will work just fine with a web form. To start, we need to include a few script files: <pre><code class="language-markup"> &lt;script charset=&quot;UTF-8&quot; type=&quot;text/javascript&quot; src=&quot;http://ecn.dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=7.0&quot;&gt;&lt;/script&gt; &lt;script src=&quot;@Url.Content(&quot;~/Scripts/jquery-1.6.4.min.js&quot;)&quot; type=&quot;text/javascript&quot;&gt;&lt;/script&gt; &lt;script src=&quot;@Url.Content(&quot;~/Scripts/jquery.signalR.min.js&quot;)&quot; type=&quot;text/javascript&quot;&gt;&lt;/script&gt; &lt;script type=&quot;text/javascript&quot; src=&quot;@Url.Content(&quot;~/signalr/hubs&quot;)&quot;&gt;&lt;/script&gt; </code></pre> The first thing we are including here is the Bing Maps JavaScript SDK - this will do all of the heavy lifting for our maps. The SignalR client is dependent upon jQuery, so we need to include it along with our SignalR reference. Finally, we include the 'hubs' functionality in our application, linking our client- and server-side methods.
After including our scripts, connecting to a hub is crazy awesome easy: <pre><code class="language-javascript"> // create the connection to our hub var mapHub = $.connection.mapHub; // define some javascript methods the server side hub can invoke // add a new client to the map mapHub.addClient = function (client) { addClient(client); centerMap(); var pins = getPushPins(); $(&quot;#userCount&quot;).html(pins.length); }; // start the hub $.connection.hub.start(function () { // after the hub has started, get the current location from the browser navigator.geolocation.getCurrentPosition(function (position) { // create the map element on the page mappit(position); // notify the server a new user has joined the party var coords = isRandom ? createRandomPosition(position) : position.coords; var message = { 'user': '', 'location': { latitude: coords.latitude, longitude: coords.longitude} }; mapHub.join(message); }); }); </code></pre> There are a few things going on here. First, we reference our connection to the hub created on the server (note: the connection has not been established yet). Notice the mapHub.addClient method - this method will be exposed so that it can be invoked from the server. *scratches head* - this is a neat concept. After defining methods which can be invoked from the server, we start the connection to the hub. Once the connection is established, we get the browser's current location, and send that location back to the server. That's about it. Remember how simple it was to use socket.io? Here we have the same experience. There's a little more client script here to handle managing the map component. For the full client source for the application, check out my <a href="https://github.com/JustinBeckwith/SignalRMap" target="_blank">GitHub</a>. <h3>Server side code</h3> As mentioned above, I chose to take the 'hubs' route for my application.
One of the nice things about using a hub is that it doesn't require any custom routing - just create a class that extends 'Hub', and you're set. In this example, I'm storing a persistent list of the clients connected to the application (obviously, this method will only work with a single web server). As users show up at the site, they send their current position to the server. The new MapClient is broadcast to all of the connected clients, and the new client is given the master list of clients: <pre><code class="language-csharp"> using System; using System.Collections.Generic; using System.Linq; using System.Web; using SignalR.Hubs; namespace SignalRMap { public class MapHub : Hub, IDisconnect { private static readonly Dictionary&lt;string, MapClient&gt; _clients = new Dictionary&lt;string, MapClient&gt;(); public void Join(MapClient message) { _clients.Add(this.Context.ClientId, message); Clients.addClient(message); this.Caller.addClients(_clients.ToArray()); } public void Disconnect() { MapClient client = _clients[Context.ClientId]; _clients.Remove(Context.ClientId); Clients.removeClient(client); } /// &lt;summary&gt; /// model class for the join message. I tried to use dynamic here, but it didn't work. /// &lt;/summary&gt; public class MapClient { public string clientId { get; set; } public Location location { get; set; } public class Location { public float latitude { get; set; } public float longitude { get; set; } } } } } </code></pre> And that's it! SignalR figured out what types of communication my browser supports, managed the tunnel, and just made the connection work. Enjoy!
<ul> <li><a href="http://signalrmap.apphb.com/?random=true" target="_blank">View the demo</a></li> <li><a href="https://github.com/JustinBeckwith/SignalRMap" target="_blank">Download the source code</a></li> <li><a href="https://github.com/SignalR/SignalR" target="_blank">SignalR on GitHub</a></li> <li><a href="http://www.bingmapsportal.com/ISDK/AjaxV7" target="_blank">Bing Maps SDK</a></li> </ul> Wed, 12 Oct 2011 00:00:00 +0000 http://jbeckwith.com/2011/10/12/building-a-user-map-with-signalr-and-bing/ Using MSBuild to deploy your AppFabric Application <img title="azure3" src="/images/2011/07/azure3.png" alt="Using MSBuild to deploy your AppFabric Application" width="150" height="150" /> I wrote a blog post for the MSDN AppFabric Blog! <a href="http://blogs.msdn.com/b/appfabric/archive/2011/07/20/using-msbuild-to-deploy-your-appfabric-application.aspx" target="_blank"> Using MSBuild to deploy your AppFabric Application </a> Wed, 20 Jul 2011 00:00:00 +0000 http://jbeckwith.com/2011/07/20/using-msbuild-to-deploy-your-appfabric-application/ FRINK! - the Reddit client for tablets <img src="/images/2011/04/frink-header1.png" title="FRINK!" /> Frink! is a mobile client for the web site <a href="http://www.reddit.com" target="_blank">Reddit</a>. It is designed specifically to be used with tablets, taking advantage of gestures in a unique user interface. Right now the app is available in the BlackBerry App World: <a href="http://appworld.blackberry.com/webstore/content/38838?lang=en" target="_blank">Frink: Blackberry app world</a> After the code has a little time to settle, I plan on releasing the app to the Android Market as well. The entire project is open source, and available on my <a target="_blank" href="https://github.com/JustinBeckwith/frink">GitHub</a>.
<a href="/images/2011/04/comments.png"> <img src="/images/2011/04/comments.png" alt="" title="comments on a post" /> </a> <a href="/images/2011/04/post-details.png"> <img src="/images/2011/04/post-details.png" alt="" title="post details" /> </a> <a href="/images/2011/04/subreddits.png"> <img src="/images/2011/04/subreddits.png" alt="" title="subreddits" /> </a> <a href="/images/2011/04/posts.png"> <img src="/images/2011/04/posts.png" alt="" title="posts" /> </a> For more information, here are a bunch of links that talk about the project: <ul> <li> Frink! on the web: <a target="_blank" href="http://frinkapp.com">http://frinkapp.com</a> </li> <li> Frink! on Reddit: <a target="_blank" href="http://www.reddit.com/r/frinkapp">http://www.reddit.com/r/frinkapp</a> </li> <li> Frink! on Twitter: <a target="_blank" href="http://twitter.com/frinkapp">http://twitter.com/frinkapp</a> </li> <li> Frink! on GitHub: <a target="_blank" href="https://github.com/JustinBeckwith/frink">https://github.com/JustinBeckwith/frink</a> </li> <li> Frink! at BlackBerry App World: <a target="_blank" href="http://appworld.blackberry.com/webstore/content/38838?lang=en">Frink!</a> </li> </ul> Mon, 18 Apr 2011 00:00:00 +0000 http://jbeckwith.com/2011/04/18/frink/ The Cause and Effect of Google's h.264 Decision <a href="http://jbeckwith.com/2011/01/20/google-h264/h264-header-2/" rel="attachment wp-att-262"><img src="/images/2011/01/h264-header1.png" alt="" title="The Cause and Effect of Google&#039;s H.264 Decision" width="383" height="166"/></a> How do the internal workings of a browser that was released only two years ago have an enormous ripple effect on the future of streaming media on the internet?
Last week Google announced on the Chromium blog that they're <a href="http://blog.chromium.org/2011/01/html-video-codec-support-in-chrome.html">dropping support for the h.264 codec</a>, in favor of the open source <a href="http://www.theora.org/">Ogg Theora</a> and <a href="http://blog.webmproject.org/">WebM/VP8</a> codecs. This is yet another snag in the messy attempt to unify the playback of video in HTML 5, as we now find the #2 and #3 most popular browsers lacking support for what is likely the most ubiquitous encoding format today. So how did we get here? <h3>The browser wars are back</h3> After years of IE 6 and Firefox being the only real browsers around, the browser wars have exploded again. For the first time since Netscape 4.7 roamed the earth, Internet Explorer has dropped below 50% market share. That leaves a lot of space for the likes of Firefox, Chrome, Safari, and Opera. Well, maybe not Opera. The interesting thing in these numbers is the dominance of Firefox, and the growth of Chrome. That leaves ~42% of the current desktop browser market with no native H.264 playback, and ~96% of the next desktop browser market that supports WebM (assuming everyone here upgrades to the latest version, of course). <p>Source: <a href="http://gs.statcounter.com/">StatCounter Global Stats - Browser Market Share</a></p> What's scary about this is the proliferation of new browsers through mobile and embedded devices.
As time goes on, iOS, Android, and RIM are going to eat more and more of those beautiful hits on our Google Analytics dashboards. While iOS is currently ahead in terms of volume, <a href="http://blog.nielsen.com/nielsenwire/online_mobile/apple-leads-smartphone-race-while-android-attracts-most-recent-customers/">Android is catching up</a>. Quickly. <a href="/images/2011/01/smartphone-os-nov2010.png"><img src="/images/2011/01/smartphone-os-nov2010.png" alt="" title="Smartphone Market 2010"></a> As the homogeneity of the browser market continues to disappear, the likelihood that all browsers will support the same native HTML 5 playback goes down quickly. So why did Google do this? <h3>Is YouTube making money yet? How about now? Now?</h3> In 2006 Google acquired YouTube for $1.65 billion in stock. The costs of running a video delivery site are sky-high, and the sale price certainly turned a lot of heads. While I've heard a lot of people question the acquisition, there are estimates that YouTube may be <a href="http://mediamemo.allthingsd.com/20100305/another-youtube-revenue-guess-1-billion-in-2011/">generating as much as $1 billion a year in revenue</a> moving forward. Don't be mistaken, YouTube is a vital piece in controlling advertising in the streaming media market. They will protect their investment, and continue to grow other revenue streams to support the costs of the platform. One of the ways I look for Google to do this is through providing a channel to charge for premium or protected content. MPEG LA makes providing content encoded using H.264 free until 2016 - provided the content is available freely to the end user. The moment you charge for the delivery of that content, you are subject to a delivery fee of 2% of revenue per title, up to a maximum $5 million cap.
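To make those numbers concrete, here is a rough sketch of the fee schedule as described in this post; the figures are the ones stated above, not a reading of the actual license terms:

```javascript
// Rough illustration of the delivery fee described above:
// 2% of revenue per title, capped at $5 million.
// Figures are as stated in the post, not licensing guidance.
function h264DeliveryFeeUSD(titleRevenueUSD) {
  // Integer math (x * 2 / 100) avoids floating-point rounding noise.
  return Math.min((titleRevenueUSD * 2) / 100, 5000000);
}
```

Under these numbers, a title earning $1 million owes $20,000, and the cap only kicks in once a title clears $250 million in revenue.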
While I don't think the $5 million is a huge deal to Google, it's an enormous deal for smaller software shops, startups, integrators, and hardware companies that want to stream and decode video from YouTube on their sites or devices. This model will eventually have a stifling effect on innovation in the streaming media market, which directly affects Google's YouTube line of business. This pretty easily explains why Google <a href="http://blog.streamingmedia.com/the_business_of_online_vi/2009/08/googles-acquisition-of-on2-not-a-big-deal-heres-why.html">acquired On2, makers of the VP8 codec, for $106.5 million</a>. In the short term, this decision isn't going to have a whole lot of impact. Even if most browsers did support HTML 5, which they don't, most of the video out there today is in H.264 format. All of the hardware devices that support decoding are using H.264. This is a decision that pays dividends 5 years down the road. Estimates have YouTube receiving as much as <a href="http://www.youtube.com/t/fact_sheet">24 hours of content per minute</a>, which is dizzying to think about. No matter how you store it, that's a ton of storage space. More and more of the content being added to the system is in high definition, so that makes the problem even bigger. Now add the fact that you need to encode your content in two different formats over the long haul, and you have a huge problem. I want to know - will Google have the stones to yank H.264 support from YouTube altogether? <h3>Where does this leave everyone?</h3> As mentioned above, I don't think this changes a lot in the short term. Here is where I see the big players ending up with the change: <ul> <li><b>Adobe</b> - Adobe comes out of this situation in great shape.
This pretty much just guarantees that Flash isn't going anywhere for a couple of years, and they've already <a href="http://blogs.adobe.com/flashplatform/2010/05/adobe_support_for_vp8.html">announced support for VP8</a>.</li> <li><b>Microsoft</b> - Microsoft, who doesn't have a horse in this race anymore, has already announced VP8 support for Internet Explorer 9.</li> <li><b>Google</b> - Google gets to protect their YouTube and VP8 investments, while promoting innovation through an open standard.</li> <li><b>Mozilla</b> - Firefox will remain relevant, given their VP8 support in version 4. I doubt the Mozilla Foundation had any intentions of paying MPEG LA $5 million.</li> <li><b>Apple</b> - If Google drops H.264 support on YouTube (which won't happen for a long time), Apple will have their hand forced into supporting WebM. Until that happens, this is a total mystery to me.</li> </ul> <br /> <br /> Overall, I think Google's decision is a good thing for content developers and innovators. Not everyone agrees. For a few dissenting opinions on this, check out: <ul> <li> <a href="http://www.zdnet.com/blog/hardware/chromes-love-of-webm-and-hatred-of-h264-has-nothing-to-do-with-youtube/11021">Chrome's love of WebM and hatred of H.264 has nothing to do with YouTube</a> </li> <li> <a href="http://arstechnica.com/web/news/2011/01/googles-dropping-h264-from-chrome-a-step-backward-for-openness.ars/">Google's dropping H.264 from Chrome a step backward for openness</a> </li> </ul> <br /> And of course, I want to know what you think. So let's start some discussions! Thu, 20 Jan 2011 00:00:00 +0000 http://jbeckwith.com/2011/01/20/google-h264/ Bootstrapping image based bookmarklets <img src="/images/2010/12/featured.png" alt="" title="featured" width="430" height="290" /> Over this holiday break I had the interesting opportunity to write a bookmarklet for a friend who runs a comic-based website.
Instead of just manipulating the currently loaded page, the bookmarklet needed to send a list of images to another site. Often when writing <a title="Wikipedia - Bookmarklets" href="http://en.wikipedia.org/wiki/Bookmarklet" target="_blank">bookmarklets</a>, we tend to only think of loading our code in the context of an HTML content page. How often do you test your bookmarklets when the browser is viewing an image? In this article I am going to go through the code I used to bootstrap my bookmarklet script, and discuss some of the interesting challenges I experienced along the way. To get started with this code, I used a fantastic <a href="http://www.smashingmagazine.com/2010/05/23/make-your-own-bookmarklets-with-jquery/" target="_blank">article</a> by <a href="http://www.smashingmagazine.com/author/tommy-iamnotagoodartist/" target="_blank">Tommy Saylor</a> of <a href="http://www.smashingmagazine.com/" target="_blank">Smashing Magazine</a>. It gave me a good start, but certainly left a lot of details out, and in my case, caused a lot of bugs. <h3>Bookmarklet Architecture</h3> That's right: we should talk about architecture before diving right into our JavaScript. When writing a bookmarklet, it's generally a good idea to keep as much code out of the actual bookmark as possible. This is where 'bootstrapping' comes into play: we will simply use our bookmark as a piece of code that actually loads the core bits of our JavaScript. There are actually two reasons why this is a good idea: <ul> <li>Different browsers have various max-lengths of bookmarks. Keep in mind that a bookmarklet is kind of an accidental feature. I think the average max length works out to around 2000 characters, but some browsers (like Internet Explorer 6) have limits as low as 508 characters.</li> <li>Users are unlikely to bother refreshing your bookmarklet. Once somebody bookmarks your code, how are they going to get updates?
It's much easier if your bookmarklet simply loads a JavaScript file from a static URL. This way we can update the code on the back end whenever we want.</li> </ul> After our bootstrapper loads the script we created, any external libraries will be loaded. For example, I used jQuery and jQuery UI for my most recent project. After the dependencies are loaded, we will then execute our main code. Another thing to keep in mind when you're building your bookmarklet is how the site behaves after the function is disabled. For example, if your bookmarklet gives all images on the site a red border, what happens when the user no longer wishes to use the bookmarklet? For this reason, I tend to create a cleanup method that allows our bookmarklet changes to be undone, and leaves the script in a state that can later be used again. <h3>The bootstrap code</h3> For the purposes of this bookmarklet, I needed to write a piece of code that would interact with a standard HTML page and its images, or interact with a page that was a single loaded image. For that reason, the first thing we need to do is determine what type of page we're dealing with. If the page is HTML, we can insert a script. If the page is an image, we need to behave differently. While I found that Firefox and WebKit both generated an HTML container to render image pages, their behavior surrounding script events of these pages was too inconsistent to be depended upon.
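The cleanup idea described above can be sketched as a simple toggle; the apply/undo bodies are placeholders for whatever your bookmarklet actually does to the page:

```javascript
// Sketch of the activate/cleanup pattern: the bootstrapper calls main()
// each time the bookmarklet is clicked, so main() toggles between applying
// the changes and undoing them. The apply/undo bodies are placeholders.
var bookmarkletActive = false;

function main() {
  if (bookmarkletActive) {
    cleanup();
    return;
  }
  bookmarkletActive = true;
  // ... apply changes here, e.g. decorate every image on the page ...
}

function cleanup() {
  // ... undo every change main() made: remove elements, unbind handlers ...
  bookmarkletActive = false;
}
```

This leaves the script in a reusable state: a second click of the bookmarklet undoes the changes, and a third applies them again.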
<img src="/images/2010/12/firebug.png" alt="" title="Image url firebug output" width="501" height="635" /> Here is a formatted example of the JavaScript behind my &lt;a&gt; tag's href (note the escaped dots in the extension regexes - an unescaped <code>/.png$/</code> would also match URLs like 'apng'): <pre><code class="language-javascript"> // // &lt;a&gt; tag href javascript // javascript:(function() { if( (document.contentType &amp;&amp; document.contentType.indexOf('image/')&gt;-1) ||/\.png$/.test(location.href) ||/\.jpg$/.test(location.href) ||/\.jpeg$/.test(location.href) ||/\.gif$/.test(location.href)) { location.href='http://jbeckwith.com/bookmarklet/'; } else if (!window.main) { document.body.appendChild(document.createElement('script')) .src='http://jbeckwith.com/my-bookmarklet.js'; } else { main(); } })(); </code></pre> After tidying up our script and adding the surrounding tag, I came up with the following rendered output: <pre><code class="language-markup"> &lt;!-- &lt;a&gt; tag example --&gt; &lt;a href=&quot;javascript:(function(){if((document.contentType&amp;&amp;document.contentType.indexOf('image/')&gt;-1)||/\.png$/.test(location.href)||/\.jpg$/.test(location.href)||/\.jpeg$/.test(location.href)||/\.gif$/.test(location.href)){location.href='http://jbeckwith.com/bookmarklet/';}else if(!window.main){document.body.appendChild(document.createElement('script')).src='http://jbeckwith.com/my-bookmarklet.js';}else{main();}})();&quot;&gt;It's a bookmarklet!&lt;/a&gt; </code></pre> <h3>Loading jQuery and jQuery UI</h3> Now that the bootstrapper is created, I am going to focus the rest of the article on the external JavaScript file that contains the meat of the code. With the script I wrote, I needed to use a good deal of visual effects. 
I am already comfortable with <a href="http://jquery.com/" target="_blank">jQuery</a>, so I chose to use it as my JavaScript framework: <pre><code class="language-javascript"> // // load the javascript libraries required for main // if (typeof jQuery == 'undefined') { // include jquery var jQ = document.createElement('script'); jQ.type = 'text/javascript'; jQ.onload=getDependencies; jQ.onreadystatechange=function() { if(this.readyState=='loaded' || this.readyState=='complete') { getDependencies(); } // end if }; jQ.src = 'http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js'; document.body.appendChild(jQ); } // end if else { getDependencies(); } // end else </code></pre> If you look at the example in the Smashing Magazine article, you will notice a couple of differences. We need to add an onreadystatechange event handler to support Internet Explorer. I found that IE inconsistently set the readyState of the script object to 'loaded' or 'complete' in various parts of the DOM, so as a rule I check for both. If you don't make this change, IE will never notify the script that jQuery is finished loading. Secondly, I have added the getDependencies() method to manage loading required scripts (in addition to jQuery). 
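One subtlety worth guarding against: in some IE versions both onload and onreadystatechange can fire for the same script element, which would invoke the callback twice. A small "once" guard avoids that. The loadScript helper below is my own sketch of the pattern, not code from the original bookmarklet:

```javascript
// Load a script exactly once, invoking `callback` a single time
// even if both onload and onreadystatechange fire (as older IE may do).
function loadScript(url, callback) {
  var script = document.createElement('script');
  var done = false;

  function onLoaded() {
    // Guard: only invoke the callback the first time through
    if (done) { return; }
    done = true;
    // Detach handlers so a stray readyState change can't re-enter
    script.onload = script.onreadystatechange = null;
    callback();
  }

  script.type = 'text/javascript';
  script.onload = onLoaded;
  script.onreadystatechange = function() {
    if (this.readyState == 'loaded' || this.readyState == 'complete') {
      onLoaded();
    }
  };
  script.src = url;
  document.body.appendChild(script);
}
```

With a helper like this, the jQuery bootstrap above collapses to a single `loadScript(url, getDependencies)` call.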
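Once you depend on more than a couple of scripts, the same idea generalizes: keep an array of script URLs and load them in order, so each library's prerequisites are in place before it runs. This is only a sketch of that approach - the function and variable names here are mine, not part of the actual bookmarklet, and the IE onreadystatechange handling is omitted for brevity:

```javascript
// The scripts this bookmarklet depends on, in the order they must load.
var dependencies = [
  'http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js',
  'http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.7/jquery-ui.min.js'
];

// Load the script at `index`, then recurse to load the next one.
// When the whole list has finished loading, hand control to main().
function loadNext(index) {
  if (index === dependencies.length) {
    main();
    return;
  }
  var script = document.createElement('script');
  script.type = 'text/javascript';
  script.onload = function () { loadNext(index + 1); };
  script.src = dependencies[index];
  document.body.appendChild(script);
}
```

Kicking everything off is then a single `loadNext(0)` call from the bootstrapper.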
Since I am depending heavily on a few jQuery UI components, I needed to load both an external JavaScript file and an external CSS file: <pre><code class="language-javascript"> // // getDependencies // function getDependencies() { // make sure jqueryUI is loaded if (!jQuery.ui) { // get the link css tag var jQCSS = document.createElement('link'); jQCSS.type = 'text/css'; jQCSS.rel= 'stylesheet'; jQCSS.href = 'http://ajax.googleapis.com/ajax/libs/jqueryui/1.8/themes/base/jquery-ui.css'; document.body.appendChild(jQCSS); // grab jquery ui var jQUI = document.createElement('script'); jQUI.type = 'text/javascript'; jQUI.src = 'http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.7/jquery-ui.min.js'; jQUI.onload=getDependencies; jQUI.onreadystatechange=function() { if(this.readyState=='loaded' || this.readyState=='complete') { getDependencies(); } // end if }; document.body.appendChild(jQUI); } // end if else { main(); } // end else } // end getDependencies function </code></pre> In this case, I'm really only waiting on jQuery and jQuery UI to load. If there were more dependent scripts, I would likely create an array of scripts that need to be loaded, and check them all for completion each time through the getDependencies method. <h3>Embedding Styles</h3> With the supporting code written, we're now ready to work on our main method. This is where bookmarklets really differ based on your task. In my case, I'm creating a visual element on the page, complete with styles to match the target site. This works pretty much as expected, with a single caveat: any style definitions you create must be at the very bottom of your appended script. Internet Explorer has a nasty habit of inconsistently handling styles and scripts when appended to the DOM. For some reason beyond my understanding, appended style definitions, whether added via script or ajax calls, only work if they are at the very bottom of the appended code. 
This is fantastically fun to figure out on your own, so hopefully I've saved you some trouble. <pre><code class="language-javascript"> // // main // function main() { // only do this the first time the bar is loaded on the page if ($(&quot;#myBar&quot;).length == 0) { // append the styles and bar var barHtml = &quot;&lt;div id='myBar'&gt;\ &lt;div id='myBar-main' class='dragOff'&gt;\ &lt;span id='myBar-thumbs'&gt;&lt;/span&gt;\ &lt;span id='myBar-text'&gt;drag images to the mainbar&lt;/span&gt;\ &lt;span id='myBar-buttons'&gt;\ &lt;a href='#' id='doneLink'&gt;done&lt;/a&gt;\ &lt;a href='#' id='cancelLink'&gt;cancel&lt;/a&gt;\ &lt;/span&gt;\ &lt;/div&gt;\ &lt;/div&gt;\ &lt;style type='text/css'&gt;\ #myBar {color: #FFFFFF; font-size: 130%; font-weight: bold; left: 0; position: fixed; text-align: center; top: 0; width: 100%; z-index: 99998; display: none; }\ #myBar-main {border-bottom: 3px solid #000000; padding: 7px 0;}\ #myBar-buttons { display: block; float: right; margin-right: 20px; }\ #myBar-buttons a,\ #myBar-buttons a:visited,\ #myBar-buttons a:link,\ #myBar-buttons a:active,\ #myBar-buttons a:hover\ { padding: 4px; font-size: 0.7em; border: 2px solid #008600; background-color: #00cb00; color: #FFFFFF; text-decoration: none; }\ #myBar-thumbs img { padding-left: 2px; padding-right: 2px; cursor: hand; }\ .my-hover { border: 3px solid #4476b8 }\ .dragOff { background-color: #4476b8; }\ .dropHover{background-color: #FF0000; border: 1px dashed #e5a8a8;}\ .dragActive {background-color: #759fd6}\ .dropHighlight{border: 1px solid #000000;}\ .dragHelper {z-index: 99999; border: 1px solid #000000;}\ &lt;/style&gt;&quot;; $(&quot;body&quot;).append(barHtml); </code></pre> This code simply creates a formatted div and adds it to the top of the page. <h3>Cleaning up the mess</h3> If you look at the generated HTML above, you'll notice that I include a cancel link. 
I like to give the user the option to cancel out of using the current bookmarklet, and even relaunch the bookmarklet without issue. So when you're done, make sure to test closing and re-launching the code. I suggest keeping all of your elements on the page, and simply hiding them from the user: <pre><code class="language-javascript"> // // myBar close event // $(&quot;#cancelLink&quot;).click(function(e) { // hide the bar $(&quot;#myBar&quot;).fadeOut(750); // remove any img classes or handlers $(&quot;img&quot;).removeClass('my-hover').unbind().draggable(&quot;destroy&quot;); // reset the thumbnail span $(&quot;#myBar-thumbs&quot;).html(''); // reset the text $(&quot;#myBar-text&quot;).html(&quot;drag images to the mybar&quot;); }); </code></pre> And for now, that's it. For the source to this project, visit my <a href="https://github.com/JustinBeckwith/Chogger-Bookmarklet" target="_blank">GitHub</a>. Tue, 28 Dec 2010 00:00:00 +0000 http://jbeckwith.com/2010/12/28/bootstrapping-image-based-bookmarklets/ http://jbeckwith.com/2010/12/28/bootstrapping-image-based-bookmarklets/ Virtual Labs <img src="/images/2010/12/lab-header.png" alt="" title="Lab Header" width="475" height="230" /> When a student takes a course in chemistry, it is often accompanied by a hands-on lab. After sitting through a lecture, and performing homework, students need to reinforce the learned concepts by doing. Why should technology education be any different? VTE Virtual Labs provide a sand-boxed environment for students to practice interacting with simple or complex ephemeral computing environments. These environments may be designed by a course instructor or instructional designer to promote learning by interacting with a real (as real as it needs to be) system. Especially useful for security research, these systems may contain full environments including domain controllers, mail servers, and web servers running various versions of Windows or Linux. 
You can even configure internal routing and switching between virtual hosts. Students can install malware, viruses, bots, hacking tools, anything they want - and when they're finished, the environment is completely disposed of, with no harm done. The system was designed at the Software Engineering Institute of Carnegie Mellon University, and students interact with it entirely over the web, in the browser. It combines an ASP.NET MVC back end with client elements including jQuery and Adobe Flex. The back end infrastructure includes a BigIP F5, NetApp SAN, Cisco ASA, and vSphere cluster. <h3>The Student Perspective</h3> <hr />After watching a presentation on a particular technical topic, the student may be asked to practice their new skill inside of a virtual lab. To prepare for the lab, VTE also provides demos and quizzes. From a course syllabus, the student will select the lab they would like to launch: <a href="/images/2010/12/course-outline.png"><img class="alignnone" title="Course Outline" src="/images/2010/12/course-outline.png" alt="" style="width:100%" /></a> This will start spooling up the required virtual machines and networking gear in vSphere. The student is presented with a structured set of tasks they are expected to perform in order to reinforce the concepts taught in the previous lecture. As each task is completed, the student's progress is saved, and they may come back at a later time to complete the lab: <a href="/images/2010/12/lab-player-1.png"><img class="alignnone" title="Lab Player - The Platform and Task View" src="/images/2010/12/lab-player-1.png" alt="" style="width:100%" /></a> Students may select any of the virtual machines from the lab platform, and engage in a VNC session that is performed using Adobe Flash. The system is capable of establishing a standard VNC socket connection over port 5900, or using a comet-style connection to proxy the data over ports 80/443. 
The system should behave just like administering any other remote system: <a href="/images/2010/12/lab-player-3.png"><img class="alignnone" title="Lab Player - Completing a Task" src="/images/2010/12/lab-player-3.png" alt="" style="width:100%" /></a> After the student completes the required steps, they are free to submit the lab, and continue on with the other work in their course. <h3>The Author Perspective</h3> <hr />Instructors, content authors, and instructional designers have the ability to author their own virtual lab environments. After creating a new lab, you have the option to start with a list of predefined templated virtual machines, similar to what Amazon EC2 provides its users: <a href="/images/2010/12/lab-author-step2.png"><img class="alignnone" title="Lab Author - Base Disks" src="/images/2010/12/lab-author-step2.png" alt="" style="width:100%" /></a> In this example, I am only going to use a single virtual machine. It's entirely acceptable to use multiple virtual machines and multiple networking devices. After all of the machines have been dragged to the stage, they need to be prepared for an initial task authoring state. All this really means is that we're going to copy the base image we started with, and make any changes needed for the specifics of this lab. Examples would include installing custom software, installing the latest patches, or creating files needed in order to complete the lab. The final state of these machines in this step will represent the first step students see when they launch the exercise: <a href="/images/2010/12/lab-author-step3.2.png"><img class="alignnone" title="Lab Authoring - Preparing the Virtual Machines" src="/images/2010/12/lab-author-step3.2.png" alt="" style="width:100%" /></a> After the author has placed all of the machines in the desired start state, the author can begin writing out the individual tasks of the lab. For longer labs, several exercises may be used. 
A single exercise should encompass a unit of work that may be completed in one sitting. Several exercises may be combined to create a lab with a broader theme. For example, if you wanted to create a lab on securing Linux, you would likely have multiple exercises including 'Installing and configuring the firewall', and 'User management'. An exercise may contain multiple tasks - a task is a single step that can be completed relatively quickly. Tasks contain a brief description of what the student is supposed to be doing in this particular step, and may contain a screen-shot of the desired result: <a href="/images/2010/12/lab-author-step4.2.png"><img class="alignnone" title="Lab Authoring - Tasks" src="/images/2010/12/lab-author-step4.2.png" alt="" style="width:100%" /></a> Upon completion of these steps, the lab can be made available to students. For more information, visit <a title="Virtual Labs" href="http://vte.cert.org/labs/" target="_blank">http://vte.cert.org/labs/</a>. Wed, 22 Dec 2010 00:00:00 +0000 http://jbeckwith.com/2010/12/22/virtual-labs/ http://jbeckwith.com/2010/12/22/virtual-labs/ Using Ant with Adobe Flex - Part 1 <a href="/images/2010/12/build-screenshot1.png"><img title="build-screenshot" src="/images/2010/12/build-screenshot1.png" alt="" width="430" height="290" /></a> Welcome to the first part in a multi-part series on building <a title="Adobe Flex" href="http://www.adobe.com/devnet/flex.html" target="_blank">Adobe Flex</a> projects using <a title="The Apache Ant Project" href="http://ant.apache.org/" target="_blank">The Apache Ant Project</a>. So why would we want to use Ant to build our Flex projects?  Flash Builder does a great job of building our ActionScript and MXML.  But it does not do a great job of integrating into our existing automated build frameworks.  For those of us who have been writing Java in an enterprise environment, Ant is common knowledge.  
If you've spent any time working with the Microsoft .NET platform, you may have been exposed to <a title="NAnt" href="http://nant.sourceforge.net/" target="_blank">NAnt</a> or <a title="MSBuild" href="http://msdn.microsoft.com/en-us/library/0k6kkbsd.aspx" target="_blank">MSBuild</a>.  The idea is that we need to have a reliable, repeatable build process that can execute outside of the context of our development environment.  For my team, this means an independent build server (in my case, a virtual machine).  An independent build server means nightly builds, and software that can run without the user at the keys. Before we get started, I think it's a good idea to run through the list of tools I'm using for this article: <ul> <li>Apache Ant - v.1.8.1</li> <li>Flash Builder - v.4.0.1</li> <li>Flex SDK - v.3.5.0, v.4.1.0</li> </ul> So let's get started! <h3>Download, Install, and Configure Ant</h3> The first step is to download Ant.  At the time of this article, you can download the binaries at http://ant.apache.org/bindownload.cgi.  The binaries are included as a *.zip file, so we need to unpack the tool in a place that makes sense.  I chose to create a directory structure that was consistent with other installed software on my system: C:\Program Files (x86)\Apache\apache-ant-1.8.1 <a href="/images/2010/11/ant-install-folder1.png"><img class="alignnone size-full wp-image-25" title="ant-install-folder" src="/images/2010/11/ant-install-folder1.png" alt="" width="536" height="360" /></a> After Ant is installed in the appropriate location for your system, you need to create/modify a few system variables in order to use it.  Start by right-clicking on 'Computer', and navigate to 'Properties'.  Click on the 'Advanced System Settings' option, and then click on the 'Environment Variables' button. The variable you need to create is ANT_HOME.  Under system variables, click on the 'New...' button.  Enter the name ANT_HOME, and enter the path you used to install Ant.  
For me, this is 'C:\Program Files (x86)\Apache\apache-ant-1.8.1': <a href="/images/2010/11/ANT_HOME1.png"><img class="alignnone size-full wp-image-40" title="Setting Environment Variables" src="/images/2010/11/ANT_HOME1.png" alt="" width="617" height="362" /></a> We also need to modify the PATH variable, which will allow us to invoke Ant from the command line.  Find the PATH variable in your system variables, and choose 'Edit...'.  At the end of the existing property value, add the full path to your Ant installation, with the bin directory appended.  For me, this is 'C:\Program Files (x86)\Apache\apache-ant-1.8.1\bin;'.  We are now ready to use Ant. <h3>Configuring The Flex SDK</h3> For the purposes of this post, I am going to assume that you've already installed Flash Builder.  In order for Ant to find the Flex SDK, we need to create an environment variable that points to the appropriate location.  Instead of creating an environment variable that points to a specific SDK directory, I like to create a variable that points to the root of all SDKs.  This allows us to choose the appropriate SDK version inside of the build file, and allows for easily building bits that use various SDK versions.  Create a new environment variable named FLEX_HOME.  Set the path to the root of your Flex SDK installations; for me this is: 'C:\Program Files (x86)\Adobe\Adobe Flash Builder 4\sdks'.  In the case of an independent build machine, you can install the Flex SDKs you need to use independent of Flash Builder. <h3>Configuring Flash Builder to Invoke Ant (optional)</h3> Generally, I invoke my Ant scripts from the command line.  If you're working from a development machine, you may choose to configure Flash Builder to invoke your Ant scripts directly from the IDE.  
To get this working, I followed the tutorial listed here: <a href="http://www.zoltanb.co.uk/Flash-Articles/fb4-standalone-how-to-install-ant-in-flash-builder-4-premium.php" target="_blank">http://www.zoltanb.co.uk/Flash-Articles/fb4-standalone-how-to-install-ant-in-flash-builder-4-premium.php</a> To enable Ant from Flash Builder, use the following steps: <ol> <li> Go to Help &gt; Install New Software</li> <li> Click on Available Software Sites</li> <li> Click on 'Add..'</li> <li> Type in: Name: Galileo - Location: <a title="http://download.eclipse.org/releases/galileo/" rel="nofollow" href="http://download.eclipse.org/releases/galileo/">http://download.eclipse.org/releases/galileo/</a></li> <li> Go back to Help &gt; Install New Software</li> <li> Select Galileo from the drop-down</li> <li> Wait until the list gets populated. It might take a long time!</li> <li> Type in 'Eclipse Java' in the search box to narrow down the search</li> <li> Select Eclipse Java Development Tools</li> <li> Click on Next</li> <li> Accept the Terms and click on Finish</li> <li> Click on Yes to restart FB4 and apply your changes</li> <li> Go to Window &gt; Other Views</li> <li> Select Ant and click OK</li> </ol> These steps will allow you to build your project in Flash Builder using Ant.  Now our environment is set up and configured.  In the next part of this series, I will go over how to write your Ant scripts. Wed, 15 Dec 2010 00:00:00 +0000 http://jbeckwith.com/2010/12/15/using-ant-with-adobe-flex-part-1/ http://jbeckwith.com/2010/12/15/using-ant-with-adobe-flex-part-1/ Virtual Training Environment The Virtual Training Environment (VTE) is a Learning Management System designed at the Software Engineering Institute of Carnegie Mellon University. This system is designed to provide students and instructors with a self-managed ecosystem, including user-generated content and aspects of social networking. 
It may be used for independent learners, synchronous instruction, or semi-synchronous instruction. Courses may be built using SCORM content, RECast presentations, podcasts, demos, quizzes, surveys, assignments, or virtual labs. I am going to do a detailed writeup on this system in the future, but until our launch, here is a gallery of screenshots: <a href="/images/2010/12/lab-section-details.png"><img src="/images/2010/12/lab-section-details.png" alt="" title="LMS Section Details" /></a> <a href="/images/2010/12/lms-recast.png"><img src="/images/2010/12/lms-recast.png" alt="" title="LMS Launch RECast"/></a> <a href="/images/2010/12/lms-notifications.png"><img src="/images/2010/12/lms-notifications.png" alt="" title="LMS Notifications" ></a> <a href="/images/2010/12/lms-enroll.png"><img src="/images/2010/12/lms-enroll.png" alt="" title="LMS Course Enrollment" ></a> <a href="/images/2010/12/lms-contact-instructors.png"><img src="/images/2010/12/lms-contact-instructors.png" alt="" title="LMS Contact Instructors" ></a> For more information, visit <a title="VTE" href="http://vte.cert.org/lms/" target="_blank">http://vte.cert.org/lms/</a> Mon, 13 Dec 2010 00:00:00 +0000 http://jbeckwith.com/2010/12/13/virtual-training-environment/ http://jbeckwith.com/2010/12/13/virtual-training-environment/ RECast <img src="/images/2010/12/recast-header.png" title="RECast- video for online education" width="255" height="90" class="alignnone size-full wp-image-303" /> RECast is a video playback system designed at the Software Engineering Institute of Carnegie Mellon University.  This system focuses on providing students with an experience as close as possible to sitting in the actual classroom. Let's face it - training is a hassle. On site classes are expensive, require travel, and require everyone to learn at the same time. RECast aims to fix this problem by providing the same material online with a unique learning experience. 
RECast combines an ASP.NET MVC back end with client elements including jQuery and Adobe Flex. <h3>The Student Perspective</h3> <hr /> Typically RECast is used with a Learning Management System. The current release is intended to integrate with the <a href="http://vte.cert.org/lms/" target="_blank">Virtual Training Environment</a> at Carnegie Mellon University. When a student enrolls in a course, they will be presented with an outline of material which they need to complete. Think of this as the course syllabus. Students can watch lectures, demos, complete virtual labs, take quizzes, or interact with any content that has been released in <a href="http://en.wikipedia.org/wiki/Sharable_Content_Object_Reference_Model" target="_blank">SCORM</a> format: <a href="/images/2010/12/lab-section-details1.png"><img src="/images/2010/12/lab-section-details1.png" alt="" title="Section Details" class="aligncenter size-full wp-image-296" /></a> Content that is authored in RECast will be launched in the RECast player. This player provides students with the best possible re-creation of the original learning environment. This means a view of the instructor, and any supplemental materials included in the course. RECast supports multiple track video, video with slide presentations, slides over audio, or just plain audio. Any media imported in the system is transcribed and indexed, allowing students to read the lecture at their own pace, and search on the content of the media. The presentation below is a typical RECast presentation: <a href="/images/2010/12/player.png"><img src="/images/2010/12/player.png" alt="" title="RECast Player" class="aligncenter size-full wp-image-298" /></a> As the student watches the lecture, they may wish to take notes. RECast supports using sticky notes, and transcript highlighting. In the case that the user wants to print a copy of the lecture, notes and transcripts will be included with any slide presentations. 
For registered users, these notes and highlights will be preserved along with their progress for the next time they launch the video: <a href="/images/2010/12/player-advanced.png"><img src="/images/2010/12/player-advanced.png" alt="" title="The player includes sticky notes and highlighting" class="aligncenter size-full wp-image-299" /></a> Now that I've reviewed the student experience, let's talk a little about how content is created. <br /> <br /> <h3>The Author Perspective</h3> <hr /> RECast is designed to allow the import of most types of media, and support most types of presentations. This means supporting standard slide presentations, voice over slides, podcasts, or screencasts. Authors in the system are given the option to choose a presentation type: <a href="/images/2010/12/new-session-info.png"><img src="/images/2010/12/new-session-info.png" alt="" title="Create New Session" /></a> After some introductory details, the author can import any lecture material that has been prepared from the course capture. This includes any videos, PowerPoint presentations, images, or audio tracks. The media is uploaded, queued, and transcoded into the appropriate format for our system. This can take a little bit of time! <a href="/images/2010/12/asset-uploader.png"><img src="/images/2010/12/asset-uploader.png" alt="" title="Asset Uploader" /></a> After all of the content has been uploaded to the system, authors can start to build their presentation. Currently RECast supports two tracks - People and Content. The 'People' track generally includes a video of the speaker, and the 'Content' track generally includes a slide presentation. As part of the import process, videos are automatically transcribed, and made available for edits by the content author: <a href="/images/2010/12/assembler.png"><img src="/images/2010/12/assembler.png" alt="" title="Session Create - Assembler" /></a> After laying out the content on a timeline, authors have the option to create multiple clips. 
Think of a clip as a subset of a session - a recording session may include 3 hours of recorded video content, but we don't really want to present all of that at once to the user. Instead, try splitting up the video into smaller consumable chunks (we aim for under 20 minutes). Now that you've created the session, it will appear under your list of available sessions: <a href="/images/2010/12/session-list.png"><img src="/images/2010/12/session-list.png" alt="" title="List of Sessions" /></a> To make the clips available to students, you need to publish them to an LMS: <a href="/images/2010/12/publishing-point.png"><img src="/images/2010/12/publishing-point.png" alt="" title="LMS Publishing Point"/></a> And that's it! For more information, visit <a title="RECast" href="http://vte.cert.org/recast/" target="_blank">http://vte.cert.org/recast/</a>. Mon, 13 Dec 2010 00:00:00 +0000 http://jbeckwith.com/2010/12/13/recast/ http://jbeckwith.com/2010/12/13/recast/