Justin Beckwith's 53 Bytes Just another technology blog. http://jbeckwith.com Under the hood of the new Azure Portal <p><img src="/images/2014/how-the-azure-portal-works/portal.png" alt="Damn, we look good." /></p> <p>So - I haven’t been doing much blogging or speaking on WebMatrix or node recently. For the last year and a half, I’ve been part of the team that’s building the new <a href="http://portal.azure.com" target="_blank">Azure portal</a> - and it’s been quite an experience. A lot has been said about the <a href="http://channel9.msdn.com/Blogs/Windows-Azure/Azure-Preview-portal" target="_blank">end to end experience</a>, the <a href="http://blogs.msdn.com/b/bharry/archive/2014/04/03/visual-studio-online-integration-in-the-azure-portal.aspx" target="_blank">integration of Visual Studio Online</a>, and even some of the <a href="http://weblogs.asp.net/scottgu/azure-new-documentdb-nosql-service-new-search-service-new-sql-alwayson-vm-template-and-more" target="_blank">new services that have been released lately</a>. All of that’s awesome, but it’s not what I want to talk about today. As much as those things are great (and I mean, who doesn’t like the design), the really interesting piece is the underlying architecture. Let’s take a look under the hood of the new Azure portal.</p> <h3 id="a-little-history">A little history</h3> <p>To understand how the new portal works, you need to know a little about the <a href="http://manage.windowsazure.com" target="_blank">current management portal</a>. When the current portal was started, there were only a handful of services in Azure. Off the top of my head, I think they were:</p> <ul> <li>Cloud Services</li> <li>Web sites</li> <li>Storage</li> <li>Cache</li> <li>CDN </li> </ul> <p>Out of the gate - this was pretty easy to manage. Most of those teams were all in the same organization at Microsoft, so coordinating releases was feasible. The portal team was a single group that was responsible for delivering the majority of the UI. There was little need to hand off responsibility for the individual experiences to the teams that wrote the services, as it was easier to keep everything in house. It is a single ASP.NET MVC application, which contains all of the CSS, JavaScript, and shared widgets used throughout the app. </p> <p><img src="/images/2014/how-the-azure-portal-works/vcurrent.png" alt="The current Azure portal, in all of its blue glory" /></p> <p>The team shipped every 3 weeks, tightly coordinating the schedule with each service team. It works … pretty much as one would expect a web application to work. </p> <p><strong><em>And then everything went crazy.</em></strong></p> <p>As we started ramping up the number of services in Azure, it became infeasible for one team to write all of the UI. The teams that owned the services were now responsible (mostly) for writing their own UI, inside of the portal source repository. This had the benefit of allowing individual teams to control their own destiny. However - it now meant that we had hundreds of developers all writing code in the same repository. A change made to the SQL Server management experience could break the Azure Web Sites experience. A change to a CSS file by a developer working on virtual machines could break the experience in storage. Coordinating the 3-week ship schedule became really hard. 
The team was tracking dependencies across multiple organizations, the underlying REST APIs that powered the experiences, and the release cadence of ~40 teams across the company that were delivering cloud services. </p> <h3 id="scaling-to-infin-services">Scaling to ∞ services</h3> <p>Given the difficulties of the engineering and ship processes with the current portal, scaling to 200 different services didn’t seem like a great idea on that infrastructure. The next time around, we took a different approach.</p> <p>The new portal is designed like an operating system. It provides a set of UI widgets, a navigation framework, data management APIs, and various other services one would expect to find with any UI framework. The portal team is responsible for building the operating system (or the shell, as we like to call it), and for the overall health of the portal. </p> <h4 id="sandboxing-in-the-browser">Sandboxing in the browser</h4> <p>To claim we’re an OS, we had to build a sandboxing model. One badly behaving application shouldn’t have the ability to bring down the whole OS. In addition to that - an application shouldn’t be able to grab data from another, except through an approved mechanism. JavaScript by default doesn’t really lend itself well to this kind of isolation - most web developers are used to picking up something like jQuery, and directly working against the DOM. This wasn’t going to work if we wanted to protect the OS against badly behaving (or even malicious) code. </p> <p>To get around this, each new service in Azure builds what we call an ‘extension’. It’s essentially an application for our operating system. It runs in isolation, inside of an IFRAME. When the portal loads, we inject some bootstrapping scripts into each IFRAME at runtime. Those scripts provide the structured API that extensions use to communicate with the shell. This API includes things like:</p> <ul> <li>Defining parts, blades, and commands</li> <li>Customizing the UI of parts</li> <li>Binding data into UI elements</li> <li>Sending notifications</li> </ul> <p>The most important aspect is that the extension developer doesn’t get to run arbitrary JavaScript in the portal’s window. They can only run script in their IFRAME - which does not project UI. If an extension starts to fault - we can shut it down before it damages the broader system. We spent some time looking into web workers - but found some reliability problems when using &gt; 20 of them at the same time. We’ll probably end up back there at some point.</p> <h4 id="distributed-continuous-deployment">Distributed continuous deployment</h4> <p>In this model, each extension is essentially its own web application. Each service hosts its own extension, which is pulled into the shell at runtime. The various UI services of Azure aren’t composed until they are loaded in the browser. This lets us do some really cool stuff. At any given point, a separate experience in the portal (for example, Azure Websites) can choose to deploy an extension that affects only their UI - completely independent of the rest of the portal. </p> <p><strong><em>IFRAMEs are not used to render the UI - that’s all done in the core frame. The IFRAME only runs the extension’s JavaScript, which communicates with the shell over window.postMessage().</em></strong></p>
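<p>To make the message-passing model concrete, here is a minimal sketch of the kind of shell↔extension channel described above, built on nothing but the raw browser APIs. The origins and message shapes here are hypothetical - the real portal SDK wraps all of this in the structured TypeScript APIs mentioned below:</p> <pre><code class="language-javascript">// In the shell (the top-level window): listen for structured messages from extension IFRAMEs.
window.addEventListener('message', function (evt) {
    // Only accept messages from origins the shell itself loaded (hypothetical origin).
    if (evt.origin !== 'https://websites-extension.contoso.com') {
        return;
    }
    var msg = evt.data; // e.g. { kind: 'defineBlade', name: 'SiteOverview' }
    // ...dispatch msg to the shell's UI layer, which renders on the extension's behalf...
});

// In the extension (inside its IFRAME): no direct DOM access to the shell,
// just structured messages posted to the parent window.
window.parent.postMessage(
    { kind: 'defineBlade', name: 'SiteOverview' },
    'https://portal.contoso.com' // hypothetical shell origin
);
</code></pre>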
<p><img src="/images/2014/how-the-azure-portal-works/extensions.png" alt="Each extension is loaded into the shell at runtime from its own back end" /></p> <p>This architecture allows us to scale to ∞ deployments in a given day. If the media services team wants to roll out a new feature on a Tuesday, but the storage team isn’t ready with updates they’re planning - that’s fine. They can each deploy their own changes as needed, without affecting the rest of the portal.</p> <h3 id="stuff-were-using">Stuff we’re using</h3> <p>Once you start poking around, you’ll notice the portal is a big single page application. That came with a lot of challenges - here are some of the technologies we’re using to solve them.</p> <h4 id="typescript">TypeScript</h4> <p>Like any single page app, the portal runs a lot of JavaScript. We have a ton of APIs that run internal to the shell, and APIs that are exposed for extension authors across Microsoft. To support our enormous codebase, and the many teams using our SDK to build portal experiences, we chose to use <a href="http://www.typescriptlang.org/" target="_blank">TypeScript</a>. </p> <ul> <li><strong>TypeScript compiles into JavaScript.</strong> There’s no runtime VM or plug-ins required.</li> <li><strong>The tooling is awesome.</strong> Visual Studio gives us (and partner teams) IntelliSense and compile time validation.</li> <li><strong>Generating interfaces for partners is really easy.</strong> We distribute d.ts files which partners use to program against our APIs. </li> <li><strong>There’s great integration for using AMD module loading.</strong> This is critical to us for productivity and performance reasons (more on this in another post).</li> <li><strong>JavaScript is valid TypeScript - so the learning curve isn’t so high.</strong> The syntax is also largely forward-looking to ES6, so we’re actually getting a jump on some new concepts.</li> </ul> <h4 id="less">Less</h4> <p>Visually, there’s a lot going on inside of the portal. To help organize our CSS and promote re-use, we’ve adopted <a href="http://lesscss.org/" target="_blank">{LESS}</a>. Less does a couple of cool things for us:</p> <ul> <li><strong>We can create variables for colors.</strong> We have a pre-defined color palette - Less makes it easy to define those up front, and re-use the same colors throughout our style sheets.</li> <li><strong>The tooling is awesome.</strong> Similar to TypeScript, Visual Studio has great Less support with full IntelliSense and validation.</li> <li><strong>It made theming easier.</strong></li> </ul> <p><img src="/images/2014/how-the-azure-portal-works/portaldark.png" alt="The dark theme of the portal was much easier to make using Less" /></p> <h4 id="knockout">Knockout</h4> <p>With the new design, we were really going for a ‘live tile’ feel. As new websites are added, or new log entries are available, we wanted to make sure it was easy for developers to update that information. Given that goal, along with the quirks of our design (extension authors can’t write JavaScript that runs in the main window), <a href="http://knockoutjs.com/" target="_blank">Knockout</a> turned out to be a fine choice. There are a few reasons we love Knockout:</p> <ul> <li><strong>Automatic refreshing of the UI</strong> - The data binding aspect of Knockout is pretty incredible. We make changes to underlying model objects in TypeScript, and the UI is updated for us.</li> <li><strong>The tooling is great.</strong> This is starting to be a recurring theme :) Visual Studio has some great tooling for Knockout data binding expressions (thanks <a href="http://madskristensen.net/" target="_blank">Mads</a>).</li> <li><strong>The binding syntax is pure</strong> - We’re not stuck putting invalid HTML in our code to support the specifics of the binding library. Everything is driven off of data-* attributes.</li> </ul>
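<p>If you haven’t used Knockout, here’s a tiny, self-contained sketch of the observable pattern that makes those live tiles tick - plain Knockout, not portal code:</p> <pre><code class="language-javascript">// Markup somewhere on the page: &lt;span data-bind="text: status"&gt;&lt;/span&gt;
var viewModel = {
    // An observable is a function: read it with status(), write it with status(value).
    status: ko.observable('Running')
};

// Wire the view model to the DOM once.
ko.applyBindings(viewModel);

// Any later change to the model updates every bound element automatically.
viewModel.status('Stopped');
</code></pre>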
<p>I’m sure our dev team could come up with 100 other reasons why we love Knockout. Especially the ineffable <a href="http://blog.stevensanderson.com/" target="_blank">Steve Sanderson</a>, who joined our dev team to work on the project. He even gave an awesome talk on the subject at NDC:</p> <div style="text-align: center"> <iframe style="margin-left: auto; margin-right: auto" src="//player.vimeo.com/video/97519516" width="100%" height="400" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe> <p><a href="http://vimeo.com/97519516">Steve Sanderson - Architecting large Single Page Applications with Knockout.js</a> from <a href="http://vimeo.com/ndcoslo">NDC Conferences</a> on <a href="https://vimeo.com">Vimeo</a>.</p> </div> <h3 id="whats-next">What’s next</h3> <p>I’m really excited about the future of the portal. Since our first release at //build, we’ve been working on new features, and responding to a lot of the <a href="http://feedback.azure.com/forums/223579-azure-preview-portal" target="_blank">customer feedback</a>. Either way - we really want to know what you think. </p> Sat, 20 Sep 2014 00:00:00 +0000 http://jbeckwith.com/2014/09/20/how-the-azure-portal-works/ Switching from WordPress to Jekyll <img src="/images/posts/wordpress-to-jekyll/jekyll.png" alt="jekyll is fun" /> Over the last few weeks, I've been slowly moving my blog from WordPress to <a href="http://jekyllrb.com/" target="_blank">Jekyll</a>. The change has been a long time coming, and so far I couldn't be happier with the results. I thought it may be interesting to make the ultimate meta post, and write a blog post about my blog. You can take a look at the <a href="https://github.com/JustinBeckwith/justinbeckwith.github.io" target="_blank">source code on GitHub</a>. <h3>What's wrong with WordPress?</h3> <p>In short? Absolutely nothing. I love WordPress. I've been using it across multiple sites for years, I worked on a <a href="http://webmatrix.com" target="_blank">product that supported WordPress development</a>, I've even blogged here about <a href="http://jbeckwith.com/2012/06/09/wordpress-and-webmatrix/">speaking at WordCamp</a>. The problem is that for me, the costs of a full-featured blog engine outweigh the benefits.</p> <img src="/images/posts/wordpress-to-jekyll/update.png" alt="Every damn time." /> <p>Let me give you an example. My post rate on this blog is atrocious. Part of the reason is that, like most people, I'm freakishly busy, but there's another nagging reason - every time I sit down to write a post, I'm burdened with maintenance costs. On the few evenings I have the time or content to write a post, it would usually go like this:</p> <pre> <em>9:00 PM</em> - Kids are in bed. Time to sit down and write that blog post. <em>9:05 PM</em> - I'm logged into the WordPress admin site. Looks like I need an update. Better install it. 
<em>9:15 PM</em> - Oh, I hit a permissions error when I try to download. I'll do it manually. <em>9:35 PM</em> - Alright, I backed up my database, downloaded the new WordPress version and did a manual upgrade. <em>9:40 PM</em> - My plugins are broken. Dammit. <em>9:45 PM</em> - Updating my plugins causes another access denied error. <em>9:50 PM</em> - I had to use putty and remember the flags for chmod. F-me. <em>10:00 PM</em> - That was fun. I'm going to bed. </pre> <p>Running a WordPress blog comes with a cost. You need to keep it updated. You need to find the right plugins, and keep those updated. You need to back up databases. You need to have a strategy for backing up changes to the theme. For someone who's posting every week, these costs may be worth it. It just isn't worth it to me.</p> <h3>Enter Jekyll</h3> Jekyll takes a bit of a different approach to serving up a blog. Instead of the traditional model of hosting an active web application with PHP/Ruby/.NET/whatevs and a database, you simply post static pages. You write your posts in one of the supported markup languages (I use good ol' HTML), and then run the jekyll build tool to generate your static HTML pages. There are around 100 posts on setting up jekyll, <a href="http://jekyllrb.com/docs/home/" target="_blank">none better than the official documentation</a> - so I won't go too deep into how jekyll works. I'll just share my setup. <h4>Importing WordPress</h4> <p>After playing around with the <a href="http://jekyllrb.com/docs/quickstart/" target="_blank">quick start guide</a>, I got started by importing the WordPress data to script out the first version of the site. The jekyll site has a great section on <a href="http://jekyllrb.com/docs/migrations/" target="_blank">migrating from other blogs</a>, so I mostly followed their steps. </p> First, I downloaded my wordpress.xml file from the WordPress admin: <img src="/images/posts/wordpress-to-jekyll/export.png" /> Next I ran the import tool: <pre><code class="language-clike">gem install hpricot ruby -rubygems -e 'require "jekyll/jekyll-import/wordpressdotcom"; JekyllImport::WordpressDotCom.process({ :source => "wordpress.xml" })' </code></pre> This downloaded all of my existing posts, and created new posts with metadata in jekyll format (woo!). What it didn't do was download all of my images. To get around that, I just connected with my FTP client and downloaded my images directory into the root of my jekyll site. <h4>Syntax Highlighting</h4> One of the plugins I had installed on my WordPress site was <a href="http://wordpress.org/plugins/syntaxhighlighter/" target="_blank">SyntaxHighlighter Evolved</a>. Jekyll comes with built-in syntax highlighting using Pygments and Liquid: <pre><code class="language-javascript">{% highlight javascript %} var logger = new (winston.Logger)({ transports: [ new (winston.transports.Console)(), new (winston.transports.Skywriter)({ account: stName, key: stKey, partition: require('os').hostname() + ':' + process.pid }) ] }); logger.info('Started wazstagram backend'); {% endhighlight %} </code></pre> That's all well and good, but the syntax highlighter wasn't quite as nice as I would like. I also didn't feel the need to lock myself into Liquid for something that can be handled on the client. I chose to use <a href="http://prismjs.com/" target="_blank">PrismJS</a>, largely because I've used it in the past with success. 
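Prism keys off the same <code>language-xxxx</code> class convention you see in the markup throughout this post, and highlights everything it finds on page load; here's a tiny sketch of the manual hook, in case you ever inject content after the fact (this is standard Prism, not anything Jekyll-specific): <pre><code class="language-javascript">// Prism automatically highlights every &lt;pre&gt;&lt;code class="language-xxxx"&gt; block at page load.
// For markup added later (e.g. via AJAX), re-run the highlighter manually:
Prism.highlightAll();
</code></pre>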
Someone even wrote a fancy jekyll plugin to <a href="http://gmurphey.com/2012/08/09/jekyll-plugin-syntax-highlighting-with-prism.html" target="_blank">generate your highlighted markup at compile time</a>, if that's your thing. <h4>--watch and livereload</h4> <p>As I worked on the site, I was making a lot of changes, rebuilding, waiting for the build to finish, and reloading the browser. To make some of this easier, I did a few things. Instead of saving my file, building, and running the server every time, you can just use the built-in watch command:</p> <pre><code class="language-clike">jekyll serve --watch</code></pre> This will run the server, watch for changes, and perform a build anytime something is modified on disk. The other side to this is refreshing the browser automatically. To accomplish that, I used <a href="http://livereload.com/" target="_blank">LiveReload</a> with the Chrome browser plugin: <img src="/images/posts/wordpress-to-jekyll/livereload.png" alt="LiveReload refreshes the browser after a change" /> The OSX version of LiveReload lets you set a delay between noticing the change on the filesystem and refreshing the browser. You really want to set that to a second or two, just to give jekyll enough time to compile the full site after the first change hits the disk. <h4>RSS Feed</h4> One of the pieces that isn't baked into jekyll is the construction of an RSS feed. The good news is that <a href="https://github.com/snaptortoise/jekyll-rss-feeds" target="_blank">someone already solved this problem</a>. This repository has a few great examples. <h4>Archive by Category</h4> One of the pieces I wanted to add was a post archive page. Building this was relatively straightforward - first you render a list of the categories used across all of the posts in your site. Next, you render the posts grouped by category: <pre><code class="language-markup">&lt;div class="container"&gt; &lt;div id="home"&gt; &lt;h1&gt;The Archive&lt;/h1&gt; &lt;div class="hrbar"&gt;&nbsp;&lt;/div&gt; &lt;div class="categories"&gt; {% for category in site.categories %} &lt;span&gt;&lt;a href="#{{ category[0] }}"&gt;{{ category[0] }} ({{ category[1].size }})&lt;/a&gt;&lt;/span&gt; &lt;span class="dot"&gt;&nbsp;&lt;/span&gt; {% endfor %} &lt;/div&gt; &lt;div class="hrbar"&gt;&nbsp;&lt;/div&gt; &lt;div class="all-posts"&gt; {% for category in site.categories %} &lt;div&gt; &lt;a name="{{category[0]}}"&gt;&lt;/a&gt; &lt;h3&gt;{{ category[0] }}&lt;/h3&gt; &lt;ul class="posts"&gt; {% for post in category[1] %} &lt;li&gt;&lt;span&gt;{{ post.date | date_to_string }}&lt;/span&gt; &raquo; &lt;a href="{{ post.url }}"&gt;{{ post.title }}&lt;/a&gt;&lt;/li&gt; {% endfor %} &lt;/ul&gt; &lt;/div&gt; {% endfor %} &lt;/div&gt; &lt;/div&gt; &lt;/div&gt;</code></pre> For the full example, <a href="https://github.com/JustinBeckwith/justinbeckwith.github.io/blob/master/archive.html" target="_blank">check it out on GitHub</a>. <h4>Disqus</h4> I used <a href="http://disqus.com/" target="_blank">Disqus</a> for my commenting and discussion engine. This probably isn't news to anyone, but Disqus is pretty awesome. Without a backend database to power user sign-ups and comments, it's easier to just hand this over to a third-party service (and it's free!). One tip though - Disqus has a 'discovery' feature turned on by default. It shows a bunch of links I don't want, and muddies up the comments. 
Here's where you can turn it off: <img src="/images/posts/wordpress-to-jekyll/disqus.png" alt="turn off discovery under settings->discovery->Just comments" />
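<p>For anyone wiring this up by hand: embedding Disqus on a static page is just a placeholder <code>div</code> plus an async script include. This is roughly the universal embed code Disqus hands you - the shortname below is hypothetical; you'd use your own:</p> <pre><code class="language-javascript">// Goes in the post template, next to: &lt;div id="disqus_thread"&gt;&lt;/div&gt;
var disqus_shortname = 'myblog'; // hypothetical - your site's Disqus shortname

(function () {
    // Inject the Disqus embed script asynchronously so it doesn't block rendering.
    var dsq = document.createElement('script');
    dsq.type = 'text/javascript';
    dsq.async = true;
    dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
    (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
})();
</code></pre>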
<h4>Backups</h4> With no database, backing up means just backing up the files. Good news everyone! I'm just using good ol' <a href="https://github.com/JustinBeckwith/justinbeckwith.github.io" target="_blank">GitHub and a git repository</a> to track changes and store my files. I keep local files in Dropbox just in case. <h4>Hosting the bits</h4> <p>The coolest part of using Jekyll is that you can <a href="https://help.github.com/articles/using-jekyll-with-pages" target="_blank">host your site on GitHub - for free</a>. They build the site when you push changes, and even let you set up a <a href="https://help.github.com/articles/setting-up-a-custom-domain-with-pages" target="_blank">custom domain</a>.</p> <h4>What's Next?</h4> <p>Now that I've got the basic workflow for the site rolling (hopefully with lower maintenance costs), the next piece I'll probably tackle is performance. Between Bootstrap, jQuery, and Prism, I'm pushing a lot of JavaScript and CSS that should be bundled and minified. Until then, I'm just going to keep enjoying writing my posts in SublimeText and publishing with a git push. Let me know what you think!</p> Wed, 17 Jul 2013 00:00:00 +0000 http://jbeckwith.com/2013/07/17/wordpress-to-jekyll/ Scalable realtime services with Node.js, Socket.IO and Windows Azure <p><a href="http://wazstagram.azurewebsites.net/"><img alt="WAZSTAGRAM" src="/images/2013/01/waz-screenshot.png" title="View the Demo"/></a> </p> <p><a href="http://wazstagram.azurewebsites.net/">Wazstagram</a> is a fun experiment with node.js on <a href="http://www.windowsazure.com/en-us/develop/nodejs/">Windows Azure</a> and the <a href="http://instagram.com/developer/realtime/">Instagram Realtime API</a>. The project uses various services in Windows Azure to create a scalable window into Instagram traffic across multiple cities. </p> <ul> <li><a href="http://wazstagram.azurewebsites.net/">View the demo on Windows Azure</a></li> <li><a href="https://github.com/JustinBeckwith/wazstagram/">View the code on GitHub</a></li> </ul> The code I used to build <a href="https://github.com/JustinBeckwith/wazstagram/" target="_blank">WAZSTAGRAM</a> is under an <a href="https://github.com/JustinBeckwith/wazstagram/blob/master/LICENSE.md" target="_blank">MIT license</a>, so feel free to learn and re-use the code. <h3>How does it work</h3> <p>The application is written in node.js, using cloud services in Windows Azure. A scalable set of backend nodes receives messages from the Instagram Realtime API. Those messages are sent to the front end nodes using <a href="http://msdn.microsoft.com/en-us/library/hh690929.aspx">Windows Azure Service Bus</a>. The front end nodes are running node.js with <a href="http://expressjs.com/">express</a> and <a href="http://socket.io/">socket.io</a>. </p> <p> <a href="/images/2013/01/architecture.png"> <img alt="WAZSTAGRAM Architecture" title="WAZSTAGRAM Architecture" src="/images/2013/01/architecture.png"/> </a> </p> <h3>Websites, and Virtual Machines, and Cloud Services, Oh My!</h3> <p>One of the first things you need to grok when using Windows Azure is the different options you have for your runtimes. 
Windows Azure supports three distinct models, which can be mixed and matched depending on what you&#39;re trying to accomplish: </p> <h5>Websites</h5> <p><a href="http://www.windowsazure.com/en-us/home/scenarios/web-sites/">Websites</a> in Windows Azure match the traditional PaaS model you&#39;d find in something like Heroku or AppHarbor. They work with node.js, ASP.NET, and PHP. There is a free tier. You can use git to deploy, and they offer various scaling options. For an example of a real time node.js site that works well in the Website model, check out my <a href="https://github.com/JustinBeckwith/TwitterMap">TwitterMap</a> example. I chose not to use Websites for this project because a.) websockets are currently not supported in our Website model, and b.) I want to be able to scale my back end processes independently of the front end processes. If you don&#39;t have crazy enterprise architecture or scaling needs, Websites work great. </p> <h5>Virtual Machines</h5> <p>The <a href="http://www.windowsazure.com/en-us/home/scenarios/virtual-machines/">Virtual Machine</a> story in Windows Azure is pretty consistent with IaaS offerings in other clouds. You stand up a VM, you install an OS you like (yes, <a href="http://www.windowsazure.com/en-us/manage/linux/">we support linux</a>), and you take on the management of the host. This didn&#39;t sound like a lot of fun to me because I can&#39;t be trusted to install patches on my OS, and do other maintenance things. </p> <h5>Cloud Services</h5> <p><a href="http://www.windowsazure.com/en-us/manage/services/cloud-services/">Cloud Services</a> in Windows Azure are kind of a different animal. They provide a full Virtual Machine that is stateless - that means you never know when the VM is going to go away, and a new one will appear in its place. It&#39;s interesting because it means you have to architect your app to not depend on stateful system resources pretty much from the start. It&#39;s great for new apps that you&#39;re writing to be scalable. The best part is that the OS is patched automagically, so there&#39;s no OS maintenance. I chose this model because a.) we have some large scale needs, b.) we want separation of concerns with our worker nodes and web nodes, and c.) I can&#39;t be bothered to maintain my own VMs. </p> <h3>Getting Started</h3> <p>After picking your runtime model, the next thing you&#39;ll need is some tools. Before we move ahead, you&#39;ll need to <a href="http://www.windowsazure.com/en-us/pricing/free-trial/">sign up for an account</a>. Next, get the command line tools. Windows Azure is a little different because we support two types of command line tools: </p> <ul><li><a href="http://www.windowsazure.com/en-us/develop/nodejs/how-to-guides/powershell-cmdlets/">PowerShell Cmdlets</a>: these are great if you&#39;re on Windows and dig the PowerShell thing. </li><li><a href="http://www.windowsazure.com/en-us/manage/linux/other-resources/command-line-tools/">X-Platform CLI</a>: this tool is interesting because it&#39;s written in node, and is available as a node module. You can actually just <code>npm install -g azure-cli</code> and start using this right away. It looks awesome, though I wish they had kept the flames that were in the first version. </li></ul> <p> <a href="/images/2013/01/cli.png"> <img alt="X-Plat CLI" title="X-Plat CLI" src="/images/2013/01/cli.png" /> </a> </p> <p>For this project, I chose to use the PowerShell cmdlets. 
I went down this path because the Cloud Services stuff is not currently supported by the X-Platform CLI (I&#39;m hoping this changes). If you&#39;re on MacOS and want to use Cloud Services, you should check out <a href="https://github.com/tjanczuk/git-azure">git-azure</a>. To bootstrap the project, I pretty much followed the <a href="http://www.windowsazure.com/en-us/develop/nodejs/tutorials/app-using-socketio/">&#39;Build a Node.js Chat Application with Socket.IO on a Windows Azure Cloud Service&#39; tutorial</a>. This will get all of your scaffolding set up. </p> <h3>My node.js editor - WebMatrix 2</h3> <p>After using the PowerShell cmdlets to scaffold my site, I used <a href="http://www.microsoft.com/web/webmatrix/">Microsoft WebMatrix</a> to do the majority of the work. I am very biased towards WebMatrix, as I helped <a href="http://jbeckwith.com/2012/06/07/node-js-meet-webmatrix-2/">build the node.js experience</a> in it last year. In a nutshell, it&#39;s rad because it has a lot of good editors, and just works. Oh, and it has IntelliSense for everything: </p> <p> <a href="/images/2013/01/webmatrix.png"> <img alt="I &lt;3 WebMatrix" title="WebMatrix FTW" src="/images/2013/01/webmatrix.png" /> </a> </p> <h4>Install the Windows Azure NPM module</h4> <p>The <a href="https://npmjs.org/package/azure">azure npm module</a> provides the basis for all of the Windows Azure stuff we&#39;re going to do with node.js. It includes all of the support for using blobs, tables, service bus, and service management. It&#39;s even <a href="https://github.com/WindowsAzure/azure-sdk-for-node/">open source</a>. To get it, you just need to cd into the directory you&#39;re using and run this command: </p> <p><code>npm install azure</code> </p> <p>After you have the azure module, you&#39;re ready to rock. </p> <h3>The Backend</h3> <p>The <a href="https://github.com/JustinBeckwith/wazstagram/tree/master/backend">backend</a> part of this project is a worker role that accepts HTTP POST messages from the Instagram API. The idea is that their API batches messages, and sends them to an endpoint you define. Here are <a href="http://instagram.com/developer/realtime/">some details</a> on how their API works. I chose to use <a href="http://expressjs.com/">express</a> to build out the backend routes, because it&#39;s convenient. There are a few pieces to the backend that are interesting: </p> <ol> <li><h5>Use <a href="https://github.com/flatiron/nconf">nconf</a> to store secrets. Look at the .gitignore.</h5> If you&#39;re going to build a site like this, you are going to need to store a few secrets. The backend includes things like the Instagram API key, my Windows Azure Storage account key, and my Service Bus keys. I create a keys.json file to store this, though you could add it to the environment. I include an example of this file with the project. <strong>DO NOT CHECK THIS FILE INTO GITHUB!</strong> Seriously, <a href="https://github.com/blog/1390-secrets-in-the-code" target="_blank">don&#39;t do that</a>. Also, pay <strong>close attention</strong> to my <a href="https://github.com/JustinBeckwith/wazstagram/blob/master/.gitignore" target="_blank">.gitignore file</a>. You don&#39;t want to check in any *.cspkg or *.csx files, as they contain archived versions of your site that are generated while running the emulator and deploying. Those archives contain your keys.json file. 
That having been said - nconf makes it really easy to read stuff from your config: <pre><code class="language-javascript"> // read in keys and secrets nconf.argv().env().file('keys.json'); var sbNamespace = nconf.get('AZURE_SERVICEBUS_NAMESPACE'); var sbKey = nconf.get('AZURE_SERVICEBUS_ACCESS_KEY'); var stName = nconf.get('AZURE_STORAGE_NAME'); var stKey = nconf.get('AZURE_STORAGE_KEY'); </code></pre> </li> <li><h5>Use <a href="https://github.com/flatiron/winston">winston</a> and <a href="https://github.com/pofallon/winston-skywriter">winston-skywriter</a> for logging.</h5> The cloud presents some challenges at times. Like <em>how do I get console output</em> when something goes wrong. These days, I just use winston from the get-go in every node.js project I start. It&#39;s awesome because it lets you pick where your console output and logging gets stored. I like to just pipe the output to console at dev time, and write to <a href="http://www.windowsazure.com/en-us/develop/nodejs/how-to-guides/table-services/" target="_blank">Table Storage</a> in production. Here&#39;s how you set it up: <pre><code class="language-javascript"> // set up a single instance of a winston logger, writing to azure table storage var logger = new (winston.Logger)({ transports: [ new (winston.transports.Console)(), new (winston.transports.Skywriter)({ account: stName, key: stKey, partition: require('os').hostname() + ':' + process.pid }) ] }); logger.info('Started wazstagram backend'); </code></pre> </li> <li><h5>Use <a href="http://msdn.microsoft.com/en-us/library/ee732537.aspx">Service Bus</a> - it&#39;s pub/sub (+) a basket of kittens.</h5> <p> <a href="http://msdn.microsoft.com/en-us/library/ee732537.aspx" target="_blank">Service Bus</a> is Windows Azure's Swiss Army knife of messaging. I usually use it in the places where I would otherwise use the PubSub features of Redis. It does all kinds of neat things like <a href="http://www.windowsazure.com/en-us/develop/net/how-to-guides/service-bus-topics/" target="_blank">PubSub</a>, <a href="http://msdn.microsoft.com/en-us/library/windowsazure/hh767287.aspx" target="_blank">Durable Queues</a>, and more recently <a href="https://channel9.msdn.com/Blogs/Subscribe/Service-Bus-Notification-Hubs-Code-Walkthrough-Windows-8-Edition" target="_blank">Notification Hubs</a>. I use the topic subscription model to create a single channel for messages. Each worker node publishes messages to a single topic. Each web node creates a subscription to that topic, and polls for messages. There's great <a href="http://www.windowsazure.com/en-us/develop/nodejs/how-to-guides/service-bus-topics/" target="_blank">support for Service Bus</a> in the <a href="https://github.com/WindowsAzure/azure-sdk-for-node" target="_blank">Windows Azure Node.js SDK</a>. </p> <p> To get the basic implementation set up, just follow the <a href="http://www.windowsazure.com/en-us/develop/nodejs/how-to-guides/service-bus-topics/" target="_blank">Service Bus Node.js guide</a>. The interesting part of my use of Service Bus is the subscription cleanup. Each new front end node that connects to the topic creates its own subscription. As we scale out and add a new front end node, it creates another subscription. This is a durable object in Service Bus that hangs around after the connection from one end goes away (this is a feature). 
To make sure you don&#39;t leave random subscriptions lying around, you need to do a little cleanup: </p> <pre><code class="language-javascript"> function cleanUpSubscriptions() { logger.info('cleaning up subscriptions...'); serviceBusService.listSubscriptions(topicName, function (error, subs, response) { if (!error) { logger.info('found ' + subs.length + ' subscriptions'); for (var i = 0; i &lt; subs.length; i++) { // if there are more than 100 messages on the subscription, assume the edge node is down if (subs[i].MessageCount &gt; 100) { logger.info('deleting subscription ' + subs[i].SubscriptionName); serviceBusService.deleteSubscription(topicName, subs[i].SubscriptionName, function (error, response) { if (error) { logger.error('error deleting subscription', error); } }); } } } else { logger.error('error getting topic subscriptions', error); } setTimeout(cleanUpSubscriptions, 60000); }); } </code></pre> </li> <li><h5>The <a href="https://github.com/JustinBeckwith/wazstagram/blob/master/backend/routes/home.js">NewImage endpoint</a></h5> All of the stuff above is great, but it doesn't cover what happens when the Instagram API actually hits our endpoint. The route that accepts this request gets metadata for each image, and pushes it through the Service Bus topic: <pre><code class="language-javascript"> serviceBusService.sendTopicMessage('wazages', message, function (error) { if (error) { logger.error('error sending message to topic!', error); } else { logger.info('message sent!'); } }) </code></pre> </li> </ol> <h3>The Frontend</h3> <p>The <a href="https://github.com/JustinBeckwith/wazstagram/tree/master/frontend">frontend</a> part of this project is (despite my &#39;web node&#39; reference) a worker role that accepts the incoming traffic from end users on the site. I chose to use worker roles because I wanted to take advantage of Web Sockets. At the moment, Cloud Services Web Roles do not provide that functionality. I could stand up a VM with Windows Server 8 and IIS 8, but see my aforementioned anxiety about managing my own VMs. The worker roles use <a href="http://socket.io/">socket.io</a> and <a href="http://expressjs.com">express</a> to provide the web site experience. The front end uses the same NPM modules as the backend: <a href="https://github.com/visionmedia/express/">express</a>, <a href="https://github.com/flatiron/winston">winston</a>, <a href="https://github.com/pofallon/winston-skywriter">winston-skywriter</a>, <a href="https://github.com/flatiron/nconf">nconf</a>, and <a href="https://github.com/WindowsAzure/azure-sdk-for-node">azure</a>. In addition to that, it uses <a href="http://socket.io/">socket.io</a> and <a href="https://github.com/visionmedia/ejs">ejs</a> to handle the client stuff. There are a few pieces to the frontend that are interesting: </p> <ol> <li><h5>Setting up socket.io</h5> Socket.io provides the web socket (or xhr) interface that we&#39;re going to use to stream images to the client. When a user initially visits the page, they are going to send a <code>setCity</code> call, which lets us know the city to which they want to subscribe (by default all <a href="https://github.com/JustinBeckwith/wazstagram/blob/master/backend/cities.json" target="_blank">cities in the system</a> are returned). From there, the user will be sent an initial blast of images that are cached on the server. Otherwise, you wouldn&#39;t see images right away: <pre><code class="language-javascript"> // set up socket.io to establish a new connection with each client var io = require('socket.io').listen(server); io.sockets.on('connection', function (socket) { socket.on('setCity', function (data) { logger.info('new connection: ' + data.city); if (picCache[data.city]) { for (var i = 0; i &lt; picCache[data.city].length; i++) { socket.emit('newPic', picCache[data.city][i]); } } socket.join(data.city); }); }); </code></pre>
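The client side of this conversation is the mirror image; here&#39;s a minimal sketch of what the page does (the city name is hypothetical - see <a href="https://github.com/JustinBeckwith/wazstagram/blob/master/backend/cities.json" target="_blank">cities.json</a> for the real list): <pre><code class="language-javascript"> // client-side (socket.io 0.9-era API): subscribe to a city, render pics as they stream in
var socket = io.connect();
socket.emit('setCity', { city: 'newyork' }); // hypothetical city id
socket.on('newPic', function (pic) {
    // ...append the new image to the page...
});
</code></pre>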
</li> <li><h5>Creating a Service Bus Subscription</h5> To receive messages from the worker nodes, we need to create a single subscription for each front end node process. This is going to create the subscription, and start listening for messages: <pre><code class="language-javascript"> // create the initial subscription to get events from service bus serviceBusService.createSubscription(topicName, subscriptionId, function (error) { if (error) { logger.error('error creating subscription', error); throw error; } else { getFromTheBus(); } }); </code></pre> </li><li><h5>Moving data between Service Bus and Socket.IO</h5> As data comes in through the service bus subscription, you need to pipe it up to the appropriate connected clients. Pay special attention to <code>io.sockets.in(body.city)</code> - when the user joined the page, they selected a city. This call grabs all users subscribed to that city. The other <strong>important thing to notice</strong> here is the way <code>getFromTheBus</code> calls itself in a loop. There&#39;s currently no way to say &quot;just raise an event when there&#39;s data&quot; with the Service Bus Node.js implementation, so you need to use this model. <pre><code class="language-javascript"> function getFromTheBus() { try { serviceBusService.receiveSubscriptionMessage(topicName, subscriptionId, { timeoutIntervalInS: 5 }, function (error, message) { if (error) { if (error == &quot;No messages to receive&quot;) { logger.info('no messages...'); } else { logger.error('error receiving subscription message', error) } } else { var body = JSON.parse(message.body); logger.info('new pic published from: ' + body.city); cachePic(body.pic, body.city); io.sockets.in(body.city).emit('newPic', body.pic); io.sockets.in(universe).emit('newPic', body.pic); } getFromTheBus(); }); } catch (e) { // if something goes wrong, wait a little and reconnect logger.error('error getting data from service bus' + e); setTimeout(getFromTheBus, 1000); } } </code></pre> </li></ol> <h3>Learning</h3> <p>The whole point of writing this code for me was to explore building performant apps on top of a rate-limited API. Hopefully this model can be used to responsibly accept data from any API, and fan it out to a large number of clients connected to a single service. If you have any ideas on how to make this app better, please let me know, or submit a PR! </p> <h3>Questions?</h3> <p>If you have any questions, feel free to submit an issue here, or find me <a href="https://twitter.com/JustinBeckwith" target="_blank">@JustinBeckwith</a> </p> Wed, 30 Jan 2013 00:00:00 +0000 http://jbeckwith.com/2013/01/30/building-scalable-realtime-services-with-node-js-socket-io-and-windows-azure/ 5 steps to a better Windows command line <a href="/images/2012/11/header.png"> <img src="/images/2012/11/header.png"> </a> I spend a lot of time at the command line. 
As someone who likes to code on OSX and Windows, I've always been annoyed by the Windows command line experience. Do I use cmd, or PowerShell? Where are my tabs? What about package management? What about little frivolous things like <em>being able to resize the window</em>? I've finally got my Windows command line experience running smoothly, and wanted to share my setup. Here are my 5 steps to a Windows command line that doesn't suck. <h3>1. Use Console2 or ConEmu</h3> The first place to start is the actual console application. Scott Hanselman wrote an <a href="http://www.hanselman.com/blog/Console2ABetterWindowsCommandPrompt.aspx" target="_blank">excellent blog post</a> on setting up <a href="http://sourceforge.net/projects/console/" target="_blank">Console2</a>, and I've been using it ever since. It adds tabs, a resizable window, transparency, and the ability to run multiple shells. I choose to run PowerShell (you should too, keep listening). There are <a href="http://www.hanselman.com/blog/ConEmuTheWindowsTerminalConsolePromptWeveBeenWaitingFor.aspx" target="_blank">other options</a> out there, but I've really grown to love Console2. <a href="/images/2012/11/console2.png"> <img src="/images/2012/11/console2.png" alt="Console2"> </a> <h3>2. Use PowerShell</h3> I won't spend a ton of time evangelizing PowerShell. There are a few good reasons to dump cmd.exe and move over: <ul> <li><b>Most of the things you do in cmd will just work.</b> There are obviously some exceptions, but for the most part, all of the things I want to do in cmd are easily done in PowerShell. </li> <li><b><a href="http://blogs.msdn.com/b/powershell/archive/2008/01/31/tab-completion.aspx" target="_blank">Tab Completion</a> and <a href="http://technet.microsoft.com/en-us/library/ee176848.aspx" target="_blank">Get-Help</a> are awesome.</b> PowerShell does a great job of making things discoverable as you learn. <li><b>It's a sane scripting tool.</b> If you've ever tried to do anything significant in a batch script, I'm sorry. You can even create your <a href="http://community.bartdesmet.net/blogs/bart/archive/2008/02/03/easy-windows-powershell-cmdlet-development-and-debugging.aspx" target="_blank">own modules and cmdlets</a> using managed code, if that's your thing.</li> <li><b>Microsoft is releasing a lot of stuff built on PowerShell.</b> Most of the new stuff we release is going to have great PowerShell support, including <a href="http://msdn.microsoft.com/en-us/library/windowsazure/jj156055.aspx" target="_blank">Windows Azure</a>. </li> <li><b>It's a growing community.</b> Sites like <a href="http://powershell.org/" target="_blank">PowerShell.org</a> and <a href="http://psget.net/" target="_blank">PsGet</a> provide a great place to ask questions and look at work others have done. </ul> Now that I've sold you, there are a few things throughout this post that make using PowerShell a bit easier. To use this stuff, you're going to want to set an execution policy in PowerShell that lets you run custom scripts. By default, the execution of PS scripts is disabled, but it's kind of necessary to do anything interesting. I lead a wild and dangerous life, so I use an unrestricted policy. To set your policy, first run Console2 (or PowerShell) as an administrator: <a href="/images/2012/11/console2-as-administrator.png"> <img src="/images/2012/11/console2-as-administrator.png"> </a> Next, use the Set-ExecutionPolicy command. 
Note: this means any unsigned script can be run on your system if you execute it; many people choose to use RemoteSigned instead. Here is the official doc on Set-ExecutionPolicy. <pre><code class="language-clike"> Set-ExecutionPolicy Unrestricted </code></pre> <a href="/images/2012/11/set-executionpolicy.png"> <img src="/images/2012/11/set-executionpolicy.png"> </a> Now you're ready to start doing something interesting. <h3>3. Use the Chocolatey package manager</h3> Spending a lot of time in Ubuntu and OSX, I got really used to <code>sudo apt-get install &lt;package&gt;</code> and <code><a href="http://mxcl.github.com/homebrew/" target="_blank">brew</a> install &lt;package&gt;</code>. The closest I've found to that experience on Windows is the <a href="http://chocolatey.org/" target="_blank">Chocolatey package manager</a>. Chocolatey has all of the packages you would expect to find on a developer's machine: <a href="/images/2012/11/choc-list.png"> <img src="/images/2012/11/choc-list.png" alt="list packages"> </a> To install Chocolatey, just run cmd.exe and run the following command (minus the <code>C:\&gt;</code> part): <pre><code class="language-clike"> C:\&gt; @powershell -NoProfile -ExecutionPolicy unrestricted -Command &quot;iex ((new-object net.webclient).DownloadString('http://bit.ly/psChocInstall'))&quot; &amp;&amp; SET PATH=%PATH%;%systemdrive%\chocolatey\bin </code></pre> And you're ready to rock. If you want to install something like 7zip, you can use the <code>cinst</code> command: <pre><code class="language-clike"> cinst 7zip </code></pre> <a href="/images/2012/11/7zip-install.png"> <img src="/images/2012/11/7zip-install.png" alt="install 7zip"> </a> <h3>4. Use an alias for SublimeText</h3> This seems kind of trivial, but one of the things I've really missed on Windows is the default shortcut to launch <a href="http://www.sublimetext.com/" target="_blank">SublimeText</a>, <a href="http://www.sublimetext.com/docs/2/osx_command_line.html" target="_blank">subl</a>. I use my PowerShell profile to create an alias to SublimeText.exe, which allows me to <code>subl file.txt</code> or <code>subl .</code> just like I would from OSX. <a href="http://www.howtogeek.com/50236/customizing-your-powershell-profile/" target="_blank">This article</a> gives a basic overview of how to customize your PowerShell profile; it's really easy to follow, so I won't go into re-creating the steps. <a href="/images/2012/11/create-profile.png"> <img src="/images/2012/11/create-profile.png"> </a> After you've got your PowerShell profile created, edit the script, and add this line: <pre><code class="language-clike"> Set-Alias subl 'C:\Program Files\Sublime Text 2\sublime_text.exe' </code></pre> Save your profile, and spin up a new PowerShell tab in Console2 to reload the session. Go to a directory that contains some code, and try to open it: <pre><code class="language-clike"> subl . </code></pre> This will load the current directory as a project in SublimeText from the command line. Small thing, but a nice thing. <h3>5. Use PsGet and Posh-Git</h3> One of the nice things about using PowerShell over cmd is the community that's starting to emerge. There are a ton of really useful tools and cmdlets that others have already written, and the easiest way to get at most of these is to use <a href="http://psget.net/" target="_blank">PsGet</a>. PsGet provides a super easy way to install PowerShell modules that extend the basic functionality of the shell, and provide other useful libraries. 
To install PsGet, run the following command from a PowerShell console: <pre><code class="language-clike"> (new-object Net.WebClient).DownloadString(&quot;http://psget.net/GetPsGet.ps1&quot;) | iex </code></pre> If you get an error complaining about executing scripts, you need to go back to #2. Immediately, we can start using the <code>Install-Module</code> command to add functionality to our console. <a href="/images/2012/11/psget.png"> <img src="/images/2012/11/psget.png" alt="Install PsGet"> </a> The first module that led me to PsGet is a package that adds status and tab completion to git. Phil Haack did a <a href="http://haacked.com/archive/2011/12/13/better-git-with-powershell.aspx" target="_blank">great write-up</a> on setting up <a href="https://github.com/dahlbyk/posh-git/" target="_blank">posh-git</a>, and I've since discovered a few other <a href="http://pscx.codeplex.com" target="_blank">cool things</a> in the PsGet gallery. Installing Posh-Git is pretty straightforward: <a href="/images/2012/11/install-posh-git.png"> <img src="/images/2012/11/install-posh-git.png" alt="Install Posh-Git"> </a> The first nice thing here is that I now have command completion. As I type <code>git sta</code> and hit &lt;tab&gt;, it will be completed to <code>git status</code>. Some tools like <a href="https://github.com/MSOpenTech/posh-npm" target="_blank">posh-npm</a> will even search the npm registry for packages using tab completion. The other cool thing you get with this module is the status of your repository right in the prompt: <a href="/images/2012/11/posh-git-status.png"> <img src="/images/2012/11/posh-git-status.png" alt="posh git"> </a> <h4>Wrapping up</h4> These are just the ways I know how to make the command line experience better. If anyone else has some tips, I'd love to hear them! Wed, 28 Nov 2012 00:00:00 +0000 http://jbeckwith.com/2012/11/28/5-steps-to-a-better-windows-command-line/ WebMatrix and Node Package Manager <img src="/images/2012/09/node_128.png" alt="NPM and WebMatrix" /> A few months ago, we introduced the new <a href="http://jbeckwith.com/2012/06/07/node-js-meet-webmatrix-2/" target="_blank">node.js features we've added to WebMatrix 2</a>. One of the missing pieces from that experience was a way to manage <a href="https://npmjs.org/" target="_blank">NPM</a> (Node Package Manager) from within the IDE. This week we shipped the final release of WebMatrix 2, and one of the fun things that comes with it is a new extension for managing NPM. For a more complete overview of WebMatrix 2, check out <a href="http://vishaljoshi.blogspot.com/2012/06/announcing-webmatrix-2-rc.html" target="_blank">Vishal Joshi's blog post</a>. If you want to skip all of this and just download the bits, here you go: <p><a href="http://go.microsoft.com/?linkid=9809776" target="_blank"><img style="display: inline" title="image" alt="image" src="http://lh5.ggpht.com/-lm1GuUL20p8/T9HReoCZk7I/AAAAAAAABU4/uO7oVvNCGPQ/image%25255B4%25255D.png?imgmax=800" width="170" height="45"></a></p> <h3>Installing the Extension</h3> The NPM extension can be installed using the extension gallery inside of WebMatrix. 
To get started, go ahead and create a new node site with express using the built-in template: <a href="/images/2012/09/template.png"> <img src="/images/2012/09/template.png" alt="Create a new express site" /> </a> After you create the site, click on the 'Extensions' button in the ribbon: <a href="/images/2012/09/extension-gallery-icon.png"> <img src="/images/2012/09/extension-gallery-icon.png" alt="WebMatrix Extension Gallery" /> </a> Search for 'NPM', and click through the wizard to finish installing the extension: <a href="/images/2012/09/npm-extension.png"> <img src="/images/2012/09/npm-extension.png" alt="Install the NPM Gallery Extension" /> </a> Now when you navigate to the files workspace, you should see the new NPM icon in the ribbon. <h3>Managing Packages</h3> While you're working with node.js sites, the icon should always show up. To get started, click on the new icon in the ribbon: <a href="/images/2012/09/npm-icon.png"> <img src="/images/2012/09/npm-icon.png" alt="NPM Icon in the ribbon" /> </a> This will load a window very similar to the other galleries in WebMatrix. From here you can search for packages, install, uninstall, update - any of the basic tasks you're likely to do day to day with npm. <a href="/images/2012/09/npm-dialog.png"> <img src="/images/2012/09/npm-dialog.png" alt="NPM Gallery" class="alignnone" /> </a> When you open up a new site, we also check your package.json to see if you're missing any dependencies: <a href="/images/2012/09/missing-packages.png"> <img src="/images/2012/09/missing-packages.png" alt="Missing NPM packages" /> </a> We're just getting started with the node tools inside of WebMatrix, so if you have anything else you would like to see added, please hit us up over at <a href="https://webmatrix.uservoice.com" target="_blank">UserVoice</a>. <h3>More Information</h3> If you would like some more information to help you get started, check out some of these links: <ul> <li><a href="http://bit.ly/LG7gs8" target="_blank">WebMatrix on Microsoft.com</a></li> <li><a href="https://twitter.com/#!/webmatrix" target="_blank">WebMatrix on Twitter</a></li> <li><a href="https://github.com/MicrosoftWebMatrix" target="_blank">WebMatrix on GitHub</a></li> <li><a href="http://webmatrix.uservoice.com" target="_blank">WebMatrix on UserVoice</a></li> <li><a href="http://www.microsoft.com/Web/webmatrix/optimize.aspx" target="_blank">WebMatrix and Open Source Applications</a></li> <li><a href="http://vishaljoshi.blogspot.com/2012/06/announcing-webmatrix-2-rc.html" target="_blank">Vishal Joshi's blog post</a></li> </ul> <br /> <br /> <h4>Happy Coding!</h4> Fri, 07 Sep 2012 00:00:00 +0000 http://jbeckwith.com/2012/09/07/webmatrix-and-node-package-manager/ WordPress and WebMatrix <img src="/images/2012/06/wp_title_header.png" alt="WordPress and WebMatrix" /> After releasing WebMatrix 2 RC this week, I'm excited to head out to NYC for WordCamp 2012. While I get ready to present tomorrow, I figured I would share some of the amazing work the WebMatrix team has done to create a great experience for WordPress developers. For a more complete overview of the WebMatrix 2 RC, check out <a href="http://vishaljoshi.blogspot.com/2012/06/announcing-webmatrix-2-rc.html" target="_blank">Vishal Joshi's blog post</a>. 
If you want to skip all of this and just download the bits, here you go: <p><a href="http://bit.ly/L77V6w" target="_blank"><img style="display: inline" title="image" alt="image" src="http://lh5.ggpht.com/-lm1GuUL20p8/T9HReoCZk7I/AAAAAAAABU4/uO7oVvNCGPQ/image%25255B4%25255D.png?imgmax=800" width="170" height="45"></a></p> <h3>Welcome to WebMatrix</h3> WebMatrix gives you a couple of ways to get started with your application. Anything we do is going to be focused on building web applications, with as few steps as possible. WebMatrix supports opening remote sites, opening local sites, creating new sites with PHP, or creating an application by starting with the Application Gallery. <a href="/images/2012/06/wp_start_screen.png"> <img src="/images/2012/06/wp_start_screen.png" alt="Welcome to WebMatrix" /> </a> <h3>The Application Gallery</h3> We work with the community to maintain a list of open source applications that just work with WebMatrix on the Windows platform. This includes installing the application locally, and deploying to Windows Server or Windows Azure: <a href="/images/2012/06/wp_app_gallery.png"> <img src="/images/2012/06/wp_app_gallery.png" alt="WebMatrix application gallery" /> </a> <h3>Install PHP and MySQL Automatically</h3> When you pick the application you want to install, WebMatrix knows what dependencies need to be installed on your machine. This means you don't need to set up a web server, install and configure MySQL, mess around with the MySQL command line - none of that. It all just happens auto-magically. <a href="/images/2012/06/wp_dependencies.png"> <img src="/images/2012/06/wp_dependencies.png" alt="Install and setup automatically" /> </a> <h3>The Dashboard</h3> After installing WordPress and all of its dependencies, WebMatrix provides you with a dashboard that's been customized for WordPress. We open up an extensibility model that makes it easier for open source communities to plug into WebMatrix, and we've been working with several groups to make sure we provide this kind of experience: <a href="/images/2012/06/wp_dashboard.png"> <img src="/images/2012/06/wp_dashboard_clipped.png" alt="WordPress Dashboard" /> </a> <h3>Protected Files</h3> When you move into the files workspace, you'll notice a lock icon next to many of the files in the root. We worked with the WordPress community to define a list of files that are protected in WordPress. These are files that power the core of WordPress, and probably shouldn't be changed: <a href="/images/2012/06/wp_locked_files.png"> <img src="/images/2012/06/wp_locked_files.png" alt="Locked system files" /> </a> We won't stop you from editing the file, but hopefully this prevents people from making mistakes: <a href="/images/2012/06/wp_lock_warning.png"> <img src="/images/2012/06/wp_lock_warning.png" alt="WebMatrix saves you from yourself" /> </a> <h3>HTML5 & CSS3 Tools</h3> The HTML editor in WebMatrix has code completion, validation, and formatting for HTML5. The editor is really, really good. The CSS editor includes code completion, validation, and formatting for CSS3, including the latest and greatest CSS3 modules. We also include support for CSS preprocessors like LESS and Sass. I think my favorite part about the CSS editor is the way it makes dealing with color easier. If you start off a color property, WebMatrix will look at the current CSS file, and provide a palette built from the other colors used throughout your site. 
This prevents you from having 17 shades of mostly the same blue: <a href="/images/2012/06/wp_color_pallette.png"> <img src="/images/2012/06/wp_color_pallette.png" alt="The CSS Color Palette" /> </a> If you want to add a new color, we also have a full color picker. This thing is awesome - my favorite part is the eye dropper that lets you choose colors in other applications. <a href="/images/2012/06/wp_color_picker.png"> <img src="/images/2012/06/wp_color_picker.png" alt="The CSS Color Picker" /> </a> <h3>PHP Code Completion</h3> When you're ready to start diving into PHP, we include a fancy new PHP editor. It provides code completion with documentation from php.net, and a lot of other little niceties that make writing PHP easier: <a href="/images/2012/06/wp_php_intellisense.png"> <img src="/images/2012/06/wp_php_intellisense.png" alt="PHP Code Completion" /> </a> <h3>WordPress Code Completion</h3> So you've written some PHP, but now you want to start using the built-in functions available in WordPress. We worked with the WordPress community to come up with a list of supported functions, along with documentation on how they work. Any open source application in the gallery can provide this kind of experience: <a href="/images/2012/06/wp_intellisense.png"> <img src="/images/2012/06/wp_intellisense.png" alt="WordPress specific Code Completion" /> </a> <h3>MySQL Database Editor</h3> If you need to make changes directly to the database, WebMatrix has a full-featured MySQL editor built right into the product. You can create tables, manage keys, or add data right through the UI. No command line needed. <a href="/images/2012/06/wp_mysql.png"> <img src="/images/2012/06/wp_mysql.png" alt="MySQL Database Manager" /> </a> <h3>Remote Editing</h3> If you need to make edits to a live running site, we can do that too. Just enter your connection information (FTP or Web Deploy), and you can start editing your files without dealing with an FTP client: <a href="/images/2012/06/wp_start_remote.png"> <img src="/images/2012/06/wp_start_remote.png" alt="Open a remote site" /> </a> After you make your changes, just save the file to automatically upload it to your server: <a href="/images/2012/06/wp_remote_code.png"> <img src="/images/2012/06/wp_remote_code.png" alt="Edit files remotely" /> </a> <h3>Easy Publishing</h3> When you're ready to publish your application, you have the choice of using FTP or Web Deploy. If you use Web Deploy, we can even publish your database automatically along with the files in your WordPress site.
When you make subsequent publish calls, only the changed files are published: <a href="/images/2012/06/wp_publish.png"> <img src="/images/2012/06/wp_publish.png" alt="Easy Publishing" /> </a> <h3>More Information</h3> If you would like some more information to help you get started, check out some of these links: <ul> <li><a href="http://bit.ly/LG7gs8" target="_blank">WebMatrix on Microsoft.com</a></li> <li><a href="https://twitter.com/#!/webmatrix" target="_blank">WebMatrix on Twitter</a></li> <li><a href="https://github.com/MicrosoftWebMatrix" target="_blank">WebMatrix on GitHub</a></li> <li><a href="http://webmatrix.uservoice.com" target="_blank">WebMatrix on UserVoice</a></li> <li><a href="http://www.microsoft.com/Web/webmatrix/optimize.aspx" target="_blank">WebMatrix and Open Source Applications</a></li> <li><a href="http://vishaljoshi.blogspot.com/2012/06/announcing-webmatrix-2-rc.html" target="_blank">Vishal Joshi's blog post</a></li> </ul> <br /> <br /> <h4>Happy Coding!</h4> Sat, 09 Jun 2012 00:00:00 +0000 http://jbeckwith.com/2012/06/09/wordpress-and-webmatrix/ http://jbeckwith.com/2012/06/09/wordpress-and-webmatrix/ Node.js meet WebMatrix 2 <img src="/images/2012/06/title-header.png" alt="WebMatrix 2 + Node.js = love" /> After months of hard work by the WebMatrix team, it's exciting to introduce the release candidate of WebMatrix 2. WebMatrix 2 includes tons of new features, but today I want to give an overview of the work we've done to enable building applications with Node.js. If you want to skip all of this and just get a download link (it's free!), <a href="http://bit.ly/LG7gs8" target="_blank">here you go</a>. <h3>How far we have come</h3> <p> Less than a year ago, I was working at Carnegie Mellon University, trying to use Node.js with ASP.NET for real time components of our online learning environment. Running Linux inside of our customers' data centers was a non-starter, and running a production system in cygwin was even less ideal. Developing node on Windows wasn't exactly easy either - if you managed to get node running, getting NPM to work was near impossible. Using node in a Windows environment was an uphill battle. </p> <p> In the 12 months since I joined Microsoft, we've seen various partnerships between Joyent and Microsoft, resulting in new releases of node and npm to support Windows, and a <a href="https://www.windowsazure.com/en-us/develop/nodejs/" target="_blank">commitment to Node on Windows Azure</a>. We've worked together to build a better experience for developers, IT administrators, and ultimately, the people who use our systems. </p> <p> One of the results of that work is a vastly improved experience for building applications with Node.js on Windows Azure. Glenn Block on the SDK team has done a <a href="http://codebetter.com/glennblock/2012/06/07/windowsazure-just-got-a-lot-friendlier-to-node-js-developers/" target="_blank">fabulous write up</a> on the ways Microsoft is making Azure a great place for Node.js developers. As our favorite VP Scott Guthrie says on his blog, <a href="http://weblogs.asp.net/scottgu/archive/2012/06/07/meet-the-new-windows-azure.aspx" target="_blank">meet the new Windows Azure</a>. </p> <br /> <br /> <h3>Enter WebMatrix 2</h3> Today, getting started with node.js is a relatively simple task. You install node and npm (which is now bundled with the node installers), and get started with your favorite text editor.
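In fact, the canonical first node program is only a handful of lines - something like this minimal sketch of the 'hello world' server from the nodejs.org front page:
<pre><code class="language-javascript">
// a minimal node.js http server - roughly the sample from nodejs.org
var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello World\n');
}).listen(1337, '127.0.0.1');

console.log('Server running at http://127.0.0.1:1337/');
</code></pre>
Save that to a file, run it with node, and you have a working web server.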
There are infinite possibilities, and limitless configurations for managing projects, compiling CoffeeScript & LESS, configuring your production settings, and deploying your apps. WebMatrix 2 sets out to provide another way to build node.js apps: everything you need to build great apps in one place. <a href="/images/2012/06/splash.png"> <img src="/images/2012/06/splash.png" alt="Welcome to WebMatrix" /> </a> WebMatrix 2 is first and foremost designed for building web applications. From the start screen, you can create applications using pre-built templates, or install common open source applications from the Web Gallery. The current set of templates supports creating applications with <a href="http://nodejs.org/" target="_blank">Node.js</a>, <a href="http://php.net/" target="_blank">PHP</a>, and (of course) <a href="http://www.asp.net/web-pages" target="_blank">ASP.NET Web Pages</a>. Out of the box, WebMatrix 2 includes three templates for Node.js: <ul> <li>Empty Node.js Site</li> <li>Express Site</li> <li>Express Starter Site</li> </ul> <p> The empty site provides a very basic example of using an http server - the same sample that's available on <a href="http://nodejs.org" target="_blank">nodejs.org</a> (and sketched above). The Express Site is a basic application generated using the scaffolding tool in the Node.js framework <a href="http://expressjs.com/" target="_blank">express</a>. The Express Starter Site is where things start to get interesting. This boilerplate is <a href="https://github.com/MicrosoftWebMatrix/ExpressStarter" target="_blank">hosted on GitHub</a>, and shows how to implement sites that include parent/child layouts with Jade, LESS CSS, logins with Twitter and Facebook, mobile layouts, and captcha. When you create a new application using any of these templates, WebMatrix 2 is going to ensure node, npm, and IISNode are installed on your system. If they aren't, it will automatically install any missing dependencies. This feature is also particularly useful if you are building PHP/MySQL applications on Windows. </p> <a href="/images/2012/06/dependencies.png"> <img src="/images/2012/06/dependencies.png" alt="WebMatrix installs node, npm, and iisnode" /> </a> <p>The end result of the Express Starter Site is a fully functional application that includes Express, Jade, LESS, chat with socket.io, logins with EveryAuth, and mobile support with jQuery Mobile:</p> <a href="/images/2012/06/template.png"> <img src="/images/2012/06/template.png" alt="The node starter template" /> </a> <br /> <br /> <h3>IntelliSense for Node.js</h3> <p> One of the goals of WebMatrix 2 is to reduce the barrier to entry for developers getting started with Node.js. One of the ways to do that is to provide IntelliSense for the core modules on which all applications are built. The documentation we use is actually built from the docs on the <a href="http://nodejs.org/api/" target="_blank">node.js docs site</a>. </p> <a href="/images/2012/06/moduleIntelliSense.png"> <img src="/images/2012/06/moduleIntelliSense.png" alt="WebMatrix provides IntelliSense that makes it easier to get started" /> </a> <p> In addition to providing IntelliSense for core Node.js modules, WebMatrix 2 also provides code completion for your own JavaScript code, and third party modules installed through NPM. There are infinite ways to build your application, and the NPM gallery recently <a href="https://twitter.com/JavaScriptDaily/status/203878468205817857" target="_blank">surpassed 10,000 entries</a>.
As developers start building more complex applications, it can be difficult (or even intimidating) to get started. WebMatrix 2 is making it easier to deal with open source packages: </p> <a href="/images/2012/06/thirdpartyintellisense.png"> <img src="/images/2012/06/thirdpartyintellisense.png" alt="Use third party modules with code completion" /> </a> <br /> <br /> <h3>Support for Jade & EJS</h3> <p> To build a truly useful tool for building Node.js web applications, we decided to provide first class editors for <a href="http://jade-lang.com/" target="_blank">Jade</a> and <a href="http://embeddedjs.com/" target="_blank">EJS</a>. WebMatrix 2 provides syntax highlighting, HTML validation, code outlining, and auto-completion for Jade and EJS. </p> <a href="/images/2012/06/jade.png"> <img src="/images/2012/06/jade.png" alt="WebMatrix has syntax highlighting for Jade" /> </a> <p> If you're into the whole angle bracket thing, the experience in EJS is even better, since it's based on our advanced HTML editor: </p> <a href="/images/2012/06/ejs.png"> <img src="/images/2012/06/ejs.png" alt="WebMatrix has IntelliSense for EJS" /> </a> <h3>The best {LESS} editor on the planet</h3> <p>So I'll admit it - I'm a bit of a CSS pre-processor geek. I don't write CSS because I love it, but because I need to get stuff done, and I want to write as little of it as possible. Tools like <a href="http://lesscss.org/" target="_blank">LESS</a> and <a href="http://sass-lang.com/" target="_blank">Sass</a> bring features to CSS that programmers miss - variables, mixins, nesting, and built-in functions. <a href="/images/2012/06/less.png"> <img src="/images/2012/06/less.png" alt="Write LESS with validation, formatting, and IntelliSense" /> </a> The LESS editor in WebMatrix not only provides syntax highlighting, but also LESS-specific validation, IntelliSense for variables and mixins, and LESS-specific formatting. Most node developers are going to process their LESS on the server using the npm module, but if you want to compile LESS locally, you can use the <a href="http://extensions.webmatrix.com/packages/OrangeBits/" target="_blank">OrangeBits compiler</a> to compile your CSS at design time. <a href="/images/2012/06/sass.png"> <img src="/images/2012/06/sass.png" alt="WebMatrix provides syntax highlighting for Sass" /> </a> <h3>CoffeeScript Editor</h3> <p> In the same way LESS and Sass make it easier to write CSS, <a href="http://coffeescript.org/" target="_blank">CoffeeScript</a> simplifies the way you write JavaScript. WebMatrix 2 provides syntax highlighting, code outlining, and completion that simplifies the editing experience. If you want to use CoffeeScript without compiling it on the server, you can use the <a href="http://extensions.webmatrix.com/packages/OrangeBits/" target="_blank">OrangeBits compiler</a> to compile your CoffeeScript into JavaScript at design time. </p> <a href="/images/2012/06/coffeescript.png"> <img src="/images/2012/06/coffeescript.png" alt="WebMatrix and CoffeeScript" /> </a> <h3>Mobile Emulators</h3> <p> Designing applications for mobile can't be an afterthought. WebMatrix 2 is trying to make this easier in a couple of ways.
First - the visual templates (in this case the Express Starter Template) are designed to take advantage of responsive layouts in the main stylesheet: <ul><li><a href="https://github.com/MicrosoftWebMatrix/ExpressStarter/blob/master/public/stylesheets/style.less" target="_blank">style.less</a></li></ul> This is great if you don't need to change the content of your site, but is lacking for more complex scenarios. To get around that, the starter template uses a piece of connect middleware that detects whether the user is coming from a mobile device, and sends them to a mobile layout based on jQuery Mobile (more on this in another post - and see the rough sketch at the end of this post). For individual views, there is a convention-based system that allows you to create {viewName}_mobile.jade views which are only loaded on mobile devices. </p> <p> It gets even better. What if you need to see what your site will look like in various browsers and mobile devices? WebMatrix 2 provides an extensibility model that allows you to add mobile and desktop browsers to the run menu: </p> <a href="/images/2012/06/emulators.png"> <img src="/images/2012/06/emulators.png" alt="WebMatrix shows all of the browsers and emulators on your system" /> </a> <p>Today, we offer a Windows Phone emulator, and iPhone / iPad simulators. In the future we're looking for people to build support for other emulators *coughs* android *coughs*, and even build bridges to online browser testing applications:</p> <a href="/images/2012/06/iphone.png"> <img src="/images/2012/06/iphone.png" alt="Test your websites on the iPhone simulator" /> </a> <h3>Extensions & Open Source</h3> <p> A code editing tool is only as valuable as the developers that commit to the platform. We want to achieve success with everyone, and grow together. As part of that goal, we've opened up an extensibility model that allows developers to build custom extensions and share them with other developers. The extension gallery is available online (more on this to come) at <a href="http://extensions.webmatrix.com" target="_blank">http://extensions.webmatrix.com</a>. We're planning to move a bunch of these extensions into GitHub, and the NodePowerTools extension is the first one to go open source: <ul> <li><a href="https://github.com/MicrosoftWebMatrix/NodePowerTools" target="_blank">Node Power Tools</a></li> <li><a href="https://github.com/JustinBeckwith/OrangeBits" target="_blank">OrangeBits Compiler</a></li> </ul> In the coming months you'll start to see more extensions from Microsoft, and more open source. </p> <a href="/images/2012/06/extension-gallery.png"> <img src="/images/2012/06/extension-gallery.png" alt="Build extensions and share them on the extension gallery" /> </a> <h3>Everyone worked together</h3> I want to make sure I thank everyone who helped make this release happen, including the WebMatrix team, Glenn Block, Claudio Caldato, our Node Advisory board, Isaac Schlueter, and everyone at Joyent.
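As promised above, here's a rough sketch of that mobile detection idea. To be clear, this is hypothetical code, not the actual starter site source - the regular expression and property names are made up for illustration:
<pre><code class="language-javascript">
// hypothetical connect/express middleware that flags mobile user agents
var mobileRE = /iphone|ipod|android|blackberry|windows phone/i;

function mobileDetect(req, res, next) {
  // expose the flag to downstream handlers and to the view engine
  res.locals.isMobile = mobileRE.test(req.headers['user-agent'] || '');
  next();
}

// wire it up once, early in the middleware stack:
// app.use(mobileDetect);
</code></pre>
With a flag like that in place, the routing layer can choose a {viewName}_mobile.jade view instead of the desktop one.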
For more information, please visit: <ul> <li><a href="http://bit.ly/LG7gs8" target="_blank">WebMatrix on Microsoft.com</a></li> <li><a href="https://twitter.com/#!/webmatrix" target="_blank">WebMatrix on Twitter</a></li> <li><a href="https://github.com/MicrosoftWebMatrix" target="_blank">WebMatrix on GitHub</a></li> <li><a href="http://webmatrix.uservoice.com" target="_blank">WebMatrix on UserVoice</a></li> <li><a href="http://www.microsoft.com/web/post/how-to-use-the-nodejs-starter-template-in-webmatrix" target="_blank">WebMatrix and Node on Microsoft.com</a></li> <li><a href="http://codebetter.com/glennblock/2012/06/07/windowsazure-just-got-a-lot-friendlier-to-node-js-developers/" target="_blank">Windows Azure just got a lot friendlier to node.js developers</a></li> <li><a href="http://vishaljoshi.blogspot.com/2012/06/announcing-webmatrix-2-rc.html" target="_blank">Vishal Joshi's blog post</a></li> </ul> <br /> <br /> <h4>Enjoy!</h4> Thu, 07 Jun 2012 00:00:00 +0000 http://jbeckwith.com/2012/06/07/node-js-meet-webmatrix-2/ http://jbeckwith.com/2012/06/07/node-js-meet-webmatrix-2/ Building a user map with SignalR and Bing <a href="http://signalrmap.apphb.com" target="_blank"><img src="/images/2011/10/signalrheader.png" alt="" title="Building a user map with SignalR and Bing" width="430" height="290"/></a> Building asynchronous real time apps with bidirectional communication has traditionally been a very difficult thing to do. HTTP was originally designed to speak in terms of requests and responses, long before concepts of rich media, social integration, and real time communication were considered staples of modern web development. Over the years, various solutions have been hacked together to solve this problem. You can use plugins like Flash or Silverlight to make a true socket connection on your behalf - but not all clients support plugins. You can use long polling to manage multiple connections via HTTP - but this can be tricky to implement, and can eat up system resources. The <a href="http://dev.w3.org/html5/websockets/" target="_blank">Web Socket standard</a> promises to give web developers a first class socket connection, but browser support is spotty and inconsistent. Various tools across multiple stacks have been released to solve this problem, but in this post I would like to talk about the first real asynchronous client/server package for ASP.NET: <a href="https://github.com/SignalR/SignalR" target="_blank">SignalR</a>. SignalR allows .NET developers to change the way we think about client/server messaging: instead of worrying about implementation details of web sockets, we can focus on the way communication flows across the various components of our applications. <h3>This sounds familiar: socket.io with node.js</h3> Over the last year or so, <a href="http://nodejs.org/" target="_blank">node.js</a> has burst onto the scene as a popular stack for building highly asynchronous applications. The event driven model of JavaScript, paired with a community of inventive developers, led to a platform well suited for these needs. The package <a href="http://socket.io/" target="_blank">socket.io</a> provides what I have found to be the missing piece in the comet puzzle: a front and back end framework that just makes sockets over the web work. No more building Flash applications to attempt opening connections over various ports. No more poorly implemented long polling solutions.
Most importantly, socket.io made web sockets just plain easy to use: <pre><code class="language-markup"> &lt;script src=&quot;/socket.io/socket.io.js&quot;&gt;&lt;/script&gt; &lt;script&gt; var socket = io.connect('http://localhost'); socket.on('news', function (data) { console.log(data); socket.emit('my other event', { my: 'data' }); }); &lt;/script&gt; </code></pre> Node.js and socket.io paved the way for a series of new tools and frameworks across multiple stacks that enable developers to have a first class client/server messaging experience. Node.js and socket.io are wonderful tools - but let's get back to focusing on SignalR. <h3>Two ways to build apps with SignalR</h3> There are two ways you can go about setting up the server for SignalR. If you want a low level experience, you can add a 'PersistentConnection' class along with a custom route. This will give you basic messaging capabilities, suitable for many apps. Straight from the <a href="https://github.com/SignalR/SignalR" target="_blank">SignalR GitHub</a>, here is an example: <pre><code class="language-csharp"> using SignalR; public class MyConnection : PersistentConnection { protected override Task OnReceivedAsync(string clientId, string data) { // Broadcast data to all clients return Connection.Broadcast(data); } } </code></pre> This works well if you're dealing with simple messaging - the other model SignalR supports is the 'hub' model. This is where things start to get interesting. Using hubs, you can invoke client side functions from the server, and server side functions from the client. Here's another example from the documentation. First, the server: <pre><code class="language-csharp"> public class Chat : Hub { public void Send(string message) { // Call the addMessage method on all clients Clients.addMessage(message); } } </code></pre> And the client: <pre><code class="language-markup"> &lt;script type=&quot;text/javascript&quot;&gt; $(function () { // Proxy created on the fly var chat = $.connection.chat; // Declare a function on the chat hub so the server can invoke it chat.addMessage = function(message) { $('#messages').append('&lt;li&gt;' + message + '&lt;/li&gt;'); }; $(&quot;#broadcast&quot;).click(function () { // Call the chat method on the server chat.send($('#msg').val()) .fail(function(e) { alert(e); }) // Supports jQuery deferred }); // Start the connection $.connection.hub.start(); }); &lt;/script&gt; &lt;input type=&quot;text&quot; id=&quot;msg&quot; /&gt; &lt;input type=&quot;button&quot; id=&quot;broadcast&quot; /&gt; &lt;ul id=&quot;messages&quot;&gt; &lt;/ul&gt; </code></pre> I chose the high level API, because well... it's just cool. For a wonderful breakdown of the differences between these two methods, check out <a href="http://www.hanselman.com/blog/AsynchronousScalableWebApplicationsWithRealtimePersistentLongrunningConnectionsWithSignalR.aspx" target="_blank">Scott Hanselman's post on the topic</a>. <h3>Let's build something!</h3> One of the common examples of using these frameworks is a chat room: it has all of the touch points that are otherwise difficult to implement. How do we know when someone joins the room? What about sending a message? What if I want to send a message to multiple people? This is a perfect example of how client/server messaging over the web can make our lives easier. The SignalR folks have a live sample of this application running on their <a href="http://chatapp.apphb.com/" target="_blank">demo site</a>.
With the chat idea done, I decided to combine two tools into one project: a user map. I want to maintain a map that uses a pushpin for every user on the page. As users come, a new pushpin will be added in their location in real time. As they leave, the pushpin will be removed. Before we dive into the code, check out the demo at <a href="http://signalrmap.apphb.com/" target="_blank">http://signalrmap.apphb.com/</a>. If no one is in the room, you can slightly randomize your position by using the "random flag" at <a href="http://signalrmap.apphb.com/?random=true" target="_blank">http://signalrmap.apphb.com/?random=true</a>. This will allow you to use multiple browser windows and watch the system add location pushpins. <h3>Building the client</h3> The client of SignalRMap includes a Bing map, and some JavaScript to interact with the back end. I used <a href="http://www.asp.net/mvc/mvc3" target="_blank">ASP.NET MVC 3</a> for this example, but this will work just fine with Web Forms. To start, we need to include a few script files: <pre><code class="language-markup"> &lt;script charset=&quot;UTF-8&quot; type=&quot;text/javascript&quot; src=&quot;http://ecn.dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=7.0&quot;&gt;&lt;/script&gt; &lt;script src=&quot;@Url.Content(&quot;~/Scripts/jquery-1.6.4.min.js&quot;)&quot; type=&quot;text/javascript&quot;&gt;&lt;/script&gt; &lt;script src=&quot;@Url.Content(&quot;~/Scripts/jquery.signalR.min.js&quot;)&quot; type=&quot;text/javascript&quot;&gt;&lt;/script&gt; &lt;script type=&quot;text/javascript&quot; src=&quot;@Url.Content(&quot;~/signalr/hubs&quot;)&quot;&gt;&lt;/script&gt; </code></pre> The first thing we are including here is the Bing Maps JavaScript SDK - this will do all of the heavy lifting for our maps. The SignalR client is dependent upon jQuery, so we need to include it along with our SignalR reference. Finally, we include the 'hubs' functionality into our application, linking our client and server side methods. After including our scripts, connecting to a hub is crazy awesome easy: <pre><code class="language-javascript"> // create the connection to our hub var mapHub = $.connection.mapHub; // define some javascript methods the server side hub can invoke // add a new client to the map mapHub.addClient = function (client) { addClient(client); centerMap(); var pins = getPushPins(); $(&quot;#userCount&quot;).html(pins.length); }; // start the hub $.connection.hub.start(function () { // after the hub has started, get the current location from the browser navigator.geolocation.getCurrentPosition(function (position) { // create the map element on the page mappit(position); // notify the server a new user has joined the party var coords = isRandom ? createRandomPosition(position) : position.coords; var message = { 'user': '', 'location': { latitude: coords.latitude, longitude: coords.longitude} }; mapHub.join(message); }); }); </code></pre> There are a few things going on here. First, we reference our connection to the hub created on the server (note: the connection has not been established yet). Notice the mapHub.addClient method - this method will be exposed in a way such that it can be invoked from the server. *scratches head* - this is a neat concept. After defining methods which can be invoked from the server, we start the connection to the hub. Once the connection is established, we get the browser's current location, and send that location back to the server. That's about it. Remember how simple it was to use socket.io?
Here we have the same experience. There's a little more client script here to handle managing the map component. For the full client source for the application, check out my <a href="https://github.com/JustinBeckwith/SignalRMap" target="_blank">GitHub</a>. <h3>Server side code</h3> As mentioned above, I chose to take the 'hubs' route for my application. One of the nice things about using a hub is that it doesn't require any custom routing - just create a class that extends 'Hub', and you're set. In this example, I'm storing a persistent list of the clients connected to the application (obviously, this method will only work with a single web server). As users show up at the site, they send their current position to the server. The new MapClient is broadcast to all of the connected clients, and the new client is given the master list of clients: <pre><code class="language-csharp"> using System; using System.Collections.Generic; using System.Linq; using System.Web; using SignalR.Hubs; namespace SignalRMap { public class MapHub : Hub, IDisconnect { private static readonly Dictionary&lt;string, MapClient&gt; _clients = new Dictionary&lt;string, MapClient&gt;(); public void Join(MapClient message) { _clients.Add(this.Context.ClientId, message); Clients.addClient(message); this.Caller.addClients(_clients.ToArray()); } public void Disconnect() { MapClient client = _clients[Context.ClientId]; _clients.Remove(Context.ClientId); Clients.removeClient(client); } /// &lt;summary&gt; /// model class for the join message. I tried to use dynamic here, but it didn't work. /// &lt;/summary&gt; public class MapClient { public string clientId { get; set; } public Location location { get; set; } public class Location { public float latitude { get; set; } public float longitude { get; set; } } } } } </code></pre> And that's it! SignalR figured out what types of communication my browser supports, managed the tunnel, and just made the connection work. Enjoy! <ul> <li><a href="http://signalrmap.apphb.com/?random=true" target="_blank">View the demo</a></li> <li><a href="https://github.com/JustinBeckwith/SignalRMap" target="_blank">Download the source code</a></li> <li><a href="https://github.com/SignalR/SignalR" target="_blank">SignalR on GitHub</a></li> <li><a href="http://www.bingmapsportal.com/ISDK/AjaxV7" target="_blank">Bing Maps SDK</a></li> </ul> Wed, 12 Oct 2011 00:00:00 +0000 http://jbeckwith.com/2011/10/12/building-a-user-map-with-signalr-and-bing/ http://jbeckwith.com/2011/10/12/building-a-user-map-with-signalr-and-bing/ Using MSBuild to deploy your AppFabric Application <img title="azure3" src="/images/2011/07/azure3.png" alt="Using MSBuild to deploy your AppFabric Application" width="150" height="150" /> I wrote a blog post for the MSDN AppFabric Blog! <a href="http://blogs.msdn.com/b/appfabric/archive/2011/07/20/using-msbuild-to-deploy-your-appfabric-application.aspx" target="_blank"> Using MSBuild to deploy your AppFabric Application </a> Wed, 20 Jul 2011 00:00:00 +0000 http://jbeckwith.com/2011/07/20/using-msbuild-to-deploy-your-appfabric-application/ http://jbeckwith.com/2011/07/20/using-msbuild-to-deploy-your-appfabric-application/ FRINK! - the Reddit client for tablets <img src="/images/2011/04/frink-header1.png" alt="" title="FRINK!" width="430" height="290" /> Frink! is a mobile client for the web site <a href="http://www.reddit.com" target="_blank">Reddit</a>. It is designed specifically to be used with tablets, taking advantage of gestures in a unique user interface.
Right now the app is available in the BlackBerry App World: <a href="http://appworld.blackberry.com/webstore/content/38838?lang=en" target="_blank">http://appworld.blackberry.com/webstore/content/38838?lang=en</a> After the code has a little time to settle, I plan on releasing the app to the Android Market as well. The entire project is open source, and available on my <a target="_blank" href="https://github.com/JustinBeckwith/frink">GitHub</a>. <a href="/images/2011/04/comments.png"> <img src="/images/2011/04/comments.png" alt="" title="comments on a post" /> </a> <a href="/images/2011/04/post-details.png"> <img src="/images/2011/04/post-details.png" alt="" title="post details" /> </a> <a href="/images/2011/04/subreddits.png"> <img src="/images/2011/04/subreddits.png" alt="" title="subreddits" /> </a> <a href="/images/2011/04/posts.png"> <img src="/images/2011/04/posts.png" alt="" title="posts" /> </a> For more information, here are a bunch of links that talk about the project: <ul> <li> Frink! on the web: <a target="_blank" href="http://frinkapp.com">http://frinkapp.com</a> </li> <li> Frink! on Reddit: <a target="_blank" href="http://www.reddit.com/r/frinkapp">http://www.reddit.com/r/frinkapp</a> </li> <li> Frink! on Twitter: <a target="_blank" href="http://twitter.com/frinkapp">http://twitter.com/frinkapp</a> </li> <li> Frink! on GitHub: <a target="_blank" href="https://github.com/JustinBeckwith/frink">https://github.com/JustinBeckwith/frink</a> </li> <li> Frink! at BlackBerry App World: <a target="_blank" href="http://appworld.blackberry.com/webstore/content/38838?lang=en">http://appworld.blackberry.com/webstore/content/38838</a> </li> </ul> Mon, 18 Apr 2011 00:00:00 +0000 http://jbeckwith.com/2011/04/18/frink/ http://jbeckwith.com/2011/04/18/frink/ The Cause and Effect of Google's h.264 Decision <a href="http://jbeckwith.com/2011/01/20/google-h264/h264-header-2/" rel="attachment wp-att-262"><img src="/images/2011/01/h264-header1.png" alt="" title="The Cause and Effect of Google&#039;s H.264 Decision" width="383" height="166"/></a> How do the internal workings of a browser that was released only two years ago have an enormous ripple effect on the future of streaming media on the internet? Last week Google announced on their Chromium blog that they're <a href="http://blog.chromium.org/2011/01/html-video-codec-support-in-chrome.html"> dropping support for the h.264 codec</a>, in favor of the open source <a href="http://www.theora.org/">Ogg Theora</a> and <a href="http://blog.webmproject.org/">WebM/VP8</a> codecs. This is yet another snag in the messy attempt to unify the playback of video in HTML 5, as we now find the #2 and #3 most popular browsers lacking support for what is currently the most ubiquitous encoding format. So how did we get here? <h3>The browser wars are back</h3> After years of IE 6 and Firefox being the only real browsers around, the browser wars have exploded again. For the first time since Netscape 4.7 roamed the earth, Internet Explorer has dropped below 50% market share. That leaves a lot of space for the likes of Firefox, Chrome, Safari, and Opera. Well, maybe not Opera. The interesting thing in this graph is the dominance of Firefox, and the growth of Chrome. That leaves ~42% of the current desktop browser market with no native H.264 playback, and ~96% of the next desktop browser market that supports WebM (assuming, of course, that everyone upgrades to the latest version).
<div id="browser-ww-monthly-200912-201012" width="600" height="400" style="width:600px; height: 400px;"></div><!-- You may change the values of width and height above to resize the chart --><p>Source: <a href="http://gs.statcounter.com/?PHPSESSID=9ni6qaq0p0vdrb4bjtfm6l51i4">StatCounter Global Stats - Browser Market Share</a></p><script type="text/javascript" src="http://www.statcounter.com/js/FusionCharts.js"></script><script type="text/javascript" src="http://gs.statcounter.com/chart.php?browser-ww-monthly-200912-201012"></script> What's scary about this is the proliferation of new browsers through mobile and embedded devices. As time goes on, iOS, Android, and RIM are going to eat more and more of those beautiful hits on our Google Analytics dashboards. While iOS is currently ahead in terms of volume, <a href="http://blog.nielsen.com/nielsenwire/online_mobile/apple-leads-smartphone-race-while-android-attracts-most-recent-customers/">Android is catching up</a>. Quickly. <a href="/images/2011/01/smartphone-os-nov2010.png"><img src="/images/2011/01/smartphone-os-nov2010.png" alt="" title="Smartphone Market 2010" width="575" height="369" class="aligncenter size-full wp-image-243" /></a> As the homogeneity of the browser market continues to disappear, the likelihood that all browsers will support the same native HTML 5 playback goes down quickly. So why did Google do this? <h3>Is YouTube making money yet? How about now? Now?</h3> In 2006 Google acquired YouTube for $1.65 billion in stock. The costs of running a video delivery site are sky high, and the sale price certainly turned a lot of heads. While I've heard a lot of people question the acquisition, there are estimates that YouTube may be <a href="http://mediamemo.allthingsd.com/20100305/another-youtube-revenue-guess-1-billion-in-2011/">generating as much as 1 billion a year in revenue</a> moving forward. Don't be mistaken, YouTube is a vital piece to controlling advertising in the streaming media market. They will protect their investment, and continue to grow other revenue streams to support the costs of the platform. One of the ways I look for Google to do this is through providing a channel to charge for premium or protected content. MPEG LA makes providing content encoded using H.264 free until 2016 - given the content is available freely to the end user. The moment you charge for the delivery of that content, you are subject to a delivery fee of 2% revenue per title, up to a maximum $5 million cap. While I don't think the $5 million is a huge deal to Google, it's an enormous deal for smaller software shops, startups, integrators, and hardware companies that want to stream and decode video from YouTube on their sites or devices. This model will eventually have a stifling effect on innovation in the streaming media market, which directly affects Google's YouTube line of business. This pretty easily explains why Google <a href="http://blog.streamingmedia.com/the_business_of_online_vi/2009/08/googles-acquisition-of-on2-not-a-big-deal-heres-why.html">purchased the VP8 codec from On2 for $106.5 million</a>. In the short term, this decision isn't going to cause a whole lot of impact. Even if most browsers did support HTML 5, which they don't, most of the video out there today is in H.264 format. All of the hardware devices that support decoding are using H.264. This is a decision that pays dividends 5 years down the road. 
Estimates have YouTube receiving as much as <a href="http://www.youtube.com/t/fact_sheet">24 hours of content per minute</a>, which is dizzying to think about. No matter how you store it, that's a ton of storage space. More and more of the content being added to the system is in high definition, so that makes the problem even bigger. Now add the fact that you need to encode your content in two different formats over the long haul, and you have a huge problem. I want to know - will Google have the stones to yank H.264 support from YouTube altogether? <h3>Where does this leave everyone?</h3> As mentioned above, I don't think this changes a lot in the short term. Here is where I see the big players ending up with the change: <ul> <li><b>Adobe</b> - Adobe comes out of this situation in great shape. This pretty much just guarantees that Flash isn't going anywhere for a couple of years, and they've already <a href="http://blogs.adobe.com/flashplatform/2010/05/adobe_support_for_vp8.html">announced support for VP8</a>.</li> <li><b>Microsoft</b> - Microsoft, who doesn't have a horse in this race anymore, has already announced VP8 support for Internet Explorer 9.</li> <li><b>Google</b> - Google gets to protect their YouTube and VP8 investments, while promoting innovation through an open standard.</li> <li><b>Mozilla</b> - Firefox will remain relevant, given their VP8 support in version 4. I doubt the Mozilla Foundation had any intentions of paying MPEG LA $5 million.</li> <li><b>Apple</b> - If Google drops H.264 support from YouTube (which won't happen for a long time), Apple will have their hand forced into supporting WebM. Until that happens, this is a total mystery to me.</li> </ul> <br /> <br /> Overall, I think Google's decision is a good thing for content developers and innovators. Not everyone agrees. For a few dissenting opinions on this, check out: <ul> <li> <a href="http://www.zdnet.com/blog/hardware/chromes-love-of-webm-and-hatred-of-h264-has-nothing-to-do-with-youtube/11021">Chrome's love of WebM and hatred of H.264 has nothing to do with YouTube</a> </li> <li> <a href="http://arstechnica.com/web/news/2011/01/googles-dropping-h264-from-chrome-a-step-backward-for-openness.ars/">Google's dropping H.264 from Chrome a step backward for openness</a> </li> </ul> <br /> And of course, I want to know what you think. So let's start some discussions! Thu, 20 Jan 2011 00:00:00 +0000 http://jbeckwith.com/2011/01/20/google-h264/ http://jbeckwith.com/2011/01/20/google-h264/ Bootstrapping image based bookmarklets <img src="/images/2010/12/featured.png" alt="" title="featured" width="430" height="290" /> Over this holiday break I had the interesting opportunity to write a bookmarklet for a friend who runs a comic-based website. Instead of just manipulating the currently loaded page, the bookmarklet needed to send a list of images to another site. Often when writing <a title="Wikipedia - Bookmarklets" href="http://en.wikipedia.org/wiki/Bookmarklet" target="_blank">bookmarklets</a>, we tend to only think of loading our code in the context of an HTML content page. How often do you test your bookmarklets when the browser is viewing an image? In this article I am going to go through the code I used to bootstrap my bookmarklet script, and discuss some of the interesting challenges I experienced along the way.
To get started with this code, I used a fantastic <a href="http://www.smashingmagazine.com/2010/05/23/make-your-own-bookmarklets-with-jquery/" target="_blank">article</a> by <a href="http://www.smashingmagazine.com/author/tommy-iamnotagoodartist/" target="_blank">Tommy Saylor</a> of <a href="http://www.smashingmagazine.com/" target="_blank">Smashing Magazine</a>. It gave me a good start, but certainly left a lot of details out, and in my case, caused a lot of bugs. <h3>Bookmarklet Architecture</h3> That's right: we should talk about architecture before diving right into our JavaScript. When writing a bookmarklet, it's generally a good idea to keep as much code out of the actual bookmark as possible. This is where 'bootstrapping' comes into play: we will simply use our bookmark as a piece of code that actually loads the core bits of our JavaScript. There are actually two reasons why this is a good idea: <ul> <li>Different browsers have various max-lengths of bookmarks. Keep in mind that a bookmarklet is kind of an accidental feature. I think the average max length works out to around 2000 characters, but some browsers (like Internet Explorer 6) have limits as low as 508 characters.</li> <li>Users are unlikely to bother refreshing your bookmarklet. Once somebody bookmarks your code, how are they going to get updates? It's much easier if your bookmarklet simply loads a JavaScript file from a static URL. This way we can update the code on the back end whenever we want.</li> </ul> After our bootstrapper loads the script we created, any external libraries will be loaded. For example, I used jQuery and jQuery UI for my most recent project. After the dependencies are loaded, we will then execute our main code. Another thing to keep in mind when you're building your bookmarklet is how the site behaves after the function is disabled. For example, if your bookmarklet gives all images on the site a red border, what happens when the user no longer wishes to use the bookmarklet? For this reason, I tend to create a cleanup method that allows our bookmarklet changes to be undone, and leaves the script in a state that can later be used again. <h3>The bootstrap code</h3> For the purposes of this bookmarklet, I needed to write a piece of code that would interact with a standard HTML page and its images, or interact with a page that was a single loaded image. For that reason, the first thing we need to do is determine what type of page we're dealing with. If the page is HTML, we can insert a script. If the page is an image, we need to behave differently. While I found that Firefox and WebKit both generated an HTML container to render image pages, their behavior surrounding script events on these pages was too inconsistent to be depended upon.
<img src="/images/2010/12/firebug.png" alt="" title="Image url firebug output" width="501" height="635" /> Here is a formatted example of what my a href tag JavaScript looks like: <pre><code class="language-javascript"> // // &lt;a&gt; tag href javascript // javascript:(function() { if( (document.contentType &amp;&amp; document.contentType.indexOf('image/')&gt;-1) ||/.png$/.test(location.href) ||/.jpg$/.test(location.href) ||/.jpeg$/.test(location.href) ||/.gif$/.test(location.href)) { location.href='http://jbeckwith.com/bookmarklet/'; } else if (!window.main) { document.body.appendChild(document.createElement('script')) .src='http://jbeckwith.com/my-bookmarklet.js'; } else { main(); } })(); </code></pre> After tidying up our script, and adding the surrounding tag, here is a final rendered output of our code, I came up with the following: <pre><code class="language-markup"> &lt;!-- &lt;a&gt; tag example --&gt; &lt;a href=&quot;javascript:(function(){if((document.contentType&amp;&amp;document.contentType.indexOf('image/')&gt;-1)||/.png$/.test(location.href)||/.jpg$/.test(location.href)||/.jpeg$/.test(location.href)||/.gif$/.test(location.href)){location.href='http://jbeckwith.com/bookmarklet/';}else if(!window.main){document.body.appendChild(document.createElement('script')).src='http://jbeckwith.com/my-bookmarklet.js';}else{main();}})();&quot;&gt;It's a bookmarklet!&lt;/a&gt; </code></pre> <h3>Loading jQuery and jQueryUI</h3> Now that the bootstrapper is created, I am going to focus the rest of the article on the external JavaScript file that contains the meat of the code. With the script I wrote, I needed to use a good deal of visual effects. I am already comfortable with <a href="http://jquery.com/" target="_blank">JQuery</a>, so I chose to use it as my JavaScript framework: <pre><code class="language-javascript"> // // create javascript libraries required for main // if (typeof jQuery == 'undefined') { // include jquery var jQ = document.createElement('script'); jQ.type = 'text/javascript'; jQ.onload=getDependencies; jQ.onreadystatechange=function() { if(this.readyState=='loaded' || this.readyState=='complete') { getDependencies(); } // end if }; jQ.src = 'http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js'; document.body.appendChild(jQ); } // end if else { getDependencies(); } // end else </code></pre> If you look at the example in the Smashing Magazine article, you will notice a couple of differences. We need to add an event for onreadystatechange to handle Internet Explorer. I found that IE inconsistently set the readyState of the script object to 'loaded' or 'complete' in various parts of the DOM, so as a rule I check for both. If you don't make this change, IE will never notify the script that jQuery is finished loading. Secondly, I have added the getDependencies() method to manage loading required scripts (in addition to jQuery). 
Since I am depending heavily on a few jQuery UI components, I needed to load both an external JavaScript file and an external CSS file: <pre><code class="language-javascript"> // // getDependencies // function getDependencies() { // make sure jqueryUI is loaded if (!jQuery.ui) { // get the link css tag var jQCSS = document.createElement('link'); jQCSS.type = 'text/css'; jQCSS.rel= 'stylesheet'; jQCSS.href = 'http://ajax.googleapis.com/ajax/libs/jqueryui/1.8/themes/base/jquery-ui.css'; document.body.appendChild(jQCSS); // grab jquery ui var jQUI = document.createElement('script'); jQUI.type = 'text/javascript'; jQUI.src = 'http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.7/jquery-ui.min.js'; jQUI.onload=getDependencies; jQUI.onreadystatechange=function() { if(this.readyState=='loaded' || this.readyState=='complete') { getDependencies(); } // end if }; document.body.appendChild(jQUI); } // end if else { main(); } // end else } // end getDependencies function </code></pre> In this case, I'm really only waiting on jQuery and jQuery UI to load. If there were more dependent scripts, I would likely create an array of scripts that need to be loaded, and check all of their completion every turn through the getDependencies method. <h3>Embedding Styles</h3> With the supporting code written, we're now ready to work on our main method. This is where bookmarklets really differ based on the task at hand. In my case, I'm creating a visual element on the page, complete with styles to match the target site. This works pretty much as expected, with a single caveat: any style definitions you create must be at the very bottom of your appended script. Internet Explorer has a nasty habit of inconsistently handling styles and scripts when appended to the DOM. For some reason beyond my understanding, appended style definitions, whether via script or ajax calls, only work if they are at the very bottom of the appended code. This is fantastically fun to figure out on your own, so hopefully I've saved you some trouble.
<pre><code class="language-javascript"> // // main // function main() { // only do this the first time the bar is loaded on the page if ($(&quot;#myBar&quot;).length == 0) { // append the styles and bar var barHtml = &quot;&lt;div id='myBar'&gt;\ &lt;div id='myBar-main' class='dragOff'&gt;\ &lt;span id='myBar-thumbs'&gt;&lt;/span&gt;\ &lt;span id='myBar-text'&gt;drag images to the mainbar&lt;/span&gt;\ &lt;span id='myBar-buttons'&gt;\ &lt;a href='#' id='doneLink'&gt;done&lt;/a&gt;\ &lt;a href='#' id='cancelLink'&gt;cancel&lt;/a&gt;\ &lt;/span&gt;\ &lt;/div&gt;\ &lt;/div&gt;\ &lt;style type='text/css'&gt;\ #myBar {color: #FFFFFF; font-size: 130%; font-weight: bold; left: 0; position: fixed; text-align: center; top: 0; width: 100%; z-index: 99998; display: none; }\ #myBar-main {border-bottom: 3px solid #000000; padding: 7px 0;}\ #myBar-buttons { display: block; float: right; margin-right: 20px; }\ #myBar-buttons a,\ #myBar-buttons a:visited,\ #myBar-buttons a:link,\ #myBar-buttons a:active,\ #myBar-buttons a:hover\ { padding: 4px; font-size: 0.7em; border: 2px solid #008600; background-color: #00cb00; color: #FFFFFF; text-decoration: none; }\ #myBar-thumbs img { padding-left: 2px; padding-right: 2px; cursor: hand; }\ .my-hover { border: 3px solid #4476b8 }\ .dragOff { background-color: #4476b8; }\ .dropHover{background-color: #FF0000; border: 1px dashed #e5a8a8;}\ .dragActive {background-color: #759fd6}\ .dropHighlight{border: 1px solid #000000;}\ .dragHelper {z-index: 99999; border: 1px solid #000000;}\ &lt;/style&gt;&quot;; $(&quot;body&quot;).append(barHtml); </code></pre> This code simply creates a formatted div and adds it to the top of the page. <h3>Cleaning up the mess</h3> If you look at the generated HTML above, you'll notice that I include a cancel link. I like to give the user the option to cancel out of using the current bookmarklet, and even relaunch the bookmarklet without issue. So when you're done, make sure to test closing and re-launching the code. I suggest keeping all of your elements on the page, and simply hiding them from the user: <pre><code class="language-javascript"> // // myBar close evnet // $(&quot;#cancelLink&quot;).click(function(e) { // hide the bar $(&quot;#myBar&quot;).fadeOut(750); // remove any img classes or handlers $(&quot;img&quot;).removeClass('my-hover').unbind().draggable(&quot;destroy&quot;); // reset the thumbnail span $(&quot;#myBar-thumbs&quot;).html(''); // reset the text $(&quot;#myBar-text&quot;).html(&quot;drag images to the mybar&quot;); }); </code></pre> And for now, that's it. For the source to this project, visit my <a href="https://github.com/JustinBeckwith/Chogger-Bookmarklet" target="_blank">GitHub</a>. Tue, 28 Dec 2010 00:00:00 +0000 http://jbeckwith.com/2010/12/28/bootstrapping-image-based-bookmarklets/ http://jbeckwith.com/2010/12/28/bootstrapping-image-based-bookmarklets/ Virtual Labs <img src="/images/2010/12/lab-header.png" alt="" title="Lab Header" width="475" height="230" /> When a student takes a course in chemistry, it is often accompanied by a hands on lab. After sitting through a lecture, and performing homework, students need to reinforce the learned concepts by doing. Why should technology education be any different? VTE Virtual Labs provide a sand-boxed environment for students to practice interacting with simple or complex ephemeral computing environments. 
These environments may be designed by a course instructor or instructional designer to promote learning by interacting with a real (as real as it needs to be) system. Especially useful for security research, these systems may contain full environments including domain controllers, mail servers, and web servers running various versions of Windows or Linux. You can even configure internal routing and switching between virtual hosts. Students can install malware, viruses, bots, hacking tools, anything they want - and when they're finished, the environment is completely disposed, with no harm done. The system was designed at the Software Engineering Institute of Carnegie Mellon University, and students interact with it entirely over the web, in the browser. It combines an ASP.NET MVC back end with client elements including jQuery and Adobe Flex. The back end infrastructure includes a BigIP F5, NetApp SAN, Cisco ASA, and vSphere cluster. <h3>The Student Perspective</h3> <hr />After watching a presentation on a particular technical topic, the student may be asked to practice their new skill inside of a virtual lab. To prepare for the lab, VTE also provides demos and quizzes. From a course syllabus, the student will select the lab they would like to launch: <a href="/images/2010/12/course-outline.png"><img class="alignnone" title="Course Outline" src="/images/2010/12/course-outline.png" alt="" style="width:100%" /></a> This will start spooling up the required virtual machines and networking gear in vSphere. The student is presented with a structured set of tasks they are expected to perform in order to reinforce the concepts taught in the previous lecture. As each task is completed, the student's progress is saved, and they may come back at a later time to complete the lab: <a href="/images/2010/12/lab-player-1.png"><img class="alignnone" title="Lab Player - The Platform and Task View" src="/images/2010/12/lab-player-1.png" alt="" style="width:100%" /></a> Students may select any of the virtual machines from the lab platform, and engage in a VNC session that is performed using Adobe Flash. The system is capable of establishing a standard VNC socket connection over port 5900, or using a comet-style connection to proxy the data over ports 80/443. The system should behave just like administering any other remote system: <a href="/images/2010/12/lab-player-3.png"><img class="alignnone" title="Lab Player - Completing a Task" src="/images/2010/12/lab-player-3.png" alt="" style="width:100%" /></a> After the student completes the required steps, they are free to submit the lab, and continue on with the other work in their course. <h3>The author perspective</h3> <hr />Instructors, content authors, and instructional designers have the ability to author their own virtual lab environments. After creating a new lab, you have the option to start with a list of predefined templated virtual machines, similar to what Amazon EC2 provides its users: <a href="/images/2010/12/lab-author-step2.png"><img class="alignnone" title="Lab Author - Base Disks" src="/images/2010/12/lab-author-step2.png" alt="" style="width:100%" /></a> In this example, I am only going to use a single virtual machine. It's entirely acceptable to use multiple virtual machines and multiple networking devices. After all of the machines have been dragged to the stage, they need to be prepared for an initial task authoring state. All this really means is that we're going to copy the base image we started with, and make any changes needed for the specifics of this lab.
Examples would include installing custom software, installing the latest patches, or creating files needed in order to complete the lab. The final state of these machines in this step will represent the starting state students see when they launch the exercise: <a href="/images/2010/12/lab-author-step3.2.png"><img class="alignnone" title="Lab Authoring - Preparing the Virtual Machines" src="/images/2010/12/lab-author-step3.2.png" alt="" style="width:100%" /></a> After the author has placed all of the machines in the desired start state, you can begin writing out the individual tasks of the lab. For longer labs, several exercises may be used. A single exercise should encompass work that can be completed in one sitting. Several exercises may be combined to create a lab with a broader theme. For example, if you wanted to create a lab on securing Linux, you would likely have multiple exercises including 'Installing and configuring the firewall', and 'User management'. An exercise may contain multiple tasks - a task is a discrete step that can be completed relatively quickly. Tasks contain a brief description of what the student is supposed to be doing in this particular step, and may contain a screen-shot of the desired result: <a href="/images/2010/12/lab-author-step4.2.png"><img class="alignnone" title="Lab Authoring - Tasks" src="/images/2010/12/lab-author-step4.2.png" alt="" style="width:100%" /></a> Upon completion of these steps, the lab can be made available to students. For more information, visit <a title="Virtual Labs" href="http://vte.cert.org/labs/" target="_blank">http://vte.cert.org/labs/</a>. Wed, 22 Dec 2010 00:00:00 +0000 http://jbeckwith.com/2010/12/22/virtual-labs/ http://jbeckwith.com/2010/12/22/virtual-labs/ Using Ant with Adobe Flex - Part 1 <a href="/images/2010/12/build-screenshot1.png"><img title="build-screenshot" src="/images/2010/12/build-screenshot1.png" alt="" width="430" height="290" /></a> Welcome to the first part in a multi-part series on building <a title="Adobe Flex" href="http://www.adobe.com/devnet/flex.html" target="_blank">Adobe Flex</a> projects using <a title="The Apache Ant Project" href="http://ant.apache.org/" target="_blank">The Apache Ant Project</a>. So why would we want to use Ant to build our Flex projects?  Flash Builder does a great job of building our ActionScript and MXML.  But it does not do a great job of integrating into our existing automated build frameworks.  For those of us who have been writing Java in an enterprise environment, Ant is common knowledge.  If you've spent any time working with the Microsoft .NET platform, you may have been exposed to <a title="NAnt" href="http://nant.sourceforge.net/" target="_blank">NAnt</a> or <a title="MSBuild" href="http://msdn.microsoft.com/en-us/library/0k6kkbsd.aspx" target="_blank">MSBuild</a>.  The idea is that we need to have a reliable, repeatable build process that can execute outside of the context of our development environment.  For my team, this means an independent build server (in my case, a virtual machine).  An independent build server means nightly builds, and software that can run without the user at the keys. Before we get started, I think it's a good idea to run through the list of tools I'm using for this article: <ul> <li>Apache Ant - v.1.8.1</li> <li>Flash Builder - v.4.0.1</li> <li>Flex SDK - v.3.5.0, v.4.1.0</li> </ul> So let's get started! <h3>Download, Install, and Configure Ant</h3> The first step is to download Ant.
At the time of this article, you can download the binaries at http://ant.apache.org/bindownload.cgi. The binaries are distributed as a *.zip file, so we need to unpack the tool in a place that makes sense. I chose to create a directory structure that was consistent with other installed software on my system: C:\Program Files (x86)\Apache\apache-ant-1.8.1 <a href="/images/2010/11/ant-install-folder1.png"><img class="alignnone size-full wp-image-25" title="ant-install-folder" src="/images/2010/11/ant-install-folder1.png" alt="" width="536" height="360" /></a> After Ant is installed in the appropriate location for your system, you need to create/modify a few system variables in order to use it. Start by right-clicking on 'Computer' and navigating to 'Properties'. Click on the 'Advanced System Settings' option, and then click on the 'Environment Variables' button. The variable you need to create is ANT_HOME. Under system variables, click on the 'New...' button. Enter the name ANT_HOME, and enter the path you used to install Ant. For me, this is 'C:\Program Files (x86)\Apache\apache-ant-1.8.1': <a href="/images/2010/11/ANT_HOME1.png"><img class="alignnone size-full wp-image-40" title="Setting Environment Variables" src="/images/2010/11/ANT_HOME1.png" alt="" width="617" height="362" /></a> We also need to modify the PATH variable, which will allow us to invoke Ant from the command line. Find the PATH variable in your system variables, and choose 'Edit...'. At the end of the existing value, add the full path to your Ant installation with '\bin' appended. For me, this is 'C:\Program Files (x86)\Apache\apache-ant-1.8.1\bin;'. We are now ready to use Ant. <h3>Configuring The Flex SDK</h3> For the purposes of this post, I am going to assume that you've already installed Flash Builder. In order for Ant to find the Flex SDK, we need to create an environment variable that points to the appropriate location. Instead of creating an environment variable that points to a specific SDK directory, I like to create a variable that points to the root of all SDKs. This lets us choose the appropriate SDK version inside of the build file, and makes it easy to build projects that target different SDK versions. Create a new environment variable named FLEX_HOME. Set the path to the root of your Flex SDK installations; for me this is: 'C:\Program Files (x86)\Adobe\Adobe Flash Builder 4\sdks'. In the case of an independent build machine, you can install the Flex SDKs you need independent of Flash Builder.
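To see why pointing FLEX_HOME at the SDK root pays off, here is a minimal sketch of how a build file can pull in that environment variable and select a specific SDK version (the '4.1.0' folder name is just an example - substitute whichever SDK version your project targets):

<pre><code>&lt;!-- Expose system environment variables as Ant properties, prefixed with 'env.' --&gt;
&lt;property environment="env"/&gt;
&lt;!-- env.FLEX_HOME points at the root of all SDKs; pick the version for this build --&gt;
&lt;property name="FLEX_HOME" value="${env.FLEX_HOME}/4.1.0"/&gt;
</code></pre>

Switching a project to a different SDK version is then a one-line change in the build file, rather than a machine-wide environment change.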
<h3>Configuring Flash Builder to Invoke Ant (optional)</h3> Generally, I invoke my Ant scripts from the command line. If you're working from a development machine, you may choose to configure Flash Builder to invoke your Ant scripts directly from the IDE. To get this working, I followed the tutorial listed here: <a href="http://www.zoltanb.co.uk/Flash-Articles/fb4-standalone-how-to-install-ant-in-flash-builder-4-premium.php" target="_blank">http://www.zoltanb.co.uk/Flash-Articles/fb4-standalone-how-to-install-ant-in-flash-builder-4-premium.php</a> To enable Ant from Flash Builder, use the following steps: <ol> <li>Go to Help &gt; Install New Software</li> <li>Click on Available Software Sites</li> <li>Click on 'Add...'</li> <li>Enter the name 'Galileo' and the location <a title="http://download.eclipse.org/releases/galileo/" rel="nofollow" href="http://download.eclipse.org/releases/galileo/">http://download.eclipse.org/releases/galileo/</a></li> <li>Go back to Help &gt; Install New Software</li> <li>Select Galileo from the drop-down</li> <li>Wait until the list is populated - it might take a long time!</li> <li>Type 'Eclipse Java' in the search box to narrow down the results</li> <li>Select Eclipse Java Development Tools</li> <li>Click on Next</li> <li>Accept the terms and click on Finish</li> <li>Click on Yes to restart FB4 and apply your changes</li> <li>Go to Window &gt; Other Views</li> <li>Select Ant and click OK</li> </ol> These steps will allow you to build your project in Flash Builder using Ant. Now our environment is set up and configured. In the next part of this series, I will go over how to write your Ant scripts.
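As a preview of where the series is headed, below is a rough sketch of a complete build file that compiles a Flex application with the mxmlc Ant task. Treat it as a sketch rather than a finished recipe - the project name, source file, and output path are placeholders, and I'll walk through the details (including the flexTasks library that ships in each SDK's ant/lib folder) in the next post:

<pre><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt;
&lt;project name="example-flex-app" basedir="." default="compile"&gt;

  &lt;!-- Expose system environment variables as Ant properties --&gt;
  &lt;property environment="env"/&gt;
  &lt;!-- The Flex Ant tasks expect FLEX_HOME to point at a single SDK --&gt;
  &lt;property name="FLEX_HOME" value="${env.FLEX_HOME}/4.1.0"/&gt;
  &lt;!-- Load the mxmlc/compc task definitions that ship with the SDK --&gt;
  &lt;taskdef resource="flexTasks.tasks" classpath="${FLEX_HOME}/ant/lib/flexTasks.jar"/&gt;

  &lt;target name="compile"&gt;
    &lt;!-- Compile the main application into a SWF --&gt;
    &lt;mxmlc file="src/Main.mxml" output="bin/Main.swf"&gt;
      &lt;load-config filename="${FLEX_HOME}/frameworks/flex-config.xml"/&gt;
    &lt;/mxmlc&gt;
  &lt;/target&gt;

&lt;/project&gt;
</code></pre>

With something like this saved as build.xml in the project root, running 'ant' from the command line (or from the Ant view we just enabled in Flash Builder) produces the SWF without the IDE in the loop.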
Wed, 15 Dec 2010 00:00:00 +0000 http://jbeckwith.com/2010/12/15/using-ant-with-adobe-flex-part-1/ http://jbeckwith.com/2010/12/15/using-ant-with-adobe-flex-part-1/ Virtual Training Environment The Virtual Training Environment (VTE) is a Learning Management System designed at the Software Engineering Institute of Carnegie Mellon University. The system is designed to provide students and instructors with a self-managed ecosystem, including user-generated content and aspects of social networking. It may be used for independent learners, synchronous instruction, or semi-synchronous instruction. Courses may be built using SCORM content, RECast presentations, podcasts, demos, quizzes, surveys, assignments, or virtual labs. I am going to do a detailed write-up on this system in the future, but until our launch, here is a gallery of screenshots: <a href="/images/2010/12/lab-section-details.png"><img src="/images/2010/12/lab-section-details.png" alt="" title="LMS Section Details" /></a> <a href="/images/2010/12/lms-recast.png"><img src="/images/2010/12/lms-recast.png" alt="" title="LMS Launch RECast" /></a> <a href="/images/2010/12/lms-notifications.png"><img src="/images/2010/12/lms-notifications.png" alt="" title="LMS Notifications" /></a> <a href="/images/2010/12/lms-enroll.png"><img src="/images/2010/12/lms-enroll.png" alt="" title="LMS Course Enrollment" /></a> <a href="/images/2010/12/lms-contact-instructors.png"><img src="/images/2010/12/lms-contact-instructors.png" alt="" title="LMS Contact Instructors" /></a> For more information, visit <a title="VTE" href="http://vte.cert.org/lms/" target="_blank">http://vte.cert.org/lms/</a> Mon, 13 Dec 2010 00:00:00 +0000 http://jbeckwith.com/2010/12/13/virtual-training-environment/ http://jbeckwith.com/2010/12/13/virtual-training-environment/ RECast <img src="/images/2010/12/recast-header.png" title="RECast - video for online education" width="255" height="90" class="alignnone size-full wp-image-303" /> RECast is a video playback system designed at the Software Engineering Institute of Carnegie Mellon University. This system focuses on providing students with an experience as close as possible to sitting in the actual classroom. Let's face it - training is a hassle. On-site classes are expensive, require travel, and require everyone to learn at the same time. RECast aims to fix this problem by providing the same material online with a unique learning experience. RECast combines an ASP.NET MVC back end with client elements including jQuery and Adobe Flex. <h3>The Student Perspective</h3> <hr /> Typically, RECast is used with a Learning Management System. The current release is intended to integrate with the <a href="http://vte.cert.org/lms/" target="_blank">Virtual Training Environment</a> at Carnegie Mellon University. When a student enrolls in a course, they are presented with an outline of material to complete - think of this as the course syllabus. Students can watch lectures and demos, complete virtual labs, take quizzes, or interact with any content that has been released in <a href="http://en.wikipedia.org/wiki/Sharable_Content_Object_Reference_Model" target="_blank">SCORM</a> format: <a href="/images/2010/12/lab-section-details1.png"><img src="/images/2010/12/lab-section-details1.png" alt="" title="Section Details" class="aligncenter size-full wp-image-296" /></a> Content that is authored in RECast is launched in the RECast player. This player gives students the best possible re-creation of the original learning environment: a view of the instructor, plus any supplemental materials included in the course. RECast supports multi-track video, video with slide presentations, slides over audio, or just plain audio. Any media imported into the system is transcribed and indexed, allowing students to read the lecture at their own pace and search the content of the media. The presentation below is a typical RECast presentation: <a href="/images/2010/12/player.png"><img src="/images/2010/12/player.png" alt="" title="RECast Player" class="aligncenter size-full wp-image-298" /></a> As the student watches the lecture, they may wish to take notes. RECast supports sticky notes and transcript highlighting. If the user wants to print a copy of the lecture, notes and transcripts are included with any slide presentations. For registered users, these notes and highlights are preserved, along with their progress, for the next time they launch the video: <a href="/images/2010/12/player-advanced.png"><img src="/images/2010/12/player-advanced.png" alt="" title="The player includes sticky notes and highlighting" class="aligncenter size-full wp-image-299" /></a> Now that I've reviewed the student experience, let's talk a little about how content is created. <br /> <br /> <h3>The Author Perspective</h3> <hr /> RECast is designed to allow the import of most types of media, and to support most types of presentations. This means supporting standard slide presentations, voice-over slides, podcasts, or screencasts. Authors in the system are given the option to choose a presentation type: <a href="/images/2010/12/new-session-info.png"><img src="/images/2010/12/new-session-info.png" alt="" title="Create New Session" /></a> After some introductory details, the author can import any lecture material that has been prepared from the course capture. This includes any videos, PowerPoint presentations, images, or audio tracks. The media is uploaded, queued, and transcoded into the appropriate format for our system. This can take a little bit of time! 
<a href="/images/2010/12/asset-uploader.png"><img src="/images/2010/12/asset-uploader.png" alt="" title="Asset Uploader" /></a> After all of the content has been uploaded to the system, authors can start to build their presentation. Currently RECast supports two tracks - People and Content. The 'People' track generally includes a video of the speaker, and the 'Content' track generally includes a slide presentation. As part of the import process, videos are automatically transcribed, and made available for edits by the content author: <a href="/images/2010/12/assembler.png"><img src="/images/2010/12/assembler.png" alt="" title="Session Create - Assembler" /></a> After layout out the content on a timeline, authors have the option to create multiple clips. Think of a clip as a subset of a session - a recording session may include 3 hours of recorded video content, but we don't really want to present all of that at once to the user. Instead, try splitting up the video into smaller consumable chunks (we aim for under 20 minutes). Now that you've created the session, it will appear under your list of available sessions: <a href="/images/2010/12/session-list.png"><img src="/images/2010/12/session-list.png" alt="" title="List of Sessions" /></a> To make the clips available to students, you need to publish them to an LMS: <a href="/images/2010/12/publishing-point.png"><img src="/images/2010/12/publishing-point.png" alt="" title="LMS Publishing Point"/></a> And that's it! For more information, visit <a title="RECast" href="http://vte.cert.org/recast/" target="_blank">http://vte.cert.org/recast/</a>. Mon, 13 Dec 2010 00:00:00 +0000 http://jbeckwith.com/2010/12/13/recast/ http://jbeckwith.com/2010/12/13/recast/