Please take this personally

01 February 2015 Posted Under: Product Management
 

A few weeks ago I got pulled into a meeting. There’s another team at Microsoft that’s using our SDK to build their UI, and they had a few questions. Their devs had a chance to get their hands on our SDK, and like most product guys, I was interested in getting some unfiltered feedback from an internal team. Before getting into details, someone dropped the phrase “Please don’t take this personally, but…”

The feedback that comes after that sentence is really important. We write it down. We share it with the team. We stack it up against other priorities, compare it with feedback from teams who have similar pain points, and use it to find a way to make our product better. Customer feedback (from an internal team or external customer) is so incredibly critical that any product team has a similar pattern/process for dealing with it. So what’s the problem?

Any feedback that starts with “Don’t take this personally” really pisses me off. When you say this to someone, you’re making one of two judgments about this person:

  1. They are not personally invested in their work. They go to their job, they do whatever work is put in front of them, and then they go home. If what they’ve made is not good, it doesn’t bother them.

  2. They are personally invested in their work. They want to create something amazing, and will go to great lengths to do so. Whatever you’re about to say - despite your warning - they’re going to take it personally.

For me, that expression elicits a sort of Marty McFly “nobody calls me chicken” response.

What’s more personal to me than my product!? I work at Microsoft because I genuinely believe it’s the best place for me to build stuff that has a real, tangible impact. I went through years of school so I could do *this*. I moved my family across the country. I work 50-60 hours a week (probably more than I should) because I want to build *this*. My product is in many ways a reflection of me. What could possibly be more personal?

Does this mean I don’t want criticism? Of course I do! Objective criticism from an informed customer who has used your product is the greatest gift a product manager can receive. It’s how we get better. Just expect me to take it personally.

 
 

Comparing Go and .NET

04 January 2015 Posted Under: Go
 

"The gopher image is Creative Commons Attributions 3.0 licensed. Credit Renee French."

2014 was a crazy year. I spent most of the year thinking about client side code while working on the new Azure Portal. I like to use the holiday break as a time to hang out with the family, disconnect from work, and learn about something new. I figured a programming language used for distributed systems was about as far from client side JavaScript as I could get for a few weeks, so I decided to check out golang.

Coming at this from zero - I used a few resources to get started:

  • Tour of Go - this was a great getting started guide that walks step by step through the language.
  • Go by Example - I actually learned more from Go by Example than the tour. It was great.
  • The GitHub API wrapper in Go - I figured I should start with some practical code samples written by folks at Google. When in doubt, I used this to make sure I was doing things ‘the go way’.

I’m a learn-by-doing kind of guy - so I decided to learn by building something I’ve built in the past: an API wrapper for the Yelp API. A few years ago, I was working with the ineffable Howard Dierking on a side project to compare RoR to ASP.NET. The project we picked needed to work with the Yelp API - and we noticed the lack of a NuGet package that fit the bill (for the record, there was a gem). To get that project kickstarted, I wrote a C# wrapper over the Yelp API - so I figured, why not do the same for Go? You can see the results of these projects here:

  • YelpSharp - the original C# wrapper
  • go-yelp - the new Go wrapper (github.com/JustinBeckwith/go-yelp)

To get a feel for the differences, it’s useful to poke around the two repositories and compare apples to apples. This isn’t a “Why I’m rage quitting .NET and moving to Go” post, or a “Why Go sucks and is doing it wrong” post. Each stack has its strengths and weaknesses. Each is better suited for different teams, projects, and development cultures. I found myself wanting to understand Go in terms of what I already know about .NET and Node.js, and wishing there was a guide to bridge those gaps. So I guess my goal is to make it easier for those coming from a .NET background to understand Go, and to get a feel for how it relates to similar concepts in the .NET world. Here we go!

disclaimer: I’ve been writing C# for 14 years, and Go for 14 days. Please take all judgments of Go with a grain of salt.

Environment Setup

In the .NET world, after you install Visual Studio - you’re really free to set up a project wherever you like. By default, projects are created in ~/Documents, but that’s just a default. When you need to reference another project - you create a project reference in Visual Studio, or you can install a NuGet package. Either way - there really aren’t any restrictions on where the project lives.

Go takes a different approach. If you’re getting started, it’s really important to read/follow this guide:

How to write go code

All Go code you write goes into a single root directory, which has a directory for each project / namespace. You tell the go toolchain where to find that directory via the $GOPATH environment variable. As you can see, on my box I have all of the pieces I’ve written or played around with here:

$GOPATH

For a single $GOPATH, you have one set of dependencies, and you keep a single version of the go toolchain. It felt a little uncomfortable, and at times it made me think of the GAC. It’s also pretty similar to Ruby. Ruby has RVM to solve this problem, and Go has GVM. If you’re working on different Go projects that have different requirements for runtime version / dependencies - I’d imagine you want to use GVM. Today, I only have one project - so it is less of a concern.
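If you ever want to check which workspace the toolchain is actually using, the standard go/build package exposes it. Here’s a quick throwaway sketch (just a diagnostic, not part of go-yelp):

package main

import (
	"fmt"
	"go/build"
)

func main() {
	// build.Default is the build context the go tool uses; its GOPATH
	// field reflects the $GOPATH environment variable.
	fmt.Println("GOPATH:", build.Default.GOPATH)
}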

Build & Runtime

In .NET land we use msbuild for compilation. On my team we use Visual Studio at dev time, and run msbuild via Jenkins on our CI server. This is a pretty typical setup. At compile time, C# code is compiled down into IL, which is then just-in-time compiled at runtime into the native instruction set of the host system.

Go is a little different, as it compiles directly to native code for the current platform. If I compile my yelp library on OS X, it creates an executable that will run on my current machine. This binary is not interpreted, and it is not IL - it is a good ol’ native executable. When linking occurs, the full native Go runtime is embedded in your binary. It has much more of a C++ / gcc kind of feel. I’m sure this doesn’t hurt in the performance department - which is one of the reasons folks move from Python/Ruby/Node to Go.

Compilation of *.go files is done by running the go build command from within the directory that contains your go files. I haven’t really come across many projects using complex builds - it’s usually sufficient to just use go build. Outside of that - it seems like most folks are using makefiles to perform build automation. There’s really no equivalent of a *.csproj or *.sln file in Go - so there’s no baked-in file you would run through an msbuild equivalent. There are just *.go files in a directory that you run the build tool against. At first I found all of this alarming. After a while - I realized that it mostly “just worked”. It feels very similar to the csproj-less build system of ASP.NET vNext.
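To make the “no project file” point concrete, here’s a complete, buildable example (a hypothetical hello program, nothing to do with go-yelp). Drop it in a directory, run go build, and you get a native executable:

// hello.go - the package declaration is all the 'project system' Go needs.
// There is no csproj or sln equivalent to maintain.
package main

import "fmt"

func main() {
	fmt.Println("compiled with nothing but 'go build'")
}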

Package management

In the .NET world, package management is pretty well known: it’s all about NuGet. You create a project, add a nuspec, compile a nupkg with your binaries, and publish it to nuget.org. There are a lot of NuGet packages that fall under the “must have” category - things like JSON.NET, ELMAH and even ASP.NET MVC. It’s not uncommon to have 30+ packages referenced in your project.

In .NET, we have a packages.config that contains a list of dependencies. This is nice, because it explicitly lays out what we depend upon, and the specific version we want to use:

<packages>
  <package id="Newtonsoft.Json" version="4.5.11" targetFramework="net40" />
  <package id="RestSharp" version="104.1" targetFramework="net40" />
</packages>

Go takes a bit of a different approach. The general philosophy of Go seems to trend towards avoiding external dependencies. I’ve found Blake Mizerany’s talk to be pretty representative of the sentiment in the community.

In Go - there’s no equivalent of packages.config. It just doesn’t exist. Instead - dependency installation is driven from Git/Hg repositories or local paths. The dependency is installed into your go path with the go get command:

> go get github.com/JustinBeckwith/go-yelp/yelp

This command pulls down the relevant sources from the Git/Hg repository, and builds the binaries specific to your OS. To use the dependency, you don’t reference a namespace or dll - you just import the library using the same url used to acquire the library:

import "github.com/JustinBeckwith/go-yelp/yelp"
...
client := yelp.New(options)
result, err := client.DoSimpleSearch("coffee", "seattle")

When you compile, the toolchain walks through each *.go file, finds the list of external (non-BCL) imports, and resolves them from your $GOPATH - with go get fetching a package’s dependencies recursively when you install it. This is both awesome and frightening at the same time. It’s great that Go doesn’t require an explicit dependency manifest. It’s great that I don’t need to think of the package and the namespace as different entities. It’s not cool that I cannot choose a specific version of a package. No wonder the Go community is skeptical of external dependencies - I wouldn’t reference the tip of the master branch of any project and expect it to keep working for the long haul.

To get around this limitation, a few package managers have started to pop up in the community. There are a lot of them. Given the lack of a single winner in this space, I chose to write my package ‘the go way’ and not attempt to use a package manager.

Tooling

In the .NET world - Visual Studio is king. I know a lot of folks that use things like JetBrains or SublimeText for code editing (I’m one of those SublimeText folks), but really it’s all about VS. Visual Studio gives us project templates, IntelliSense, builds, tests, refactoring, code outlines - you get it. It’s all in the box. A giant box.

With Go, most developers tend to use a more stripped-down code editor - there are a lot of folks using vim, SublimeText, or Notepad++.

You can find a good conversation about the topic on this reddit thread. Personally - I’m comfortable with SublimeText, so I went with that + the GoSublime plugin. It gave me syntax highlighting, auto-format on save, and some lightweight IntelliSense for core packages. That said, LiteIDE feels a little closer to a full-featured IDE:

LiteIDE

There are a lot of options out there - which is a good thing :) Go comes with a variety of other command line tools that make working with the framework easier:

  • go build - builds your code
  • go install - builds the code, and installs it in the $GOPATH
  • go test - runs all tests in the project
  • gofmt - formats your source code matching go coding standards
  • gocov - performs code coverage analysis

I used all of these while working on my library.

Testing

In YelpSharp, I have the typical unit test project included with my package. I have several test files created, each of which has several test functions. I can then run my tests through Visual Studio or the test runner. A typical test would look like this:

[TestMethod]
public void VerifyGeneralOptions()
{
    var y = new Yelp(Config.Options);

    var searchOptions = new SearchOptions();
    searchOptions.GeneralOptions = new GeneralOptions()
    {
        term = "coffee"
    };

    searchOptions.LocationOptions = new LocationOptions()
    {
        location = "seattle"
    };

    var results = y.Search(searchOptions).Result;
    Assert.IsTrue(results.businesses != null);
    Assert.IsTrue(results.businesses.Count > 0);            
}

The accepted pattern in Go is to write a corresponding <filename>_test.go file for each Go file. Every function whose name starts with Test (e.g. Test<RestOfFunctionName>) is executed as part of the test suite. By running go test, you run every test in the current project. It’s pretty convenient, though I found myself wishing for something that auto-compiled my code and auto-ran tests (similar to the grunt/mocha/concurrent setup I like to use in node). A typical test function in Go looks like this:

// TestGeneralOptions will verify search with location and search term.
func TestGeneralOptions(t *testing.T) {
	client := getClient(t)
	options := SearchOptions{
		GeneralOptions: &GeneralOptions{
			Term: "coffee",
		},
		LocationOptions: &LocationOptions{
			Location: "seattle",
		},
	}
	result, err := client.DoSearch(options)
	check(t, err)
	assert(t, len(result.Businesses) > 0, containsResults)
}

The assert I used here is not baked in - there are no asserts in Go. For code coverage reports, the gocov tool does a nice job. To automatically run tests against my GitHub repository, and auto-generate code coverage reports - I’m using Travis CI and Coveralls.io. I’m planning on writing up another post on the tools you can use to build an effective open source Go library - so more on that later :)
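Since there are no built-in asserts, the check and assert helpers used in the test above are hand-rolled. A minimal sketch of what they might look like (my hypothetical version - not necessarily the exact helpers in go-yelp):

// check fails the test immediately when an unexpected error occurs.
func check(t *testing.T, err error) {
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
}

// assert reports a failure with the given message when the condition is false.
func assert(t *testing.T, condition bool, message string) {
	if !condition {
		t.Error(message)
	}
}

Here containsResults would just be a message string - something like "the query should return results".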

Programming language

Finally, let’s take a look at some code. C# is amazing. It’s been around now for 15 years or so, and it’s grown methodically (in a good way). In terms of basic syntax, it’s your standard C derivative language:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace YelpSharp.Data.Options
{
    /// <summary>
    /// options for locale
    /// </summary>
    public class LocaleOptions : BaseOptions
    {
      
        /// <summary>
        /// ISO 3166-1 alpha-2 country code. Default country to use when parsing the location field.
        /// United States = US, Canada = CA, United Kingdom = GB (not UK).
        /// </summary>
        public string cc { get; set; }

        /// <summary>
        /// ISO 639 language code (default=en). Reviews written in the specified language will be shown.
        /// </summary>
        public string lang { get; set; }

        /// <summary>
        /// format the properties for the querystring - bounds is a single querystring parameter
        /// </summary>
        /// <returns></returns>
        public override Dictionary<string, string> GetParameters()
        {
            var ps = new Dictionary<string, string>();
            if (!String.IsNullOrEmpty(cc)) ps.Add("cc", this.cc);
            if (!String.IsNullOrEmpty(lang)) ps.Add("lang", this.lang);
            return ps;
        }
    }
}

C# supports both static and dynamic typing, but generally trends towards a static type style. I’ve always appreciated the way C# can appeal to both new and experienced developers. The best part about the language in my opinion has been the steady, thoughtful introduction of new features. Some of the things that happened between C# 1.0 and C# 5.0 (the current version) include:

  • Generics (C# 2.0)
  • LINQ and lambda expressions (C# 3.0)
  • The dynamic keyword (C# 4.0)
  • async / await (C# 5.0)

There’s more great stuff coming in C# 6.0.

With Go - I found most language features to be… well… missing. It’s a really basic language - you get interfaces (not the way we know them), maps, slices, arrays, and some primitives. It’s very minimal - and this is by design. I was surprised when I started using Go and found a few things missing (in no particular order):

  • Generics
  • Exception handling
  • Method overloading
  • Optional parameters
  • Nullable types
  • Implementing an interface explicitly
  • foreach, while, yield, etc

To understand why Go doesn’t have these features - you have to understand the roots of the language. Rob Pike (one of the designers of Go) explains the problems they were trying to solve, and the design decisions behind the language, in his post “Less is exponentially more”. The idea is to provide a set of base primitive constructs: only the ones that are absolutely required.

Interfaces, methods, and not-classes

Having written a lot of C# - I got used to having all of these features at my disposal. I got used to traditional OOP features like classes, inheritance, method overloading, and explicit interfaces. I also write a lot of JavaScript - so I understand (and love) dynamic typing. What’s weird about Go is that it is both statically typed - and doesn’t provide many of the OOP features I’ve grown to lean upon.

To see how this plays out - let’s look at the same structure written in Go:

package yelp

// LocaleOptions provide additional search options that enable returning results
// based on a given country or locale.
type LocaleOptions struct {
	// ISO 3166-1 alpha-2 country code. Default country to use when parsing the location field. 
	// United States = US, Canada = CA, United Kingdom = GB (not UK).
	cc   string 
	
	// ISO 639 language code (default=en). Reviews written in the specified language will be shown.
	lang string
}

// getParameters will reflect over the values of the given struct, and provide a type appropriate 
// set of querystring parameters that match the defined values.
func (o *LocaleOptions) getParameters() (params map[string]string, err error) {
	params = make(map[string]string)
	if o.cc != "" {
		params["cc"] = o.cc
	}
	if o.lang != "" {
		params["lang"] = o.lang
	}
	return params, nil
}

There are a few interesting things to call out from these two samples:

  • The Go sample and C# are close to the same size - XMLDoc in this case really makes C# seem longer.
  • I’m not using a class - but rather a struct. Structs can support methods via the syntax above - if you define a func that takes a pointer to an object of the struct type, it now supports methods.
  • In my C# sample, this structure implements an interface. In Go, you write an interface, and then structures implement it ambiently - there is no implements keyword (see the sketch after this list).
  • Pointers are an important concept in Go. I haven’t had to think about pointers since 2001 (the last time I wrote C++). It’s not a big deal, but not something I expected to run into.
  • Notice that the getParameters() function returns multiple results - that’s new (and kind of cool).
  • The getParameters() method returns an error as one of the potential return values. You need to do that since there is no concept of an exception in Go.
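To make the implicit interface point concrete, here’s a minimal sketch. The optionProvider interface below is hypothetical - it’s not from the real go-yelp source - but because *LocaleOptions defines getParameters with a matching signature, it satisfies the interface automatically:

// optionProvider is satisfied by any type with a matching getParameters
// method - no implements keyword, no explicit declaration anywhere.
type optionProvider interface {
	getParameters() (map[string]string, error)
}

// mergeParams accepts anything that satisfies optionProvider;
// passing &LocaleOptions{lang: "fr"} just works.
func mergeParams(providers ...optionProvider) (map[string]string, error) {
	params := make(map[string]string)
	for _, p := range providers {
		ps, err := p.getParameters()
		if err != nil {
			return nil, err
		}
		for k, v := range ps {
			params[k] = v
		}
	}
	return params, nil
}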

Error handling

Let that one sink in for a moment. Go takes a strange (but effective) approach to error handling. Instead of tossing an exception and expecting the caller to catch and react, many (if not most) functions will return an error. It’s on the caller to check the value of that error, and choose how to react. You can learn more about error handling in Go here. The net result is that I wrote a lot of code like this:

// DoSearch performs a complex search with full search options.
func (client *Client) DoSearch(options SearchOptions) (result SearchResult, err error) {

	// get the options from the search provider
	params, err := options.getParameters()
	if err != nil {
		return SearchResult{}, err
	}

	// perform the search request
	rawResult, _, err := client.makeRequest(searchArea, "", params)
	if err != nil {
		return SearchResult{}, err
	}

	// convert the result from json
	err = json.Unmarshal(rawResult, &result)
	if err != nil {
		return SearchResult{}, err
	}
	return result, nil
}

In this example, the DoSearch method returns multiple values (get used to this), one of which is an error. There are 3 different method calls made in this function - all of which may return an error. For each of them, you need to check the err value and choose how to react - oftentimes just bubbling the error back up through the call stack by hand. I haven’t quite learned to love this aspect of the language yet.

Writing async code

In the previous sample, you may have noticed something fishy. On the following lines, I’m making an HTTP request, checking for an error, and then moving forward:

rawResult, _, err := client.makeRequest(searchArea, "", params)
if err != nil {
	return SearchResult{}, err
}
...

That code is synchronous. When I first wrote this code - I was fairly certain I was making a mistake. Years of callbacks and promises in node, and years of tasks and async/await in C#, had taught me something really clear - synchronous methods that block the thread are bad. But here’s Go - just doing its thing. I thought I was making a mistake, until I started poking around and found a few people with the same misunderstanding. Making a call asynchronous in Go is largely up to the caller, using a goroutine. Goroutines are kind of cool. You essentially point at a function and say ‘run this asynchronously’:

func Announce(message string, delay time.Duration) {
    go func() {
        time.Sleep(delay)
        fmt.Println(message)
    }()  // Note the parentheses - must call the function.
}

Running go <func> in this manner will run the function concurrently in the same process. This does not create a system thread or fork the process - it’s completely internal to the go runtime. Like most things with Go - I was confused and scared at first, as I tried to apply what I know about C# and JavaScript to their model. I haven’t written enough of this style of asynchronous code to have a great feel for the subject, but I plan to spend a lot of time here in the coming weeks.
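For example, here’s how a caller could make the synchronous DoSearch call from earlier asynchronous. This is a hypothetical sketch - the searchResponse type and doSearchAsync function are mine, not part of go-yelp:

// searchResponse carries both the result and the error back to the caller.
type searchResponse struct {
	result SearchResult
	err    error
}

// doSearchAsync runs the blocking DoSearch call on a goroutine and
// returns a channel the caller can read from when it needs the result.
func doSearchAsync(client *Client, options SearchOptions) <-chan searchResponse {
	ch := make(chan searchResponse, 1)
	go func() {
		result, err := client.DoSearch(options)
		ch <- searchResponse{result, err}
	}()
	return ch
}

The caller only blocks when it actually reads from the channel - resp := <-doSearchAsync(client, options) - which starts to feel a lot like awaiting a Task in C#.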

What’s next

My experience so far with Go has been at times frustrating, but certainly not boring. The best advice I can give for those new to the language is to let go of your preconceived notions of how [insert concept or task] works - at times it’s almost like the designers of Go tried to do things differently for the sake of doing things differently. And that’s ok :) I was cursing far less at the end of the project than the beginning, and I’ve now started to move towards understanding why it’s different (except for the lack of generics - that’s just weird). It’s helping me question some of the design decisions I’ve made on my own APIs at work, and it’s helped me better appreciate the niceties of C#.

I’ve really only scratched the surface of what’s out there for Go. Now that I’ve put together a library, I’m going to take the next step and start playing around with revel, which provides an ASP.NET style web framework on top of Go. From there, I’m going to keep on building, and see where this goes. Happy coding!

The gopher image is Creative Commons Attribution 3.0 licensed. Credit Renee French.

 
 

es6: Getting ready for the next version of JavaScript

11 November 2014 Posted Under: JavaScript
 

Rockin' the big stage at Øredev

While attending the developer conference Øredev last week, I had the pleasure of giving a talk on es6. Many of the new features I covered touch on challenges we’ve faced building the Azure Portal. Instead of covering each new feature one by one (Luke Hoban already does a nice job of that) I decided to cover a few high level features that fundamentally affect the way teams build large scale JavaScript applications.

Timeline

  • 00:00 - Intro
  • 01:45 - History of JavaScript
  • 05:00 - Challenges with large scale applications
  • 08:45 - Modules
  • 13:20 - Classes
  • 15:15 - Using the Traceur compiler
  • 17:30 - TypeScript & AMD
  • 20:15 - Scope / What is ‘this’
  • 21:30 - Arrow functions
  • 22:30 - Let vs var
  • 25:00 - Browser support
  • 28:45 - Promises
  • 34:00 - Node.js support
  • 36:50 - More features
  • 37:13 - es7
  • 38:50 - Closing

Watch the video

ES6: GETTING READY FOR JAVASCRIPT VNEXT from Øredev Conference on Vimeo.

Thanks!

 
 

Building web applications with ASP.NET vNext

09 November 2014 Posted Under: asp.net
 

Rockin' the big stage at Øredev

Last week I had the amazing opportunity to give a talk on ASP.NET vNext at the Øredev developer conference. I had a blast - especially the part where I got to show off the new bits to a few hundred people on the big stage. It’s especially fun showing off the new features that open up ASP.NET development with Mono on OSX. There’s a lot of great stuff in this release - the new request pipeline, bin deployable CLR, command line tools, configuration APIs, SublimeText support, fewer dependencies on Visual Studio - and lots of open source.

Timeline

  • 00:00 - Intro
  • 04:20 - Challenges with the current stack
  • 06:25 - Intro to ASP.NET vNext
  • 07:50 - ASP.NET vNext project templates
  • 10:05 - Controllers, Models, Views
  • 12:30 - Reference model
  • 14:45 - project.json
  • 16:30 - Commands
  • 18:20 - KVM, KPM, & K
  • 22:45 - Startup.cs
  • 23:50 - Configuration
  • 26:15 - Services
  • 27:00 - Module registration
  • 31:00 - Open source ASP.NET
  • 34:15 - Publishing with CoreCLR & MVC
  • 36:00 - OSX, Mono, SublimeText
  • 40:15 - Timeline
  • 41:30 - Closing

Watch the video

Building web applications with ASP.NET from Øredev Conference on Vimeo.

Thanks!

 
 

Under the hood of the new Azure Portal

20 September 2014 Posted Under: azure
 

Damn, we look good.

So - I haven’t been doing much blogging or speaking on WebMatrix or node recently. For the last year and a half, I’ve been part of the team that’s building the new Azure portal - and it’s been quite an experience. A lot has been said about the end to end experience, the integration of Visual Studio Online, and even some of the new services that have been released lately. All of that’s awesome, but it’s not what I want to talk about today. As much as those things are great (and I mean, who doesn’t like the design), the real interesting piece is the underlying architecture. Let’s take a look under the hood of the new Azure portal.

A little history

To understand how the new portal works, you need to know a little about the current management portal. When the current portal was started, there were only a handful of services in Azure. Off the top of my head, I think they were:

  • Cloud Services
  • Web sites
  • Storage
  • Cache
  • CDN

Out of the gate - this was pretty easy to manage. Most of those teams were all in the same organization at Microsoft, so coordinating releases was feasible. The portal team was a single group that was responsible for delivering the majority of the UI. There was little need to hand off responsibility for the individual experiences to the teams that wrote the services, as it was easier to keep everything in house. There is a single ASP.NET MVC application, which contains all of the CSS, JavaScript, and shared widgets used throughout the app.

The current Azure portal, in all of its blue glory

The team shipped every 3 weeks, tightly coordinating the schedule with each service team. It works … pretty much as one would expect a web application to work.

And then everything went crazy.

As we started ramping up the number of services in Azure, it became infeasible for one team to write all of the UI. The teams that owned each service were now (mostly) responsible for writing their own UI, inside of the portal source repository. This had the benefit of allowing individual teams to control their own destiny. However - it now meant that we had hundreds of developers all writing code in the same repository. A change made to the SQL Server management experience could break the Azure Web Sites experience. A change to a CSS file by a developer working on virtual machines could break the experience in storage. Coordinating the 3 week ship schedule became really hard. The team was tracking dependencies across multiple organizations, the underlying REST APIs that powered the experiences, and the release cadence of ~40 teams across the company that were delivering cloud services.

Scaling to ∞ services

Given the difficulties of the engineering and ship processes with the current portal, scaling to 200 different services didn’t seem like a great idea with the current infrastructure. The next time around, we took a different approach.

The new portal is designed like an operating system. It provides a set of UI widgets, a navigation framework, data management APIs, and other various services one would expect to find with any UI framework. The portal team is responsible for building the operating system (or the shell, as we like to call it), and for the overall health of the portal.

Sandboxing in the browser

To claim we’re an OS, we had to build a sandboxing model. One badly behaving application shouldn’t have the ability to bring down the whole OS. In addition to that - an application shouldn’t be able to grab data from another, unless by an approved mechanism. JavaScript by default doesn’t really lend itself well to this kind of isolation - most web developers are used to picking up something like jQuery, and directly working against the DOM. This wasn’t going to work if we wanted to protect the OS against badly behaving (or even malicious) code.

To get around this, each new service in Azure builds what we call an ‘extension’. It’s pretty much an application for our operating system. It runs in isolation, inside of an IFRAME. When the portal loads, we inject some bootstrapping scripts into each IFRAME at runtime. Those scripts provide the structured API extensions use to communicate with the shell. This API includes things like:

  • Defining parts, blades, and commands
  • Customizing the UI of parts
  • Binding data into UI elements
  • Sending notifications

The most important aspect is that the extension developer doesn’t get to run arbitrary JavaScript in the portal’s window. They can only run script in their IFRAME - which does not project UI. If an extension starts to fault - we can shut it down before it damages the broader system. We spent some time looking into web workers - but found some reliability problems when using > 20 of them at the same time. We’ll probably end up back there at some point.

Distributed continuous deployment

In this model, each extension is essentially its own web application. Each service hosts its own extension, which is pulled into the shell at runtime. The various UI services of Azure aren’t composed until they are loaded in the browser. This lets us do some really cool stuff. At any given point, a separate experience in the portal (for example, Azure Websites) can choose to deploy an extension that affects only its UI - completely independent of the rest of the portal.

IFRAMEs are not used to render the UI - that’s all done in the core frame. The IFRAME is only used to automate the JavaScript APIs that communicate over window.postMessage().

Each extension is loaded into the shell at runtime from their own back end

This architecture allows us to scale to ∞ deployments in a given day. If the media services team wants to roll out a new feature on a Tuesday, but the storage team isn’t ready with updates they’re planning - that’s fine. They can each deploy their own changes as needed, without affecting the rest of the portal.

Stuff we’re using

Once you start poking around, you’ll notice the portal is a big single page application. That came with a lot of challenges - here are some of the technologies we’re using to solve them.

TypeScript

Like any single page app, the portal runs a lot of JavaScript. We have a ton of APIs that run internal to the shell, and APIs that are exposed for extension authors across Microsoft. To support our enormous codebase, and the many teams using our SDK to build portal experiences, we chose to use TypeScript.

  • TypeScript compiles into JavaScript. There’s no runtime VM, or plug-ins required.
  • The tooling is awesome. Visual Studio gives us (and partner teams) IntelliSense and compile time validation.
  • Generating interfaces for partners is really easy. We distribute d.ts files which partners use to program against our APIs.
  • There’s great integration for using AMD module loading. This is critical to us for productivity and performance reasons. (more on this in another post).
  • JavaScript is valid TypeScript - so the learning curve isn’t so high. The syntax is also largely forward looking to ES6, so we’re actually getting a jump on some new concepts.

Less

Visually, there’s a lot going on inside of the portal. To help organize our CSS and promote reusability, we’ve adopted {LESS}. Less does a couple of cool things for us:

  • We can create variables for colors. We have a pre-defined color palette - less makes it easy to define those up front, and re-use the same colors throughout our style sheets.
  • The tooling is awesome. Similar to TypeScript, Visual Studio has great Less support with full IntelliSense and validation.
  • It made theming easier.

The dark theme of the portal was much easier to make using less

Knockout

With the new design, we were really going for a ‘live tile’ feel. As new websites are added, or new log entries are available, we wanted to make sure it was easy for developers to update that information. Given that goal, along with the quirks of our design (extension authors can’t write JavaScript that runs in the main window), Knockout turned out to be a fine choice. There are a few reasons we love Knockout:

  • Automatic refreshing of the UI - The data binding aspect of Knockout is pretty incredible. We make changes to underlying model objects in TypeScript, and the UI is updated for us.
  • The tooling is great. This is starting to be a recurring theme :) Visual Studio has some great tooling for Knockout data binding expressions (thanks Mads).
  • The binding syntax is pure - We’re not stuck putting invalid HTML in our code to support the specifics of the binding library. Everything is driven off of data-* attributes.

I’m sure there are 100 other reasons our dev team could come up with for why we love Knockout. Especially the ineffable Steve Sanderson, who joined our dev team to work on the project. He even gave an awesome talk on the subject at NDC.

What’s next

I’m really excited about the future of the portal. Since our first release at //build, we’ve been working on new features, and responding to a lot of the customer feedback. Either way - we really want to know what you think.