Refactoring Switch Statements to Dictionaries

posted on 12/13/12 at 07:05:33 pm by Joel Ross

I recently came across some code that was causing an exception because a value was added to an enumeration, and the code never got updated to handle the new value. Fixing it is pretty simple. Just add another case statement, and be done with it.

But fixing it correctly is a bit more work. And any time I'm editing code, my goal is to leave it better than it was when I started. While adding the handling for the missing enumeration value seems fine, it's really not, because it's just putting a band-aid on the issue.

So what's the correct way to fix this? Well, like most things, "correct" is subjective, and as our understanding of software development changes, so does the definition of correct. So take my solution for what it is: my current favorite way to handle situations like this.

First, let's look at the offending code. This is a form that shows a keyboard in our application, and it can be configured differently depending on what is being entered.

private void ShowKeyboard()
{
  KeyboardRequest request = null;

  switch (entryMode)
  {
    case EntryModes.SerialNumber:
      request = GetKeyboardRequestForSerialNumber(View.EnteredValue);
      break;

    case EntryModes.BarCode:
      request = GetKeyboardRequestForBarCode(View.EnteredValue);
      break;

    case EntryModes.LotNumber:
      request = GetKeyboardRequestForLotNumber(View.EnteredValue);
      break;

    case EntryModes.ValveNumber:
      request = GetKeyboardRequestForValveNumber(View.EnteredValue);
      break;
  }

  var response = GetKeyboardEntryFor(request);

  if (response.Status == ServiceResultStatus.Success)
  {
    View.EnteredValue = response.Data;
  }
}

This code has a few problems. First, the main functionality is hidden because the majority of the method is a giant switch statement. Yeah, that section could be extracted out. But that won't solve the second issue, which is that this code has to be updated if we ever add another entry mode for this class.

Moving this to use a dictionary solves both issues. First, let's look at how the code changes:

private IDictionary<EntryModes, Func<string, KeyboardRequest>> getKeyboardRequest =
  new Dictionary<EntryModes, Func<string, KeyboardRequest>>
  {
    { EntryModes.SerialNumber, value => GetKeyboardRequestForSerialNumber(value) },
    { EntryModes.BarCode, value => GetKeyboardRequestForBarCode(value) },
    { EntryModes.LotNumber, value => GetKeyboardRequestForLotNumber(value) },
    { EntryModes.ValveNumber, value => GetKeyboardRequestForValveNumber(value) },
  };

private void ShowKeyboard()
{
  var request = getKeyboardRequest[entryMode](View.EnteredValue);
  var response = GetKeyboardEntryFor(request);

  if (response.Status == ServiceResultStatus.Success)
  {
    View.EnteredValue = response.Data;
  }
}

It's now much easier to see what the method is doing because the focus of the method isn't on how the request is created.

As it's currently written, it doesn't fully solve the second issue - this code still has to be modified when a new enumeration value is added - but it's now a lot easier to extract the initialization of the dictionary out of the class (passed in via the constructor and/or built by an IoC container). And even as it stands, you don't have to modify the method itself to support a new enumeration value.
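
As a rough sketch of that next step (the presenter class name here is made up, and it assumes whoever builds the dictionary - a caller or the IoC container - can get at the request-building methods), the mapping would simply be handed in:

public class KeyboardEntryPresenter
{
    private readonly IDictionary<EntryModes, Func<string, KeyboardRequest>> getKeyboardRequest;

    // The mapping is supplied from the outside, so supporting a new entry mode
    // means registering another entry wherever the dictionary is built - this
    // class never has to change.
    public KeyboardEntryPresenter(IDictionary<EntryModes, Func<string, KeyboardRequest>> getKeyboardRequest)
    {
        this.getKeyboardRequest = getKeyboardRequest;
    }
}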

I've used this method quite a bit to clean up my code, and so far, I've been happy with the results. It makes the code a lot cleaner, and easier to maintain.

Discuss this post

Categories: Development, C#


 

The BusyLight and Skype

posted on 12/06/12 at 06:02:38 pm by Joel Ross

Busylight

A few months back, I came across a post by Scott Hanselman about the BusyLight. I thought it was pretty cool, and I wished it worked with Skype, since I don't use Lync, but I do use Skype on a nearly daily basis. I work from home, so it's not at all uncommon for The Wife or my kids to pop into my office, and having a visual indicator would definitely help limit the interruptions when I'm on a call.

But alas, there was no Skype support, so I didn't think too much about it until I saw a post about hacking the BusyLight and figuring out how to get it to work without Lync driving it. I was intrigued, so of course, I tweeted about it.

Shortly after that, I was contacted by Plenom, the company behind BusyLight. I discussed a few ideas with them, both about my plans and about theirs (an SDK!). As a result, I decided to build SkypeLight. It marries Skype and BusyLight, so whenever I get a call on Skype, the BusyLight turns red.

I first started with the Skype API to determine how I could detect when a call started, ended, or is in progress. The simplest way to do this is to register and reference the Skype4COM dll. From there, you can handle an event called CallStatus.
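
Wiring that up is only a few lines. Here's a rough sketch from memory of the Skype4COM interop - treat the Attach() call and the event delegate signature as approximate, and UpdateCallStatus() is just a made-up name for the check shown next:

using SKYPE4COMLib;

var skype = new Skype();
skype.Attach(); // ask the running Skype client for permission to use its API

// Fires whenever any call changes state; re-check the active calls each time.
skype.CallStatus += (call, status) => UpdateCallStatus(skype);

Inside that handler, I walk the active calls to figure out the overall status: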

var callStatus = CallStatus.NotOnCall;
foreach (var item in skype.ActiveCalls)
{
  if (item is Call)
  {
    var call = item as Call;

    if (call.Status == TCallStatus.clsInProgress
        && (callStatus != CallStatus.OnVideoCall && callStatus != CallStatus.Ringing))
    {
      callStatus = CallStatus.OnAudioCall;
      if (call.VideoStatus == TCallVideoStatus.cvsBothEnabled
          || call.VideoStatus == TCallVideoStatus.cvsReceiveEnabled
          || call.VideoStatus == TCallVideoStatus.cvsSendEnabled)
      {
        callStatus = CallStatus.OnVideoCall;
      }
    }
    if (call.Status == TCallStatus.clsRinging || call.Status == TCallStatus.clsRouting)
    {
      callStatus = CallStatus.Ringing;
      break;
    }
  }
}

DomainEvents.Raise(new CallStatusChanged(callStatus));

When the application starts up, I get a reference to Skype, and when the call status changes, I check each call to determine which kind of call it is. Once that's determined, I use eventing to let any interested parties know that the call status has changed.

Once I proved that I could get call status changes through Skype, I started working on the BusyLight side. As of right now, BusyLight does not have a published / documented SDK, so I don't have what I would consider to be a full implementation. Right now, it shows Green (not on a call), Red (on a call), or Yellow (incoming or outgoing call attempt). The BusyLight supports audio as well as pulsing lights, but I couldn't get any of that working through my hacking. Once an SDK is published, I'll go back and put some finishing touches on it.
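
The listener side then boils down to mapping a call status to a light color. This is only a sketch - the BusyLightColor type, the UpdateBusyLight() method, and the Status property name are placeholders rather than the actual adapter code:

private static readonly IDictionary<CallStatus, BusyLightColor> colorFor =
    new Dictionary<CallStatus, BusyLightColor>
    {
        { CallStatus.NotOnCall,   BusyLightColor.Green },
        { CallStatus.OnAudioCall, BusyLightColor.Red },
        { CallStatus.OnVideoCall, BusyLightColor.Red },
        { CallStatus.Ringing,     BusyLightColor.Yellow },
    };

// Registered for the CallStatusChanged event raised in the snippet above.
public void ChangeCallStatus(CallStatusChanged args)
{
    UpdateBusyLight(colorFor[args.Status]);
}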

The code to actually change the BusyLight isn't really all that interesting. It's basically the same code Tom wrote in his post on hacking it. Once I get audio working, or a pulsating light, then maybe the code will get a little more interesting. If you're interested in the BusyLight adapter I wrote, it's available here. This code is pretty portable. I used it as-is when I added it as a build indicator for Traffic Light.

If you've got a BusyLight and want to use it with Skype, check out the GitHub repo for instructions to set it up. I've been using it for a few weeks now, and it's working great.

One last note: Plenom was nice enough to send me a BusyLight free of charge that I could use. I probably would have ordered one anyway, but in the interest of full disclosure, I figured I'd share that info as well.

Discuss this post

Categories: Development, Software, C#


 

How Technology Helped Me Enjoy Running

posted on 11/29/12 at 06:25:30 pm by Joel Ross

When I was in high school, I used to run quite a bit. Not because I wanted to, but because I had to. I played soccer, and I knew that come the middle of August, I'd be doing 120s, and our coach assumed we'd be in shape. For those unfamiliar with 120s:

Sprint the full length of a typical 120-yard soccer field. Begin the drill on one end line with a coach or teammate timing you. On command, sprint the full length of the field, looking for a time of 18 seconds. Jog back to the start in 25 [we did 30] seconds and rest for 30 more seconds. Complete six repetitions and build up to 10 repetitions.

If you've ever tried doing that in August heat without being in shape, I can tell you that it's not pretty - so ugly that it's motivation enough to run through the spring and summer to ensure that you're in shape when the time comes.

But once high school ended, so did my motivation. I hated running during high school, but I had a reason to do it. Once out of high school, I would have been happy to never run a mile again.

Or so I thought. This spring, The Wife challenged me to get to the gym a couple times a week and be a little more active. Apparently the few trips I made upstairs from The Dungeon to refresh my water weren't enough!

I started running for the first time in more than 15 years back in February. I started out very slow, but I kept at it. I was running indoors, but started using RunKeeper to track how far I was running and roughly how long it took. I started getting to the point where I could run a couple of miles without stopping. Being able to track my progress helped motivate me, but I still wasn't quite ready to say that I liked running.

Eventually, it got warm enough to comfortably run outside, and that's when things changed for me. I went from merely tolerating running to actually enjoying it and looking forward to my lunch hour, which was when I would slip out to go for my run. Being able to visualize my run gave me a better feel for it, and RunKeeper gave me the information I wanted, including a nice graph of my route:

PieRoute

I can safely say that technology was the main factor in my new-found fondness for running. Being able to track my path, time, and speed at every moment was very enlightening, and allowed me to strive to improve in ways that aren't (easily) possible without technology. When I was in high school, I'd lay out a 2 or 3 mile route, and I'd know how I was doing only by seeing how long it took me to complete the whole run. Now, I can see exactly how fast I was running at any point during my route and understand exactly where I have problems. For example, I learned that I hit a wall that I had to will myself through at about a 1/2 mile and it lasted for almost a mile. Once I got past that, I could finish up my 3.14 mile route (because you know, I'm a geek) faster than I started. The more I ran, the further that wall got - right now, it's closer to 1 1/2 miles - and the duration got smaller - down to about a 1/2 mile now.

The other aspect of technology that makes running enjoyable is music. I run with my phone and I have a Pandora channel I listen to, as well as a set of Bluetooth headphones so I don't have wires tangling me up. Listening to music drowns out my body telling me to stop, and it can really help to set the pace. Without it, I'm not sure I'd be able to push through the hard parts where I really want to stop and walk for a bit.

For most of the summer, I alternated between running and cycling. I found a 12 mile route around Spring Lake (the actual lake, not the village) that I could complete during my lunch as well. On a good week, I'd run 3 days and ride 2 days, although that was rare. More common was running 2 days and riding 1 or 2 days.

MyCurrentPace

How have I done? Well, when I first started running, I was averaging around 12 minute miles. My first outdoor run (after running for a month indoors) was just above a 9:00 / mile pace. It took me most of the summer before I could average less than 8 minutes per mile, but getting over that hurdle was a breakthrough for me, and since then I've been able to push myself to achieve the numbers in the image to the right - a 7:27 / mile pace. Riding was both a lot easier and a lot harder at the same time. I can do the 12 mile route at just over a 4 minute per mile pace (the easy part), but no matter how hard I seemed to try, I just couldn't improve on it (the hard part).

I know the above numbers aren't really that impressive. I remember being able to do 6 to 6 1/2 minute miles in high school, and I doubt I'll be able to get close to 6 minutes in the foreseeable future. On the other hand, I also remember not being able to mentally push myself back then. When I got tired, I just stopped running. I couldn't push myself through the hard parts, so I guess it's a good thing I had the physical ability to do it.

Ah, to have the body I had then with the mental fortitude I have now.

Discuss this post

Categories: Personal


 

Trello For Life!

posted on 11/15/12 at 05:24:44 pm by Joel Ross

Hi. My name is Joel and I'm addicted to Trello.

I'm not afraid to admit it. I started using it just for one little thing - tracking what I was working on for Develomatic. My work with Develomatic usually only takes place after 9 or 10 at night, and remembering exactly where I left off can sometimes be difficult. Trello allowed me to quickly and easily remember where I was when I crashed the night before.

But Trello slowly crept into other facets of my life. It started out being just development related. I tracked side projects with it. Then I started using it to track my TrackAbout tasks. Then I expanded my Develomatic board so we could use it as a team to manage releases. Soon though, I started using it beyond development. I started tracking projects I wanted to do around the house. I tracked my bills. I basically started using it for everything I could. As I look through my list of boards, I find I have quite a few:

  • Books & Movies: I keep track of what I've read/seen, as well as what I want to read/see. I also use tags to indicate where I can get it (for example, is a movie available on Netflix streaming? Or can I get the ebook from my Library?). The ability to add cover photos is also nice here, because it gives you a nice visual for the item. I also have different lists for movies I want to see versus ones for my family and ones for The Wife and I to watch without kids.
  • Bills: When I get a bill in the mail, I put it on my bills board. I add a card with an amount and due date. Then, when I'm ready to pay bills, I can just go to the board and see what's due soon, and pay them all at once. Eventually, I'd like to get The Wife involved in this one, so she can put new bills in there when she opens the mail or knows about a bill coming soon. I also use Trello Calendar, combined with an ICS feed, to get my due dates right in Google Calendar.
  • Blogging: For a while, I struggled with what to write (evidenced by my lack of posts over the past couple of years). I'd get ideas, but I'd forget about them or not have enough of a thought to constitute a whole post. Now, when I get an idea, I add it to the board. Now that I see it all the time, I'm thinking about it more, and I can add info to the card for the post as I think of new ideas. Eventually, I'll have enough to write a whole blog post, and I'll do it. And now that I'm writing a little bit more, I can use it to track what's been written and schedule when I want to post it, so if I have a week when I don't feel like writing, I can get away with it, and the blog doesn't sit dormant.
  • Event Planning: TrackAbout has an annual DevCon, and we're using a board to plan out our week, including what project(s) we're going to tackle, what we'd like to do for fun in the evenings, and most importantly where we'd like to eat.
  • Development: The bread and butter of Trello for me. I used to put all of my tasks in one board, but once I got used to the idea of switching boards often, I split the boards out by project. I now have several development-focused boards, including one for my side projects, another for my TrackAbout tasks, a couple for my Develomatic tasks, and several boards for projects we're working on at work. I also have a public board for Traffic Light.
  • Ideas: I am involved in several boards that just track ideas. These are mostly software ideas, but we also use one to organize our sprint retrospectives after every release.
  • Housework: We'll be putting our house on the market next year, and there's a few things we need to get done. Tracking it is about the only way I can think of to ensure we do it all.

There are also a few others, like the ones we used when we were hiring to track candidates through the pipeline, and one for planning how we're going to divide up the development team to cover the work we need to get done each sprint.

Some of my long term readers may remember that I wrote a post similar to this about 3 years ago talking about how I was managing my tasks then. You'll notice that I've switched tools, and that's because the friction of creating new cards, updating cards, creating new boards, etc. is significantly lower with Trello.

I will say that Trello is not the best tool for tracking some of these things. For example, Goodreads is probably a better way to track books, but I use Trello because it's all in one place, and while it's not the best, it accomplishes my goal.

Still, there's a few things lacking from Trello that I would love to see:

  • Email a card: A lot of my tasks come through email. Rather than use my Inbox as a to-do list, I create a card with the relevant information on it and then file the email. If there was a way to forward an email to Trello and have it create a card for me, that would save some time. I see this working a lot like forwarding plans to TripIt.
  • Notification reminders for due dates. Scratch that. In the time since I started planning this post (in a card on my blogging board, of course!), the Trello team pushed this feature out, and you now get notifications for upcoming due dates.
  • Specifically for the Android app, I want it to remember which list I last looked at for each board. My main board has columns for Far, Near, Here and Done. I spend the majority of my time looking at Here, since this is what I'm doing right now. It's the list I go to every time I pull up the board, but in the Android app, it always starts me out on the Far list. Not a major issue, but an annoyance nonetheless.

I love that Trello is being actively developed and I'm sure I'll see new features that will help me better organize my life. As it is, I feel I have a much better understanding of what I need to do and I'm more productive because I know what's coming next.

What tools are you using to manage your tasks? Do you keep separate lists for personal versus work tasks? Let me know!

Categories: General, Development


 

Automating Jasmine Tests with Chutzpah

posted on 11/08/12 at 05:01:02 pm by Joel Ross

Last time, I wrote about how I could use Jasmine to validate the functionality of a page. While that's a good outcome, it's not maintainable to load each Jasmine test runner into the browser and verify that all of the tests pass. Our site has hundreds of pages, so if this is going to work, it clearly needs to be automated.

Enter PhantomJS. It's a tool that allows you to run JavaScript from the command line. PhantomJS isn't explicitly designed to automate unit tests - it actually has quite a few other uses, including screen captures and network monitoring. Getting it set up to run unit tests on its own was a bit cumbersome. Luckily I found Chutzpah, which uses PhantomJS, but is specifically meant to be used for automating unit tests.

The first step toward automation was to get our tests running from the command line. It turns out to be quite simple, and we can use the same HTML test runner page we created for our unit of measure page last time. You just pass the HTML test runner page to the Chutzpah console runner:

chutzpah.console.exe index.html

Running that command results in output that shows that the tests passed:

chutzpahCommandLineTestRunnerHTML

Now we've automated the testing of one page. That's nice, but not quite what we want, since I don't really want to have to run this command line each time for each page. Luckily, Chutzpah allows you to run multiple test files at once - but only if you run the console runner against the JavaScript spec files directly, rather than using the HTML test runner pages. This is actually a good thing, since now we don't have to create an HTML test page for each and every spec we create. So I guess it's time to figure out how that works.

Our spec file doesn't have to change that much to get this to work. Remember that the main thing our test runner HTML page did for us was relate specs to the JavaScript file it was testing. So we're going to have to figure out how to link them without using an HTML file. Chutzpah also helps us here. It supports adding comments at the top of the spec file to indicate which JavaScript files it should include:

/// <reference path="jquery.js" />
/// <reference path="jasmine.js" />
/// <reference path="jasmine-html.js" />
/// <reference path="jasmine-jquery.js" />
/// <reference path="index.js" />

Again, I'm assuming all of the files are in the same directory for simplicity, but in reality, you'd probably organize your JavaScript a bit better than that.

Now, we can rerun our chutzpah command, but this time, we'll pass in the test spec JavaScript file:

chutzpah.console.exe index.specs.js

Running that will result in the same output as the original command we ran, which is what we expect, but not quite what we want - we wanted to be able to run all of the specs at once. Simple enough. Instead of passing in a file name to the console runner, we can just pass in a folder, and it'll run all of the specs in that folder.

chutzpah.console.exe .\

For the test repo I set up, I have three test files in the folder, and you can see the results from each file, as well as a summary of all of the tests run.

ChutzpahCommandLineAllTests

This is what we wanted. Now we can run one command, and all of the tests we have are executed at once. Well, it's almost what we wanted. What we really want is to run these automatically whenever we check in code - so it runs on our continuous integration server.

As it turns out, this was the easiest part. We use Jenkins at TrackAbout, so I set up a Jenkins server locally, set it up to look at my GitHub repo, and then made a few changes to see what happens when a test fails. To get Chutzpah and Jenkins working together correctly, you just add a "/silent" parameter to the command. And what happens when a test fails? Sure enough, Chutzpah reports it and fails the build without any extra work on my part:

JenkinsChutzpahFailedBuild

If you enlarge the image, you'll see that the build failed, and the log reports exactly which test failed in which file. Perfect.
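
For reference, the Jenkins build step is just the command from earlier with the flag tacked on:

chutzpah.console.exe .\ /silent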

As a result of the work over the past three posts, I now have a way to validate that the JavaScript code I write is working correctly, and have the running of those tests completely automated.

Discuss this post

Categories: Development


 

Real World Jasmine

posted on 11/01/12 at 07:06:05 pm by Joel Ross

Last time, I was just starting to play with Jasmine, so I picked something simple - the FizzBuzz kata. It worked well, but I can't remember the last time I was searching for a good FizzBuzz implementation when I was developing a web page. So I went back and found a page that I'd recently written, and looked at how I could test it.

About The Page

UnitsOfMeasure

It's a pretty simple page. The system has a few different unit of measure types, and for each type, the user can select a default. There's an unknown number of types and for each type, there can be any number of units of measure. For example, the page might show volume, weight, and temperature, and temperature could be set to Celsius or Fahrenheit. An example of the rendered output is on the right. The page is dynamically rendered based on the number of unit of measure types and the number of units per type.

The Initial Solution

Like I said, I wanted to write tests for a real world page, so the solution was written without any tests. It also isn't really that well organized, and there's no separation between code that performs logic and code that manipulates the DOM.

I've simplified the page quite a bit to boil it down to just the essentials, so the HTML is pretty straightforward.

<html>
  <head>
    <title>Unit of Measure Defaults</title>
    <script type="text/javascript" src="/lib/jquery.js"></script>
    <script type="text/javascript" src="/src/index.js"></script>
  </head>
  <body>
    <div>Select Unit of Measure Defaults</div>
    <div id="divUnitsOfMeasure"></div>
    <div><input type="submit" name="btnSave" value="Save Defaults" onclick="saveSelectedValues();" id="btnSave" /></div>
    <input type="hidden" name="hdnTypes" id="hdnTypes" />
  </body>
</html>

This is an ASP.NET web form page, so when it's rendered, a JSON representation of the units of measure is injected into the page (in a variable named "uomTypes"), and that data is serialized back to hdnTypes when the user clicks the "Save Defaults" button. The JavaScript to accomplish this is below.

$(document).ready(function() {
    addRows();
});

function addRows() {
    var divToAppend = $("#divUnitsOfMeasure");
    for (var typeIndex in uomTypes) {
        var uomType = uomTypes[typeIndex];
        var id = "UoMType" + uomType.Id;
        var selectBox = $("<select id=\"" + id + "\" name=\"" + id + "\" />");
        for (var uomIndex in uomType.UnitsOfMeasure) {
            var uom = uomType.UnitsOfMeasure[uomIndex];
            $("<option />", { value: uom.Id, text: uom.Name, selected: uom.IsDefault }).appendTo(selectBox);
        }
        var label = $("<label for=\"" + id + "\" >" + uomType.Name + ":&nbsp;</label>");
        var row = $("<div />");
        row.append(label);
        row.append(selectBox);
        divToAppend.append(row);
    }
}

function saveSelectedValues() {
    var field = $('#hdnTypes');
    for (var typeIndex in uomTypes) {
        var uomType = uomTypes[typeIndex];
        var id = "UoMType" + uomType.Id;

        var selectBox = $("#" + id);

        $("#" + id + " > option").each(function(i) {
            uomType.UnitsOfMeasure[i].IsDefault = this.selected;
        });
    }

    field.val(JSON.stringify(uomTypes));
};

Lots of code that accomplishes two things:

  1. On load, it creates drop downs for each unit of measure type.
  2. When saved, it serializes the units of measure to a hidden field, so it can be sent to the server and saved.

It's not the nicest JavaScript code, but it gets the job done.

Creating Tests

Now we need to write some tests for the page. First, we create a test harness for the page that we can load in the browser to run the tests. I'll leave the page out because it's very similar to the page I used for my FizzBuzz kata, with one exception. I added jasmine-jquery, a nice library that helps with creating HTML fixtures in your specs and some nice matchers for jQuery.

I wanted to test two main things:

  1. Are drop downs created for the unit of measure types?
  2. Are the units of measure stored into the hidden field when saved?

So I set out to test #1. That'd be testing the addRows() function. I quickly ran into my first issue. My functions are tightly coupled to jQuery and the DOM. Specifically, addRows() is responsible for finding the particular div that the selects will be added to. Given that I'm just testing the JavaScript and not the actual page, that div won't exist.

Remember when I said I added jasmine-jquery to my tests? This is why. With it, I can run some set up code that adds HTML that can then be used by the specs. This also led to my first refactoring: instead of having the function find the DOM element, I'll pass it in. This makes the class less dependent on the page, and potentially reusable.

I also updated the JavaScript to use a module, instead of putting all of the methods in the global namespace, and exposed the row-creation code as an addRowTo() method on the module. So that's what I'm testing with my first test:

describe('When loading the view', function() {
    beforeEach(function(){
        jasmine.getFixtures().set("<div id='divUnitsOfMeasure'></div>");
    });

    it('it should create drop downs', function() {
        RossCode.UoM.addRowTo($('#divUnitsOfMeasure'));
        expect($('#divUnitsOfMeasure').children().length).toBeGreaterThan(0);
    });
});

I could have tested for the existence of a specific dropdown based on the units of measure passed in, but for my first test, it was good enough just to ensure that something got added to the div. Notice the beforeEach() function, where it calls jasmine.getFixtures().set() to stub in the div that will later be used to create the dropdowns. That's how we can get away without having a whole HTML page and just stub in enough to satisfy the method we are testing.

Luckily, the code I already had was nearly there, so with a few minor changes, we can get this test to pass.

(function($) {
    window.RossCode = window.RossCode || { };
    window.RossCode.UoM = {
        addRowTo: function(divToAppend) {
            for (var typeIndex in uomTypes) {
                var uomType = uomTypes[typeIndex];
                var id = "UoMType" + uomType.Id;
                var selectBox = $("<select id=\"" + id + "\" name=\"" + id + "\" />");
                for (var uomIndex in uomType.UnitsOfMeasure) {
                    var uom = uomType.UnitsOfMeasure[uomIndex];
                    $("<option />", {
                        value: uom.Id,
                        text: uom.Name,
                        selected: uom.IsDefault
                    }).appendTo(selectBox);
                }
                var label = $("<label for=\"" + id + "\" >" + uomType.Name + ":&nbsp;</label>");
                var row = $("<div />");
                row.append(label);
                row.append(selectBox);
                divToAppend.append(row);
            }
        }
    }
}($));

There's two main changes:

  1. Like I said, it's now a module, so there's some extra code for that.
  2. Instead of the addRows method retrieving the div itself, it's passed in.

Other than that, the code is exactly the same as before, which makes sense, since we started with working code.

Next up would be to write more tests around the population of the dropdowns - things like checking if the right number of units of measure are created, are the right defaults selected, etc. but in the interest of space, I'll leave those out.

The next thing to test is if the page saves correctly. It populates a hidden field, so the test is very similar. When the function that saves the units of measure is called, we check to see if the hidden field is populated correctly.

describe('When saving the view', function() {
    beforeEach(function(){
        jasmine.getFixtures().set('<input type="hidden" id="hdnTypes" />');
    });

    it('it should save the new json to the hidden field', function() {
        RossCode.UoM.saveUnitOfMeasuresJsonTo($('#hdnTypes'));
        expect($('#hdnTypes').val()).toBeDefined();
    });
});

This is very similar to the first test, in that my refactoring involved pulling the retrieval of the hidden field out of the method itself and passing it in instead. I'm also adding a fixture so I can add the HTML to the test that I need.

The code to pass this test can be added to the module pretty easily:

saveUnitOfMeasuresJsonTo: function(field) {
    for (var typeIndex in uomTypes) {
        var uomType = uomTypes[typeIndex];
        var id = "UoMType" + uomType.Id;

        var selectBox = $("#" + id);

        $("#" + id + " > option").each(function(i) {
            uomType.UnitsOfMeasure[i].IsDefault = this.selected;
        });
    }
    field.val(JSON.stringify(uomTypes));
}

Again, more tests could be written that check the script in more detail, but for simplicity, I'll leave that as an exercise for the reader.

The Final Page

Rather than rehash everything I've shown above just to put it all together, I've pushed the code up to GitHub. My FizzBuzz specs are there, as well as my attempt at the string calculator kata.

Some Parting Thoughts on Jasmine

I've now written tests for both JavaScript that works closely with the DOM and for library-type JavaScript. I think the fact that it's extensible (like jasmine-jquery) makes it very powerful to test any type of code you want to write. It also forces you to think about whether the code you're writing has (and should have) dependencies - something I do by instinct when writing server-side code, but not so much when writing client-side code.

I really like the describe() / it() style of testing. It pretty closely follows the style of tests I'm writing to test my C# code, and I find it makes writing tests a lot quicker than writing a new test fixture or extracting out a base class.

Jasmine is definitely something I'd like to start using on a regular basis. I'm writing better code because I'm putting more thought into how it's organized than I have in the past. It's no longer Wild, Wild West coding. It's still not where I'd like to be, but I'm at least heading in the right direction.

Discuss this post

Categories: ASP.NET, Development


 

Using Jasmine To Test JavaScript

posted on 10/24/12 at 04:24:06 pm by Joel Ross

I'm not good at writing JavaScript. I know that. I'm trying to get better, but JavaScript development still feels like the Wild, Wild West of software development to me. To help fix that problem, I've started to investigate ways to get better. First, I looked at Backbone, and that helped quite a bit because it really leads you to organize your code. But the more I thought about it, the more I realized that what I wanted was a way to verify that the code I was writing actually worked. So I did a little research on JavaScript unit testing. I found a few different options. The one I liked the most was Jasmine, because it follows closely with the style of tests that I write at TrackAbout.

To get started, I wanted to do something simple, so I did a couple of Katas. Doing a kata was nice because the code I was writing didn't have any external dependencies (like the DOM), and it allowed me to focus solely on learning Jasmine.

Getting started with Jasmine isn't all that difficult. First, you create a simple HTML file that will act as your test runner. You need to include references to a few Jasmine files, your class under test, and your tests. Then you fire up the Jasmine environment and run the tests.

<html>
  <head>
    <title>FizzBuzz Specs</title>
    <link rel="stylesheet" type="text/css" href="jasmine.css">
    <script type="text/javascript" src="jasmine.js"></script>
    <script type="text/javascript" src="jasmine-html.js"></script>
    <script type="text/javascript" src="FizzBuzz.js"></script>
    <script type="text/javascript" src="FizzBuzz.specs.js"></script>
  </head>
  <body>
    <script type="text/javascript">
      jasmine.getEnv().addReporter(new jasmine.TrivialReporter());
      jasmine.getEnv().execute();
    </script>
  </body>
</html>

For simplicity, the file looks like I have all of the files in one directory. The actual repository is organized a little bit better than that.

The Kata I chose was FizzBuzz, since it's a pretty simple one and I wouldn't get bogged down in the code. My first spec looked like this:

describe('FizzBuzz specs', function() {
    var fizzBuzz;

    beforeEach(function() {
        fizzBuzz = new FizzBuzz();
    });

    describe('when passing in a simple number', function() {
        it('It should return that number', function() {
            var result = fizzBuzz.getOutput(1);
            expect(result).toEqual('1');
        });
    });
});

You start each test with a call to describe(), giving a description and a function for what this will test. You can have a setup method (beforeEach()) that will handle any set up, and you can nest describe() calls to handle more set up. The heart of the test is the it() function, because this is where you verify your results. In the test above, I'm validating that if you pass in 1, you get the string representation of it back.

The documentation for Jasmine is very good, and goes over all the different ways you can set up expectations, so I won't touch on that here - they say it much better than I could.

Back to our example. Without any implementation, I ran the specs to see what I got.

FizzBuzzFailingTest

One test and two failures. That sounds like success to me! But I thought a passing test would be a bit better, so I added a simple class that lets the test pass.

function FizzBuzz() { }

FizzBuzz.prototype.getOutput = function(input) {
    return input.toString();
};

Rerunning the test now reflects that.

FizzBuzzPassingTest

I then went ahead and finished out the tests and the implementation (available in my Jasmine repo on GitHub). When done, I ended up with six tests that all passed.

FizzBuzzAllTestsPassed

Like I said, it's a fairly simple exercise, but it definitely gave me a feel for what Jasmine could do. Next up was to try it against a real world example, but I'll save that for another post.

Discuss this post!

Categories: Development


 

Introducing Traffic Light

posted on 10/16/12 at 05:59:59 pm by Joel Ross

I'm a big fan of continuous integration. I've been using it since I was first introduced to it by Mike Swanson, when he put his Ambient Orb up in our office. 8 years later, and I'm still using continuous integration on every project I'm on, including any of my personal projects.

But one thing that I didn't have was a good way to monitor the builds. CCTray was OK, but it wasn't the most visible thing. BigVisibleCruise was nice, but screen real estate is at a premium. So I started looking for something I could use. The Ambient Orb was both expensive and out of stock at the same time, so it was out, and a real traffic light was a bit more room than I wanted to use. Eventually, I found a miniature traffic light from Delcom that seemed perfect.

Traffic Light

I threw together a quick and dirty application that did nothing but check some XML from CruiseControl.NET, parse out the build status, and change which light was lit up. It was a complete hack, but it worked. When we switched to Hudson (and then Jenkins), it continued to work because Jenkins offers a CruiseControl.NET compatible output. But then we switched on authentication, and it stopped working. And I left it that way.

Just recently, we had a situation where the build broke for a couple of days and no one noticed it. I didn't notice because the tool I was using to monitor the build wasn't visible enough. So I pulled out the old code for the traffic light monitor and got it working with authentication.

And I kept going. I added a user interface for adding and editing projects. I added a screen for monitoring the build so you don't have to have a real traffic light (pictured right). I added a system tray icon that shows the current state of builds. I added balloon tool tips when builds happen. And I came up with a bunch of ideas I'd like to do with it.

And then I made it open source.

It's still in its infancy, and setup is a little non-obvious (but getting better!), but it works to monitor builds. Now when the build breaks, a giant red light shines in my office - which I definitely can't miss! If you've been looking for a way to monitor your CI server, then you should take a look at Traffic Light.

I've set up a public Trello board that I'll be using to track features and bugs. There's two cards designated for anyone to contribute new ideas or to submit bugs. Details for the board are available on this card. And of course, if you have a feature you want, I'll accept pull requests! I am also attempting to get the project set up on CodeBetter's TeamCity CI server. I submitted my request about a week ago, but haven't heard anything yet. I don't know if I'll ever hear back or not, but if I do, I'll get links out to that as well (and probably include that as a default project in the application).

I don't expect this application to gain a ton of traction, but it's a useful utility and could be a good learning experience about running an open source project, so I'm excited about it.

Discuss this post

Categories: Development, Software


 

Come work with me!

posted on 10/07/12 at 05:04:33 pm by Joel Ross

Almost four years ago, I changed course on my career and got out of consulting. I wanted to focus on product development full time. It was a bit of an adjustment at first, but it turned out to be one of the best decisions I've made. I love what I do and I love working with very smart people who can challenge me in ways that make me a better developer.

Since I started, our development team has almost tripled in size. We've added a dedicated quality assurance team (which by itself is the size of the dev team when I started), and we're looking to grow again. We're looking to hire a few developers and add to our QA team as well.

If you'd like the chance to work at home and work with me (and several other smart developers), either check out the job postings on the TrackAbout website, or contact me directly. I'd be happy to spend a few minutes talking to you about what I do.

Be prepared. The barrier to entry is high. Our interview process is hard, but I think it fairly evaluates your skills, and, just as importantly, shows you what we value. In fact, my boss gave some details of how we've arrived at our current process just recently (for the record, I was hired under the process where "the results were awful." I like to think I'm an outlier). So, if you're up for the challenge and want to work with me (well, not physically with me, since we all work from home), I look forward to hearing from you!

Discuss this post!

Categories: TrackAbout, Inc


 

Using Eventing to Decouple Applications

posted on 10/01/12 at 07:08:31 pm by Joel Ross

I've been writing an application to monitor Jenkins and update a Delcom traffic light with the current build status. I started out with a straightforward approach and it worked well. At first. But as I decided to expand the application to update icons and show a separate window with the current build status, I quickly realized that this wasn't going to be maintainable long term.

Here's what I was doing to update the build status:

projects.Each(p => p.CurrentStatus = projectStatusService.CheckStatus(p));
var buildStatus = GetCumulativeBuildStatusFrom(projects);
delcomService.UpdateBuildStatusTo(buildStatus);

As I started looking at adding other build monitors, my code was going to start to look like this:

projects.Each(p => p.CurrentStatus = projectStatusService.CheckStatus(p));
var buildStatus = GetCumulativeBuildStatusFrom(projects);
delcomService.UpdateBuildStatusTo(buildStatus);
UpdateIconFor(buildStatus);
if (monitorForm != null) {
  monitorForm.SetBuildStatusTo(buildStatus);
}

Notice that the code that's determining the build status is also now responsible for updating the build indicator. And as I added more and more build indicators, this code would have to be touched over and over.

So, rather than continue down the path and not really liking the direction the code was headed, I decided to add eventing.

Before we get to how the code changes, let's look at what we have to add. First, we need an event, which is really just a class:

public class BuildStatusChanged : IEvent
{
    public BuildStatus Status { get; private set; }

    public BuildStatusChanged(BuildStatus status)
    {
        Status = status;
    }
}

The infrastructure to handle it is pretty straightforward. Just one class:

public static class Eventing
{
    private static readonly IDictionary<Type, List<Delegate>> actions = new Dictionary<Type, List<Delegate>>();

    public static void Register<T>(Action<T> callback) where T : IEvent
    {
        if (!actions.ContainsKey(typeof(T)))
        {
            actions.Add(typeof(T), new List<Delegate>());
        }
        actions[typeof(T)].Add(callback);
    }

    public static void Unregister<T>(Action<T> callback) where T : IEvent
    {
        if (actions.ContainsKey(typeof(T)))
        {
            var item = actions[typeof(T)].FirstOrDefault(i => i == (Delegate)callback);
            if (item != null)
            {
                actions[typeof(T)].Remove(item);
            }
        }
    }

    public static void Raise<T>(T args) where T : IEvent
    {
        if (actions.ContainsKey(typeof(T)))
        {
            actions[typeof(T)].ForEach(a => a.DynamicInvoke(args));
        }
    }
}

When a class wants to know about an event, it just calls Eventing.Register() passing in a callback. So the DelcomService looks like this now:

public class DelcomService
{
    public DelcomService()
    {
        Eventing.Register<BuildStatusChanged>(ChangeBuildStatus);
    }

    public void ChangeBuildStatus(BuildStatusChanged args)
    {
        // turn the traffic light on
    }
}

This same type of code would then be added to any forms that need to know about the current build status, as well as the main application thread that is managing the icon for the application.
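
For a form, that looks something like the sketch below - the MonitorForm name and SetBuildStatusTo() method come from the earlier pseudo-code rather than the actual repo - including unregistering when the form closes so the static callback list doesn't hang on to a dead form:

public partial class MonitorForm : Form
{
    public MonitorForm()
    {
        InitializeComponent();

        Eventing.Register<BuildStatusChanged>(OnBuildStatusChanged);

        // Unhook when the form goes away.
        FormClosed += (s, e) => Eventing.Unregister<BuildStatusChanged>(OnBuildStatusChanged);
    }

    private void OnBuildStatusChanged(BuildStatusChanged args)
    {
        // If the status check runs on a background thread, this would need to
        // marshal back to the UI thread (Invoke) before touching any controls.
        SetBuildStatusTo(args.Status);
    }
}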

As for the code that is checking the build status? It changes slightly:

projects.Each(p => p.CurrentStatus = projectStatusService.CheckStatus(p));
var buildStatus = GetCumulativeBuildStatusFrom(projects);
Eventing.Raise(new BuildStatusChanged(buildStatus));

This is much better. First, the build monitor no longer knows anything about any of the build indicators. Second, if a new build indicator ever is needed (like for an Ambient Orb), this code doesn't change at all.

Thinking in terms of SOLID, we've removed a responsibility from our build monitor, so it truly only has a single responsibility, and we've met the Open/Closed principle as well, because adding new build indicators doesn't require any changes to the code that monitors the build.

The code that this post is based on is open source on BitBucket. It's not exactly straightforward to use yet, but it does work - I use it every day to monitor our builds at TrackAbout. I'm working to make configuration easier, and once that's done, I'll write up a bit more about it.

I'm going to attempt to use Google+ for comments, so if you have anything to add, please leave a comment over there.

Categories: Development, C#


 
