Recognizing and Eliminating Your Mistakes

posted on 10/27/08 at 12:12:10 am by Joel Ross

As I stated in my last post, I've been reading up a bit on code reviews. I didn't touch on the last part of the book, which finishes with a discussion of Capability Maturity Model Integration, Team Software Process and Personal Software Process.

When I was at Crowe, I was a part of the review process to move from CMM level 1 to CMM level 2. For the most part, the ideas were good, but the implementation seemed overly documentation-intensive. Not that documentation is necessarily a bad thing, but there was so much administration that I didn't see how it could work and still allow us to be competitive.

Anyway, the part of the discussion that caught my attention was the idea of using a personal checklist, much like the checklist you'd use to perform a code review. In order to become a better developer, you need to know where you make mistakes. I could sit here and write out all of the ways I think I screw up, but that's just my gut speaking. It's not rooted in fact. The book's recommendation is that you keep track of all of the mistakes you make and categorize them. By making yourself aware of the types of mistakes you commonly make, you'll start to avoid them, and then you can take that category off your list. And until you've fixed a given type of issue, you'll at least be aware of it, so you can look for it as you review your own code.

I think I'll start trying to do this - just a very quick and dirty list, and at the end of the day, I'll categorize them. By making them visible, hopefully I'll start thinking about them as I write code. At the very least, it'll give me something to look at as I review my own code before I check it in.

Of course, that's another benefit of keeping a personal checklist - if you're doing code reviews, you can (and should) share it with your reviewers, because it gives them something to focus on: the specific types of mistakes you know you commonly make. Remember, it's not a bad thing for your team to know your weaknesses. Ultimately, you share the same goal - building the highest quality software possible. It's in the best interest of the whole team to help each other improve, and the best way to do that is to know where you need to improve. And in the meantime, it's good to have someone else looking over your shoulder.


Categories: Development


 

Code Reviews

posted on 10/24/08 at 12:27:43 am by Joel Ross

I've been reading through a free book I got a while back - Best Kept Secrets of Peer Code Review - and it's actually worth the time. It's a rather dry read, but there's some good information in there. And in case you were wondering, you can get the book for free as well - just follow the above link.

Anyway, I should note that there's definitely a reason the book is free - its findings and advice correlate directly with the feature set of their core product, Code Collaborator. Conveniently, there's also a 20-page ad for the software in the book. The software is around $500 per seat, so you can see why it's worth giving away the book. The advice in the book is definitely slanted toward what their own tool does. The question is whether the software was built around the results of the research (of which the book has plenty), or whether the research was hand-picked to support what their software does. I don't know the answer, but regardless, there's some interesting information in the book - and since it's free, I think it's worth the read.

There are definitely some things I question in the book - for example, it essentially assumes that everyone is doing code reviews and is reviewing every single line of code. That's not something I've ever seen. Regardless of those types of issues, there are a few takeaways that I'd love to get a chance to use on my own projects.

To be honest, I haven't been a part of that many code reviews on my projects. When I was at Crowe, we did code reviews on an internal product I was a part of. We didn't review every line - not even close. We reviewed code for key pieces of the system, as well as code from new developers. Crowe typically hires (or at least used to hire) a lot of fresh developers - people right out of college - so reviewing their code was important to make sure they were progressing and doing things the way the project expected. At that time, a lot of that was my code. Crowe was my first job out of college. Since we were all relatively inexperienced, we didn't really know much about what we were supposed to be doing. It was more of a walkthrough of the structure of the code than a true analysis looking for specific types of issues. Looking back, I don't remember finding many defects in the code, and the ones we did find were more along the lines of code flow and coding standards rather than actual bugs. Those are still important, but if I had it to do over, I think the reviews could be much more productive.

We've done code reviews on a few of my projects at Sagestone/NuSoft. Based on how we did them, I don't think they were all that productive either, because we didn't have a solid goal for why we were doing reviews. We needed to do a review, so we did one. If I had to pick a goal, it was more to ensure that the code lived up to coding standards than to review the code for correctness. It's important to be consistent, but there are easier, less time-intensive ways to verify that. Our time would have been better spent looking at the actual logic contained in the code rather than the structure of the code. But hindsight is 20/20, right?

Anyway, there are a few things the book highlights that I found interesting.

  • Code reviews don't necessarily have to conclude with a large review meeting. The studies showed that review meetings didn't find a high enough percentage of defects compared to individual reviewers looking at the code on their own ahead of time. The meeting was productive in eliminating false positives - alleged defects that, after clarification, are determined to not actually be defects. But you don't need the whole review team and a formal meeting to obtain the same false positive filtering - you can have the author and one person go over those informally.
  • Optimally, a code review should last about an hour and cover 100-300 lines of code. Reviewing the code slowly and methodically is the best approach, but go much beyond an hour and the returns diminish - the majority of defects are found within the first hour of reviewing the code. The optimal amount of code to look at is about 200 lines, because it's small enough to get your head around.
  • Use a code review checklist. This now seems so obvious. Reviewers need to have an idea of what to look for, and creating a list of specific types of issues to look for will result in the elimination of those types of issues. This is partly because your reviewers will be looking for them specifically, and partly because the team will stop introducing those types of issues in the first place. Once developers are made aware that something is a problem, they'll be conscious of it and will eventually adjust accordingly. This means that your list has to be constantly maintained and updated to reflect the current types of issues you're seeing in your software.

That's definitely a shift in how I've done code reviews in the past. For the most part, the prep for the code review itself only consisted of a brief review of the code, and the assumption was that the deep dive into the code would happen in the two hour meeting. It actually makes sense that it should be the opposite - the deep dive should happen ahead of time, and the meeting (if it even needs to happen) should just be a summary of what was already found.

Interesting thoughts, really. But I still wonder how many people are actually doing code reviews. So, are you doing code reviews? Are they effective? What makes them that way?

Categories: Development


 

NFL Picks: 08-09 Week 7 Results and Week 8 Picks

posted on 10/23/08 at 08:00:00 pm by Joel Ross

It's been a busy week (again), but here's last week's results.

  • San Diego 14, Buffalo 23 (0) (45 O/U)
  • New Orleans 7, Carolina 30 (-3) (44.5 O/U)
  • Minnesota 41, Chicago 48 (-3) (38 O/U)
  • Pittsburgh 38 (-9.5), Cincinnati 10 (35.5 O/U)
  • Tennessee 34 (-8.5), Kansas City 10 (35 O/U)
  • Baltimore 27, Miami 13 (-3) (36.5 O/U)
  • San Francisco 17, New York Giants 29 (-10.5) (46 O/U)
  • Dallas 14 (-7.5), St. Louis 34 (43 O/U): It's amazing what losing just one person can do to a team.
  • Detroit 21, Houston 28 (-9) (46 O/U): Someone tell Detroit they need to stop spotting the other team three or four touchdowns before they start playing. Comebacks are a lot of fun to watch, but they're missing one key component: actually coming back.
  • Indianapolis 14 (-1.5), Green Bay 34 (47 O/U)
  • New York Jets 13 (-3), Oakland 16 (41 O/U)
  • Cleveland 11, Washington 14 (-7) (41.5 O/U)
  • Seattle* 10, Tampa Bay 20 (-10.5) (38 O/U)
  • Denver 7, New England 41 (-3) (48 O/U)

Results Summary

  • Picks (this week / season): 7 - 7 / 60 - 42
  • Spread (this week / season): 5 - 9 / 49 - 50
  • Over/Under (this week / season): 5 - 7 / 47 - 51

On to this week's picks.

  • Oakland vs. Baltimore (-7.5) (36 O/U)
  • Arizona vs. Carolina (-4) (43.5 O/U)
  • Tampa Bay vs. Dallas (-2) (40.5 O/U)
  • Washington (-7.5) vs. Detroit (42 O/U): I'm surprised it's only 7 and a half points. Have the odds makers not been watching Detroit?
  • Buffalo (-1) vs. Miami (42.5 O/U)
  • St. Louis vs. New England (-7) (43.5 O/U)
  • San Diego (-3) vs. New Orleans (45.5 O/U)
  • Kansas City* vs. New York Jets (-13) (39 O/U)
  • Atlanta vs. Philadelphia (-9) (45 O/U)
  • Cleveland vs. Jacksonville (-7) (42 O/U)
  • New York Giants vs. Pittsburgh (-3) (42 O/U): This should be a good one. I wonder why it's not the Sunday night game.
  • Seattle vs. San Francisco (-5) (41 O/U)
  • Cincinnati vs. Houston (-9.5) (44.5 O/U): This is the make up game from week 2.
  • Indianapolis vs. Tennessee (-4) (41 O/U): James is going to be mad at me again, and it's probably a stupid pick on my part. But look at my results. Stupid picks is what I do!

Check back next week for more results and more picks.


Categories: Football


 

NFL Picks: 08-09 Week 6 Results and Week 7 Picks

posted on 10/16/08 at 08:00:00 pm by Joel Ross

I'm sitting in a hotel room in Orlando, but I still wanted to get these out there. So here they are! Results from last week:

  • Chicago 20 (-2.5), Atlanta 22 (43.5 O/U)
  • Miami 28, Houston 29 (-3) (44.5 O/U)
  • Baltimore 3, Indianapolis 31 (-4) (38.5 O/U): Could the Colts be back to form? Hard to tell against the Ravens though.
  • Detroit 10, Minnesota 12 (-13) (45.5 O/U): Hard to cover the spread when you don't even score as many points as what the spread is! Of course, it is the Lions, so who knows? Maybe they could figure out how to score negative points.
  • Oakland 3, New Orleans 34 (-7.5) (47 O/U)
  • Cincinnati 14, New York Jets 26 (-6) (44.5 O/U)
  • Carolina 3, Tampa Bay 27 (-1.5) (36.5 O/U)
  • St. Louis 19, Washington 17 (-13) (44 O/U): You wouldn't know it from the (lack of) hype, but this was the biggest upset of the week.
  • Jacksonville 24, Denver 17 (-3.5) (48.5 O/U)
  • Dallas 24 (-5), Arizona 30 (50 O/U)
  • Philadelphia 40 (-4.5), San Francisco 26 (42.5 O/U)
  • Green Bay 27, Seattle 17 (-2) (46.5 O/U)
  • New England 10, San Diego 30 (-5.5) (44.5 O/U)
  • New York Giants 14 (-8), Cleveland 35 (43 O/U): Surprisingly, this wasn't even the biggest upset of the week.

Results Summary

  • Picks (this week / season): 7 - 7 / 53 - 35
  • Spread (this week / season): 7 - 7 / 44 - 41
  • Over/Under (this week / season): 8 - 6 / 42 - 44

And on to picks for this week:

  • San Diego vs. Buffalo (0) (45 O/U)
  • New Orleans vs. Carolina (-3) (44.5 O/U)
  • Minnesota vs. Chicago (-3) (38 O/U)
  • Pittsburgh (-9.5) vs. Cincinnati (35.5 O/U)
  • Tennessee (-8.5) vs. Kansas City (35 O/U)
  • Baltimore vs. Miami (-3) (36.5 O/U)
  • San Francisco vs. New York Giants (-10.5) (46 O/U)
  • Dallas (-7.5) vs. St. Louis (43 O/U)
  • Detroit vs. Houston (-9) (46 O/U): Good thing Detroit got rid of Roy Williams. Don't need a good receiver scoring a TD and messing up their perfect season!
  • Indianapolis (-1.5) vs. Green Bay (47 O/U)
  • New York Jets (-3) vs. Oakland (41 O/U)
  • Cleveland vs. Washington (-7) (41.5 O/U)
  • Seattle* vs. Tampa Bay (-10.5) (38 O/U)
  • Denver vs. New England (-3) (48 O/U)

Check back next week for more results and more picks.


Categories: Football


 

Per Request Activation With Ninject

posted on 10/14/08 at 08:00:00 pm by Joel Ross

As I've started to look into NHibernate and how to manage the NHibernate session, I quickly came across the "session per request" model as the recommended (or at least most popular) approach. I started out with context-based storage and eventually refactored the code to be much simpler - the refactoring was very similar to what I did in my last post. By letting my IoC container manage the session's lifecycle, I no longer had to worry about it, and I removed my reliance on a static class. As a result, I can easily change the session code without affecting any of the code that uses it.

When I started down this path, Nate Kohari warned me that Ninject's OnePerRequestBehavior didn't seem to be working. He mentioned that he wondered how the Castle team did it, so I started digging. It turns out they did it with a combination of a lifecycle management class (their equivalent of behaviors in Ninject) and an HTTP module. With that idea in mind, I dug into Ninject's code to see if I could get something working.

In the end, I got it working, but it's not a generic, "drop in" solution - it requires the registration of an HTTP module in your web.config file, so it's not as seamless as what Nate was going for. But it works, at least for my case. I created a class called CustomOnePerRequestBehavior that looks eerily similar to the original OnePerRequestBehavior! The main difference is in the Resolve method. Rather than trying to handle the EndRequest event directly (which never actually fires), it just registers itself with the HTTP module:

    public override object Resolve(IContext context)
    {
      Ensure.NotDisposed(this);

      lock (this)
      {
        if (ContextCache.Contains(context.Implementation))
          return ContextCache[context.Implementation].Instance;

        ContextCache.Add(context);
        context.Binding.Components.Get<IActivator>().Activate(context);

        RequestModule.RegisterForEviction(this);
        return context.Instance;
      }
    }

This calls a static method on my request module:

    internal static void RegisterForEviction(CustomOnePerRequestBehavior manager)
    {
      HttpContext context = HttpContext.Current;

      IList<CustomOnePerRequestBehavior> candidates = (IList<CustomOnePerRequestBehavior>)context.Items[PerRequestEvict];

      if (candidates == null)
      {
        candidates = new List<CustomOnePerRequestBehavior>();
        context.Items[PerRequestEvict] = candidates;
      }

      candidates.Add(manager);
    }

This builds up a list of the behaviors that need to be cleaned up at the end of the request, and stores that list in the current HttpContext.

Besides the static method, the request module also registers to listen for the EndRequest event. When that event fires, it grabs the list of behaviors that need cleaning up and calls CleanUpInstances on each stored instance (that was the other change to the original OnePerRequestBehavior - CleanUpInstances is now internal instead of private):

    void context_EndRequest(object sender, EventArgs e)
    {
      HttpApplication application = (HttpApplication)sender;
      IList<CustomOnePerRequestBehavior> candidates = (IList<CustomOnePerRequestBehavior>)application.Context.Items[PerRequestEvict];

      if (candidates != null)
      {
        foreach (CustomOnePerRequestBehavior candidate in candidates)
        {
          candidate.CleanUpInstances();
        }

        application.Context.Items.Remove(PerRequestEvict);
      }
    }
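For completeness, the rest of the module is just a standard IHttpModule that hooks EndRequest and holds the two methods above. Here's a rough sketch of what mine looks like - the PerRequestEvict key value and the web.config type name are only placeholders, so treat this as an outline rather than the exact code:

    // Registered in web.config under <system.web><httpModules>, something like:
    //   <add name="RequestModule" type="MyApp.Web.RequestModule, MyApp.Web" />
    // (the type name above is just an example)
    public class RequestModule : IHttpModule
    {
      // Key used to stash the eviction list in HttpContext.Items - any unique string works.
      private const string PerRequestEvict = "Ninject.PerRequestEvict";

      public void Init(HttpApplication context)
      {
        // Listen for the end of every request so registered behaviors get cleaned up.
        context.EndRequest += context_EndRequest;
      }

      public void Dispose()
      {
      }

      // RegisterForEviction and context_EndRequest (shown above) live here as well.
    }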

And that's pretty much it. To use it, I can have a line like this in my module:

    Bind<ISession>().ToProvider<SessionProvider>().Using<CustomOnePerRequestBehavior>();

I should note that I didn't really write much original code here - I combined what the existing OnePerRequestBehavior already does with the approach the Castle team took. Regardless of where the code came from, I can now easily achieve session per request, which was my goal in the first place.


Categories: Development, C#


 

An Unexpected Benefit of Using an IoC Container

posted on 10/13/08 at 12:57:22 am by Joel Ross

In the past few weeks, I've basically come full circle on the usage of dependency injection frameworks and Inversion of Control containers. Back in March, I was questioning whether one was necessary, since I hadn't seen a need to do anything so complex that I couldn't wire it up by hand.

It turns out there's a bit of a chicken and egg thing going on here. I didn't need a container because my software wasn't complex enough to warrant one, and my software wasn't complex enough because I didn't have a container to manage that complexity.

That changed a few weeks ago, when I started down the road of working with repositories in conjunction with MVC. That gave me an opportunity to start from the ground up with a container. As a result, wiring pieces together became trivial. That, in turn, allowed me to build interactions between components that before would have been very difficult to manage.

But that's not what this post is about. As I started to get into using my container, I discovered a side benefit I hadn't considered before. I'll demonstrate it through a refactoring that I went through to get my code in better shape. First up, how the code started out:

    public static class MyAppContext
    {
      public static User CurrentUser
      {
        get
        {
          return HttpContext.Current.Items["CurrentUser"] as User;
        }
        set
        {
          HttpContext.Current.Items["CurrentUser"] = value;
        }
      }
    }

It's a simple example that maintains the current user for the duration of an HTTP request. Nothing fancy, but this type of model gets used a lot. At first glance, it doesn't look too bad. But there are issues here. It's static, which means I can't do much to test it (or code that relies on it). And even if I refactored it so it wasn't static, there's that HttpContext.Current reference in there, which would still make it untestable (or at least difficult to test).

The other problem is that this class has multiple responsibilities. I never saw it before, but now that I've noticed it, it sticks out like a sore thumb. This class is responsible not only for giving access to the current user, but also for how that current user is stored - essentially, this class is managing its own lifecycle. That's a violation of the Single Responsibility Principle (SRP).

So, we take a two-pronged attack to fix it. First, make it non-static. That's trivial, so I won't show that. Second, inject the storage into the class. Come up with a simple interface for storage:

    public interface IStorage
    {
      void Set<T>(string key, T value);
      T Get<T>(string key);
    }

Then, create an implementation that uses HttpContext to achieve what we had before:

    public class ContextStorage : IStorage
    {
      public void Set<T>(string key, T value)
      {
        HttpContext.Current.Items[key] = value;
      }

      public T Get<T>(string key)
      {
        return (T)HttpContext.Current.Items[key];
      }
    }

Then we change the original MyAppContext to use the interface, rather than HttpContext directly:

    public class MyAppContext
    {
      private IStorage _storage;

      public MyAppContext(IStorage storage)
      {
        _storage = storage;
      }

      public User CurrentUser
      {
        get
        {
          return _storage.Get<User>("CurrentUser");
        }
        set
        {
          _storage.Set<User>("CurrentUser", value);
        }
      }
    }

This is pretty good. Now the class delegates to another class to handle how its data is stored, and I could create an implementation of IStorage that uses a dictionary internally and pass it to MyAppContext for testing purposes, without ever changing my code.
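That test implementation would only be a few lines. Something like this sketch - DictionaryStorage is just a name I made up for illustration:

    public class DictionaryStorage : IStorage
    {
      // Plain in-memory storage - no HttpContext required, so it works fine in a unit test.
      private readonly Dictionary<string, object> _items = new Dictionary<string, object>();

      public void Set<T>(string key, T value)
      {
        _items[key] = value;
      }

      public T Get<T>(string key)
      {
        object value;
        return _items.TryGetValue(key, out value) ? (T)value : default(T);
      }
    }

    // In a test, MyAppContext never knows the difference:
    MyAppContext context = new MyAppContext(new DictionaryStorage());
    context.CurrentUser = new User(); // or however you build a test user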

Now, the next question. How do you manage MyAppContext now that it's not static? How do you create one? How do you keep track of it throughout its lifetime? Who is responsible for its lifecycle? All valid questions, and all difficult to handle manually. But if I'm using an IoC container, I can let the container manage the lifecycle for me. And it does that outside of the class, so MyAppContext becomes even simpler, because it can use normal techniques for storage:

    public class MyAppContext
    {
      public User CurrentUser { get; set; }
    }
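With Ninject, for example, the binding might look something like this - just a sketch, reusing the custom per-request behavior from my earlier post, and assuming one-per-request is the lifecycle you want for MyAppContext (the module name is made up):

    public class WebModule : StandardModule
    {
      public override void Load()
      {
        // The container owns the lifecycle: one MyAppContext per HTTP request.
        Bind<MyAppContext>().ToSelf().Using<CustomOnePerRequestBehavior>();
      }
    }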

That was definitely a long-winded explanation of how I got from a static, HTTP-based class to an everyday, run-of-the-mill class, yet still kept the same functionality with no real extra cost. Well, no cost except the IoC container itself. Which brings me full circle, and back to what the title of this post alludes to: using an IoC container allows me to easily implement and use classes that are ignorant of their surroundings. When I first started down the IoC path, I knew the benefits I expected - being able to program against interfaces and let my container figure out how to inject the actual implementations. What I didn't expect was being able to rely on the container to manage the lifecycle of my objects, which lets me write code the way I normally would.


Categories: Development, C#


 

NFL Picks: 08-09 Week 5 Results and Week 6 Picks

posted on 10/09/08 at 08:00:00 pm by Joel Ross

I didn't get to watch a single minute of NFL football this weekend. That was a bit disappointing, but life goes on. Anyway, here are the results from last week.

  • Tennessee 13 (-3), Baltimore 10 (33 O/U)
  • Kansas City 0, Carolina 34 (-9.5) (38.5 O/U): A shut out? You don't see too many of those these days.
  • Chicago 34 (-3.5), Detroit 7 (44.5 O/U): Detroit needs to stop spotting teams 20+ points!
  • Atlanta 27, Green Bay 24 (-5.5) (42 O/U): This game finally came on the boards, but very late - once they announced that Rodgers would be playing.
  • Indianapolis 31 (-3), Houston 27 (47 O/U)
  • San Diego 10 (-6.5), Miami 17 (44.5 O/U)
  • Seattle 6, New York Giants 44 (-7) (43.5 O/U)
  • Washington 23, Philadelphia 17 (-6) (42.5 O/U)
  • Tampa Bay 13, Denver 16 (-3) (48 O/U)
  • Buffalo 17, Arizona 41 (0) (44.5 O/U)
  • Cincinnati* 22, Dallas 31 (-17) (44 O/U)
  • New England 30 (-3), San Francisco 21 (41 O/U)
  • Pittsburgh 26, Jacksonville 21 (-4) (36.5 O/U)
  • Minnesota 30, New Orleans 27 (-3) (46.5 O/U)

Results Summary

  • Picks (this week / season): 9 - 5 / 46 - 28
  • Spread (this week / season): 7 - 5 / 38 - 33
  • Over/Under (this week / season): 6 - 8 / 34 - 38

On to picks for this week. It's been a busy week, so I haven't had much time to follow much NFL news.

  • Chicago (-2.5) vs. Atlanta (43.5 O/U)
  • Miami vs. Houston (-3) (44.5 O/U)
  • Baltimore vs. Indianapolis (-4) (38.5 O/U)
  • Detroit vs. Minnesota (-13) (45.5 O/U): Detroit will pretty much just limp along this season, with no real direction - other than getting that #1 pick!
  • Oakland vs. New Orleans (-7.5) (47 O/U)
  • Cincinnati vs. New York Jets (-6) (44.5 O/U)
  • Carolina vs. Tampa Bay (-1.5) (36.5 O/U)
  • St. Louis vs. Washington (-13) (44 O/U)
  • Jacksonville vs. Denver (-3.5) (48.5 O/U)
  • Dallas (-5) vs. Arizona (50 O/U)
  • Philadelphia (-4.5) vs. San Francisco (42.5 O/U)
  • Green Bay vs. Seattle (-2) (46.5 O/U)
  • New England vs. San Diego (-5.5) (44.5 O/U)
  • New York Giants (-8) vs. Cleveland (43 O/U)

Check back next week for the results and more picks.


Categories: Football


 

Query Objects and the Specification Pattern

posted on 10/07/08 at 10:55:44 pm by Joel Ross

The other night, I had an eye-opening moment when I finally understood how I could use query objects to allow me to use a generic repository, yet still have re-usable queries. But one thing I didn't quite get was how this related to the Specification pattern. The samples I found (including the one I linked to) all used the term Specification, but that wasn't how I understood the specification pattern to work.

But just by adding one method to the base class, it (again) became clear how the two are related:

    public bool IsSatisfiedBy(T candidate)
    {
      return SatisfyingElementsFrom(new[] { candidate }.AsQueryable()).Any();
    }

By adding this, you can now pass an entity to it and determine whether that entity satisfies the specification. As a result, I've renamed my base class to SpecificationBase<T>, and I can now use these objects for validation purposes as well.
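To make that concrete, here's the customer-by-last-name query object from my repository post, now inheriting from the renamed base class and doing double duty as validation (someCustomer is just a stand-in for whatever entity you're checking):

    public class CustomerByLastNameQuery : SpecificationBase<Customer>
    {
      private string _lastName;

      public CustomerByLastNameQuery(string lastName)
      {
        _lastName = lastName;
      }

      public override Expression<Func<Customer, bool>> MatchingCriteria
      {
        get { return cust => cust.LastName == _lastName; }
      }
    }

    // Querying works exactly as before, but the same object can now validate a single entity:
    SpecificationBase<Customer> spec = new CustomerByLastNameQuery("Ross");
    bool matches = spec.IsSatisfiedBy(someCustomer);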

Now, the next question. Does this violate the Single Responsibility Principle? At first glance, I thought it did, because it's used to do two different things: query and validate. But taking a closer look, I don't think it does. How it's used externally is different from what it does internally - which is to check that a collection of entities (even if the collection is only one element) satisfies a given specification. More to the point, its single reason for change would be a change in how it determines that an entity meets the specification. That isn't related to querying or validating at all - that's above this layer.

Of course, I could be wrong. In your opinion, does this violate SRP? If so, how would you fix it?


Categories: Development, C#


 

The Repository Pattern – I’m Sold!

posted on 10/05/08 at 08:59:10 pm by Joel Ross

Lately, I've been playing with NHibernate and the Repository pattern - and struggling with it a bit. I understand the concept, but it's the implementation that's been bothering me. I was questioning whether to use one repository, or to have many - essentially one per entity. You can see me questioning it a bit in the Entities And Repositories Discussion I posted a while ago, where I asked Nate Kohari whether he used one repository or many. Here's the relevant part of that conversation. I've filtered it quite a bit to capture our back and forth.

RossCode: nkohari: do you have one repository or multiple repositories?
nkohari: RossCode: lately i've been using one
nkohari: but you can only get away with that in some cases
RossCode: nkohari: how do you handle custom queries? Pass in ICriteria?
nkohari: yeah, in the past i have, but lately i've moved to linq for nh
nkohari: so it's all Expression<Func<T, bool>>s
RossCode: nkohari: but then aren't your queries and how you query data in your controllers now?
nkohari: RossCode: touche ;)
nkohari: chadmyers was just talking about that
nkohari: he suggests creating query objects
nkohari: which is actually a very good idea

I didn't have a very good understanding of NHibernate at the time, and I certainly didn't get what he meant by query objects. Plus, it got lost in the middle of much larger conversation.

Fast forward to this past week. I was adding NHibernate to a project, and struggling with the same thing. I had an IRepository<T> that I was working against in my controllers. Then I had actual implementations like CustomerRepository, OrderRepository, etc. But when I needed to do something non-generic (like getting a list of customers by a non-key field, such as LastName), it quickly fell apart. Now my CustomerRepository had a method called GetCustomersByLastName. How do you use that method when you're working against IRepository<T>? Well, the first and obvious solution is to extract another interface - ICustomerRepository - and reference that in the controller. But that quickly gets overly complex.
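To illustrate, that path leads to a custom interface per entity, each one growing a method per query - a purely hypothetical sketch of where it heads:

    // Hypothetical - one of these per entity, growing a method per query...
    public interface ICustomerRepository : IRepository<Customer>
    {
      IList<Customer> GetCustomersByLastName(string lastName);
      // ...and now the controller has to know about this specific interface.
    }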

So I went back to the tribe, and asked for some advice. I laid out what I was running into, and Nate pointed me to a blog that has information about using query objects. Once I saw it, it was so obvious! But it takes seeing it to really cement the idea. I immediately added it, and within an hour, my code was greatly simplified and much more generic. Not to mention easier to maintain in the long run.

So what exactly did I do? Well, I removed a lot of code: the base repository class, and the three or four custom repositories (OrderRepository, CustomerRepository, etc.) - and there would have been more of those if I weren't so early in the development process.

I also added some code: a single Repository<T> implementation, and a QueryBase<T> class to help me encapsulate my queries.

Let's start with the Repository class:

    public class Repository<T> : IRepository<T>
    {
      private ISession Session { get; set; }

      public Repository(ISession session)
      {
        Session = session;
      }

      public IQueryable<T> GetList()
      {
        return (from entity in Session.Linq<T>() select entity);
      }

      public T GetById(int id)
      {
        return Session.Get<T>(id);
      }

      public void Save(T entity)
      {
        Session.SaveOrUpdate(entity);
      }

      public T GetOne(QueryBase<T> query)
      {
        return query.SatisfyingElementFrom(Session.Linq<T>());
      }

      public IQueryable<T> GetList(QueryBase<T> query)
      {
        return query.SatisfyingElementsFrom(Session.Linq<T>());
      }
    }

Note that I'm also using Linq for NHibernate, but the interesting part is at the end - the last two methods. Each takes a QueryBase<T> object, which defines what query to run. For example, if I want to get customers by last name, I can create a CustomerByLastNameQuery that inherits from QueryBase<Customer>. We'll get to how to do that, but first, QueryBase<T>:

    public abstract class QueryBase<T>
    {
      public abstract Expression<Func<T, bool>> MatchingCriteria { get; }

      public T SatisfyingElementFrom(IQueryable<T> candidates)
      {
        return SatisfyingElementsFrom(candidates).Single();
      }

      public IQueryable<T> SatisfyingElementsFrom(IQueryable<T> candidates)
      {
        return candidates.Where(MatchingCriteria).AsQueryable();
      }
    }

This defines the mechanics of query objects. The Repository calls SatisfyingElementFrom (for a single result) or SatisfyingElementsFrom (for many), and MatchingCriteria determines how the query is built. That makes encapsulating queries easy - just implement MatchingCriteria. Here's one to get customers by last name:

    public class CustomerByLastNameQuery : QueryBase<Customer>
    {
      private string _lastName;

      public CustomerByLastNameQuery(string lastName)
      {
        _lastName = lastName;
      }

      public override Expression<Func<Customer, bool>> MatchingCriteria
      {
        get { return cust => cust.LastName == _lastName; }
      }
    }

As you can see, implementing your own query is relatively easy. And since the original method returns IQueryable<T>, you can later filter the results further, group them, sort them, etc., and still get the benefits of deferred execution. For completeness, here's how you could call the repository:

    IQueryable<Customer> customers = repository.GetList(new CustomerByLastNameQuery(lastName));
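And because the result is still IQueryable<Customer>, I can keep shaping it before anything actually executes. Here's a quick sketch - the FirstName property and page size are made up, and exactly which operators Linq for NHibernate can translate is a separate question:

    IQueryable<Customer> customers = repository.GetList(new CustomerByLastNameQuery(lastName));

    // Nothing has been executed yet - keep composing the query...
    IQueryable<Customer> page = customers.OrderBy(c => c.FirstName).Take(25);

    // ...execution is deferred until the results are enumerated.
    foreach (Customer customer in page)
    {
      // do something with each customer
    }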

By doing the above, I now have exactly what I want: simplicity in my repositories, and the ability to encapsulate and control how my entities are queried.


Categories: Development, Software, C#


 

NFL Picks: 08-09 Week 4 Results and Week 5 Picks

posted on 10/02/08 at 09:37:26 pm by Joel Ross

I didn't do too bad this week, honestly. 8-5 in picks and spread - I could live with that long term! It's the over / under that killed me this week.

For those who've been around a while, last year, I used to post about how much money I would have won or lost based on a $10 bet per pick (actually $30 per pick - picking the outright winner, the winner against the spread, and the over/under). At some point, I started comparing that to how I would have done against putting that same amount of money ($360-$480 per week) into an index fund. I still have the fund stats, so I figured with all of the financial news lately, I'd see how things looked. My theory (at the end of the season) was that the market typically goes up, so over time, investing in the market would be better. I still think that, but this proves that a year isn't long enough - or at least this year isn't. At the end of last year, I would have had $7,742.69 (after betting $8,010). Had that same amount been invested on a weekly basis in an index fund, the value (as of closing today) would have been $5,559.47. That's a loss of over $2,000 by gambling on the stock market instead of the NFL!

Note that I'm not advocating betting on the NFL or the stock market - if you did nothing with your money, you'd have $8,010 - more than either of the two options! Anyway, onto last week's review.

  • Atlanta 9, Carolina 24 (-7) (39.5 O/U)
  • Cleveland 20, Cincinnati 12 (-3.5) (44.5 O/U): Someone had to win, right?
  • Houston 27, Jacksonville 30 (-7.5) (42 O/U)
  • Denver 19 (-9.5), Kansas City* 33 (46.5 O/U): I was surprised by this one. Apparently Denver only wins at home!
  • San Francisco 17, New Orleans 31 (-5.5) (48 O/U)
  • Arizona 35, New York Jets 56 (-1.5) (45 O/U): They doubled the over / under. Wow.
  • Green Bay 21, Tampa Bay 30 (-1) (42.5 O/U)
  • Minnesota 17, Tennessee 30 (-3) (36 O/U): This picking Tennessee thing has been working out nicely!
  • San Diego 28 (-7.5), Oakland 18 (45.5 O/U)
  • Buffalo 31 (-8), St. Louis 14 (42 O/U)
  • Washington* 26, Dallas 24 (-11) (46 O/U): I didn't think Washington was that good. Apparently neither did the odds makers!
  • Philadelphia 20 (-3), Chicago 24 (41 O/U)
  • Baltimore 20, Pittsburgh 23 (-6.5) (34.5 O/U)

Results Summary

  • Picks (this week / season): 8 - 5 / 37 - 23
  • Spread (this week / season): 8 - 5 / 31 - 28
  • Over/Under (this week / season): 4 - 8 / 28 - 30

Green Bay is potentially without Aaron Rodgers, so that game is completely off the boards in all of Vegas - they won't put odds on it until they have at least some certainty on who's in or out. My guess is that Green Bay is the favorite either way, but the spread and the money line will be different depending on if he plays or not. Honestly, it's hard to imagine him playing with a separated shoulder, but I guess we'll see.

  • Tennessee (-3) vs. Baltimore (33 O/U): I'll stick with Tennessee for this one. Baltimore hasn't been impressive.
  • Kansas City vs. Carolina (-9.5) (38.5 O/U)
  • Chicago (-3.5) vs. Detroit (44.5 O/U)
  • Atlanta vs. Green Bay: no line yet - see the note above.
  • Indianapolis (-3) vs. Houston (47 O/U): How far the mighty have fallen. I can't think of a time since Houston entered the league that they would only be a three point dog to the Colts.
  • San Diego (-6.5) vs. Miami (44.5 O/U)
  • Seattle vs. New York Giants (-7) (43.5 O/U)
  • Washington vs. Philadelphia (-6) (42.5 O/U): The NFC East is a tough division. The problem is that they'll be beating up on each other all season, meaning that whoever prevails will be a battered team going into the playoffs.
  • Tampa Bay vs. Denver (-3) (48 O/U)
  • Buffalo vs. Arizona (0) (44.5 O/U)
  • Cincinnati* vs. Dallas (-17) (44 O/U): 17 points? Wow. Cincinnati is like New England last year - only the opposite. Last year, New England was favored by 10+ points on every game. Cincy could end up being a 10+ point dog regularly this year.
  • New England (-3) vs. San Francisco (41 O/U)
  • Pittsburgh vs. Jacksonville (-4) (36.5 O/U)
  • Minnesota vs. New Orleans (-3) (46.5 O/U)

Check back next week for the results and more picks.


Categories: Football


 
