In the application we currently spend the most time on, our "business objects" follow more of an ActiveRecord kind of model. Well, more specifically they are a heavily bastardized version of Rocky Lhotka's CSLA, but who's keeping track? Looking toward the future, I've done a lot of research into alternative models that will hopefully help us solve many of the problems we have with our current implementation. It seems all the buzz these days is around DDD, ORM, IoC, DI, et al. I have to say the more I've read about those topics, the more enamoured I've become with them and the very POCO, persistence-ignorant (PI) objects they facilitate. Furthermore, I like that these approaches tend to favor the use of patterns over specific frameworks.

I was a big fan of CSLA when the .NET 1.1 version of it came out, and used it to some great success on a project back then. I've been following its progress since then, and although it certainly has some cool concepts in it, I feel as though it is slowly but steadily succumbing to the problem that is the eventual death of all frameworks: the "silver bullet syndrome". CSLA has increasingly been trying to become everything to everybody, and in the process I think it is becoming overly complex and heavy-handed, as all frameworks that are around long enough seem to do. Lhotka has some great concepts that I will consider borrowing for our implementations, but the framework as a whole no longer matches my style, nor, it seems, that of the community at large.

So back to the original point of this post: as we evolve toward a more DDD-oriented model than a CSLA-oriented one, one of the main challenges we are working on is the right way to do validation. I agree with Oren and Udi that there are different types of validation and that they should be addressed in different ways. The first piece of the puzzle I attempted to tackle is what they termed "business rule validation". To me it makes sense that any kind of complex (non-input) validation should be performed by a specific object dedicated to that task, as that promotes the greatest flexibility and reuse. This got me thinking about the specification pattern, since this type of validation is just one of the cases that pattern can be used for. The other great use of the pattern is selecting a subset of objects from a master list, which ties in very well with the repository pattern for object retrieval. This line of thinking eventually led me to the idea of creating an implementation of the specification pattern that could be used for both validation and entity selection (using LINQ). It seemed like a reasonable goal and one that could be very beneficial if done correctly.
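To make the idea concrete before diving into the research, here is a rough sketch of the dual-purpose shape I had in mind. This is purely illustrative; the ISpecification / Specification names are placeholders of my own, not Ian's code or our actual implementation:

using System;
using System.Linq.Expressions;

// A specification exposes its rule both as an expression tree (so LINQ
// providers can translate it into a query) and as a simple boolean test
// (so it can be used to validate an object's state).
public interface ISpecification<T>
{
    Expression<Func<T, bool>> Predicate { get; }
    bool IsSatisfiedBy(T candidate);
}

public class Specification<T> : ISpecification<T>
{
    public Specification(Expression<Func<T, bool>> predicate)
    {
        Predicate = predicate;
    }

    public Expression<Func<T, bool>> Predicate { get; private set; }

    public bool IsSatisfiedBy(T candidate)
    {
        // Compiling on every call is wasteful; see the note on caching further down.
        return Predicate.Compile()(candidate);
    }
}

The same object can then answer "is this product valid?" in a validation context and "which products match?" when handed to a repository query.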

As I am a #pragma-tech programmer (pragmatic, get it?), I always like to see what work others may have done in the area. I hate being a plumber, so I fired up a Google search and turned up several articles on implementations of the specification pattern in .NET, but the ones that were most interesting to me were this one and this one, both by Ian Cooper. I'll spare you the gory details of how it all works; you can go read Ian's articles for that. In short, though, in these articles Ian pretty much lays out an implementation of exactly what I wanted to accomplish, but unfortunately neglected to include a download for the code. Luckily the articles themselves were detailed enough and contained enough code that I was able to get the gist.

As I wanted to spend some time honing my C# / LINQ / lambda skills anyway, I decided to take a crack at fleshing out an implementation of what he talks about to see how it all comes together. A few hours later I had it all banged out, and the default implementation of what he was describing worked quite nicely. There were a few warts that I didn't much like, however, which I decided to fix in my implementation...

The first was that the only way to chain specifications was by writing code like this:

Specification<Product> ProductSpec = new Specification<Product>(p => p.PartNumber != string.Empty).And(new Specification<Product>(p => p.MinOrderQty > 0));

I didn't like the fact that you had to create a new specification object inside the chaining method to make this work, so I overloaded those functions to also take a lambda expression directly. Now you can do this instead, which is more fluent in my opinion...

Specification<Product> ProductSpec = new Specification<Product>(p => p.PartNumber != string.Empty).And(p => p.MinOrderQty > 0);
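For the curious, an overload like that can be built by stitching the two expression trees together. The sketch below shows one way to do it as a member of the Specification<T> placeholder class from earlier; it is an illustration rather than the exact code from my implementation, and it uses Expression.Invoke, which is fine for in-memory evaluation but which some LINQ providers won't translate:

public Specification<T> And(Expression<Func<T, bool>> other)
{
    ParameterExpression param = Predicate.Parameters[0];

    // Reuse the left-hand lambda's parameter and invoke the right-hand
    // expression with it, so the combined lambda still takes a single argument.
    var body = Expression.AndAlso(Predicate.Body, Expression.Invoke(other, param));

    return new Specification<T>(Expression.Lambda<Func<T, bool>>(body, param));
}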

I also added extension methods for IQueryable and IEnumerable that accept an instance of an ISpecification to use as the query filter. I think this is *much* nicer than the way Ian demonstrates it in his article, as it can be done all in one line instead of the three that it took with his implementation. So now you can do:

var prd = (from p in products
           select p).WhereSpecification(ProductSpecifications.ProductIsConfigurable);
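The extension methods themselves are pretty thin wrappers. Roughly, and again assuming the hypothetical ISpecification shape sketched above (WhereSpecification is just the name I picked), they look something like this:

using System.Collections.Generic;
using System.Linq;

public static class SpecificationExtensions
{
    // IQueryable overload: hand the expression tree to the underlying LINQ
    // provider so it can translate the filter (e.g. into SQL).
    public static IQueryable<T> WhereSpecification<T>(this IQueryable<T> source, ISpecification<T> spec)
    {
        return source.Where(spec.Predicate);
    }

    // IEnumerable overload: compile the expression and filter in memory.
    public static IEnumerable<T> WhereSpecification<T>(this IEnumerable<T> source, ISpecification<T> spec)
    {
        return source.Where(spec.Predicate.Compile());
    }
}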

I also added caching of the compiled expression to improve performance.
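Conceptually the caching just amounts to compiling the expression once and holding on to the resulting delegate. Something along these lines (again a sketch, replacing the naive IsSatisfiedBy in the placeholder Specification<T> class shown earlier):

private Func<T, bool> _compiledPredicate;

public bool IsSatisfiedBy(T candidate)
{
    // Expression.Compile() is relatively expensive, so do it once and reuse
    // the delegate for every subsequent validation call.
    if (_compiledPredicate == null)
    {
        _compiledPredicate = Predicate.Compile();
    }

    return _compiledPredicate(candidate);
}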

The end result is a way of creating a single object, using a pretty fluent interface, that you can then use to validate an object's state or drop into a LINQ expression against any type of LINQ data source to specify filter criteria. Pretty cool!

The bits are here...



  1. Not that many people probably read this blog (which is no one's fault but my own), but for anyone who still does, I thought I'd let you know that I'm moving on.

    You see I'm starting my own Rails firm called ProtectedMethod here in Columbus. Therefore any future blog posts I write will be posted on the company blog here.

  2. Several months ago I had the opportunity to record a podcast episode with Chris Woodruff and Keith Elder. The idea for the episode started way back at Devlink 2009, where I ran into Keith at the speakers' VIP dinner. We'd both had a couple of drinks by that time and got to chatting about a lot of different subjects, one of which was the talk I was giving at the conference about production debugging. Keith thought it sounded like an interesting and relevant topic that they hadn't had on the show before, so we struck a deal to get an episode recorded.

    I put quite a bit of time and effort into synthesizing some of the most important information into a form I thought I could talk about over the phone. Even with all that prep work though I must say I was still pretty nervous on the night of the actual recording. I feel like at this point I'm pretty good in front of a crowd giving the talk when I've got slides and examples to show. I found out that trying to explain it all with just words can be a much more difficult endeavor!

    All that aside though I think the end result turned out quite well. Probably thanks in no small part to some great editing work by Chris and Keith! ;) If you've already seen my talk in person there probably isn't a whole lot new to hear on the podcast, but if not I encourage you to check it out! I think there is definitely some useful info there in regards to where to get started on solving your next big customer problem.

    Thanks again guys for having me on and keep up the good work of spreading tech knowledge to all your loyal listeners!

  3. Favorite Mac Software

    TextMate - The indispensable, extensible text editor for all your programming and text editing needs.

    MacPorts - Package / application management system. The easiest way to install thousands of apps and required dependencies without having to pull source, recompile, yada, yada...

    QuickSilver - Can't live without this if you are a keyboard junkie.

    CleanMyMac - OK... I admit it... I'm *totally* OCD when it comes to PC cleanliness. The fact that I know that uninstalling apps doesn't get rid of all their related junk drives me nuts. CleanMyMac will fix that problem as well as many other routine maintenance tasks.

    Evernote - You take notes right? Keep em in the cloud and synced across all your devices. With plugins for all major browsers, there is no better note client than this.

    MacLoc - I like to be able to lock my machine without logging out and closing all my apps. Seems like something that should be built into the OS, but it doesn't appear to be there in OS X. This program solves that problem.

    OmniGraffle - If you do any significant form of diagramming this is the tool for you.

    p4Merge - This is the merge client I'm currently using and have set up with Git. I hear Araxis Merge is the bee's knees, but it's also not cheap. Really wish BeyondCompare had a Mac version. *sigh*

    Pixelmator - My Photoshop replacement. It's an absolutely gorgeous (and affordable) app for those times when you need to get your hands dirty with some image editing.

    SizeUp - One of the features I *love*, *love*, *love* about Windows 7 is the new shortcut keys for sending windows around the screen, either to certain positions / sizes or to other monitors. One of the hardest things I've had to get used to on OS X is the non-Windows way in which window management is handled. SizeUp solves this issue for me by giving me back the windowing features of Windows 7.

    Tweetie - My twitter client of choice. No holy wars here, just the one I like the best.

    Growl - Get notifications of events happening on your system in an unobtrusive way.

    iStat Menus - Ever want to know what the hardware on your mac is doing? This program will tell you at a glance on the menu bar.

    VMWare Fusion - Great virtualization app for running your Windows VMs. One mistake I made was not creating a partition for my Windows installation, which would have let me boot into it natively using Boot Camp or run it as a VM from inside OS X. One of these days I'll correct that.

    Visor - Gives you a globally available terminal window available from a hotkey. This one is the hotness for developers who find themselves using the terminal a lot.

    AutoTest + Growl - Not really an application per se, but I love the autotest/ZenTest tool as a way to help guide my TDD. It will automagically monitor a folder containing some Ruby specs and run the corresponding specs every time you save either the spec or the source file. Set it up with Growl and you will be notified about whether your specs passed or failed. I also recommend setting up the fsevent library to reduce CPU overhead and save yourself some battery life.

  4. Today I was honored to be a presenter at the Software Engineering 101 Conference organized by Jim Holmes in Columbus, OH. It was a great little conference, put together by Jim as a way to get back to some of the basics people need to know about developing good software. There were sessions on things such as OOP, SOLID, TDD, and of course my talk on debugging.

    The thing that set this conference apart for me from many others I've attended in recent memory was the very high proportion of new faces in attendance. I attend a fair number of events and have come to be at least familiar with a lot of people in this region, so this surprised me. The same sentiment was echoed by my fellow speakers, who all agreed that this is a great thing! These are exactly the people we want to be reaching out to as speakers, helping them learn new and cool things that they can then take back and share with their coworkers and peers.

    I can't claim to know everyone's thoughts on the subject, but for me that is the reward I feel from giving back to the community through speaking... the sense that maybe I served as the spark that started a fire that will continue to grow and spread, carrying good engineering practices with it. The more we all pitch in to educate each other, the better all of our lives are going to be.

    Slides for the talk can be found here. These are the Devlink-specific version, but 99% of the content is the same as what I talked about today.

  5. Big thanks to Joe Wirtley, Jim Holmes, and Justin Kohnen for hosting my talk on Production Debugging last night at the Dayton .NET User Group. Thanks also for all the attendees that came out to learn something new! The feedback was tremendously positive, which makes the experience for me as the speaker very rewarding.

    During the presentation Joe suggested that I post some links here about how to setup Symbol/Source server, which I think is a fantastic idea. So the best links I've come across to get all this stuff working can be found here:

    Setting up a Symbol Server
    Setting up Source Server

    In addition, for more great reference material on symbols, debugging, WinDbg, etc., please see my del.icio.us bookmarks here. I have many items tagged with "debugging", "windbg", "symbols" and other terms I talked about last night that represent the best links I've found from across the web over the years I've been doing this.

    For presentation content please see the link in my post about my presentation at Devlink, as the content is 99% the same.

  6. Got back a few hours ago from the Devlink conference in Nashville TN. Kudos to Jon Kellar and all the other folks that worked extremely hard to put together a great conference. Thanks again to those same people for giving me the opportunity to speak to other passionate developers at such a great venue. My experiences at this conference opened my mind in some ways I wasn't expecting (which I'll blog about later once I find the right words to say what I want to say).

    As a followup to my presentation I wanted to add the answers to the two questions that were asked that I didn't have good answers for at the time.

    1) What permissions are required to use ADPLUS.vbs to get memory dumps?

    Extensive searching on Google for an answer to this question turned up no results. I do know that for Visual Studio remote debugging, however, things tend to work much more easily with admin privileges, and I imagine the same would be true for ADPLUS. If anyone does happen to find a definitive answer to this question, please post it in the comments for all to share!


    2) Do these techniques work for the .NET Compact Framework?

    From the research I did, I could not find any reference to anyone using WinDbg to debug memory dumps of .NET CF applications. I did, however, find this blog post describing how one could go about finding memory leaks in a .NET CF application, which I thought might be useful.

    Thanks to everyone who gave feedback on my presentation! I had a great time doing it and I hope that everyone in attendance was able to learn something useful from it. You all are the reason I keep doing this so I can't say thanks enough for all the kind feedback I've received.

    The slides / example scripts can be found here. (Note that the virtual machine is too big to host anywhere, but you can always download the BuggyBits application source from Tess's website.)

  7. So I've been experimenting a lot recently with Ruby during my free time. Writing some simple applications and exploratory tests to help learn the many interesting nuances of the language. The feeling that has consistently struck me throughout this period is one of profound... freedom. At first I thought that this feeling was merely a reflection of the expressiveness of the syntax, the abundance of helpful libraries available, or the way things just seem to work. I still think those things are a part of that feeling, and I'll dive into those with more detail later. After significant further reflection though, I'm starting to believe that there is a much deeper reason for those feelings as well...

    In my daily work I primarily write .NET code using all the fancy visual tooling of Visual Studio and R#. The work I've done in Ruby thus far has been something of a programming renaissance for me. For the Ruby work I've been using primitive tools (SciTE, the command line, etc...), and as a dynamic interpreted language, Ruby necessarily lacks a lot of the static type trappings and compile-time warnings of languages like C# or Java. This has been quite a change for someone like me who has said on numerous occasions that I don't know how I would live without R#. It's been a very different experience programming with almost no fancy visual tooling. Strangest of all, I haven't missed any of it one bit! I've heard a similar phenomenon described by other Rubyists, but never understood or believed it until now.

    Which brings me back to the main point. I've been reading Uncle Bob Martin's Clean Code tips of the week recently, which have had me ruminating quite a bit on the various topics of code quality. It was Clean Code Tip of the Week #9, however, that really got me thinking. In this tip the intrepid adventurers are recounting tales of a time in the past when it took 24 hours to run a program on punch cards. Because of this they spent inordinate amounts of time manually reviewing their code to make sure things worked right, and consequently they had very few defects. We live in a golden age compared to that, where compute resources are cheap enough and fast enough that we can let the computers do a lot of that heavy work for us. This all seems great on the surface, but I'm beginning to wonder if all these advancements in technology don't have a much darker, rarely talked about downside.

    One thing I have definitely experienced during my renaissance affair with Ruby is that I've found myself thinking much more carefully about the code I create. Without static typing to warn me at compile time about type mismatches, and without things like IntelliSense to easily discover parameters and method return types, it becomes much more important to pay attention to the details of variable naming, parameter counts, object orientation, etc...

    All of this raises the question in my mind: have advancements in technology and visual tooling actually been making us dumber or lazier by allowing us not to exercise our brains? Biologically and psychologically speaking, as I understand it, our higher-order brain functionality is one of the key characteristics of what makes us human. If we offload too much of that processing to machines, what is left to define us as humans? It's not directly related to programming, but I certainly know I'm not alone in thinking about these issues. Some time ago Rick Strahl blogged about how he felt as though the rise of search technology has been diminishing his mental capacity. His blog post (which I can't seem to find at the moment) very closely echoed my own thoughts on the matter. If we can search for anything at a moment's notice, why do we need to remember it? Are we stifling our own creativity and innovative capacity by having any answer we desire readily available at our fingertips? Are we as programmers (and indeed society in general) doomed to head down a WALL-E-esque path of increased technological oversight leading to humanity's complete inability to do anything for ourselves without assistance?

    Perhaps more important, or at least more approachable, is the question of whether these tools are really needed. The claim of course is that they are productivity boosters, but is that really the case? Having witnessed some true Ruby experts at work, it was nothing short of amazing what they could accomplish in an hour with nothing more than VIM. As another example, I used to work on software written in an obscure language called DIBOL on VAX/Alpha systems. There was no such thing as a GUI on these systems, so your only option was to use a good text editor such as TPU. When I first started that job, coming from a Windows background, I railed at the lack of IntelliSense, etc..., and sought out and found a Windows-based text editor that would at least give me syntax highlighting for this obscure language. I convinced the company to buy me a copy and spent several painful weeks working with it. It was a ridiculously painful process because I would edit the source on Windows and then have to FTP it to the VAX to compile and run it. After witnessing my boss crank out code like nobody's business with TPU, I finally broke down and decided to learn the tool. Within a few weeks I was whipping out DIBOL code like I'd never done before, and my productivity only continued to increase as my familiarity with TPU grew. I know that Jimmy Bogard recently had a similar experience at the NFJS conference, where he witnessed Stuart Halloway, a Clojure master, complete an absolutely insane amount of work in an hour using Clojure. I certainly think there is at least some evidence to suggest that these so-called productivity-boosting tools are just a band-aid for deficiencies in some of our common programming languages.

    Don't get me wrong, there are some *great* things about R#, VS, etc..., and certainly people can use these tools effectively without writing crappy code. I do wonder, however, whether it wouldn't be better to make these tools something people have to earn over time through a demonstration of mastery without them. If we give new programmers in college all these tools right from the get-go, and they become dependent on them to help them write clean code, will they still be able to stand on their own when the crutch has been removed? Will allowing tools to point out all the smells in their code cause them to be blind to the smells they create when the tools are not present?

    An exercise for the reader is to create a reasonably complex system using your statically typed language of choice and your favorite visual tooling. Then create a reasonably complex system using Ruby and a basic text editor. Come back to both after a few months and see which is easier to reabsorb. Another exercise is to spend some time watching someone who is a true expert with an alternative language at work for a while without all the fancy tools we have in .NET. I think the results might be eye opening.

    All that being said, I am totally loving the Ruby language and all the tools that have sprung up around it. As I said to someone else recently, once you pierce the veil of static typing as being some kind of safety blanket, you can stop worrying about angle brackets and interfaces and just get down to business. Ruby is just that... a language designed to get out of your way and help you get things done. Rarely have I found what amounts to reading a reference novel for a language an enjoyable experience, but the Pickaxe has truly been a fun read. Also, some of the frameworks built up around Ruby are completely amazing in their simplicity and power. Things like Sinatra, Heroku, etc... make it incredibly easy to get a site up and running (the way YOU want it) in no time flat.

    Along with Ruby has come my first exposure to Git, which, at least from my limited experience thus far, seems like "source control done right". I've got a lot more reading to do on the topic, but so far I think TFS and other heavy-handed SCMs are for the birds.


  8. So up until a few days ago I thought I had a pretty good handle on how all this AJAX stuff worked. Then, while investigating what seemed to be a pretty innocuous bug report, we ran into a particularly painful behavior of the XMLHTTP object that made me rethink that assumption. The behavior makes sense once you give it some thought, but it is not at all obvious at first, which is why I'm posting the story here, both so I can remember it and so hopefully someone else will benefit from the information.

    So picture for a minute a scenario where you have an input form that a user must fill out before clicking a button to submit the form. Some of these input fields are textboxes that require some complicated validation logic (easy to do on the server, but difficult on the client) that you run to ensure that the entered data is valid. To get the best user experience you want to have the validation appear to be on the client side, occurring right after the user leaves the field. To achieve such functionality you may have likely taken the same approach that we did of attaching an onchange event handler to the input element that makes an AJAX request to the server to perform the validation. And this is where the problem begins...

    Likely (as we did) you want the input to be validated and any errors flagged for the user before you submit the whole form to the server. Therefore it makes perfectly logical sense to execute the AJAX request synchronously so that it blocks further downstream processing from occurring until the validation routine has had a chance to finish. So you test all this functionality and it seems to be working just great and life is good... or is it...?

    So the next functionality you add is the ability for the user to submit the form which requires some javascript processing of its own. So you wire up an onclick event handler to the submit button and you think you're good to go... until...

    A user testing the application finds that if they change the value in one of the input fields that uses the AJAX validation (to an invalid value) and then immediately click the submit button before clicking elsewhere on the page, the validation does not prevent them from submitting the form. A simple example could be as follows:



    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
        <title>Untitled Page</title>
        <script type="text/javascript" src="Scripts/jquery-1.3.2-vsdoc.js"></script>
        <script type="text/javascript">
            $(document).ready(function()
            {
                $("#mybutton").click(HandleClick);
            });

            var lastTextBoxValue = "";

            function HandleClick()
            {
                // Logs whatever the change handler has stored so far; this can be a
                // stale value if the click fires while the synchronous request below
                // is still in flight.
                console.log("HandleClick:" + lastTextBoxValue);
            }

            function HandleChange(input)
            {
                console.log("HandleChange Before Ajax");
                // The synchronous request blocks the rest of *this* handler, but not
                // other event-driven execution stacks such as the button's click.
                $.ajax({ url: "EventTest.html", async: false });
                console.log("HandleChange After Ajax");
                lastTextBoxValue = input.value;
            }
        </script>
    </head>
    <body>
        <div id="mybutton" style="border: 1px solid black; background-color: Aqua">
            Click</div>
        <input type="text" onchange="HandleChange(this)" />
    </body>
    </html>



    EGADS! What possibly could have gone wrong with such simple functionality??? Firing up the debugger you swiftly realize that something very strange is going on... Setting breakpoints anywhere in the chain of processing appears to prevent the issue from happening. So you spend several hours banging your head against the wall looking for an answer.

    That is, of course, unless you happen to have read this blog post and have learned from our frustration in finding the cause of this issue. To make a long story short what is happening here is that using the XMLHTTP object in synchronous form does not behave exactly as you might expect it to at first blush...

    A synchronous request does in fact block further execution of JavaScript in the current execution stack that led to the request, as one would expect. What it does *not* block, however, is other execution stacks that may have been triggered by other asynchronous processes, such as events like a button click! So in the validation case where you changed an input value and clicked the submit button, what you would see is:
    1. onchange event of the input field fires
    2. synchronous AJAX request begins
    3. click event of button fires
    The behavior makes sense once you realize that if the synchronous call blocked all JavaScript from processing, it would basically lock up your browser and prevent scenarios such as having a cancel button to cancel a long-running synchronous request.

    We haven't found a great solution to this issue yet, but I'm throwing this out there as a word of caution: there is a reason the abbreviation stands for ASYNCHRONOUS JavaScript and XML. I've seen time and time again that if you fight the asynchronous nature too much, you usually end up burning yourself sooner or later.

    In addition, please see Jeff's comments below for why this behavior might be a little more nefarious outside of my validation example. He actually did a lot of the work on finding this bug, but since he hasn't got his blog up and running yet I'm posting it here.

    Some further reading materials on javascript timers / events / performance that can help to understand all this better are:

    1. How Javascript Timers Work
    2. Secret of Keeping Web Apps Responsive
    3. Client-side Performance Optimization of AJAX Applications

  9. Last night was my second go-round giving my Production Debugging talk, this time for the Cincinnati .NET User Group. The group is run by some great people: Mike Wood, Matt Brewer, and Phil Japikse (sorry if I left anyone out). There was quite a turnout last night at 67 people, nearly setting the record for the group! I was impressed by the passion and insightful questions of all those in attendance and want to give thanks to the organizers as well as the attendees.

    Feedback thus far has been extremely positive which goes a long way to validate why I believe so strongly in the .NET community and getting involved in giving back. Hopefully everyone there last night learned a few new tricks and will be better prepared for when those critical issues happen.

    One question came up last night that threw me for a bit of a loop, so after ruminating on it some more I wanted to go ahead and address it here on the blog. The question was roughly:

    During the crash demo why didn't you open the 1st Chance NullReferenceException dump? Wouldn't that have shown us the original error and location without having to jump through hoops in the 2nd chance dump to find it?


    This was a great question and, in reference to this particular simple crash demo, spot on! For this very simple case, opening the 1st chance dump would have been a quicker way to get to the root of the problem. For more complicated crash scenarios, however, that likely would not be a good choice. If we remember what I spoke about last night in regards to how exceptions are handled, 1st chance exceptions occur when an exception is first thrown and the application has not yet had a chance to handle it. Therefore, in a more complicated scenario there could have been multiple 1st chance exceptions of NullReference or other types that had corresponding dump files written but were actually handled gracefully by the application and were not the source of the actual application crash. In those types of scenarios we want to make sure to open the 2nd chance dump so that we can be sure we are looking at the actual UNHANDLED exception.
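
    If it helps to see the distinction in code, here is a tiny made-up example (not from the demo itself): the first NullReferenceException below is thrown and caught, so it only ever shows up as a 1st chance exception, while the second is never handled, becomes the 2nd chance exception, and actually crashes the process.

    using System;

    class FirstVsSecondChanceDemo
    {
        static void Main()
        {
            try
            {
                string handled = null;
                // Throws a NullReferenceException. The debugger sees it as a
                // 1st chance exception, but the catch block below handles it,
                // so the application keeps running.
                Console.WriteLine(handled.Length);
            }
            catch (NullReferenceException)
            {
                Console.WriteLine("Handled gracefully; no crash here.");
            }

            string unhandled = null;
            // No catch block this time: after the 1st chance notification the
            // exception goes unhandled, becomes a 2nd chance exception, and
            // takes the process down. This is the one the 2nd chance dump shows.
            Console.WriteLine(unhandled.Length);
        }
    }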


    As promised, I have attached all the presentation materials to this post so people can refer back to them later. There are a lot of files in the package, so I recommend first checking out the Readme.docx which talks about what each of those files represents.

    Please feel free to leave any comments / questions you may have either here on my blog or ping me on twitter.

    Happy debugging!


  10. I’ll stand by the statement I made several months ago to my buddy (and now co-worker) Jeff that I think we’ve got a pretty stellar team at our current place of employment. Unfortunately, today that team got a little weaker with the departure of another friend and co-worker, Lee. It seems to be a bit of a disturbing trend these days that a lot of good people I know are either leaving or being forced to leave their current places of employment. Perhaps that is just the way things have been and always will be in troubling economic times, but I’d certainly like to hope for better.

    It’s always hard to say goodbye to a good co-worker, and even harder if they happen to be a friend as well. After working at a few different places in this business, I think you come to realize that one of the most important factors in being happy with your job is getting to work with really smart, passionate people who share your interests. Those people seem to be few and far between these days, especially as companies attempt to outsource and bring in contract resources. This makes it all the more important to do everything you can to hold on to those good people when you’ve got them. It’s unfortunate that sometimes companies don’t realize the value of keeping that well-oiled machine of a good team whole.

    It’s hard not to feel at least a little pang of guilt, as a part of management, when you do lose one of those people. You always wonder if there was a little something more you could have done to change their mind, or better yet, to stem the underlying tide of discontent before they began considering other options. I can honestly say that the team I work with is one of the most important factors I weigh when evaluating my career opportunities. I want to be a part of a really good team, and more importantly I want to be a part of making that good team into a *great* team.

    With that in mind I plan on spending more time being a little retrospective in the coming months, focusing some time and energy on what I can do to become a better manager to those I work with. It’s easy for me to get lost in the technology sometimes because it’s what I know. It’s safe and comfortable and easy to learn. Management, on the other hand, is a relatively new thing for me, and I believe it is something infinitely harder to learn to be great at than simply banging out code.

    Lee… Goodbye and good luck with your new endeavors. You’ll be missed.

