You have reached the blog of Keith Elder.

To CSLA or Not CSLA – Your Thoughts?

Posted by Keith Elder | Posted in .Net | Posted on 28-11-2007

Recently at work we have been having some discussions around standardizing .Net development on Rockford Lhotka’s CSLA framework.  My team has been digging into CSLA for several weeks now, generating business objects using CodeSmith Freeware 2.6 and working through Rockford’s book generating CSLA objects for our main product.  With only a few weeks into it and no real production time behind the framework, we are having a hard time jumping to one side of the fence or the other.  There are things we like about CSLA, such as the ability to restructure an application from two-tier to three-tier just by changing the config file.  Maybe those of you that have used CSLA can chime in and provide some constructive criticism or positive feedback about your experience.

Before you jump in and start commenting here is some background on why this is being considered.  I think it is important to know why we are considering this path. 

The main reason we are looking to standardize development across various teams is that we have teams moving to .Net, and the differences between teams make it hard to move engineers among the various projects as needed.  Today there isn’t a standard on how applications should be built, and each team builds things the way they choose.  Some people may think that is good since it gives developers flexibility, but it also causes problems.  For example, one team may use NHibernate and another might use something else.  If a team member needs to be reallocated to provide an additional resource to a project, the problem arises that they are no longer productive because they don’t know NHibernate.  Of course the same can be said for any variety of things.  Obviously, having a standard each team follows would be beneficial.

In addition to reallocating resources, other teams are moving into .Net from various other platforms.  Given that there are 15 ways to do data access alone, we are trying to minimize that thought process so the engineers can focus mainly on business development, not the “how”.  The CSLA framework is very attractive from this angle because it provides a wealth of documentation whereby we could hand an engineer a book and they can learn how things are done.  They also don’t have to know the internals of CSLA since they are going to focus on business rules, validation and security.

That’s some of the thought process going on.  There are other things we are considering as well but I don’t want to get into all of the details.   The community in general seems to be somewhat happy with CSLA and then there are some that are not.  I have always said you really don’t know a product until you put it into production.  For those that have heard me speak, you know that I don’t normally speak on topics until I have put them in production and gone through all of the rusty washers.  So for those of you that have used CSLA for a lengthy amount of time what are the rusty washers or shiny pennies you found?    

Please keep your emotional flames to yourself, I don’t want this to get ugly and turn into a Y vs Z vs X debate, just the facts.  Ok, that’s it.  Ready?  Discuss! 

Comments (30)

Used CSLA.  Hated it.  The MVP app we were trying to build was dog-slow.  It took 2 minutes to load one page of hierarchical data.  That was in 2008.

It looks like there has not been any activity on this thread in a while.

I just stumbled across it while researching the exact same issue that Keith initially was attempting to solve – setting a default toolset/framework/environment for projects to allow the developers to focus on where the money is (UI and BL) while leaving the busy work out of the way.

I was wondering Keith – what did your group finally decide/standardize on?

Hi guys, I know this is late, but this still seems to be a very relevant page on the net.

I have used both CSLA and NHibernate and although their paths cross a bit they are actually very different animals.

Hibernate is an ORM (mapping entities to your database) and CSLA is a business-logic framework.

So at first glance they should work together very well, but in practice that is very difficult to achieve, though I can’t speak for the latest version of CSLA, which claims to support the use of NHibernate and other ORM tools like LINQ to SQL and LINQ to Entities.

They are both excellent frameworks and do what they say on the tin.

CSLA is just magic once you start using it.  Once you have your persistence sorted out, watching collections of objects and child objects get seamlessly loaded and persisted, with all the business logic applied and any broken rules immediately displayed on your UI, is just excellent.  The security just works, not to mention that the undo and transaction management are also a great bonus.

My only gripe about CSLA is that all of this functionality can really start to slow down your application if you don’t keep an eye on things. It’s great linking all your business objects together and harnessing the power in your application, but this can easily get out of control.

If you choose to use CSLA, make sure that you lazy load child collections. Any third-party grids like Infragistics also need to be told to look only as many bands deep as you need; otherwise your business objects go off in a frenzy loading child objects.
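The lazy-loading advice above can be sketched roughly like this (a hypothetical illustration, not CSLA’s actual API): the child collection stays unloaded until the first time something, such as a grid band, actually touches it.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of lazy loading a child collection: the parent
// defers the fetch until the property is first read, so simply binding
// a grid to the parent never triggers a cascade of child loads.
public class Order
{
    public static int Loads; // counts store hits, for demonstration only

    private List<OrderLine> _lines; // stays null until first access

    public List<OrderLine> Lines
    {
        get
        {
            if (_lines == null)
                _lines = LoadLinesFromStore(); // fetch on first touch only
            return _lines;
        }
    }

    private List<OrderLine> LoadLinesFromStore()
    {
        Loads++; // in a real app this would hit the data portal/database
        return new List<OrderLine> { new OrderLine(), new OrderLine() };
    }
}

public class OrderLine { }
```

Because the load happens at most once per instance, a grid told to expand only one band deep never pays for the deeper levels.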

Also, the CSLA version I used (version 1.x) only had row-level dirty checking, so you had to update the whole row when persisting an object, which did not help with performance. Though with hindsight this could have been implemented relatively easily, and may already be implemented in newer versions.
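The field-level dirty checking the commenter wishes for can be sketched as follows (a hypothetical illustration, not how CSLA or NHibernate actually implement it): each setter records which fields changed, so persistence code could UPDATE only those columns instead of the whole row.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of field-level dirty checking: property setters
// record the names of changed fields so only those columns need updating.
public class Customer
{
    private readonly HashSet<string> _dirtyFields = new HashSet<string>();

    private string _name;
    public string Name
    {
        get { return _name; }
        set { if (_name != value) { _name = value; _dirtyFields.Add("Name"); } }
    }

    private string _city;
    public string City
    {
        get { return _city; }
        set { if (_city != value) { _city = value; _dirtyFields.Add("City"); } }
    }

    public IEnumerable<string> DirtyFields { get { return _dirtyFields; } }
    public bool IsDirty { get { return _dirtyFields.Count > 0; } }

    // Called after a successful save to reset the tracking.
    public void MarkClean() { _dirtyFields.Clear(); }
}
```

A persistence layer could then build its UPDATE statement from `DirtyFields` alone, which is the performance win the commenter describes.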

Transactions were good, but it only supported heavyweight distributed transactions, which was a bit of overkill in our application; lightweight transactions would have been fine.  They are much faster and could have been implemented, but not without a lot of rework.

NHibernate, on the other hand, just persists your relational objects to the database automatically, and that’s it! My experience with it has been nothing short of fantastic. It is very performant (reads can even be faster than going to the database directly if you use the caching server), it has field-level dirty checking, and it supports many databases out of the box.

The ICriteria interface is excellent, as is HQL for querying the objects. The SQL generated is generally excellent, and all SQL statements are prepared/compiled with parameters with the SQL Server 2005 dialect that I am using.

The ultimate framework is certainly a combination of the two.

Dear all, I have read all of your comments but am still confused about the end result of this long chain of comments. I am developing software for an insurance company and I want to use generic collections. Would it be better to use NHibernate or CSLA?

Waiting for your valuable suggestions.

Little late but…

With the fast-paced developments being made to .Net by Microsoft itself, it is no longer an attractive exercise for an enterprise to build an in-house framework on top of the .Net Framework. More likely, choosing third-party products (proprietary or free) would be wise, but keeping it simple and close to the .Net Framework would be wiser.

Consider AJAX.Net. There were AJAX engines for .Net from third-party vendors before Microsoft released their version.

Consider ADO.Net Entity Framework. There were ORMs for .Net from third-party vendors before Microsoft announced that they will release their own version.

If you were an early adopter of products from third-party vendors (like CSLA.Net and NHibernate), you may end up struggling to switch when you find out that a new technology/approach is better. If you stayed closest to plain .Net, you may have no problem switching to new technologies. That said, it would still depend on the overall architecture, strategies and disciplines used in designing and implementing your products.

The most important thing to remember is to own the design and the implementation of your products. You should own the ERDs and the UMLs. Never let the technology that you use dictate the design. Likewise, ensure that you have full control of the implementation, with as much separation of concerns as possible but without compromising testability and maintainability.

Let’s talk a bit about CSLA.Net. Note that, historically, CSLA.Net was designed with UIs and remoting in mind (and for years, if I may add, Lhotka was not a believer in SOA). This is the main reason why CSLA.Net is not SOA-ready as-is, which required Lhotka to create the DataPortal as an adapter later on. Since then, CSLA.Net users have ended up adjusting their designs to keep using CSLA.Net. So, to some extent, CSLA.Net suddenly took control of the design and its future.

Although Microsoft has no equivalent of CSLA.Net, it can be replaced by an ORM and a set of project templates to create the business objects, components and/or services. Like your ERD and UML diagrams, an ORM and project templates are easier to manage and control.

… just my two cents.

Well, it looks like I’m a little late to the discussion, but I still wanted to add my two cents.

I used CSLA on an ASP.NET project for 4 months. I didn’t choose it and I didn’t know anything about it before I got on the project. I had no problem learning it and it never gave me any problems. That project had one of the most well-structured and well-tested code bases that I’ve ever worked with. The developers deserve a lot of credit for that, but I think the framework has something to do with it also.

The problem that I have with most of the anti-CSLA comments here is that I didn’t see ANYONE say, “We used CSLA and it caused all kinds of problems for us.” Other people complain that CSLA is stuffed with all of this extra stuff that they won’t ever use, or it’s not architected the way that they prefer. So people cast CSLA aside and write their own validation framework, their own business object framework, etc.

So let’s think about this. I’m not seeing any concrete evidence that CSLA doesn’t work or causes serious problems. I am seeing lots of people choosing to write their own frameworks instead. Which code base is going to have fewer bugs? The one that is being used by lots of applications and has a book written about it? Or something that someone develops from scratch? Not to mention the fact that someone has to take the time to write and test their framework code.

Again, I used CSLA and liked it, but I don’t know if I would consider myself a CSLA expert. I will say this:
– you are getting lots of functionality for free
– you’re getting something that is supported (Rocky’s book, Rocky’s forums)
– you are getting something that is well-tested (people all over the world are using it)

I think we need to evaluate the way that we evaluate something. Don’t underestimate the value of getting something for free, even if it’s not 100% architected the way that we prefer.



I agree 110%. Thanks for the links.

I came to the same conclusion as you but from a different angle. I got nailed on scalability by putting simple logic in my procs.

I did a writeup on this a while back..


I’m sorry if this is hijacking your post, but I do think it is still relevant to the issue at hand.


I am familiar with that post. Back around April of 2003 is when I first really started to use O/R Mappers. I was a big follower of the ASP.NET architecture forums as well as blog posts like the one from Frans you referenced and other blogs like Paul Wilson’s.

I don’t even think the post you provided is dead on. I work on a very large OLTP application with over 500 tables and over 130GB in size.

Before we changed our architecture to NHibernate (back in June 2005) we had close to 2,000 stored procedures, with a large portion of them having optional parameters.  As our system grew (we went through a huge growth spurt around then) our stored procedures with optional parameters started slowing to a crawl.

I’m not sure how many people here are really database people, but the reason is really simple.  Once you start putting COALESCE or other functions in your WHERE clause, the SQL engine can no longer perform seeks on the data.  Everything becomes scans.  Our tables have millions of records.  Scanning millions of rows was unbearable.

What did we do about it?  For a short-term fix we rewrote these procedures to use dynamic SQL.  We would build an nvarchar string in the stored procedure, dynamically constructing the WHERE clause (still using parameters, mind you), and then run the result through sp_executesql.  Effectively we put the code that a good data access tool would generate into the stored procedure itself.  T-SQL was not the best tool for this job.
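The idea behind that fix can be sketched in C# the way a data access tool would generate it (a hypothetical illustration, not their actual code): instead of one catch-all query using COALESCE(@p, Column) = Column, which defeats index seeks, build the WHERE clause from only the filters actually supplied, keeping named parameters so the statement is still compiled and safe from injection.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: generate a WHERE clause containing only the
// columns the caller actually filtered on, each bound to a named
// parameter (e.g. "LastName = @LastName"). Unused optional filters
// never appear, so the optimizer can seek instead of scan.
public static class WhereBuilder
{
    public static string Build(IList<string> filteredColumns)
    {
        if (filteredColumns.Count == 0)
            return string.Empty; // no filters: no WHERE clause at all

        return "WHERE " + string.Join(" AND ",
            filteredColumns.Select(c => c + " = @" + c));
    }
}
```

The generated string would be appended to the base SELECT and executed through sp_executesql with the matching parameter values, which is what turned the scans described above back into seeks.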

The result though?  Our application worked again.  Our scans had now become seeks.  Where we used to have to scan millions of records to find what we were looking for, we could now seek to just the few we actually wanted.  Queries which used to take minutes would now return in less than a second.  The statement Frans made about it returning twice as fast really depends on the database he was querying.

The above exercise really helped me convince my peers that an O/R mapper with dynamic SQL was the right direction to move our application.  We’ve been replacing those T-SQL stored procedures ever since.

Regarding Frans’s posts though, the referenced posting isn’t really his most famous on the subject. The one I always remember coming up was:


If you read the title you can probably guess that it caused a big stir at the time. Note that Frans used to be a big stored procedure guy. He also made reference to that fact in the posting shown by Evan. His original tool was built entirely around stored procedures after all.

Another interesting and possibly controversial post of his back in the day would be:


And from that post I do want to quote one thing he had to say, which is what I was trying to express in an earlier post:

"The reason for this is simple: you should use the technology that fits the job best. If a couple of actions are simple in SQL and hard in C# or other 3GL (which is highly likely, since SQL is set-based, C# is imperative), why not write that code in SQL?"


They may have switched recently I guess..



Here’s an interesting post I just stumbled on (looking up the MySpace stuff). It’s from Frans Bouma’s blog (and is rather old)..


(Frans is the head guy on LLBLGen)

I tend to disagree with him on a lot of stuff, but that’s actually a pretty decent post..

@Michael, @John (and maybe someone I’m missing)

To clarify: the reason I mentioned NHibernate originally was not that it is in competition with CSLA; as I stated earlier, if anything it complements it. The reason I mentioned it was in regards to standardizing development and moving developers across enterprise teams. Taking into consideration the security needs of our data access, NHibernate will not work on our core enterprise systems. As someone mentioned, if you are forced into doing stored procs with NHibernate it isn’t fun.

Now that isn’t to say that for intranet applications like requesting time off, or hundreds of other internal applications we have that are not directly involved with financial information, we couldn’t use CSLA with NHibernate.  I think in that case it makes sense.  Of course iBatis could be looked at as a standard with CSLA as well.  Or maybe LINQ becomes the standard and we just roll stored procs that play nicely with CSLA today, who knows.

At the end of the day we need to say “Here is the standard and how things are done in the context of software development within these walls.”


I’m surprised more folks don’t look at iBatis.Net, especially those that have needs similar to ours. In our case for the enterprise it makes a lot of sense and is a fantastic tool, but I haven’t been able to find much on using it with CSLA, or even whether it is needed. I’m still digging.


N-level undo doesn’t use serialization, because that wouldn’t allow for in-place undo. It uses reflection to get a snapshot of all field values (which is what serialization does too), such that it can restore those values in-place. Serialization always results in a new object instance, making it impossible to do a “cancel” and end up with the same object instance being used.
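The in-place snapshot idea described above can be sketched like this (a minimal hypothetical illustration, not CSLA’s actual implementation): reflection copies every field value into a dictionary, and Undo writes those values back into the same object instance, so existing references to the object remain valid.

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Hypothetical sketch of in-place undo via a reflection snapshot.
// A real implementation would also cascade into child objects; this
// shallow version only restores this object's own field values.
public class Snapshot
{
    private readonly object _target;
    private readonly Dictionary<FieldInfo, object> _saved =
        new Dictionary<FieldInfo, object>();

    public Snapshot(object target)
    {
        _target = target;
        foreach (var f in target.GetType().GetFields(
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic))
        {
            _saved[f] = f.GetValue(target); // capture current field values
        }
    }

    public void Undo()
    {
        foreach (var pair in _saved)
            pair.Key.SetValue(_target, pair.Value); // restore in place
    }
}

// Small example type with public fields for the sketch.
public class Person
{
    public string Name;
    public int Age;
}
```

Note how Undo never creates a new object, which is the key difference from deserializing a saved copy: a UI bound to the original instance keeps working after the cancel.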

N-level undo is designed to support containment relationships, and there must ultimately be one “root” object at the top of the object graph. Below that root object, you should be able to have a wide range of graphs, though circular references are not directly supported (though there are solutions – but this is an unusual edge case, so it doesn’t keep me up at night).

You are probably correct: a many-to-one relationship would have the “one” be contained by many other objects, and this would not be supported. However, I’d suggest that it violates the principle of containment to think that one object could be contained by many other objects, so this idea is broken to start with.

I can’t be contained within 5 airplanes at once. That’s physically impossible. But I could make 5 overlapping airplane reservations – each “using” me up to the point that I ultimately actually climb aboard just one of them.

As was mentioned earlier in the thread, it is very important to choose an architectural style that you are comfortable with, and which meets your needs. Certainly, choosing between DTO/entity objects and behavioral business objects is a key item, as the result is two very different architectures with very different characteristics.

Going down the DTO/entity/POCO route, there are data objects and behavioral “objects” (which really manifest more as services). This approach has some serious strengths, because the data objects are simplistic and are comparatively easy to persist and are very easy to manipulate. It has some weaknesses too, especially when it comes to creating highly rich and interactive Windows Forms or WPF apps – at least if you share my primary goal of minimizing (eliminating if possible) code in the UI.

Going down the behavioral business object route, the objects exist because they fulfill some responsibility in a use case, and they encapsulate the behaviors necessary to implement that responsibility (or collaborate with objects that provide those behaviors) and also encapsulate the data necessary to implement those behaviors. This approach has some serious strengths, because everything (behavior and data) required to fulfill a given responsibility is in one place, and because these objects make it very easy (almost trivial) to create a highly rich and interactive Windows Forms or WPF app, with almost no code in the UI. It has some weaknesses too, because these objects can be more complex, but more importantly because they really don’t map directly to database constructs. When you use this approach you hit Taylor’s impedance mismatch problem head-on (in a way most ORM tools bypass).

My preference has always been for the behavioral business object route. From time to time I’ve been told that this is “old style thinking”, but after being in this industry for 20+ years I can say without any reservation that it is merely “out of vogue”. Just like the thin client web fad, all I’ve got to do is wait a while and people will come back around to wanting rich clients, and at some point they’ll want rich objects too (they tend to go hand in hand imo). I guess I’m lazy – rather than chasing fads, I just stick with what works for me and let the industry do its yo-yo thing 🙂

I had to laugh actually, because a few months ago I heard a (young – perhaps 10 years experience) researcher speak about the need for “semantic data” to be passed along with the messages in an SOA setting. How simple messages don’t meet the end-goal needs, because they don’t contain enough information for the recipient to really know what the data means, or how it should behave, or what rules apply to any of the fields. I didn’t have the heart to tell him that he was ultimately driving right back at the mobile object concept that we already have…


My decision wasn’t CSLA vs. NHibernate, but reading back, perhaps that’s what it sounds like. In fact, it was CSLA vs. POCOs, as John mentions above. The team and I determined that we’d rather write exactly what we needed to get the job done than go with a heavier framework for our business classes (such as CSLA).

This really is an interesting discussion, but I wonder what else we’re missing. Are there other frameworks out there like CSLA that would provide Keith what he’s looking for?

To be fair, it has been about 4 years since I last looked at CSLA. I was actually agreeing before that it wasn’t a completely fair comparison, since CSLA is so much more than a persistence tool (and Rockford himself says it is not a persistence tool). My previous decision was that a lot of the other features of CSLA were things I did not want in my business objects. I like to keep my business objects small and clean: POCOs, if you will.

I think the reason why NHibernate has been discussed so much in these comments is because the original blog post referenced it as a direct competitor to CSLA for this decision.

I think the unclear part of this discussion has been what Keith is trying to accomplish. I mentioned before that it seems unfair to say all applications should work the same way. However, for subsets of applications it does seem to make sense. Are you looking for one complete application framework? Or are you more concerned with tackling particular areas such as persistence or logging?

Also, correct me if I’m wrong, but I have heard that the n-level undo feature of CSLA falls apart with more complicated object models. For example, if you have an operation which changes a related many-to-one entity, you cannot roll back that operation since the reference was not modified on this object. I was told that n-level undo just uses serialization and deserialization to undo an operation; at least, that is how it was described to me. When considering CSLA, n-level undo was actually one of the most interesting features to me.


Not that it is really important, but I believe Myspace actually uses LLBLGen Pro. I had heard about LLBLGen from others before, and if you look at any Myspace job opening they all say experience with LLBLGen preferred.

There’s a lot of good discussion here, which is nice to see.

I did want to clear up some apparent misconceptions and/or assumptions regarding CSLA .NET.

Perhaps most importantly, CSLA .NET is very specifically NOT an ORM. A number of comments seem to imply that it is one, or suggest applying ORM decision criteria when evaluating CSLA. But it is not an ORM.

CSLA .NET is designed specifically to help you build a rich, abstract, UI-independent business layer that supports some important and common concepts. Most notably: data binding (in Windows Forms, WPF and Web Forms), validation, authorization and undo capabilities.

It does provide an abstract persistence model, which might imply ORM concepts, but that is inaccurate. The abstract persistence model provides the objects and any code consuming the objects with a high level of abstraction, but ultimately leaves any data access/mapping as an external concept.

CSLA .NET is especially useful if you subscribe to the concepts of responsibility-driven design and more specifically single-responsibility design. It is less useful if you try to apply data-centric concepts like Active Record, though people do use it this way successfully.

In short, CSLA .NET helps you build the “O”, but leaves the “RM” out of scope.

As a separate feature (and this is where people often accidentally assume ORM concepts), CSLA enables the concept of mobile objects through its data portal. But the data portal and mobile objects are not an ORM.

Mobile objects is a concept in which objects can move between machines (or processes/AppDomains) in an n-tier deployment configuration. It is a focused subset of the broader concepts of an object portal, or mobile agents.

So the commenter who pointed out that CSLA’s data access support was weak is partially correct, but perhaps overly kind. CSLA has no data access of its own. The data portal provides a level of structure, providing clearly defined locations where CRUD operations are invoked/implemented, but the ORM/DAL/whatever that is invoked is entirely outside the scope of CSLA .NET.

This has been a very conscious choice on my part. In my books I illustrate how to use ADO.NET directly, which provides the highest possible performance, but a low level of abstraction. Now that Microsoft has LINQ and soon the Entity Framework, I’ll be providing rich support for interacting with these more abstract (though slower) data access technologies. A side-effect of these new features is that it will likely be easier to use nHibernate as an ORM behind CSLA as well.

But I still won’t force the use of any specific ORM/DAL.

Supporting the flexibility to choose the right UI technology/framework and the right ORM/DAL technology/framework for your particular needs, while providing powerful features for creating rich, mobile business objects is one of my primary goals for CSLA .NET.

“we are a financial institution that processes loans (which have tons of authorization rules, business rules, and validation rules) this is one of the things we have to keep in mind in this review”

This is not an issue with NHibernate. It’s neither supported nor unsupported. The limiting factor will be in how the rest of your system is designed. Those things can and have been done in systems with NHibernate (which I suspect you already knew). 🙂

And now, you’ve finally given us the meat of your question (for Data Access anyway)..

“The main thing that is holding up NHibernate for several of our team members on this review is the fact that NHibernate pushes everything in plain SQL queries. We actually like stored procedures because it provides the ability for us to control access to sensitive data even down to developers in production. Stored procs allow the DBAs to shuffle, optimize and move things around in the database without having to adjust the business object. I guess now that I typed that out I’ll say this to sum it up, data persistence is the least of our worries. “

If controlling access through stored procedures is truly a requirement for your application (and there is not a viable workaround for whatever reason), then I would recommend against using NHibernate. NHibernate can use stored procedures, but that’s really outside the scope of what you would *want* to do with it.

There is a whole world outside the realm of stored procs (ie..Security), but I won’t speculate on your scenario (unless you want to make that your next post..lol)

NHibernate + Heavy Use of Sprocs == Pain

iBatis.NET, on the other hand, is a different story (which is what they use over at MySpace)

I knew when I posted this I was going to get comments leaning towards NHibernate; it is the new kid on the block. And that’s OK, it was expected. Let me say this: we are a financial institution that processes loans (which have tons of authorization rules, business rules, and validation rules), and this is one of the things we have to keep in mind in this review.

With that said it does seem one can have their cake and eat it too (in regards to NHibernate). This post on the CSLA forum outlines how CSLA can use NHibernate for data persistence with minimal fuss.


I really don’t see these two as competing systems but as complementary, in the way outlined in that post. It’s no different from how in the future we may want to use LINQ.

The main thing that is holding up NHibernate for several of our team members on this review is the fact that NHibernate pushes everything in plain SQL queries. We actually like stored procedures because they provide the ability for us to control access to sensitive data, even down to developers in production. Stored procs allow the DBAs to shuffle, optimize and move things around in the database without having to adjust the business object. I guess now that I’ve typed that out I’ll say this to sum it up: data persistence is the least of our worries.

Personally, I would rather map the object and send it on its way, but given the sensitivity of our data we just can’t do that.

Again I’ll say: they complement each other in my mind more than they compete. If I’m missing something, let me know.


If you want to expand on what you are seeing as important for your enterprise model, I might be able to give a bit more concrete information on NHibernate. Nothing is a silver bullet, NHibernate just happens to have hit my personal sweet spot.


I’ll 2nd the thought about the Business Objects book. It’s on my list of books to read. I know that in theory, there are a lot of synergies between the two. They just differ in implementation. I know at least a few of his “concepts” overlap with those of Domain Driven Design.

While I don’t see myself adopting CSLA down the road, I will say that I would choose it in a heartbeat over most other frameworks if NHibernate and family were off the list of choices. 🙂


As I just told Tahir, NHibernate and ActiveRecord are already used by one of our teams pretty heavily. I’ve played with NHibernate for probably, um, 4 hours and CSLA for several weeks off and on for a total of 16 hours so I have a better understanding of CSLA’s features more so than NHibernate. I am planning on leveling the playing field in that regard.

I think I’m mostly re-hashing what the above posters have had to say, but I’m +1 on NHibernate as well. I think the problem with this post is that CSLA is so much more than just a tool to solve the data access problem; it really is intended to be an application framework. It’s a relatively outdated tool at that.

I think the Business Objects book is great, it’s one of the best books I’ve read. The really valuable parts are the explanations as to why the framework works the way it does. That book is then very helpful in identifying how your applications should work. It’s a book all developers should read whether you are choosing CSLA or not.

That being said the data access is very primitive compared to NHibernate (which I’ve personally been using for the past 3 years). We chose not to use CSLA in the past because 1) I’m not a fan of all of the code generation. I would prefer one source of the code so that if there is a problem we don’t have to deal with massive amounts of code generation and verifying that nobody touched the source. 2) NHibernate is much more feature complete at solving the data access problem. 3) We didn’t care for many of the CSLA features, so why do we need them?

As far as no NHibernate book, you really need to read Hibernate In Action. It’s written regarding the Java Hibernate implementation, but it’s dead on for NHibernate as well. It’s an awesome resource that NHibernate developers could learn a lot from. The authors were also some of the founders of Hibernate, and provide a lot of insight in to why it works the way it does.

I would also challenge the idea that one framework must exist for all applications. That can be a bit unfair. Some applications may work best as web applications, some as windows forms. Some require more complexity than others. I agree in having a sense of consistency, but must the entire architecture be the same? I also think it is valuable to learn from one project to another. If you have one set architecture, is there room to learn from the last project?

Also, I’m a huge NHibernate fan who happens to live in Michigan. I’m more than willing to be involved in the community. I wouldn’t be concerned about the ability to get assistance on NHibernate versus CSLA.


We do have some time to research this; the important thing is to do what we feel is right. It is about what is right for us, not about who is right. We have a team that uses NHibernate and ActiveRecord, they are happy with it, and they are involved in this discussion about CSLA. As a matter of fact, everything is up for consideration at this point. The main thing is finding something that fits into the enterprise model for the long haul. To think there is one tool that fits all needs is somewhat foolish, and I keep that in the back of my mind.

Thanks for your comments.


Both CSLA and NHibernate work best when developing with the behavior-centric mindset.

But I think it’s important to note that there are various styles of Architecture within the behavior-centric mindset.

I know many NHibernate users that practice Responsibility-Driven Design (I do). That’s in conjunction with the principles of OOP (Single Responsibility Principle, Open/Closed Principle, Liskov Substitution Principle, Interface Segregation Principle, Dependency Inversion Principle, and all the cousins). 🙂


If you are in Nashville anytime soon, let me know and I’d be glad to give you the whirlwind NHibernate tour (rather than writing a book in your comments). 😉

Kudos for attempting to standardize.

If you want another choice to look at, check out the Active Record project (same name as the data access pattern) over at the Castle Project (google it). It provides an Active Record facade (the pattern) on top of NHibernate (like you see in CSLA).
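For a flavor of what that looks like, here’s a rough sketch from memory of the Castle docs — treat the exact attribute and method names as approximate rather than gospel:

```csharp
// Castle ActiveRecord sketch: map a class with attributes and let the
// framework drive NHibernate underneath. Names here are illustrative.
using Castle.ActiveRecord;

[ActiveRecord("Contacts")]
public class Contact : ActiveRecordBase<Contact>
{
    [PrimaryKey]
    public int Id { get; set; }

    [Property]
    public string Name { get; set; }
}

// Usage: persistence methods live on the entity itself (the Active Record pattern).
// var c = new Contact { Name = "Keith" };
// c.Save();                        // INSERT via NHibernate under the covers
// var fetched = Contact.Find(c.Id);
```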

They’ve got some other great tools as well (Castle Windsor, Monorail, to name two). We are actively using both NHibernate and Castle Windsor in the project I’m in now.



Someone mentioned selecting a methodology for developing/designing your applications. There are basically two flavors:
1. Data-centric
2. Behavior-centric

The focus of CSLA.NET is not the features it has; the framework is a by-product of Rockford’s business-object thinking.

CSLA.NET can help you do behavior-centric OO design. Most of the other tools promote a data-centric way of designing applications.

In order for your team to be productive with CSLA.NET or any other framework, it is important to know their skill level in OOD. If they are doing Responsibility-Driven Design (RDD), i.e. the behavior-centric approach, then CSLA.NET will be helpful; otherwise other tools/frameworks are probably better. The learning curve is not so much the tool or framework, it is the shift from data-centric to behavior-centric thinking.
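A toy illustration of the difference (my own example, nothing to do with either framework’s actual API):

```csharp
using System;

// Data-centric: the object is a bag of fields; the rules live somewhere else.
public class InvoiceData
{
    public decimal Amount;
    public bool IsPaid;
}

// Behavior-centric: the object owns its rules and exposes intent.
public class Invoice
{
    private readonly decimal _amount;
    private bool _isPaid;

    public Invoice(decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentException("Amount must be positive.");
        _amount = amount;
    }

    public void Pay()
    {
        if (_isPaid)
            throw new InvalidOperationException("Invoice is already paid.");
        _isPaid = true;
    }
}
```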

I am not using CSLA.NET, but it has changed the way I design our products. CSLA.NET and Rockford promote RDD, which produces a lot more maintainable code.

I am not sure how much time you have to research various frameworks, but I would suggest that you divide your team in two: one half can develop a very small application using CSLA.NET and the other using NHibernate. You would learn about the learning curve that way, and also what benefits each provides and what shortcomings each has. Personally I don’t think either is a huge learning curve; however, as your team is new to .NET, it might be different.


I want to clarify some of what I’ve said. As Evan said, I understand that CSLA is more than persistence so comparing NHibernate by itself to CSLA is apples to oranges.

On the project I’ve described, the choice was use CSLA or use NHibernate with custom classes (thus putting in just exactly what we needed to get the job done).

As Keith said in his last comment, I’m not here to defend either. In fact, during our short period of evaluation, my stance was to solve the problem at hand in the least complex way possible. By using CSLA, you’re accepting a certain amount of overhead even if you choose to only use 1% of its features. With our approach, we developed exactly what we needed with quite a bit less “baggage”.

This really is an interesting discussion — interesting enough to get me to break out my CSLA book for review. 😉


Nice to meet you at DevLink as well Evan! I’ll see if I can take a few of your points. I’m new to CSLA so if someone else wants to chime in feel free.

1/2. Lazy loading of objects. Yes, CSLA can do this relatively easily. See this URL for more details


Basically you can control which objects to load or not load. For example, if I have a Contact object I can choose to autoload the Addresses by default when I get that object, or, if I access the Addresses later on, they’ll get loaded then. If you have a case where you want to load only a few specific fields of data into a read-only object, you could add another factory method and call something to load that. In either case I think you are covered.
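A rough sketch of the lazy-load pattern as the CSLA samples tend to show it (the type and factory names here are made up for illustration):

```csharp
// Inside a Csla.BusinessBase-derived Contact class: the child collection is
// only fetched the first time the property is accessed.
private AddressList _addresses;

public AddressList Addresses
{
    get
    {
        if (_addresses == null)
            _addresses = AddressList.GetByContactId(this.Id); // hits the DataPortal on demand
        return _addresses;
    }
}
```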

3. Using the Contact scenario again: if it has child objects (addresses) that you added or updated, the developer just calls Save() on the Contact object, and it looks at all the children to see which ones are marked dirty or for deletion and then does its thing. The framework manages that for you; you really don’t have to think or worry about it.
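In code it looks something like this (hypothetical object names, just to show the shape of it):

```csharp
// Edit children, then save the root; CSLA walks the object graph and
// persists only what is dirty or marked for deletion.
var contact = Contact.GetContact(42);       // factory method fetches the root
contact.Addresses[0].City = "Nashville";    // child is now IsDirty
contact.Addresses.RemoveAt(1);              // marked for deletion, applied on Save
contact = contact.Save();                   // one call handles inserts/updates/deletes
```

Note that Save() returns the updated object, so you reassign the reference rather than keep using the old one.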

4. Tracking changes, yes. It has always had built-in n-level undo for objects.

5. I’ll let someone else answer that one.

6. As far as I know it doesn’t provide caching, that would be according to your business rules and you would handle that as needed. Remember the object you are working with is just a plain old C# object.

7. Same answer as #6.

8. More than likely you will generate a CSLA object and then add tests, to save on typing. There’s no reason you couldn’t flip it; I think it would be counterproductive though.

9. In terms of architecture this is the easy part. With just a change to the config file you can change the architecture. This could be Remoting, WCF, Web Services, or Enterprise Services.
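From memory of the CSLA 2.x docs, the switch is a couple of appSettings keys — the key names and URL below are illustrative and may differ by framework version:

```xml
<!-- Two-tier: omit the proxy setting and the DataPortal runs in-process.  -->
<!-- Three-tier: point the client at a remote application server instead.  -->
<appSettings>
  <add key="CslaDataPortalProxy"
       value="Csla.DataPortalClient.RemotingProxy, Csla" />
  <add key="CslaDataPortalUrl"
       value="http://appserver/DataPortalHost/RemotingPortal.rem" />
</appSettings>
```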

Let me say that I’m not here to defend CSLA, this is just what I’ve learned after working with it for a few weeks. If I am incorrect I hope someone will set me straight.


Yes, the book is documentation, but NHibernate has a huge community behind it that generates plenty of documentation in the form of blog posts, articles and samples.

NHibernate has a learning curve, but in my experience, it’s a much shorter curve than something like CSLA. Of course, as I said in my initial comment, my experience with CSLA is limited. During the project I mentioned, we did NOT have a lot of time to do the kind of evaluation you’re doing (BTW, you can read about that project in my Lessons Learned series). We were under the gun from the start. One of the reasons I selected NHibernate is that many of the devs in the company I was contracting with had NHibernate experience and I knew I could draw on them when needed.

I believe an experienced developer coming to .NET would probably find CSLA more complex than NHibernate/custom objects. In the long run, is that ok? Maybe. Maybe not.

As for validation, we used EViL attributes in our entity classes. Our Manager classes would ensure the entities were valid before persisting the data.
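The pattern looked roughly like this — I’m not quoting the actual EViL attribute names from memory, so treat these as stand-ins:

```csharp
// Attribute-decorated entity plus a Manager that validates before persisting.
// Attribute and helper names below are hypothetical placeholders.
public class Customer
{
    [NotNullValidator]
    public string Name { get; set; }

    [RegexValidator(@"^\S+@\S+$")]
    public string Email { get; set; }
}

public class CustomerManager
{
    public void Save(Customer customer)
    {
        var errors = Validator.Validate(customer);  // run the attribute-driven rules
        if (errors.Count > 0)
            throw new ValidationException(errors);

        // session.SaveOrUpdate(customer);  // NHibernate persistence goes here
    }
}
```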

While an argument can be made for us (me) possibly oversimplifying things, we didn’t see the need for many of the features offered by CSLA such as the way it handles business rules, security, etc.

In the end, I cannot honestly say that NHibernate was better than CSLA in that project. I do believe the devs were able to hit the ground running and we were extremely productive. Could the same have been achieved with CSLA? Probably, but as I said, we were under the gun and a decision had to be made.

I have a bone to pick with most implementations of the Active Record pattern (of which CSLA provides an implementation for data access).

I haven’t worked with CSLA, but I can give you a few critical questions to ask of it:
–If I were to create an Order entity with relationships to LineItems (which relate to Products), is there a way to specify what gets loaded when I do the equivalent of Order.GetById(x)? In some scenarios, you just want the “header” info, such as order date. In other scenarios, you want the header and LineItems (so you can display them in a grid or something). Other times, you will want the order header, LineItems, and Product details. Does the tooling allow you to specify which to load?
–In the same scenario above, will the tooling support lazy loading of data on the fly? (i.e., if my code calls lineItem.Product, will it fetch the data from the database on the fly for me?)
–Apply the same object graph situation to saves. Will it save the order info? Also LineItems? Also the product? How do I specify what changed?
–Will it track the changes for me (as in the above question)
–Can I send updates in batches?
–What’s the data caching scenario for the tool?
–Will it cache queries and data for me?
–What’s the testing story for the framework look like? Can I write my tests first or am I forced to write them afterward?
–If I have to scale my Application Tier beyond a single server, what does that look like?

Those are a few questions which might be of interest when evaluating CSLA (which, again, I have not done).

I would urge you to choose an architectural style before choosing a framework; otherwise you will be letting the tool choose it for you. Fowler’s PoEAA is considered the best reference on this subject (with several choices, each with pros and cons), though it has no coverage of component models, which some will also choose as their preferred style (for good or bad).

If you want to build applications using the Domain Model style, I can add a +1 to NHibernate for data access (though I know CSLA is more than just data access).

Feel free to shoot me an email if I can be of any help answering questions on the above.

Oh, and it was great meeting you at DevLink.. 😉

Thanks Michael for checking in from the trenches so to speak.

You mentioned the book first. I think the book is a plus, since that is the documentation. For example, there isn’t an NHibernate book right now (although one is coming), and I don’t think it’s true to say that NHibernate doesn’t have a learning curve (not that you said that, but it was implied). Think about someone coming to .Net who is a solid engineer but doesn’t know .Net at all. Which one is easier to learn then? I don’t know, but it is something to consider.

Ok, so you went with NHibernate. Question: how do you handle validation of your objects? Business rules? Security? Property change notification? I understand NHibernate can save and fetch the data you want, but does it provide the other stuff, or in your problem domain was it just getting and saving data?

Keith, perhaps if you need to hand the devs a book, your choice is too complicated. 🙂

For what it’s worth, my experience with CSLA is limited and it may be the perfect fit for the teams you’re working with.

The complexity and perceived learning curve is exactly what led me to choose something other than CSLA on a project earlier this year. One of the devs was very familiar with it, but we had to consider the learning curve and overall developer experience. We ended up going with NHibernate which IMO let us really focus on the business and not on writing stored procs, etc.

I would like to note too that during my brief research at the beginning of the year, I had a tough time finding people that were happy with their choice of CSLA. Of course, I could have been looking in the wrong places. 😉

Good luck!

Write a comment