|
The variety of objects and dependencies doesn't really make a lot of sense to me. I may just be confused because of the naming...but if I am correct, all of these objects are part of a DAL, and the business layer is not involved at all? If that is the case, it seems like you could greatly simplify:
class UserService
{
    private readonly IDataMapper<User> _mapper;

    public UserService(IDataMapper<User> mapper)
    {
        _mapper = mapper;
    }

    public User Load(int id)
    {
        return _mapper.GetByID(id);
    }

    public void Save(User user)
    {
        _mapper.Update(user);
    }
}

class User
{
    public int ID { get; set; }
}

interface IDataMapper<T>
{
    T GetByID(int id);
    T Insert(T item);
    void Update(T item);
    void Delete(T item);
}

class UserDAL : IDataMapper<User>
{
    // real ADO.NET/EF implementation goes here
}
Now, when it comes to unit testing...testing with the above model is a cinch. You can easily mock away your DAL from the service, the entity is not coupled to anything, and life is bliss:
class MockUserDAL : IDataMapper<User>
{
    public User GetByID(int id) { return new User { ID = id }; }
    public User Insert(User item) { return item; }
    public void Update(User item) { }
    public void Delete(User item) { }
}

[TestMethod]
public void Test_UserService_Load()
{
    MockUserDAL userDal = new MockUserDAL();
    UserService userSvc = new UserService(userDal);

    User user = userSvc.Load(1);

    Assert.IsNotNull(user);
    Assert.AreEqual(1, user.ID);
}
modified on Tuesday, January 6, 2009 7:38 PM
|
|
|
|
|
Thanks for your thoughts everyone. I see some common threads in the responses, so... why is our DAL so complicated?
The reason our DAL was broken up into multiple projects (Mapper, DataAction and Adapter) is that for our first DAL we took a stab at using Entity Framework v1 in a disconnected multi-tier application.
When using the Entity Framework, the DataAction in our case would return a ModelUser object, as defined by the entity data model. We did not like the idea of passing this object (tied to the entity framework infrastructure) all the way out to our client tiers.
To remedy this, we put a facade on the outside of it (Adapter) and created a Mapper that would translate/convert the ModelUser object into a POCO (Plain Old CLR Object) EntityUser. This translation is not overly straightforward, so it seemed appropriate to split these up.
So, the EntityFramework DAL would work like this:
Business Layer calls the Adapter
Adapter calls the DataAction which returns a ModelUser
Adapter calls Mapper which takes the ModelUser and returns an EntityUser
Adapter returns EntityUser to the Business Layer
So for us, the Adapter, DataAction and Mapper were considered separate parts of the DAL, but still part of a single DAL implementation. Both the DataAction and Mapper methods might have different signatures for an EF DAL vs. an ADO.NET DAL implementation.
When it came to the ADO.NET DAL, it seemed to make sense to keep the same structure to avoid confusing things.
The Business Layer references an IAdapter interface, so the whole DAL can be swapped out. As well, the DataAction implements an interface so that it could be mocked and avoid the database hit during unit test runs.
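To make the shape of that concrete, here is a rough sketch of the layering; all type and member names here (IUserAdapter, IUserDataAction, ModelUser, EntityUser, UserMapper) are illustrative stand-ins, not our actual production types:

```csharp
// Hypothetical sketch of the Adapter / DataAction / Mapper layering.
public class ModelUser { public int ID { get; set; } }   // stands in for the EF-generated entity
public class EntityUser { public int ID { get; set; } }  // POCO handed out to the upper tiers

public interface IUserDataAction
{
    ModelUser GetByID(int id);   // hits the database / EF context
}

public class UserMapper
{
    public EntityUser ToEntity(ModelUser model)
    {
        // The real translation is more involved; this just shows the direction.
        return new EntityUser { ID = model.ID };
    }
}

public interface IUserAdapter
{
    EntityUser Load(int id);
}

public class UserAdapter : IUserAdapter
{
    private readonly IUserDataAction _dataAction;
    private readonly UserMapper _mapper;

    public UserAdapter(IUserDataAction dataAction, UserMapper mapper)
    {
        _dataAction = dataAction;
        _mapper = mapper;
    }

    public EntityUser Load(int id)
    {
        // Adapter orchestrates: DataAction fetches, Mapper translates to a POCO.
        ModelUser model = _dataAction.GetByID(id);
        return _mapper.ToEntity(model);
    }
}
```

Because the business layer only ever sees the adapter interface and the POCO, the whole EF-specific chain behind it can be swapped for an ADO.NET one.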
Again, this response is much longer than I'd first hoped... thanks for sticking with me to the end.
|
|
|
|
|
Aaah, the wonderful joys of Entity Framework. I was so excited when I first started playing with EF...and it turned out to be such a disaster in a multi-tier/multi-layer story. :'(
Before we really hit "the end", one thing I wanted to mention: your DAL should be tested too. It's code, just like all the rest, and just because it hits the database doesn't mean it doesn't need testing. It sounds like you have things implementing interfaces in all the appropriate places so that you can mock away your DAL when testing higher-level code. But you should also set up an automated testing database so that you can unit test your DAL itself. Based on what you explained above, there is a fair amount of behavior from your Adapter on down that should be tested.
|
|
|
|
|
Yeah, we too were happy to see the EF... haha, and happy again to see it go when we shelved it (for now). We hope to pick it up again and provide a DAL implementation for it when v2 comes out.
You are right about testing the DAL. The Adapter is testable by mocking the DataAction.
The DataAction is testable by providing a connection string to a unit test database. That is where some of the complexity of my original post came from. A DataAction method test hits the db, runs a sproc, and returns a DataSet, DataTable, or DataRow.
The Adapter is like the commander of the DAL, orchestrating the call to DataAction and Mapper.
My original question was hinting at the dependency tree depth. Our current setup has:
Adapter passes DataTable to Mapper and the mapper builds up a User entity and returns it.
Now, some of the hidden complexity within the Mapper is created by the need to support multiple languages (globalization), etc. So internally:
Mapper depends on a static Globalization class, which has a singleton dictionary (collection of supported culture info) that is used in part of the mapping. If the singleton is not populated, then it too needs to call a separate GlobalizationAdapter (to hit the db and get the collection).
It was here that I was wondering about the depth of the unit test. Because the Globalization class is static, I can only mock the GlobalizationAdapter that it uses to go to the db (and avoid a db hit during unit testing).
So when I'm unit testing UserMapping, I'm actually testing:
UserMapping >> Globalization(static) >> MockGlobalizationAdapter
Without mocking the GlobalizationAdapter, the test obviously fails. I guess my question really deals with the best way to handle unit testing classes that depend on static classes or singletons. Or should this dependency chain be re-architected in some way?
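If re-architecting is on the table, one common way out is to turn the static into a seam: hide it behind an interface that UserMapping receives explicitly. A minimal sketch (the interface, method names, and the stub static class are all hypothetical, not our real types):

```csharp
// Stand-in for the existing static class described above.
public static class Globalization
{
    public static string GetCultureName(int cultureId) { return "en-US"; }
}

// Hypothetical seam around the static class.
public interface IGlobalizationLookup
{
    string GetCultureName(int cultureId);
}

// Production implementation simply delegates to the existing static class.
public class StaticGlobalizationLookup : IGlobalizationLookup
{
    public string GetCultureName(int cultureId)
    {
        return Globalization.GetCultureName(cultureId); // the existing static call
    }
}

// A test double needs no static state and no database.
public class FakeGlobalizationLookup : IGlobalizationLookup
{
    public string GetCultureName(int cultureId)
    {
        return "en-US";
    }
}

public class UserMapping
{
    private readonly IGlobalizationLookup _globalization;

    public UserMapping(IGlobalizationLookup globalization)
    {
        _globalization = globalization;
    }

    // mapping methods use _globalization instead of the static class
}
```

With that seam in place, the unit test constructs UserMapping with FakeGlobalizationLookup and never touches the static class, its singleton dictionary, or the database.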
|
|
|
|
|
Unit testing and statics are always a rich topic of discussion. Ultimately, it boils down to whether you think testing the static Globalization type is acceptable when you're actually unit testing something else. If you are unit testing UserMapping and mocking GlobalizationAdapter, you're also interaction testing Globalization. At some point you need to interaction test, to make sure that when A uses B, the interaction of the two behaves as you would expect. Sometimes you can achieve this with a mock; sometimes you need to test the interaction of two real objects. Testing has a variety of forms: unit testing, interaction testing, acceptance testing, build verification testing, etc. Unit tests will only take you so far; you can plug the holes by performing other kinds of testing.
In the case of your static Globalization class, it sounds like it's a pretty simple type that acts as a lazy-loaded lookup? If it's basically just a facade around a dictionary and some loading logic, I would not bother mocking it away; just include it in your UserMapping unit tests. If Globalization is a richer class that provides a variety of globalization services, it might be better to mock it away. You would want to unit test Globalization in isolation as well, to cover code that wouldn't be exercised by interaction testing.
If you do indeed need to mock away static types, there is one product that will let you do it: TypeMock Isolator. TypeMock's Isolator lets you mock absolutely anything, statics included. It's a pretty unique mocking framework that really helps you get the job done when nothing else will. A couple of caveats: 1) it's not free, and 2) it requires that all testing processes be spawned from its isolator root process, which enables the advanced call interception and whatnot.
|
|
|
|
|
Hi everyone.
I want to download HTML/PHP files from a webpage.
Do you know of any method for doing this?
Thanks for your help.
|
|
|
|
|
There seem to be more than a few docs out there that teach you how to do TDD from scratch, but I haven't seen much theoretical work on "mocking out" and testing existing components from legacy apps. For example, if I have a DAL that's already plugged into my four-year-old .NET 1.1 app, is there any way I can apply post-hoc unit tests to the DAL without changing its design?
What I'm really looking for is a catalog of "mocking patterns" that gives me solutions to various problems, so that I can isolate my legacy components and test them without modifying the design. So my question is: has anyone managed to do this yet?
|
|
|
|
|
|
About NMock / NMock2:
the version at nmock.org (NMock) is not actively supported anymore.
If you are looking for the latest version based on NMock then have a look at https://sourceforge.net/projects/nmock2 (NMock2)
Happy mocking
Urs
-^-^-^-^-^-^-^-
no risk no funk
|
|
|
|
|
Hrmmm, I'd assume you are kinda boned in that case without some real trickery. Most legacy code of this kind I've seen tends to be fairly rigid - definitely no dependency injection or even factory patterns. It sounds hard.
Possibly you could do something like post-process the compiled DAL, and replace instantiations of certain classes with your mocked instances, and then run the tests. :S
|
|
|
|
|
Mark Churchill wrote: Hrmmm, I'd assume you are kinda boned in that case without some real trickery. Most legacy code of this kind I've seen tends to be fairly rigid - definitely no dependency injection or even factory patterns. It sounds hard.
Without a doubt, it is incredibly difficult--you'd have to find a way to isolate the target component and then "extrude" it so that you can test its behavior without pulling it completely out of a compiled assembly--in essence, it's like refactoring in reverse--albeit in a binary form.
It's definitely a black art, to say the least.
Mark Churchill wrote: Possibly you could do something like post-process the compiled DAL, and replace instantiations of certain classes with your mocked instances, and then run the tests. :S
Actually I'm not too worried about how to replace the classes or methods with mocks--what I'm really looking for is a common set of testing practices that I can programmatically apply to the portion of legacy code that I want to test. Based on the lack of responses to this thread, however, I can only surmise that there aren't many people out there who specialize in applying automatic regression tests to legacy code without changing the design--so for now it's more of a pipe dream, but I'll have a lot of fun making it a reality. Thanks for the input, Mark!
|
|
|
|
|
Have you looked into Pex? It's an interesting concept, to say the least.
|
|
|
|
|
Pete O'Hanlon wrote: Have you looked into Pex? It's an interesting concept, to say the least.
From what I can tell, Pex looks like a brute-force solution to unit testing--it solves some of the coverage problems in TDD by attempting to quantify every possible input and output that might come out of a single method. There are two problems with this approach: 1) observability, and 2) isolating the component to be tested.
The first problem of observability has a lot to do with the software engineering equivalent of the Uncertainty Principle--how do you observe the behavior of an app without having to modify it? For example, if I wanted to test the DAL of a legacy app and that application has a tightly-coupled architecture, how do you verify the behavior of the DAL without modifying its architecture to support unit testing? With Pex, you can only test the components of an architecture if their behavior can be "observed" by the unit tests generated by Pex; however, in a legacy app, you might not have the luxury of modifying the architecture to support those unit tests.
The isolation problem rears its ugly head when you have legacy code that is riddled with "copy & paste" code. In order to use automated unit testing (much less Pex), you would have to refactor out the duplication, mock out the components around the component under test, and then test it. Once you have the component isolated, it is then (and only then) that you can throw Pex at it and let it do its brute-force search for holes in your code.
The problem with using Pex with legacy apps is that the approach might be too invasive. In the current state of automated unit testing and TDD, this is akin to performing bypass surgery on a patient who just wants a medical checkup. What I want to do is diagnose the patient (per se) without killing them in the process. IMHO, we're practically in the Dark Ages when it comes to diagnosing legacy apps, and that has to change.
|
|
|
|
|
If you need to unit test legacy code, code that isn't particularly designed for mockability (or isn't designed for it at all), there is nothing that comes close to the TypeMock Isolator. TypeMock lets you mock absolutely anything, any time, for any reason, regardless of what it is. You can mock statics, replace function calls, whatever you need to. It allows you to fully isolate your unit of interest and actually perform "unit" testing for any code, regardless of whether it actually supports unit testing.
Many people will jump on this with a vehement revulsion and provide blanket statements that you should refactor all of your code so it's better architected and properly mockable. In the long run, yes, you should eventually improve your code base. But when you're on a budget and need that full safety net of tested code, TypeMock is a godsend. You can, basically, have your cake and eat it too. I hope it helps.
http://www.typemock.com/
|
|
|
|
|
You're right - I am going to jump on this post. In fact, I'm going to jump all over it and say - interesting, I'll go and have a look at it now.
|
|
|
|
|
LOL, let me know what you think. I'm still evaluating it myself, and it does have a couple of drawbacks (i.e. the isolator is a 'wrapper' program that must be running for method intercepts and the like to function during execution, and it costs money). When it comes to testing locked-down legacy code, though, it's truly unique.
|
|
|
|
|
class A
{
    B bsObject;

    public void FunctionOfClassA()
    {
        bsObject.FunctionOfClassB();
    }
}

class B
{
    public void FunctionOfClassB()
    {
    }
}
I have my program structure as shown above.
I want a way to apply atomicity to this. I mean, if Class B's data write worked fine but Class A's calculations failed, I want to roll back the data write. I want it all to succeed, or all to roll back.
One way I can think of is to create a DB connection and begin a transaction just before the call to FunctionOfClassB() and pass it as a parameter, so FunctionOfClassB will use the same connection to write data. It would then also be possible in FunctionOfClassA() to check whether the calculations went OK and either commit or roll back.
My requirement is not to change any parameter lists and not to provide a new method with a new parameter.
class A
{
    B bsObject;

    public void FunctionOfClassA()
    {
        bsObject.FunctionOfClassB();
        // some calculations
        // Here I want to decide whether to commit or roll back
    }
}

class B
{
    public void FunctionOfClassB()
    {
        // here I want to begin a transaction
        // some data writing
    }
}
Can anyone give a high-level summary of how this could be done?
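To make the intent concrete, one shape I could imagine--if something like System.Transactions.TransactionScope (.NET 2.0+, assuming the ADO.NET provider supports enlistment) applies here, which I'm not sure of--would be an ambient transaction, since it touches neither signature:

```csharp
using System.Transactions;

class A
{
    B bsObject = new B();

    public void FunctionOfClassA()
    {
        // The scope makes the transaction ambient: any connection opened
        // inside it enlists automatically, so B's signature is unchanged.
        using (TransactionScope scope = new TransactionScope())
        {
            bsObject.FunctionOfClassB();
            // some calculations
            // Only reached if nothing threw; disposing the scope without
            // calling Complete() rolls the writes back.
            scope.Complete();
        }
    }
}

class B
{
    public void FunctionOfClassB()
    {
        // Open the connection and write data here; it enlists in the
        // ambient transaction started by the caller.
    }
}
```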
|
|
|
|
|
paresh_joe wrote: I want a way to apply atomicity to this.
Apply atomicity to this? The code you posted has no data and no write operations, so there is nothing to roll forward or backward, even sideways.
led mike
|
|
|
|
|
paresh_joe wrote: My requirement is not to change any parameter lists and not to provide a new method with a new parameter.
If so, then how can you pass the pointer (as you suggest below), since your function accepts no parameters in your skeleton class?
paresh_joe wrote: One way I can think of is to create a DB connection and begin a transaction just before the call to FunctionOfClassB() and pass it as a parameter.
Please reword.
|
|
|
|
|
I meant that the only solution that comes to my mind is to pass a DB parameter to the function, which is not possible here. I want some other way in which this can be implemented.
|
|
|
|
|
Can the server function throw an exception if the transaction is not completed? If yes, then you can handle it in the client and do whatever you need to. Another approach, if you can, is to introduce two events in the server method, raised when the server method either completes or fails. Each event would have one parameter derived from the EventArgs class (if using .NET), which can carry information about what happened during the transaction--what failed and what passed.
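A rough sketch of that event-based idea (the class names and the EventArgs payload are made up for illustration):

```csharp
using System;

// Hypothetical EventArgs-derived type carrying the outcome of the writes.
public class TransactionEventArgs : EventArgs
{
    public string Details { get; private set; }

    public TransactionEventArgs(string details)
    {
        Details = details;
    }
}

public class Server
{
    public event EventHandler<TransactionEventArgs> Completed;
    public event EventHandler<TransactionEventArgs> Failed;

    public void WriteData()
    {
        try
        {
            // ... perform the data writes ...
            if (Completed != null)
                Completed(this, new TransactionEventArgs("all writes succeeded"));
        }
        catch (Exception ex)
        {
            if (Failed != null)
                Failed(this, new TransactionEventArgs(ex.Message));
            throw;
        }
    }
}
```

The client subscribes to both events before calling WriteData, then decides whether to commit or roll back based on which one fired.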
What do you think?
|
|
|
|
|
I really don't know... please help.
|
|
|
|
|
I don't understand what you are asking. You're back after a long time! Can you be more specific about your question?
CodingYoshi
Visual Basic is for basic people, C# is for sharp people. Farid Tarin '07
|
|
|
|
|
|
|