|
Thanks!
That was a very good description.
Yes, you understood me correctly when I wrote about cumulative data: I did mean aggregated data. I've had a fever of over 38 Celsius for almost a week, so the translator in my head is only partially working.
I believe that L2S will understand the "basic" scenarios and optimize them into a single round trip. Your order-amount example was a good one, and the generated query is efficient in most scenarios. If the cardinality of the main query from Customer changes, the SQL Server optimizer may transform the query into a different shape in order to maintain performance.
What I was wondering is that I'm not aware of any relational SQL engine capable of handling networks (data containing trees in both directions simultaneously). This is a very common situation in the systems I work with.
Also, what worries me is that LINQ itself isn't very dynamic. Typically my queries have a dynamic number of conditions (normally 1-100, but in hard cases over 1,000). If I want to support them, I have understood that I would have to create an enormous number of LINQ queries. This is one reason I haven't really tried L2S yet (although I do use LINQ to XML and LINQ to Objects).
Your description of L2S made me so curious that I'll have to give it a try in the near future. I'll try LINQPad as well.
Thanks again,
Mika
|
|
|
|
|
Glad I understood your questions.
Regarding the next few, I'll start with networks. I call a 'network' a 'graph'...actually, I call trees or networks 'object graphs'. This is actually a very good question, and it's also pretty much exactly WHY we have O/RM systems. To give you a clearer idea of what an O/RM is: Object/Relational Mapper. The concept behind this term is that the way we work with objects is not directly translatable to how we store and manage data. Objects are usually represented by a 'graph' (or network, if I understand you correctly) in memory: a set of objects related through pointers, with a variety of navigability (sometimes we can only go from parent to child...sometimes we can go from parent to child and from child to parent). Data is usually represented as sets of tuples (many tables of rows), with relationships defined external to the tuples themselves (foreign keys defined for tables, rather than tables containing direct pointers). These differences create what we call the impedance mismatch between objects and databases, and it's this impedance mismatch that O/RMs are specifically trying to solve.
Object/relational mappers handle the process of 'bridging the gap' for you so you don't have to worry about it. That gap is where an object graph, or network, needs to be built from relational data. L2S handles object graphs very well, and in most situations can build an entire network of objects from a single result set queried from the database. Sometimes a graph is too complex to be retrieved in a single query, or the mismatch between your conceptual model and the database schema is too great, and additional queries are required. Regardless of what each specific scenario needs, the benefit of an O/RM is that it does the gritty work of solving that problem for you. All you need to worry about, once the O/RM is in place, is querying your conceptual model (note that when you query with an O/RM, you aren't really querying the database...you're querying your model). If a parent object needs a pointer to its children, and all of its children need pointers back to the parent, the O/RM sets those pointers up for you...when you get a graph result back from your O/RM, it's fully constructed and all relationships are intact.
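To make the 'pointer fix-up' part concrete, here is a minimal hand-rolled sketch of what an O/RM does after a query: it takes flat row-shaped data and wires up both directions of navigability. The `Customer`/`Order` classes and the `GraphBuilder` helper are hypothetical names invented for this illustration, not part of L2S.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical entities; an O/RM would map these from tables for you.
class Customer
{
    public int ID;
    public string Name;
    public List<Order> Orders = new List<Order>(); // parent -> children
}

class Order
{
    public int ID;
    public int CustomerID; // the "foreign key" from the relational side
    public Customer Customer; // child -> parent back-pointer
}

static class GraphBuilder
{
    // Wires parent->child and child->parent pointers from flat row data,
    // roughly the fix-up step an O/RM performs after materializing a result set.
    public static List<Customer> Build(
        IEnumerable<Customer> customers, IEnumerable<Order> orders)
    {
        var byId = customers.ToDictionary(c => c.ID);
        foreach (var order in orders)
        {
            var parent = byId[order.CustomerID];
            parent.Orders.Add(order); // navigable parent -> child
            order.Customer = parent;  // navigable child -> parent
        }
        return byId.Values.ToList();
    }
}
```

After `Build` returns, the graph is fully connected in both directions, which is exactly the state an O/RM hands you.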
As for LINQ being dynamic: it actually is very dynamic, but it's not obvious at first glance how. The critical thing about LINQ is deferred execution. When you write a LINQ statement, that statement is actually just setting up an enumerator. The database, or whatever it is you're querying, doesn't actually get queried until you iterate over that enumerator. So when you need dynamic conditions, adding them with LINQ is actually quite easy:
var query = from o in MyObjects select o;
if (minID > Int32.MinValue)
{
    query = query.Where(o => o.ID >= minID);
}
if (maxID < Int32.MaxValue) // the original had ">", which could never be true
{
    query = query.Where(o => o.ID <= maxID);
}
if (name != null && IsEqual)
{
    query = query.Where(o => o.Name == name);
}
else if (name != null && ContainedWithin)
{
    query = query.Where(o => o.Name.Contains(name));
}
switch (orderField)
{
    case "Name": query = query.OrderBy(o => o.Name); break;
    case "ID": query = query.OrderBy(o => o.ID); break;
    default: query = query.OrderBy(o => o.OrderIndex); break;
}
foreach (var result in query)
{
    Console.WriteLine("ID: " + result.ID + ", Name: " + result.Name);
}
|
|
|
|
|
If you are only interested in a quick and dirty presentation of information, then stored procs / views might suffice.
If you are looking for a "business object" style approach, then seriously consider an O/R mapping tool for your data layer. You will avoid writing and maintaining a DAL, and you'll end up with improved flexibility as a side effect. Try ours out, it will take you all of 10 mins :P
(Of course this may be seen as a biased opinion )
|
|
|
|
|
Hello,
I am looking to develop software that could be integrated into the AOL, MSN, Skype, Gmail and Yahoo messengers.
I was wondering:
1. On what platform were those IMs developed?
2. On what platform should I develop my own software so that it would be easy to integrate into those IMs?
Note: my software should be executable once installed and integrated as part of those IMs.
Thanks for your help.
|
|
|
|
|
|
Hi..
In VS 2005, the style sheet editor has properties named filter and opacity, but in VS 2008 there are no such properties.
Can anyone tell me how to use those properties in VS 2008?
|
|
|
|
|
Sorry, but how is your question related to Design and Architecture? Try the Visual Studio message board.
|
|
|
|
|
Let's say I have a class called Person (encapsulating business layer functionality) and PersonManager (encapsulating the data access layer). Here are code snippets, simplified for clarity:
public class Person
{
    public void Save()
    {
        PersonManager manager = new PersonManager();
        manager.Save(this);
    }
}
public class PersonManager
{
    public void Save(Person p)
    {
        if (p.IsNew)
            Insert(p); // Private method
        else if (p.IsOld)
            Update(p); // Private method
    }
}
During insertion, problems can occur, so the Insert method should deal with them and, if it cannot, throw to the caller. Ideally, Insert should not reveal its inner workings to the caller and should not break encapsulation, but it should still let the caller know something exceptional happened. The PersonManager class will also try to deal with the exception and, if it cannot, throw it to its caller (Person) without breaking encapsulation. The Person class will follow the same rule.
My questions are:
1. Why do we attach inner exceptions when they break encapsulation? For example, if a SqlException occurs and I attach it as the inner exception, the caller now knows I am dealing with a SQL database and encapsulation is broken! If I do not attach it, the caller will not have sufficient information.
2. The GUI will have a try-catch around person.Save(), Person will have a try-catch around PersonManager.Save(Person p), and Insert() and Update() will also need try-catch blocks. Am I right? Is this nesting too deep, or is this how things should ideally be done?
3. Every class, and every method within it, should try to deal with the exception and attempt an alternative or retry. Is there a general rule of thumb for how many times it should retry before giving up? Does it depend on how critical the system is?
Am I the only one who is lost? I have read many sources trying to obtain these answers. Please share any useful links or book titles.
|
|
|
|
|
Hi,
here is my 2c on this interesting topic; it is pragmatic rather than academic:
- each level needs to catch exceptions and throw its own exceptions, using semantics the caller will understand;
- however, that would throw away all potentially useful details, as you indicated; hence the original exceptions are added as inner exceptions;
- in the end, the top level not only wants to know that something failed, it also wants to be able to indicate in what direction a solution might be found. Hence an inner exception such as "disk full" or "server down" can be very helpful, even when it breaks encapsulation.
When something goes wrong, I prefer that encapsulation gets broken rather than breaking my head over what may possibly be wrong.
|
|
|
|
|
You know what, Luc, you make a good point, and I don't want my head broken either!
Thanks for the comments; they are very helpful. I am, however, surprised that only you responded to such an interesting and open-ended question.
|
|
|
|
|
Before I go into a discussion on proper exception management, I need to bring up a fundamental architectural issue. There are a variety of architectural styles, and usually each team or architect will choose the style they like best. However, an extensive amount of research has gone into how "effective" an architecture is at enabling developers, testers, etc. to deliver a product. So I'm going to throw out one of the rules of effective design here: isolation.
Your current design is a very dependent design. Your Person class is tightly coupled in two ways. First, it's tightly coupled to a specific concrete implementation of PersonManager. Second, it's tightly coupled to a specific persistence mechanism. Both couplings are bad; there's no other way to put it, really. If we look at some of the most effective software development methodologies today, DDD and TDD rise to the top. Both advocate the isolation of classes from each other, and both advocate the use of dependency injection to improve decoupling (to help achieve 'loose coupling'). Your Person entity should be simple and should not be aware of the persistence object (PersonManager). This means Person can't save itself, so the save operation goes into a Person 'service'. You would end up with something like this:
class PersonService
{
    public Person Load(int id)
    {
        PersonManager mgr = new PersonManager();
        Person person = mgr.GetByID(id);
        return person;
    }

    public void Save(Person person)
    {
        PersonManager mgr = new PersonManager();
        if (person.ID <= 0)
        {
            mgr.Insert(person);
        }
        else
        {
            mgr.Update(person);
        }
    }
}

class Person
{
    public int ID { get; set; }
    public string Name { get; set; }
}

class PersonManager
{
    public Person GetByID(int id)
    {
        // data access code goes here...
        return null; // placeholder
    }

    public void Update(Person person)
    {
        // data access code goes here...
    }

    public Person Insert(Person person)
    {
        // data access code goes here...
        return person; // placeholder
    }

    public void Delete(Person person)
    {
        // data access code goes here...
    }
}
With the above, your system is appropriately decoupled. Your exception management, along with the bulk of your business logic, and particularly interaction with persistence logic, should reside primarily in your service layer. This greatly simplifies your data access code and your entity...neither one needs to handle exceptions...they just throw them. If, for some reason, the service layer cannot resolve or recover from an exception, then the exception should bubble up to your presentation layer, where you could handle it explicitly or, in most cases, just let the default handler deal with it. To keep your exception management simple in the presentation layer, you could follow a general rule of always wrapping exceptions that bubble up from the service layer in a ServiceException. Ultimately, however, you will have only two areas where exceptions are handled...your service layer and your presentation layer.
As for exception resolution...if, and I stress IF, you can handle the exception automatically, then the resolution strategy should also reside in your service layer. Generally speaking, exceptions indicate some broken state that is preventing a process from completing successfully...which usually requires user intervention. If you have some alternate path for accomplishing something, that alternate path should be attempted in the service, and nowhere else. But I wouldn't stick a save operation in a loop and try it 5 times before finally bubbling an exception up...that's just wasting resources, because if it failed the first time, it failed. If there is a chance it could succeed during a series of successive tries, then it sounds like you have concurrency issues that should be solved at a lower level, to prevent such exceptions from ever occurring in the first place.
|
|
|
|
|
Building on Jon's answer, we have similar layers and while not necessary, here is a glimpse of what we did.
Define classes for DataAccessLayerException and BusinessLayerException and UILayerException.
Then we used the MS Patterns and Practices Exception Handling Application Block to help us handle unhandled exceptions (and by handle, I mean log, determine whether or not to rethrow, etc.) at the boundaries.
If the DAL catches an exception, it is wrapped in a DataAccessLayerException and rethrown.
The Business Layer can then catch the DataAccessLayerException and deal with it. If an Exception is thrown in the BusinessLayer then it is wrapped in a BusinessLayerException and rethrown.
The handling of different exception types can be defined within the application block configuration. In our case, the application block still only provides handling for unexpected errors. We still have try..catch blocks to internally handle errors that we can code for and recover from. The application block handles everything else and logs it to the listener(s) of our choice (defined by configuration).
|
|
|
|
|
Lefty has some good suggestions, so to clarify with some code examples, here is an updated version of the snippet I posted before. I have also made a couple of other important changes that will further improve your decoupling and make your product more maintainable in the long run:
interface IRepository<T>
{
    T GetByID(int id);
    T Insert(T item);
    void Update(T item);
    void Delete(T item);
}

class DataAccessException : ApplicationException
{
    public DataAccessException(string message, Exception inner) : base(message, inner) { }
}

class BusinessException : ApplicationException
{
    public BusinessException(string message, Exception inner) : base(message, inner) { }
}

class PresentationException : ApplicationException
{
    public PresentationException(string message, Exception inner) : base(message, inner) { }
}

class PersonService
{
    private IRepository<Person> _repository;

    public PersonService(IRepository<Person> repository)
    {
        _repository = repository;
    }

    public Person Load(int id)
    {
        if (id <= 0) throw new ArgumentException("The Person ID must be greater than zero.");
        try
        {
            Person person = _repository.GetByID(id);
            return person;
        }
        catch (DataAccessException ex)
        {
            throw new BusinessException("An error occurred while accessing data.", ex);
        }
        catch (ArgumentException)
        {
            throw; // rethrow without resetting the stack trace
        }
        catch (Exception ex)
        {
            throw new BusinessException("An error occurred while processing your request.", ex);
        }
    }

    public void Save(Person person)
    {
        if (person == null) throw new ArgumentNullException("person");
        try
        {
            if (person.ID <= 0)
            {
                _repository.Insert(person);
            }
            else
            {
                _repository.Update(person);
            }
        }
        catch (DataAccessException ex)
        {
            throw new BusinessException("An error occurred while updating data.", ex);
        }
        catch (ArgumentException)
        {
            throw;
        }
        catch (Exception ex)
        {
            throw new BusinessException("An error occurred while processing your request.", ex);
        }
    }
}

class Person
{
    public int ID { get; set; }
    public string Name { get; set; }
}

class PersonRepository : IRepository<Person>
{
    public Person GetByID(int id)
    {
        if (id <= 0) throw new ArgumentException("The ID must be greater than zero.");
        try
        {
            // data access code goes here...
            return null; // placeholder
        }
        catch (Exception ex)
        {
            throw new DataAccessException(
                String.Format("An error occurred retrieving the requested Person: {0}", id), ex
            );
        }
    }

    public void Update(Person person)
    {
        if (person == null) throw new ArgumentNullException("person");
        try
        {
            // data access code goes here...
        }
        catch (Exception ex)
        {
            throw new DataAccessException(
                String.Format("An error occurred updating an existing Person: {0}", person.ID), ex
            );
        }
    }

    public Person Insert(Person person)
    {
        if (person == null) throw new ArgumentNullException("person");
        try
        {
            // data access code goes here...
            return person; // placeholder
        }
        catch (Exception ex)
        {
            throw new DataAccessException(
                "An error occurred inserting a new Person.", ex
            );
        }
    }

    public void Delete(Person person)
    {
        if (person == null) throw new ArgumentNullException("person");
        try
        {
            // data access code goes here...
        }
        catch (Exception ex)
        {
            throw new DataAccessException(
                String.Format("An error occurred deleting an existing Person: {0}", person.ID), ex
            );
        }
    }
}
|
|
|
|
|
I second Jon's advice; the Active Record[^] pattern never sat well with me.
As an added caveat, be careful that you do not end up with an Anemic Domain Model[^]. I don't usually pass on Fowler's Nibblets of WisdomTM, but he hits the nail on the head with this one.
In short, developers will often just define a bunch of classes consisting of a bunch of properties so that they can feel like they're using Object-Oriented techniques. Don't forget that a Person should also have behaviour. (Personally, I've known plenty of misbehaving persons! : )
"we must lose precision to make significant statements about complex systems."
-deKorvin on uncertainty
|
|
|
|
|
I don't think that anyone addressed your first question, so I'll give it a shot.
In OO, encapsulation (or information hiding) concerns the way we hide the implementation of a class behind a stable design. Thus, if I have the following class
public class Person
{
    public DateTime Birthdate { get { /* something secret */ } }
    public int Age { get { /* something secret */ } }
}
then it doesn't matter to the code that uses the Person class whether the value for Age is computed from the Birthdate every time, cached after the initial calculation in a private int, or obtained from some fancy Web service. We are hiding the internals of the class implementation which, according to OO, nobody else needs to care about.
Now, in terms of the System.Exception class: it has the InnerException property because, when it is set, the exception instance you catch was caused by another exception, which gets stored in that InnerException property. You'll note that the only type information we have about the InnerException is that it is of type System.Exception; no leakage at all. This does not break encapsulation because, in the context of the creation of the exception you've caught, another exception was the cause, and knowing that isn't bad. You don't know how the InnerException was set, how the value gets returned to you, or what's going on inside the containing exception instance.
Now, if catching code has something like the following somewhere
try
{
    // ... exception thrown here ...
}
catch (Exception e)
{
    if (e.InnerException is SqlException)
    {
        // ... do something SqlException-specific here ...
    }
}
then I would argue that you did not handle the SqlException deeply enough in your code, or that, if it must be handled here, it should be caught in a separate catch (SqlException se) block rather than dug out of InnerException.
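The "catch the specific type where it can occur" alternative can be sketched as follows. Since a real SqlException cannot easily be constructed in a standalone example, this sketch uses FormatException as a stand-in for the provider-specific type; the `DataAccessException` wrapper and `ParseRecord` method are invented for the illustration.

```csharp
using System;

// Hypothetical layer exception used to translate provider-specific failures.
class DataAccessException : Exception
{
    public DataAccessException(string message, Exception inner)
        : base(message, inner) { }
}

class Program
{
    static int ParseRecord(string raw)
    {
        try
        {
            return int.Parse(raw); // stand-in for provider-specific work
        }
        catch (FormatException ex) // dedicated catch for the specific type, deep in the code
        {
            // Translate it here, instead of type-testing InnerException upstream.
            throw new DataAccessException("The record could not be read.", ex);
        }
    }

    static void Main()
    {
        try { ParseRecord("not-a-number"); }
        catch (DataAccessException ex)
        {
            // Upstream code only ever sees the layer's own exception type.
            Console.WriteLine(ex.Message);
        }
    }
}
```

The caller never needs an `is SqlException` check; the specific type was already handled at the level where it is meaningful.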
"we must lose precision to make significant statements about complex systems."
-deKorvin on uncertainty
|
|
|
|
|
I'd like to know the process and methods of Scrum. Can anyone help me by providing some documents, resources, or some kind of introduction? Thanks.
|
|
|
|
|
Well, a Google search for "scrum software development" only returns 954,000 hits, so I guess it's hard to find any info.
Bob
Ashfield Consultants Ltd
|
|
|
|
|
That's really bad news. Can you provide any information from your own experience?
|
|
|
|
|
Yes, learn to use your initiative.
Bob
Ashfield Consultants Ltd
|
|
|
|
|
I have a question about architectures that support unit testing. Say I have these objects and dependencies:
DALUser >> MappingUser >> Common.LocalCulture
DALUser >> IDataActionUser
The DALUser object has a dependency on IDataActionUser , whose instance is supplied using dependency injection. It also has an internal dependency on a MappingUser object. The logical flow looks like this:
Business Layer object calls the DALUser object and requests a User object
DALUser calls IDataActionUser class to hit the database and return a DataTable
DALUser calls MappingUser , passing the DataTable and the MappingUser class transforms the DataTable into a User object and returns it
MappingUser depends on a static Common.LocalCulture class that does some culture ID conversions
DALUser gets the User object from MappingUser and returns it to the Business Layer, as requested
With these dependencies, I can easily mock the IDataActionUser and return a hardcoded DataTable from the mock for testing purposes.
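To make that concrete, here is a minimal hand-rolled stub of `IDataActionUser` returning a hardcoded DataTable, with the interface, `User`, and `DALUser` shapes assumed for illustration (your real signatures will differ, and a mocking framework would replace the hand-rolled stub):

```csharp
using System;
using System.Data;

// Assumed shape of the poster's DAL interface, for illustration only.
interface IDataActionUser
{
    DataTable GetUserData(int id);
}

class User
{
    public int ID;
    public string Name;
}

class DALUser
{
    private readonly IDataActionUser _dataAction;
    public DALUser(IDataActionUser dataAction) { _dataAction = dataAction; }

    public User GetUser(int id)
    {
        DataTable table = _dataAction.GetUserData(id);
        DataRow row = table.Rows[0];
        // Mapping inlined here; the poster's MappingUser would do this step.
        return new User { ID = (int)row["ID"], Name = (string)row["Name"] };
    }
}

// A hand-rolled stub; Moq or Rhino Mocks would generate the equivalent.
class StubDataActionUser : IDataActionUser
{
    public DataTable GetUserData(int id)
    {
        var table = new DataTable();
        table.Columns.Add("ID", typeof(int));
        table.Columns.Add("Name", typeof(string));
        table.Rows.Add(id, "Test User");
        return table;
    }
}
```

A test then becomes `var user = new DALUser(new StubDataActionUser()).GetUser(42);` followed by assertions on `user`, with no database involved.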
My question is... what is the appropriate scope of a unit test? Should I create an interface for MappingUser so that I can mock it and ensure that I am testing ONLY DALUser ? What prevents you from sliding into a design where every class in your system has an interface? How do you determine where to draw the line between dependencies that should be mocked and dependencies that you include in a single unit test?
If I do create an interface and mock MappingUser , then my "test" of DALUser isn't really testing much at all, since DALUser only acts as a public interface to the business layer and a controller of sorts that calls these dependent objects. Without the functionality of the dependent objects included in the test, the DALUser class doesn't really do much... so is it still worth unit testing?
I'm obviously new to the unit testing game, and trying to absorb the finer points of the philosophy.
Thanks in advance for opinions.
|
|
|
|
|
Maybe it's me, or maybe it's because it's Christmas, but my brain just won't process all that information. Perhaps that's why you have no replies. It's a well-thought-out, well-constructed post, but perhaps the problem requires too much immersion for a message-board discussion.
So I will at least reply to the question in your subject line. "Is there such a thing as Too Many Interfaces?"
Yes.[^] and KISS Principle[^]
Here's the thing. Flexible software is good but requires a degree of complexity. So there is a constant struggle to find the balance between the flexibility you need and the simplicity you desire.
Leftyfarrell wrote: the DALUser class doesn't really do much... so it is still worth unit testing?
Difficult for us to know. The benefits of unit tests, and of automating them, are specific to the combined project/environment. That said, in general, tested stuff is good. If nothing else, it can raise your level of confidence in the code, allowing your mind to forget it and focus on another task. I could talk about unit tests for a while, but most people have already stopped reading by now.
led mike
|
|
|
|
|
Thanks for the reply. Heh, I see your point.
Ok, maybe I can rephrase the question.
Does anyone have a general rule of thumb for how deep a class dependency chain can go in a unit test? How many levels into a dependency chain is OK vs. too far?
If I have 10 objects, chained together with dependencies... and I am writing a unit test for the parent object... can I go in 2 levels before I should create a mock and terminate the chain for testing purposes? 3 levels? 4?
Thoughts?
|
|
|
|
|
|
It's Christmas, and I'm well into the Christmas spirits at the moment, but my 4c is that a well-abstracted system is generally not much more code (in terms of complexity). Add too much abstraction, though, and you suddenly find yourself putting in a lot more effort...
Keep it fairly simple until you need the abstraction. Extracting an interface or pulling members up to a base class are pretty trivial refactoring actions. Get ReSharper if you haven't already.
|
|
|
|
|
I am curious: why is the business layer calling DALUser , and DALUser calling an interface which hits the database and then passes the returned DataTable to MappingUser ?
I would design like so:
The Business Layer (User object) calls the DAL interface (IDataActionUser) and passes itself in. IDataActionUser will have different implementations (Oracle, SQL, flat file, etc.) and they will load the User object passed in. No need to worry about passing DataTables back and forth. The IDataActionUser implementors can even call Common.LocalCulture for help. Now the design is simpler, like so:
Business Layer >> IDataActionUser >> Implementor >> Database, flat file, XML, etc. >> Common.LocalCulture, and the User object is good to go.
I would even use the Bridge Pattern to abstract the implementor from the Business Layer.
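A minimal sketch of that shape, with the implementor behind the interface and the business layer unaware of storage. The `UserService` name and the `Load` signature are assumptions for this illustration, following the poster's naming where possible:

```csharp
using System;

class User
{
    public int ID;
    public string Name;
}

// The "implementor" side of the bridge: storage-specific loading.
interface IDataActionUser
{
    void Load(User user);
}

class SqlDataActionUser : IDataActionUser
{
    public void Load(User user)
    {
        // SQL-specific loading would go here.
        user.Name = "from SQL";
    }
}

class FlatFileDataActionUser : IDataActionUser
{
    public void Load(User user)
    {
        // Flat-file parsing would go here.
        user.Name = "from flat file";
    }
}

// The business layer works only against the abstraction;
// the storage choice is made once, at construction time.
class UserService
{
    private readonly IDataActionUser _dataAction;
    public UserService(IDataActionUser dataAction) { _dataAction = dataAction; }

    public User GetUser(int id)
    {
        var user = new User { ID = id };
        _dataAction.Load(user); // no DataTable crosses this boundary
        return user;
    }
}
```

Swapping Oracle for a flat file is then just a different constructor argument, which is the decoupling the Bridge arrangement buys.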
|
|
|
|
|