|
I need to write a very simple WinForms application, but I often get stuck in over-analytical mode with these. I get stuck on Yes/No/Retry loops, and with very little thinking to do about the actions of the application themselves, I start thinking about the 'workflow' of the application. Should I just have a method for each task and call each one sequentially, after checking that the previous one didn't raise an 'abort' flag? Should I implement a simple ITask interface and a class for each task, then loop through an ordered collection of tasks? Should I use this as a simple introduction to Windows Workflow? The scope of this application and its set of tasks is guaranteed to grow, so my first option of hard-coding the tasks is the nastiest. But given the size of the app, maintenance can quite easily be done by just changing the code; the executable is always deployed with the upgrade materials, and not permanently deployed.
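For what it's worth, the ITask option is only a few lines; here is a minimal sketch (all names, ITask, TaskResult, TaskRunner, are hypothetical):

```csharp
using System.Collections.Generic;

// A minimal sketch of the "ordered collection of tasks" option;
// all names here (ITask, TaskResult, TaskRunner) are hypothetical.
public enum TaskResult { Continue, Abort }

public interface ITask
{
    string Name { get; }
    TaskResult Execute();
}

public class TaskRunner
{
    private readonly List<ITask> tasks = new List<ITask>();

    public void Add(ITask task) { tasks.Add(task); }

    // Runs tasks in order; stops at the first one that raises the abort flag.
    public bool RunAll()
    {
        foreach (ITask task in tasks)
        {
            if (task.Execute() == TaskResult.Abort)
            {
                return false; // the caller can show a Yes/No/Retry dialog here
            }
        }
        return true;
    }
}
```

Adding a task then becomes adding a class plus one Add call, which keeps the hard-coded and configurable options close together: the same runner could later read its task list from configuration.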
|
|
|
|
|
I found it hard to find an answerable question. What did strike me was your statement "...guaranteed to grow/hard coding the tasks is the nastiest" followed by "given the size of the app, maintenance can quite easily be done by just changing the code". You seem to have a bit of a conflict.
|
|
|
|
|
I think my sentences ending with question marks are quite easy to answer, even if those without are hard to comment on.
OK, hard coding the tasks is still the nastiest, but the cost of doing this is quite low, so I will be getting into a conflict with management if I can't produce a configurable application in the same time as I can produce a hard-coded one. We're never going to reach even fifty tasks, and if no code is reproduced between tasks, maintaining the hard coded task is feasible enough for the powers that be to sanction a hard-coded app. I, however, do not feel nice about hard coding the app.
|
|
|
|
|
I did have a look at your questions, for example: "Should I just have a method for each task, and call each one sequentially, after checking that the previous didn't raise an 'abort' flag?" My answer came to: possibly, if that's what the User wants; which seemed a bit superfluous.
Anyway, I see you mentioned Windows Workflow. I've never used this, but I had a look at one of the intro articles on CP. Workflow looks like it will separate the logic of the task from the details of performing the actions, which I'd definitely agree is good.
Having worked both as a software developer and as a manager of s/w developers, what really got to me was developers investing so much time/effort in designing and producing a framework in which to execute tasks (with task interface etc.) that the framework became their goal; but as no one else knew the details of this framework, it tied, and delayed, the project into using it.
If Workflow gives you that framework, which it seems to, I'd use that to develop/code your actions/tasks first and do your own framework later if it's needed. But don't let your management think that the Workflow front-end you may show them is the finished thing if it's not. They may assume it's a completed front-end.
It looks as if playing with Yes/No/Abort etc. is relatively easy in Workflow, and it keeps that separate from your task objects.
|
|
|
|
|
Jonathan Davies wrote: Possibly, if that's what the User wants; which seemed a bit superfluous.
This is to be a dictatorial application, i.e. it forces the user to do what the project owner, our management, wants. This is a utility to upgrade our main application, and the only user option is the required main application path. 'User' requirements are moot after the task of getting that location.
Jonathan Davies wrote:
Having worked both as a software developer and as a manager of s/w developers, what really got to me was developers investing so much time/effort in designing and producing a framework in which to execute tasks (with task interface etc) that the framework became their goal
Sounds like me sometimes. I used to be worse though.
Thanks for your input.
|
|
|
|
|
I have some basic code for a cache manager provided below. My design is far from complete and I need help. What I need is a single CacheManager.cs GetCacheItem method that accepts as parameters a cache key, a business object delegate method, and a variable set of parameters. This manager should be able to store and retrieve items in the cache, handle dependencies, and work with every business manager object in the project (i.e. bc_CustomerManager.GetCustomer, bc_ProductManager.GetProduct, etc.). Anyone who can assist me or work with me on this would be wonderful.
Cache Manager GetCacheItem:
public object GetCacheItem(string sCacheKey, QueryExpression queryExpression, params object[] parameters)
{
    if (!CachingEnabled)
    {
        return null;
    }
    // First check without locking, for the common case of a cache hit.
    object cacheItem = CacheItem<Object>(sCacheKey);
    if (cacheItem != null)
    {
        return cacheItem;
    }
    lock (syncObject)
    {
        // Re-check inside the lock: another thread may have populated
        // the entry while we were waiting (double-checked locking).
        cacheItem = CacheItem<Object>(sCacheKey);
        if (cacheItem != null)
        {
            return cacheItem;
        }
        cacheItem = queryExpression(parameters);
        AddToCache<Object>(sCacheKey, cacheItem);
        return cacheItem;
    }
}
In my project I have a business layer and a data layer. I am not sure where and how the code below should be called. Incomplete code call to CacheManager.GetCacheItem:
delegate object QueryExpression(params object[] parameters);
CacheManager cacheManager = new CacheManager();
BC_Assects bc_Assects = new BC_Assects();
DataTable dt = (DataTable) cacheManager.GetCacheItem(sCacheKey, bc_Assects.GetAssetsByCollectionID, parameters);
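One way this is often wired up (just a sketch; apart from the types in your post, the names, key format, and wrapper method are hypothetical) is to keep the cache key and delegate inside the business layer, so callers never see the cache manager:

```csharp
using System.Data;

// Sketch only: BC_Assects and GetAssetsByCollectionID come from the post;
// the cached wrapper and key format are illustrative.
public class BC_Assects
{
    private static readonly CacheManager cacheManager = new CacheManager();

    // The business layer owns the cache key and falls through
    // to the data-layer query on a cache miss.
    public DataTable GetAssetsByCollectionIDCached(int collectionID)
    {
        string cacheKey = "Assets_" + collectionID;
        return (DataTable) cacheManager.GetCacheItem(
            cacheKey,
            new QueryExpression(this.GetAssetsByCollectionID),
            collectionID);
    }

    public object GetAssetsByCollectionID(params object[] parameters)
    {
        // ...call the data layer here...
        return new DataTable();
    }
}
```

This keeps the UI/page code unaware of caching; it just calls GetAssetsByCollectionIDCached.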
Thank you very much!
Steve
|
|
|
|
|
I got something working. Here is a sample cache manager and aspx page. Now, all I need is to handle dependencies. Can anyone help?
aspx code:
protected void Page_Load(object sender, EventArgs e)
{
CacheManager cacheManager = new CacheManager();
BC_Customer bc_Customer = new BC_Customer();
string[] param1 = { "customers", "stephen", "lisa" };
cacheManager.InsertInCache(bc_Customer.InsertCustomerName, param1);
List<string> cNameList = (List<string>)cacheManager.GetCacheItem(bc_Customer.GetCustomerName, param1);
DropDownList1.DataSource = cNameList;
DropDownList1.DataBind();
string[] param2 = { "products", "Book", "Card" };
BC_Products bc_Products = new BC_Products();
cacheManager.InsertInCache(bc_Products.InsertProductName, param2);
List<string> pNameList = (List<string>)cacheManager.GetCacheItem(bc_Products.GetProductName, param2);
DropDownList2.DataSource = pNameList;
DropDownList2.DataBind();
}
Cache manager:
public delegate object QueryExpression(params object[] parameters);
public delegate object InsertExpression(params object[] parameters);
public class CacheManager
{
public TimeSpan CacheDuration { get; set; }
private static object syncObject = new object();
private bool CachingEnabled = true;
private bool HasKey(string Key)
{
    // Use HttpRuntime.Cache (as CacheItem does) so this also works outside a request context.
    return (HttpRuntime.Cache[Key] != null);
}
private Object CacheItem(string sCacheKey)
{
return (Object)HttpRuntime.Cache[sCacheKey];
}
public object GetCacheItem(QueryExpression QueryExpression, params object[] param)
{
object objCacheItem = null;
if (!CachingEnabled)
{
return objCacheItem;
}
if (CacheItem((string) param[0]) != null)
{
return CacheItem((string) param[0]);
}
else
{
lock (syncObject)
{
if (CacheItem((string)param[0]) != null)
{
return CacheItem((string) param[0]);
}
else
{
Object cacheItem = QueryExpression(param);
// new TimeSpan(6000) is 6000 ticks (0.6 ms), not seconds; use an explicit duration
DateTime expiration = DateTime.UtcNow.Add(TimeSpan.FromMinutes(10));
HttpContext.Current.Cache.Add((string) param[0], cacheItem, null, expiration,
System.Web.Caching.Cache.NoSlidingExpiration, System.Web.Caching.CacheItemPriority.Default, null);
return cacheItem;
}
}
}
}
public Object InsertInCache(InsertExpression insertExpression, params object[] param)
{
if (HasKey((string) param[0]))
{
return CacheItem((string)(param[0]));
}
DateTime expiration = DateTime.UtcNow.Add(TimeSpan.FromMinutes(10)); // new TimeSpan(6000) was only 6000 ticks (0.6 ms)
HttpContext.Current.Cache.Add((string) param[0], insertExpression(param), null, expiration,
System.Web.Caching.Cache.NoSlidingExpiration, System.Web.Caching.CacheItemPriority.Default, null);
return CacheItem((string)(param[0]));
}
}
Thanks,
Steve
modified on Saturday, February 28, 2009 10:00 PM
|
|
|
|
|
Now, the cache manager handles dependencies. See below:
public delegate object QueryExpression(params object[] parameters);
public delegate object InsertExpression(params object[] parameters);
public class CacheManager
{
public TimeSpan CacheDuration { get; set; }
private static object syncObject = new object();
private bool CachingEnabled = true;
private bool HasKey(string Key)
{
return ((Object)HttpRuntime.Cache[Key] != null);
}
private Object CacheItem(string sCacheKey)
{
return (Object)HttpRuntime.Cache[sCacheKey];
}
public object GetCacheItem(QueryExpression QueryExpression, params object[] param)
{
object objCacheItem = null;
if (!CachingEnabled)
{
return objCacheItem;
}
if (CacheItem((string) param[0]) != null)
{
return CacheItem((string) param[0]);
}
else
{
lock (syncObject)
{
if (CacheItem((string)param[0]) != null)
{
return CacheItem((string) param[0]);
}
else
{
Object cacheItem = QueryExpression(param);
// new TimeSpan(6000) is 6000 ticks (0.6 ms), not seconds; use an explicit duration
DateTime expiration = DateTime.UtcNow.Add(TimeSpan.FromMinutes(10));
HttpContext.Current.Cache.Add((string) param[0], cacheItem, null, expiration,
System.Web.Caching.Cache.NoSlidingExpiration, System.Web.Caching.CacheItemPriority.Default, null);
return cacheItem;
}
}
}
}
public Object InsertInCache(InsertExpression insertExpression, params object[] param)
{
System.Web.Caching.CacheDependency cacheDependency;
string[] dependencyKey = { (string) param[1] };
if (HasKey((string) param[0]))
{
return CacheItem((string)(param[0]));
}
DateTime expiration = DateTime.UtcNow.Add(TimeSpan.FromMinutes(10)); // new TimeSpan(6000) was only 6000 ticks (0.6 ms)
cacheDependency = new System.Web.Caching.CacheDependency(null, dependencyKey);
// Note: param[1] == "" compares object references; use String.IsNullOrEmpty instead
HttpContext.Current.Cache.Add((string) param[0], insertExpression(param),
(String.IsNullOrEmpty((string) param[1]) ? null : cacheDependency), expiration,
System.Web.Caching.Cache.NoSlidingExpiration, System.Web.Caching.CacheItemPriority.Default, null);
return CacheItem((string)(param[0]));
}
}
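If it helps anyone following along, here is a hypothetical usage sketch of the dependency path (the key names are made up; note that with key-based dependencies the parent key should already be in the cache when the dependent item is inserted):

```csharp
// Hypothetical usage: "customers" is the parent key; the name list
// depends on it via the CacheDependency built in InsertInCache.
CacheManager cacheManager = new CacheManager();
BC_Customer bc_Customer = new BC_Customer();

// param[0] = cache key, param[1] = dependency key (per the manager above)
object[] param = { "customerNames", "customers", "stephen", "lisa" };
cacheManager.InsertInCache(bc_Customer.InsertCustomerName, param);

// Removing (or changing) the parent entry invalidates every dependent item.
HttpRuntime.Cache.Remove("customers");
```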
|
|
|
|
|
Hi all,
I am working on VC++. I got a project which is under maintenance now; the project is huge, but I am new to it. Can anybody suggest a free/trial reverse engineering tool that can give me the design of the project, so that I can have better control of it?
Thanks in advance...
|
|
|
|
|
|
I do all my reverse engineering with Lattix (www.lattix.com). It gives you a handy dependency matrix, with which you can play and restructure your application in a nice way. It is not free, but you can download a trial that might help you out in a couple of days. It works in combination with Doxygen.
If you download it, refer to me and they'll provide you some more help.
Han
|
|
|
|
|
I am working on a ticketing-based product, e.g. this product can be used for a Help Desk application. Resources should be assigned to work on tickets. We need to implement automatic allocation based on automation logic, since the tickets are strictly SLA-bound and no minute should be wasted.
We will be using automatic resource allocation methodologies like round robin, load-based, ticket-attribute-based, and resource skill/level-based. Your suggestions on any best practices, algorithms, or frameworks in place specific to TECHNICAL IMPLEMENTATION would be of great help.
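I don't know of a single framework that covers all of these out of the box, but the allocation strategies themselves are small. As an illustration (all names hypothetical), round robin with an eligibility filter also covers the skill/level and attribute-based cases:

```csharp
using System;
using System.Collections.Generic;

// An illustrative round-robin allocator; all types and names are hypothetical.
public class RoundRobinAllocator
{
    private readonly List<string> resources;
    private int next; // index of the next resource to receive a ticket

    public RoundRobinAllocator(IEnumerable<string> resources)
    {
        this.resources = new List<string>(resources);
    }

    // Skips resources that fail the skill/level/attribute filter, so the
    // same loop covers the other allocation methodologies too.
    public string Allocate(Func<string, bool> isEligible)
    {
        for (int i = 0; i < resources.Count; i++)
        {
            string candidate = resources[(next + i) % resources.Count];
            if (isEligible(candidate))
            {
                next = (next + i + 1) % resources.Count;
                return candidate;
            }
        }
        return null; // no eligible resource; queue the ticket against the SLA clock
    }
}
```

Load-based allocation is the same shape with the predicate replaced by a "pick the eligible resource with the fewest open tickets" selection.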
Thanks in advance.
Nags
|
|
|
|
|
Hello all. I have this 3 layer application I'm building (with MFC as my Framework). I'm trying to follow the MVC pattern.
For some actions, the controller will request the view to get some input from the user, usually by displaying a dialog box. This dialog will usually be populated with drop-down lists that are not directly related to the model; however, they are essential as they allow some filtering. Say, the Product Family drop-down will be filtered according to the value selected in the Branch drop-down. A Product has a Product Family, and a Product Family has a Branch associated (and thus, the Product has a Branch associated as well).
My question is the following: I was wondering whether the controller should provide the dialog with the Branch collection, or whether the dialog box should simply load the branches and display them itself.
Any clue will be really appreciated, as I'm having trouble determining which one would be the best option.
Thanks in advance.
Stupidity is an International Association - Enrique Jardiel Poncela
|
|
|
|
|
Seems like you have a push-or-pull quandary, as examined, for example, here[^].
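To make the two options concrete, a sketch (in C# shorthand, though the app above is MFC/C++; all names are hypothetical):

```csharp
using System.Collections.Generic;

public class Branch { public string Name; }
public interface IBranchRepository { IList<Branch> GetAll(); }

// Push: the controller fetches the branches and hands them to the dialog,
// so the dialog stays dumb and easy to test.
public class BranchDialogPush
{
    public void Show(IList<Branch> branches)
    {
        // populate the Branch drop-down from the supplied collection
    }
}

// Pull: the dialog asks the model/repository for the data itself,
// which keeps the controller thin but couples the view to the model.
public class BranchDialogPull
{
    private readonly IBranchRepository repository;

    public BranchDialogPull(IBranchRepository repository)
    {
        this.repository = repository;
    }

    public void Show()
    {
        IList<Branch> branches = repository.GetAll();
        // populate the Branch drop-down
    }
}
```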
|
|
|
|
|
Hi,
I have a WebService-WS, WebApp-WA and DTOproject-DTO. I made a DTO, SZ, serializable and added it as reference to WS and WA. When I call webmethod WS.oneWS.method() it returns an array of WS.SZ where I need DTO.SZ. Am I missing something here apart from making SZ serializable? Please help me.
Thanks in advance.
|
|
|
|
|
Hello there,
I'm starting up a little personal project that I'm not too sure what the best way to tackle is. I'm hoping someone could give me a few pointers and/or some ideas on questions I should answer before I dig in.
The problem I'm trying to solve is determining file and project dependencies in a big, mature, sprawling codebase with tens of thousands of files. This codebase has evolved over time and can be quite unwieldy. To build a small tool I need to build most of the build tree, because it's almost impossible to figure out what all my dependencies are.
So I found a library that will allow me to watch file activity, for example file opens, creates, and so forth. My thought is that if I run a full clean build and monitor and record all of the file activity, I can generate a dependency graph of the entire project and/or automatically create a command file to build any project I want, building everything I need in the proper order. I'm sure there will be other interesting things I can do with this information. I'd also like to try to visualize the entire project and maybe create a change heat map from it, by looking at the source server info.
My quandary is how best to record the file activity so that I can build a dependency tree. Ideally I'd like to be able to do this in a multi-threaded way and be able to tell which files need to be built before others, which files are grouped in a project, and so on.
My current proposed approach is to record all file activity to a logfile and then post-process it to generate the dependency graphs. I'm still a little hazy about all the data that I need to record. I'm currently thinking I'll figure that out as I go, when I find I'm missing some important information.
Any pointers or thoughts would be greatly appreciated.
Thanxx,
Adam
|
|
|
|
|
Adam,
Apologies if I'm just rephrasing your question: view what you want as a graph of nodes and edges, where different edge types represent the reason for a link (dependency) between the nodes.
Try drawing a few views on paper to see what you, as the user, want to see, e.g. "library used by" or "header used by" etc. This should give you the answer to your question.
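Once the records are collected, deriving the build order from those nodes and edges is a standard topological sort; a sketch (all names hypothetical):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A sketch of deriving a build order from recorded file activity;
// all names here are hypothetical.
public class DependencyGraph
{
    // deps[file] = the set of files that must exist before 'file' is built
    private readonly Dictionary<string, HashSet<string>> deps =
        new Dictionary<string, HashSet<string>>();

    public void AddDependency(string file, string dependsOn)
    {
        if (!deps.ContainsKey(file)) deps[file] = new HashSet<string>();
        if (!deps.ContainsKey(dependsOn)) deps[dependsOn] = new HashSet<string>();
        deps[file].Add(dependsOn);
    }

    // Kahn's algorithm: repeatedly emit files whose dependencies are all built.
    // Files emitted in the same pass could be built simultaneously.
    public List<string> BuildOrder()
    {
        var order = new List<string>();
        var remaining = deps.ToDictionary(kv => kv.Key,
                                          kv => new HashSet<string>(kv.Value));
        while (remaining.Count > 0)
        {
            var ready = remaining.Where(kv => kv.Value.Count == 0)
                                 .Select(kv => kv.Key).ToList();
            if (ready.Count == 0)
                throw new InvalidOperationException("Circular dependency detected");
            foreach (string file in ready)
            {
                order.Add(file);
                remaining.Remove(file);
                foreach (var set in remaining.Values) set.Remove(file);
            }
        }
        return order;
    }
}
```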
|
|
|
|
|
I hope you don't mind I'm using this group as a sounding board. I'm posting my thoughts in part to help solidify them in my head and then also to see if anyone out there has any interesting observations or experience that can help.
I have been thinking about what I and my customers would like to get from this tool and have come up with a great many pieces of information that I could/should track. But I'm thinking that I just need to find the core pieces of information and then use queries against the information store to figure out the rest. For example, some of the things I'd like to be able to do are:
1. Determine build order of files/projects
2. Determine which projects can be built simultaneously
3. See all the dependencies of file X/project Y
4. See all the dependencies on file X/project Y
5. Find duplicate files
6. Which projects create which files and then which other project then consume those files.
I have some ideas on what to track; I just need to implement it and see how it works. The more I think about this issue, the more I think the harder part will be actually collecting this information. Ideally I'd be collecting this information while multiple instances of the compilation tools are executing at once. I'm having some fun wrapping my head around having a single process monitoring multiple processes and maintaining the build ordering/hierarchy inside and outside of a project. I'm wondering if it would be best to do this in two passes.
Then there's the question of what's the best design for processing all of those file open/creation transactions. My current thought is to have one thread per compilation process. I only expect to have 4-8 processes going at once, so that doesn't seem like it should be too resource intensive.
And just so you know this is the biggest project I've ever tackled, so I'm a bit nervous about getting my approach organized before I start and then making sure I'm starting in the correct place.
Thanxx for your input,
Adam
|
|
|
|
|
Storing the data produced by monitoring the file activity and processing it should be kept apart, in my opinion. In this case you need a format that both Data-Storer and Data-Processor can work with, and it would seem wise to pick a format that can be extended easily. If you start storing data A, B and C but later realise you need D as well for each file, it should be relatively easy to add.
Coding any of the 'knowledge' about how processing occurs into the Storer side would seem to be wrong.
This will allow you to work on the Data-Storing code independently of the Processing.
Presuming you store a certain set of data in this format: from this data you will produce some information, or at least rearrange the data so that it's in a format more suitable for the user, or for a second stage of processing at least. This makes me think that breaking your Data-Processing down into smaller units is perhaps worth considering.
Personally I'd leave out for now how you are going to implement it, i.e. leave the problems of multiple processes and threads until later, and settle on what you are going to do first.
To be able to debug this etc., what about starting it all as a single thread processing files/data in serial rather than in parallel, to prove that your idea of what needs to be done is correct?
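A sketch of the kind of extensible record this suggests (the field names are hypothetical; unknown fields are simply carried along, so a later field D can be added without breaking the processor):

```csharp
using System.Collections.Generic;
using System.Linq;

// A sketch of a record the monitor (Storer) could write and the
// Processor could read; all names are hypothetical.
public class FileActivityRecord
{
    public Dictionary<string, string> Fields = new Dictionary<string, string>();

    // e.g. "time=...;pid=1234;op=Open;path=C:\src\foo.h"
    public string Serialize()
    {
        return string.Join(";",
            Fields.Select(kv => kv.Key + "=" + kv.Value).ToArray());
    }

    public static FileActivityRecord Parse(string line)
    {
        var record = new FileActivityRecord();
        foreach (string part in line.Split(';'))
        {
            int eq = part.IndexOf('=');
            if (eq > 0)
                record.Fields[part.Substring(0, eq)] = part.Substring(eq + 1);
        }
        return record;
    }
}
```

The Storer only serializes; all interpretation of the fields stays on the Processor side, per the separation above.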
|
|
|
|
|
How do I play an animation and render video before logon through a credential provider? (Vista)
It seems VMR9 cannot work properly before logon, and creating a WPF process is too slow.
(development of facial recognition credential)
modified on Wednesday, February 4, 2009 9:24 PM
|
|
|
|
|
Hi all:
I have inherited a web application that's used for reporting. Each report has three main sections: Input section where the users select dates and other info, Summary screen where a summary of the data is shown, and a Detail screen where detailed data is shown.
I don't think it makes sense to create a new Input, Summary, and Detail page when a user asks for a new report. Too much duplication!
Is there a pattern I can use to simplify this? Let me know if you need more details.
Thanks!
Nick
|
|
|
|
|
First suggestion: don't be afraid of "duplication". For one, Input, Summary, and Detail views, while they may all work with the same data, have distinct purposes, and will have distinct functionality that drives them. While you could probably merge all that functionality into a single glob, from a maintainability perspective, isolation of purpose (called separation of concerns and single responsibility in architect-speak) can offer you a lot in terms of long-term maintainability.
My recommendation is this: even if you need the Input, Summary, and Detail views to be represented as a single thing to the user, develop them as individual components that make use of a core, shared set of objects. Input should be a view component that encapsulates the logic required to display a form to the user and process that input. Summary should be a view component that takes the processed input and renders the summary. Detail should be a view component that takes the processed input and renders the full detailed report. Since each one is its own isolated component, they can be maintained individually. That doesn't preclude them from being rendered into a larger composite view, so you still achieve what you want for the user (a single "page" from which a report can be queried, previewed, and rendered in full detail).
If there is some duplication of code between the three components, that is OK. If you can centralize any common logic into a helper class that can be shared across all three, great. If the logic is similar but can't really be normalized, that's OK too. The key thing is that Input, Summary, and Detail are independent components that can be maintained, updated, and modified (i.e. with new functionality) independent of each other, which will improve your long-term maintenance and product flexibility. Regardless of up-front costs, long-term maintenance is by far the greatest cost of any project.
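A sketch of that shape, with all names hypothetical: three independent view components sharing one processed-input object and composed into a single page:

```csharp
using System.Text;

// All names here (ReportRequest, IReportView, etc.) are hypothetical.
public class ReportRequest { /* dates and filters selected by the user */ }

public interface IReportView
{
    string Render(ReportRequest request);
}

// Each section is its own component and can evolve independently.
public class InputView : IReportView   { public string Render(ReportRequest r) { return "...input form..."; } }
public class SummaryView : IReportView { public string Render(ReportRequest r) { return "...summary..."; } }
public class DetailView : IReportView  { public string Render(ReportRequest r) { return "...detail..."; } }

// The composite presents the three sections as one page to the user
// without merging their logic into a single glob.
public class ReportPage
{
    private readonly IReportView[] sections =
        { new InputView(), new SummaryView(), new DetailView() };

    public string Render(ReportRequest request)
    {
        var sb = new StringBuilder();
        foreach (IReportView section in sections)
            sb.Append(section.Render(request));
        return sb.ToString();
    }
}
```

A new report then only needs new section implementations (or parameterized ones), not a new page.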
|
|
|
|
|
Thanks for the response Jon!
|
|
|
|
|
Hi all. I have a problem regarding converting a requested memory address to a bus address. As we know, for the CPU to access memory (to do a transaction), it must go through the bus address first. Thanks.
|
|
|
|
|
|