Pete O'Hanlon wrote: Only a small subset of WinRT appears on the desktop side.
Like!
(Is that true? How do you know this?)
dev
devvvy wrote: Like! (Is that true? How do you know this?)
No. Of course it's not true. I often like to lie to people in answers because I'm a sadist who wants them to waste time.
Of course it's true. I know it because I'm doing development with WPF on Windows 8 desktop using WinRT functionality where I can, and I've run up against the limitations. For instance, you can't use the WinRT camera APIs. You can't access the contracts. There are many more things you can't do. I believe MS has documentation somewhere on what APIs can be used.
Can WinRT apps communicate with .NET apps via socket? What about IList/IDictionary or DataTable, etc.?
dev
devvvy wrote: Can WinRT apps communicate with .NET apps via socket?
No.
devvvy wrote: What about IList/IDictionary or DataTable
No DataTable in WinRT.
Can WinRT apps communicate with .NET via WCF then?
dev
Not really, no.
The recommended way to get the desktop and WinRT world to play nicely together is to use Azure Service Bus.
I am trying to create a UML Class Diagram Editor and Generator of Class Files (in PHP, but open to supporting other languages) based on the diagram, as a school project, and I am having difficulty designing my classes to fit the project.
For example, suppose I create a base class UMLEntity (these are the items that can be added to the diagram), which is extended by UMLClass and UMLInterface, and both of those contain UMLMember objects (UMLProperty, UMLMethod). The problem is that UMLEntity and UMLMember both have modifiers (public, protected, private, static, abstract, final, etc.), but the allowed set is limited depending on the type (e.g. an interface can only have public members, and an interface cannot be static). So maybe I should automatically set those members to public, but how?
Can anyone suggest a better way to handle this scenario?
Or, if you were the one developing this project, please share the approach you would take.
Please help me out.
Thank you.
Daskul wrote: I am trying to create a UML Class Diagram Editor and Generator of Class
That is a big project, so hopefully you have severely limited the scope so that you actually have a chance of succeeding at something.
The 'descriptor' class defines the entity but it doesn't validate it. Validation occurs only at input, so in your case as part of the GUI itself.
Each descriptor has a type ('class', 'interface') and a collection of methods. Methods, which also have a descriptor class (with a type of 'method'), have an access modifier that specifies what the access is.
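A minimal sketch of the descriptor idea in Java (the names and modifier sets here are illustrative assumptions, not from the original posts): each descriptor carries a type tag and the set of modifiers legal for that type, and the GUI checks a candidate modifier against that set at input time.

```java
import java.util.EnumSet;
import java.util.Set;

// Hypothetical modifier set; extend as needed.
enum Modifier { PUBLIC, PROTECTED, PRIVATE, STATIC, ABSTRACT, FINAL }

// A descriptor defines what an entity *may* be; it does not validate input itself.
class Descriptor {
    final String type;           // e.g. "class", "interface", "method"
    final Set<Modifier> allowed; // modifiers legal for this entity type

    Descriptor(String type, Set<Modifier> allowed) {
        this.type = type;
        this.allowed = allowed;
    }

    // Called by the input layer (the GUI) before a modifier is accepted.
    boolean permits(Modifier m) {
        return allowed.contains(m);
    }
}

class UmlDescriptors {
    // Per the rules in the question: interface members can only be public
    // and cannot be static.
    static final Descriptor INTERFACE =
        new Descriptor("interface", EnumSet.of(Modifier.PUBLIC, Modifier.ABSTRACT));
    static final Descriptor CLASS =
        new Descriptor("class", EnumSet.allOf(Modifier.class));
}
```

Validation then lives at the input layer: before adding a modifier in the GUI, call `permits` and either reject the choice or silently default the member to PUBLIC.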
Thank you for your response.
Yes, there are a lot of limitations on this project, considering I am a student.
What do you mean by descriptor? Can you please help me design my domain models? I really need to get started with this.
Thank you
Daskul wrote: What do you mean by descriptor?
What part of the several statements that I made describing it did you not understand?
I just can't figure out how to implement it in code. Can you please guide me, or show a brief example of what it would look like as an actual class hierarchy?
That's not the way the forums work. Presenting information like that would require a fair bit of code and description, and that's something that should be done through an article. The way it works is, you try something; if it doesn't work, you come here for help with the code that you've put together and we help fix that code.
Ok, I've been doing a lot of reading on these forums lately. I've noticed that a lot of people are making a lot of noise over various coding practices, calling them a "code smell". For example, "never have more than 5 fields in a class, or it's a code smell and should be refactored", as if such advice were "Bible-tastic Goodness". I honestly don't understand why the whole of development has shifted into extremely compact functions/methods as a seemingly "unbreakable rule", where if a class (or God forbid a single method) does more than one thing, it's "bad".
Granted, if you're dealing with inexperienced coders on a team, it's not ideal to let them run amok with hundreds of lines of code without encouraging them to keep things manageable... so in that context, I do see some value in keeping methods and classes compact. However, if you assume a project must meet the same objectives either way, then this requires that functionality get broken up into an extremely complex tree of various classes, etc., where methods call methods call methods, ad infinitum. My point is that engendering such practices comes at the price of performance.
I've been programming now for over 30 years, and have used a huge variety of languages - everything from GW-BASIC to Assembler to C++ to VB to C#, and everything in between. I've designed systems that run manufacturing facilities, and I've written games that have done well enough to garner millions of downloads. I've been doing this a LONG time doing a LOT of different things with it... and the singular most important lesson I've learned throughout all of this is as follows:
Never, ever, make the computer do more than is required to achieve the desired results; and accomplish this with code that is clear to someone unfamiliar with it via consistent styling, clearly thought out comments, and self-explanatory naming.
This means never refactoring functionality into a separate method just because one method is getting a bit lengthy, assuming that functionality is NEVER needed anywhere else. (And the minute it is actually needed elsewhere, then it's time to move it into a separate procedure.) Why make the computer perform a CALL instruction (with all the associated stack management for proc address and arguments, etc.) when it doesn't need to? What for? To make it allegedly more readable?? I honestly don't understand the logic behind such refactorings or design methodologies. It seems so wasteful to design so many layers into a project just because some people have a hard time reading longer segments of code.
Maybe I'm just too "old school" for my own good, but it seems to me that good design starts with making a system perform as well as possible with code that is as readable as possible, and in that order; not by following a series of "laws" that result in code that may be more readable to the masses, but runs orders of magnitude slower.
When I was coding on XT machines with a scant couple of MHz at my disposal, every single instruction mattered... A LOT. It just seems that so many of the current "Best Practices" don't give much consideration, if any, to the performance of the end product... and I honestly just don't get why. Sure, one could always just throw more hardware at a design that is more complex than it needs to be, but why should we?
I know this has turned into a bit of a rant, and for my first CodeProject post, that's probably not the most appropriate... but I sincerely would like to know why so many "best coding practices" strike me as focusing on the wrong things... like trying to make programmers' jobs as easy as possible? After all, it will never truly be easy to be truly skilled at programming - it takes practice, no matter what policies we adopt. I guess I'd like to understand why the quest for performance seems to have been nearly completely abandoned in favor of making code as readable as possible... and I'd quite sincerely like to hear some really good reasons to accept such policies as even remotely reasonable.
Robb Ryniak wrote: I've noticed that a lot of people are making a lot of noise over various coding practices, calling them a "code smell". For example, "never have more than 5 fields in a class, or it's a code smell and should be refactored"
Most of those have learned the 'rule of thumb' without understanding what they were learning. And yes, I get the feeling that it's getting worse every year.
Robb Ryniak wrote: and I'd quite sincerely like to hear some really good reasons to accept such policies as even remotely reasonable.
Hehe, not from me; any policy that cannot be explained is redundant. If it does not help me do my job, the policy is ignored - I'm paid to work, not to follow policies.
It'd be as useful as having a yearly meeting to discuss whether the current code-standard should be dropped in favor of Systems Hungarian. Meet and discuss all you want, just don't bother me with the drivel.
Good rant, enjoyed the read.
Eddy Vluggen wrote: Good rant, enjoyed the read.
lol... thanks.
Robb Ryniak wrote: I've been programming now for over 30 years, and have used a huge variety of languages - everything from GW-BASIC to Assembler to C++ to VB to C#, and everything in between.
I have been doing it for 40 years and have used Fortran, Basic(various flavors), Pascal, assembly (different flavors), C, C++, C#, Java, Perl, various SQL dialects and various scripting languages.
Robb Ryniak wrote: and the singular most important lesson I've learned throughout all of this is as follows:
You failed to mention maintenance costs and total life cycle costs in anything you said.
Robb Ryniak wrote: Why make the computer perform a CALL instruction (with all the associated stack management for proc address and arguments, etc.) when it doesn't need to? What for? To make it allegedly more readable?? I honestly don't understand the logic behind such refactorings or design methodologies. It seems so wasteful to design so many layers into a project just because some people have a hard time reading longer segments of code.
Which might be because you don't understand maintenance costs and life cycle costs.
The fact that you understand the code means nothing in terms of whether someone else can understand it. And in the vast majority of professional programming, software that actually reaches production will require that someone besides the original programmer understand it at some point.
Software development is not a rigorous discipline and practices are acquired based on many factors but because the community is so broad practices that work become the norm over time.
Your large-method idea is one that even structured programmers rejected long ago, a rejection further demonstrated by the wide and rapid acceptance of object-oriented programming.
As with all things, this is not an absolute mandate that everything must be broken into smaller pieces, but the vast majority should be. And at least in my experience, code that fails to do this is often obviously not based on an OO design.
Robb Ryniak wrote: When I was coding on XT machines with a scant couple of Mhz at my disposal, every single instruction mattered... ALOT.
And when I used punch cards, every single key press mattered, since it could take half an hour of turnaround to find even syntax errors.
But those days are past and are irrelevant for most business domains. The most relevant business domain that is even close to that these days is embedded programming and even those are getting away from small space constraints (over the entire field, some areas still require it.)
Robb Ryniak wrote: like trying to make programmer's jobs as easy as possible?
Huh? Because it costs money to produce software. It costs money to fix bugs in production. It costs money if one loses market share due to long lead times.
Robb Ryniak wrote: I guess I'd like to understand why the quest for performance seems to have been nearly completely abandonded in favor of making code as readable as possible..
Because performance is most significantly impacted by requirements and design, not implementation. Excluding implementation design errors, which are still design errors, performance improvements at the implementation level are almost always in the small percentages, whereas design/requirements problems can result in orders-of-magnitude differences.
jschell wrote: I have been doing it for 40 years and have used Fortran, Basic(various flavors), Pascal, assembly (different flavors), C, C++, C#, Java, Perl, various SQL dialects and various scripting languages.
Awesome. Same here, pretty much. You just have 8 years on me.
jschell wrote: You failed to mention maintenance costs and total life cycle costs in anything you said.
You're right... I didn't say anything about that.
jschell wrote: Which might be because you don't understand maintenance costs and life cycle costs. The fact that you understand the code means nothing in terms of whether someone else can understand it. And in the vast majority of professional programming, software that actually reaches production will require that someone besides the original programmer understand it at some point.
While I didn't mention anything about total cost for maintenance, it doesn't mean I don't understand or value the need for maintainable code from a cost perspective, nor does it mean that I don't value having code that is readable by others. On the contrary, maintainability, TCO and readability are all extremely high on my list of what I value. I didn't mention it in my OP simply because it is... in my opinion... secondary to design and implementation that considers performance (accuracy and stability assumed) as the higher priority. If I am faced with the choice between code that executes twice as fast and code that is twice as readable, I choose performance every time... but then comment appropriately.
For example, I had a project about 10 years ago in C++ that required some software-based image processing. Performance was really critical, and the math could have been done step by step, calling various other functions to make the code really, really readable... but putting a particular transformation on ONE LINE helped with compiler optimization to the point where it ran literally 10x faster. So, I wrote it BOTH ways... the "easy to read" way that perfectly expressed the logic of the algorithm, and the ugly but fast-executing way, and left the "easy to read" way as a comment block in order to aid future coders in seeing what was actually going on.
In contrast, there's a fairly popular game out right now that is written (AFAIK) in a highly tiered OO approach; its charming 8-bit graphics and blocky environments are comparable to the technology of the classic DOOM/DOOM II era in the '90s, yet it is so inconsiderate of performance issues that it can bring a Core i7 with dual top-end graphics cards to its knees... that one's a real head-scratcher.
jschell wrote: Your large-method idea is one that even structured programmers rejected long ago, a rejection further demonstrated by the wide and rapid acceptance of object-oriented programming. As with all things, this is not an absolute mandate that everything must be broken into smaller pieces, but the vast majority should be. And at least in my experience, code that fails to do this is often obviously not based on an OO design.
I wasn't trying to say that large methods were specifically desirable, only that they shouldn't be shunned as "don't ever ever do this" kinds of practices. IMO, a given construct should be used when it's appropriate... always. If a piece of functionality would benefit from a monolithic approach, it should be done that way. If a piece of functionality benefits from a highly tiered OO approach, then it should be done that way. I'm advocating a more liberal approach to design and implementation, not a specifically monolithic approach. The right approach for the right situation. Period.
jschell wrote: Because performance is most significantly impacted by requirements and design. Not implementation.
I don't agree at all. I think any design (good or bad) can be implemented with either fast or slow executing code, and my original point was that it seems the practices being widely adopted in the past 5 to 10 years are definitely not in favor of speed of execution, and I don't understand that.
jschell wrote: Huh? Because it costs money to produce software. It costs money to fix bugs in production. It costs money if one loses market share due to long lead times.
Now here, you make a good point that I agree with. I just don't know that a more liberal approach (as I'm advocating) is necessarily the antecedent of time efficiency... though it can be, of course. I just don't think it needs to be that way.
modified 10-Dec-12 8:27am.
Robb Ryniak wrote: secondary to design and implementation that considers performance (accuracy and stability assumed) as the higher priority. If I am faced with the choice between code that executes twice as fast and code that is twice as readable, I choose performance every time... but then comment appropriately.
I don't care how fast the code executes - what matters is how fast the application works. There are often many tasks in an application which would have zero impact on the primary performance of the application. Yet there are also many tasks that still must exist for the application to be fully functional. Making all of those unreadable for the sake of performance that means nothing is not a good investment.
Robb Ryniak wrote: For example, I had a project about 10 years ago in C++...
For example, I had a project in Java a bit more than 10 years ago for which the estimated run time was 8 to 12 hours, with a 3-month development cost. Removing one requirement, which the user didn't even want, reduced the estimated run time to a couple of minutes (and the implemented run time to less than that) and the development time to a couple of days.
Robb Ryniak wrote: In contrast, there's a fairly popular game out right now that is written ...
And you are suggesting that is solely an implementation problem and has nothing to do with design?
Robb Ryniak wrote: I wasn't trying to say that large methods were specifically desirable, only that they shouldn't be shunned as "don't ever ever do this" kind of practices
As I stated, in my experience, large code blocks like that are usually the result of poor or no OO understanding. So avoiding them provides a way to mitigate a design problem - not an implementation problem.
Robb Ryniak wrote: I don't agree at all. I think any design (good or bad) can be implemented with either fast or slow executing code,
Not in my experience. And nothing I have ever read suggests that to be true in a general way. And that isn't a recent trend either; I can remember seeing the same viewpoint in the '90s.
Robb Ryniak wrote: I just don't think it needs to be that way.
Far as I can tell your viewpoint would lead exactly to that. Promoting performance as the goal means that developers are going to spend a lot of time optimizing code to achieve that goal - for specific code blocks. While performance measurements would demonstrate that most of the code has no impact on the primary core functionality of the application.
jschell wrote: Making all of those unreadable for the sake of performance that means nothing is not a good investment.
Agreed. I wasn't suggesting making code unreadable! (yikes!) I am, however, all for favoring performance over "perfect" readability, and worrying more about the art of performance than the "coding standard du jour"... which is in constant flux anyhow.
And that also assumes that performance is an issue for the given application. Not every app needs to be perfectly optimized. (I'm guessing you don't spend a lot of time writing games?)
jschell wrote: For example I had a project in Java a bit more than 10 years ago which the estimated run time was 8 to 12 hours with a 3 month development cost. Removing one requirement which the user didn't even want reduced the estimated run time to a couple of minutes (and implemented run time to less than that) and the development time to a couple of days
I'm sure we can both think of examples where either requirements or implementation have had a dramatic performance impact. I refuse to be dogmatic about either... let's just say, good requirements and good implementation together make for a mutual win... agreed?
jschell wrote: And you are suggesting that is solely a implementation problem and has nothing to do with design?
Nope. Not suggesting that at all. I am, however, suggesting that with the game-in-question's poor level of performance, something's very likely very wrong with the implementation... more than likely the design as well. My point was simply that bad performance seems to be astonishingly prevalent, and rather than discussing how to get better performance (from either the design or implementation point of view, or better: both), far too many developers these days seem to want to discuss why it's good or bad to use standard A or standard B rather than how to make code that's both reliable and powerful.
Unless you're suggesting that an implementation can't possibly wreck the performance of an otherwise perfectly good set of requirements? If that's your assertion, then I don't think we're going to see eye to eye on this point. I think both design and implementation have to be a win.
jschell wrote: Far as I can tell your viewpoint would lead exactly to that. Promoting performance as the goal means that developers are going to spend a lot of time optimizing code to achieve that goal - for specific code blocks. While performance measurements would demonstrate that most of the code has no impact on the primary core functionality of the application
I suppose it depends on the application... for a word processing app, there's not as much concern for performance as there would be for game programming or a scientific app that's processing some intensive mathematics. I suppose it may have seemed like I was making blanket statements that all apps written by all programmers must be perfectly optimized for perfect performance... but that's not what I was implying, nor is it very practical. But when I see applications that really should have a performance focus (like games) seem absolutely complacent about the issue (as apparent from the results), then that's a pretty big problem, IMO. ...and I just don't see people discussing performance anymore... not as much, anyhow. The focus is on when and if to use regions, or how compact your method is, or how many fields you have, or whether your code complies with some rules-based app, or whether or not WPF is going to stand the test of time. Not that those things don't bear discussion, but where's the love for performance? It just seems that it gets virtually no attention these days.
Oh, and for the record, I'm not promoting that developers spend 10x as long coding in order to optimize code. I'm promoting knowing how to write fast executing code from the get-go so you don't need to optimize it much later if at all... I'm promoting senior developers sharing such techniques with the up-and-comers so they too can avoid machine-clogging code bloat. There's an art form that just seems to be lost, or at least not discussed much these days... and I miss it. That's all I'm saying.
I suppose I'm just being nostalgic for the days where I (and the two other coders in my town lol) would sit around sharing tips and discoveries on how to achieve much more with much less. Those times made me a far better coder than I ever would have been otherwise.
Robb Ryniak wrote: And that also assumes that performance is an issue for the given application. Not every app needs to be perfectly optimized. (I'm guessing you don't spend a lot of time writing games?)
Most applications do not need optimized instruction processing. And game software is a small part of the developer space.
Robb Ryniak wrote: let's just say, good requirements and good implementation together make for a mutual win... agreed?
Except a "good" implementation is not equivalent to one that has been micro optimized for speed.
Robb Ryniak wrote: Nope. Not suggesting that at all. I am however suggesting that with the game-in-question's poor level of performance, something's very likely very wrong with the implementation...
And I would suppose that a better design would eliminate almost all of the performance problems.
Robb Ryniak wrote: Unless you're suggesting that an implementation can't possibly wreck the performance of an otherwise perfectly good set of requirements?
A poor design can certainly wreck requirements. A poor implementation might or might not depending on what "poor" means. In terms of speed, excluding design problems at the implementation level, a "poor" implementation cannot be substantially improved solely by changes to the implementation.
Robb Ryniak wrote: I think both design and implementation have to be a win.
If there are performance problems with a good design then I can use a profiler to get the best that can be gotten from the app.
If there are performance problems caused by a poor design then the application will need to be refactored.
Robb Ryniak wrote: as there would be for game programming or a scientific app that's processing some intensive mathematics.
Or embedded controllers. HOWEVER those are very small parts of overall development.
And I can give you a specific example - how fast does the configuration setup for a game need to be? Do you want a developer spending two weeks to improve that by 10%?
Robb Ryniak wrote: ..and I just don't see people discussing performance anymore...
I do. They often have a small bit of code and they want to make it faster. On further questioning, one often finds that they decided it needed improvement on a whim and didn't use a profiler to localize a problem. Actually, they often do not even know what a profiler is.
Robb Ryniak wrote: It just seems that it gets virtually no attention these days.
Because in the context that you are driving at, it is meaningless for most work. I care whether my database really can hold 20 billion rows and how to ensure, via the design, that a projected data storage need like that does not become a problem. I care that a server can handle 100 txns a second but don't need it to handle 1000, because the market segment and known drivers will never require a large number of servers. But I can't add another order of magnitude to either of those by profiling an implementation. The design and architecture must be created to handle those loads, and that drives the implementation.
And this supposes even that there is an actual realistic need for high volumes. Some people decide that the server must handle large numbers without doing any sizing in the first place. One can find that even if they own the entire market that they wouldn't have the needs that they claim.
Robb Ryniak wrote: I'm promoting knowing how to write fast executing code from the get-go so you don't need to optimize it much later if at all...
First, that is promoting what you claim it isn't. Because if that is the emphasis, then people will need to learn it for every code block they touch.
Second, as I said, most code does not need it. Profiling will tell one exactly where implementation bottlenecks are. And one can then make updates and design compromises to optimize those areas.
Robb Ryniak wrote: I suppose I'm just being nostalgic for the days where I...
And at one time I had the entire compiler API memorized, understood how the entire API worked, understood a great deal about how the OS worked and even how the computer boot up worked (at the assembly instruction level.)
But times change.
jschell wrote: Most applications do not need optimized instruction processing. And game software is a small part of the developer space.
...
Or embedded controllers. HOWEVER those are very small parts of overall development. And I can give you a specific example - how fast does the configuration setup for a game need to be? Do you want a developer spending two weeks to improve that by 10%?
You're absolutely right about that much... having been writing games since the age of 8 (well, OK, crazy simple ones back then, though they sure felt awesome at the time lol), I do tend to give those smaller parts of the overall developer space a far larger place in my heart than what is represented by the overall community in practice. I don't write games exclusively of course. I deal with all sorts of applications; but if I'm not working on a game, I'm usually wishing I was. So, honestly, I do have a specific bias, and I recognize that.
And you're right about much of what you said on the other things you wrote in your last post as well... I'm not going to quote every line, but the bottom line is that certain applications really don't need micro-optimization at all, and I wouldn't waste my time doing it. Games are one thing... where every frame matters, and (with the exception of online components) the execution is local and very much hardware dependent, so micro-optimization can make a difference... but then that's really part of the requirements, isn't it? As many FPS as possible on the lowest-end hardware possible. (Unless you work for Crytek, in which case, feel free to consider only the highest-end hardware - the users will catch up eventually. lol)
At any rate, I still maintain that the implementation in any project that is conscientious of performance will do better than if performance is not considered at all. I'm not suggesting that an enterprise-class database system needs performance tuning for every line of code - after all, for a piece of code that is called even once every few seconds, does it really matter if it runs in 0.005 seconds or 0.05 seconds? An order of magnitude, yes, but not really a very high impact on the whole of the project... so we can agree there, I'm sure. But at least being conscientious of the performance of code while it's being written will go a long way toward making sure an implementation isn't overly bloated.
I have a semi-regular project (occasional feature additions, etc.) through a developer who created a database system that is robust, but frustratingly overengineered. (Too many "what if's" and not enough "what's it really need to do's".) I had to build a new machine just to run their development platform. And that was a design issue, to be sure. So your arguments hold up well in my experience as far as database applications go. That said, I can see situations where poor implementations that work within the confines of a set of requirements could definitely wreak havoc on overall performance. Let's say a customer has the need of managing a large tree of data (akin to a file system tree) and wants the GUI to show all the nodes. OK, no problem. Just load the data, parse it into a tree, etc. Easy enough. What if, however, the programmer chose to iterate through the tree with individual SELECT hits on the DB, in order to keep the code small and not deal with in-memory structures to sort the data into a tree? (Assuming appropriately sized data.) Hitting the DB over and over would wreck performance, to be sure. But then... I guess we're not dealing with someone very experienced in that situation - so I suppose what you've been contending holds true most of the time in practice.
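The tree example above is the classic N+1 query problem. A sketch of the in-memory alternative, assuming the whole tree fits in memory (the classes and data here are illustrative, not from any real project): fetch all rows with a single SELECT, then link them into a tree in one pass, instead of issuing one query per node.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class Node {
    final int id;
    final String name;
    final Integer parentId;  // null for the root
    final List<Node> children = new ArrayList<>();

    Node(int id, String name, Integer parentId) {
        this.id = id;
        this.name = name;
        this.parentId = parentId;
    }
}

class TreeBuilder {
    // Assumes 'rows' was already fetched with one SELECT over the whole table.
    // Index nodes by id, then attach each node to its parent in a second pass.
    static Node build(List<Node> rows) {
        Map<Integer, Node> byId = new HashMap<>();
        for (Node n : rows) byId.put(n.id, n);
        Node root = null;
        for (Node n : rows) {
            if (n.parentId == null) root = n;
            else byId.get(n.parentId).children.add(n);
        }
        return root;
    }
}
```

Each node now costs one hash lookup instead of one database round trip, which is exactly the difference the post is describing.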
jschell wrote: Actually they often do not even know what a profiler is.
Huh? What's a profiler? ...OK, just kidding. Seriously though, it does surprise me to hear people being unfamiliar with common tools. It's like not having ever heard of source control, or not knowing the difference between an IDE and Notepad. Odd. But I guess nobody's born with the information...
jschell wrote: And at one time I had the entire compiler API memorized, understood how the entire API worked, understood a great deal about how the OS worked and even how the computer boot up worked (at the assembly instruction level.) But times change.
(sniff) Yes they do. I miss DOS. (okay, only a little. )
*** Listening to the song: "INT 21h, where did you go?" *** ...OK, not really.
Robb Ryniak wrote: where every frame matters, and (with the exception of online components) the execution is local and very much hardware dependent, so micro-optimization can make a difference
Some games, not all. As an example, card games don't require much of anything in the way of graphics optimization. And some don't even require optimization for strategy - solitaire, for example.
Robb Ryniak wrote: Let's say a customer has the need of managing a large tree of data (akin to a file system tree) and wants the GUI to show all the nodes....
However, the way you phrased it means that it is either a requirements and/or a design problem. So one should look at the problem at that level before implementing anything.
jschell wrote: Some games, not all. As an example card games don't require much of anything in the way of graphics optimization. And some don't even require optimization for strategy - for example solitaire
Come on now, Solitaire's not really a game lol... seriously though, I meant "graphically rich games like first-person shooters - the Doom franchise, Crysis, F.E.A.R. - and RTS games like AoE, etc."... you know, the kind of games *I* play. Anything else is "just an app" lolol. (How's that for tunnel vision?? lol) But your point stands - anyone remember those simple (but oddly fun) games that Microsoft released in the Win 3.x era? The "Entertainment Pack" with Klotski, that mouse/cheese game, etc.? Yeah, not a lot of optimization needed there either. At any rate, once we got talking on the same wavelength, I think we understand each other well enough.
The guidelines/best practices/patterns/rules are there to capture the knowledge of many coders.
If you try to refactor your code to adhere to some standard, you are digging into the whys and hows of others. I would claim that any coder can only get better from such an experience.
If you fancy performance, try running FxCop from Microsoft against your C# code - FxCop has a set of rules that question different coding constructs. Example: do you initialize fields with default values, e.g.
private bool foo = false;
CheckStyle for Java is also worthwhile to incorporate into your daily coding... that is, if an old dog wants to learn some new tricks
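To illustrate the kind of rule such tools check (a hypothetical Java snippet, analogous to the FxCop example above): fields already default to false/0/null, so an explicit initializer only adds a redundant store, which Checkstyle's ExplicitInitialization check flags.

```java
class Flags {
    // Redundant: boolean fields default to false anyway; tools like
    // Checkstyle (ExplicitInitialization) flag this as unnecessary.
    private boolean verbose = false;

    // Sufficient: relies on the language-guaranteed default value.
    private boolean quiet;

    boolean verbose() { return verbose; }
    boolean quiet()   { return quiet; }
}
```

Both fields behave identically at runtime; the rule is purely about avoiding redundant work and noise.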
Keld Ølykke wrote: If you try to refactor your code to adhere to some standard, you are digging into the whys and hows of others. I would claim that any coder can only get better from such an experience.
...and that is exactly the kind of answer I was looking for. It makes sense to me. Even if I wouldn't adopt such a policy for a project I was heading up, there is always value in understanding the practices of others, even when I don't agree with them. I have learned many times from other coders; even when I didn't agree with their approach, I could usually glean something useful from the experience. Thanks for the considered answer.
Keld Ølykke wrote: that is if an old dog want to learn some new tricks
Pfft... I'm not that old... (am I? lol) Seriously, if I weren't interested in learning new stuff, I'd have left the field decades ago. Ours is an industry in constant flux. Good and bad, I guess.