|
Eddy Vluggen wrote: Why is it that with each new thing, something else "needs to become legacy"?
I hope not!
Eddy Vluggen wrote:
WinRT is unmanaged, and will not replace the unmanaged environments.
What do you think about WPF and .NET itself?
dev
|
Take a look at history; both Microsoft and the community tried to kill VB6.
It's still here.
WPF definitely looks like something worth investigating. No, WinForms isn't going away either.
|
Eddy Vluggen wrote: WPF definitely looks like something worth investigating. No, WinForms isn't going away either.
I do both, so no bias here, but you can't deny most new jobs require WPF, not WinForms.
dev
|
devvvy wrote: I do both, so no bias here, but you can't deny most new jobs require WPF, not WinForms.
Depends on your location; more WinForms here than WPF. A few kilometers south-east, and they'll tell you that Java is preferred.
|
I don't think WinRT/Metro was ever envisioned by Microsoft as a total replacement API for full .NET. After all, the .NET profile available to WinRT apps is a subset of the full framework. I think the motivation and drive behind WinRT is the iPad. Microsoft saw (as did we all) a huge shift in consumer computing to include lightweight tablet computing (as opposed to "heavy" tablets, which have been around for a very long time, e.g. Windows XP Tablet Edition).
Microsoft knew that if it didn't want to lose market share to Apple (considering the whole of the computing ecosystem), it needed to compete on the same grounds with the same range of offerings, rather than sticking to "desktop/laptop only". I think that's the real motivation behind WinRT. Smaller, lighter, runs on ARM.
Microsoft also wanted to make sure that, unlike the iPad, their offering had the rather novel convenience of having your portable apps run on both the "WinPad" and the desktop... something that can't be done with iOS. Since they couldn't support a full-blown traditional application model on the tablet (given the storage, CPU, and battery life differences), they needed to create something new that would work on both platforms... hence, WinRT was born. And sure, Microsoft is pushing WinRT right now, and with good reason: they desperately want the WinPad to succeed. It has nothing to do with abandoning the desktop experience, or abandoning the full .NET framework.
In other words, I am very much inclined to believe that full .NET will continue to be developed by Microsoft, and WinRT is really for lightweight apps that need to be run on both "WinPad" and desktop. (Is anyone else calling it "WinPad", or did I just coin the term?? lol)
Let's say, for "giggles", that Microsoft was actually stupid enough to completely abandon the full .NET framework in favor of the lighter WinRT. What then? Nothing. Fortunately for us, Microsoft isn't the only game in town as far as .NET development goes. I've been fairly impressed by Mono, which offers a variation on .NET that's even cross platform... running on Windows, Linux, Mac, and even Android & iOS. Since Microsoft isn't the only one offering the full .NET "experience", if Microsoft jumps ship for an all-WinRT future (very unlikely IMO), all you'd need to do is switch to Mono, where your C# code should work pretty much as-is.
So, don't worry about it. Continue writing .NET apps until you know you'll need an app to run on both platforms. Then use the WinRT subset for that particular app. I think (and apparently Microsoft does as well) that there's room in the computing ecosystem for both full-on .NET desktop apps and lightweight WinRT apps. In short, WinRT is an addition to our computing ecosystem, not a replacement for full .NET.
|
Bottom line is that Metro is for "Windows 8 Apps", not "Windows Apps"... the latter being "real" Windows apps as we all know and love, and the former being lightweight apps that run on BOTH the desktop and WinRT. For applications appropriate for the portable space, that's probably the way to go... the *only* way to go if you want to target WinPad (Windows 8 tablets). For line-of-business applications that require a fuller desktop experience, the full .NET is probably the way to go. (If portability is needed in such cases, I could see making a "light" version of an app with simpler controls and fewer features with Metro as a side-by-side addition to the main desktop app written against full .NET... that's pretty common these days for road warriors with iPhone apps that let them do some things for work while on the road, while having full features at their desk.)
Regarding WinRT appearing on the desktop, it does indeed, but only for Windows 8. (AFAIK there will be no WinRT mode ever for Windows 7, Vista, or XP.) This is done so that apps written for WinPad will also run on the Windows 8 desktop... it doesn't mean that WinRT is the "real core" of the Windows 8 desktop, or even the most important aspect. It's just something added to the desktop experience to let you use your Metro apps on both types of devices. This would be analogous to Apple creating a means to let apps for iPhone run natively on Mac OSX.
So, again, I figure it's not a replacement API, but a new additional API that has its purpose, just as the full .NET experience also has its purpose.
|
Only a small subset of WinRT appears on the desktop side.
This whole argument ignores the fact that a large part of .NET development is done server side to support ASP.NET. This is an area where WinRT makes no impact whatsoever.
|
Pete O'Hanlon wrote: Only a small subset of WinRT appears on the desktop side.
Like!
(Is that true? Where did you learn about this?)
dev
|
devvvy wrote: Like! (Is that true? Where did you learn about this?)
No. Of course it's not true. I often like to lie to people in answers because I'm a sadist who wants them to waste time.
Of course it's true. I know it because I'm doing development with WPF on Windows 8 desktop using WinRT functionality where I can, and I've run up against the limitations. For instance, you can't use the WinRT camera APIs. You can't access the contracts. There are many more things you can't do. I believe MS has documentation somewhere on what APIs can be used.
|
Can WinRT apps communicate with .NET apps via socket? What about IList/IDictionary or DataTable...etc?
dev
|
devvvy wrote: Can WinRT apps communicate with .NET apps via socket?
No.
devvvy wrote: What about IList/IDictionary or DataTable
No DataTable in WinRT.
|
Can WinRT apps communicate with .NET via WCF then?
dev
|
Not really, no.
The recommended way to get the desktop and WinRT world to play nicely together is to use Azure Service Bus.
|
I am trying to create a UML class diagram editor and a generator of class files (in PHP, but open to supporting other languages) based on the diagram, as a school project, and I am having difficulty designing my classes to fit the project.
For example, say I create a base class UMLEntity (these are the items that can be added to the diagram), which is extended by UMLClass and UMLInterface, and both can contain UMLMember items (UMLProperty, UMLMethod). The problem is that UMLEntity and UMLMember both have modifiers (public, protected, private, static, abstract, final, etc.), but which ones are allowed depends on the type (e.g. an interface can only have public members, and an interface cannot be static). So maybe I should automatically set those members as public, but how?
Can anyone suggest a better way to handle this scenario?
Or please share the approach you would take if you were the one developing this project.
Please help me out.
Thank you.
|
Daskul wrote: I am trying to create a UML Class Diagram Editor and Generator of Class
That is a big project, so hopefully you have severely limited the scope so you actually have a chance at succeeding at something.
The 'descriptor' class defines the entity but it doesn't validate it. Validation occurs only at input, so in your case as part of the GUI itself.
Each descriptor has methods. Each descriptor has a type ('class', 'interface'). Methods, which also have a descriptor class (with a type of 'method'), have an access modifier that specifies what the access is.
|
Thank you for your response.
Yes, there are a lot of limitations on this project, considering I am a student.
What do you mean by "descriptor"? Can you please help me design my domain models? I really need to get started with this.
Thank you
|
Daskul wrote: What do you mean by descriptor?
What part of the several statements that I made describing it did you not understand?
|
I just can't figure out how to implement it in code. Can you please guide me, or show a brief example of what it would look like as an actual class hierarchy?
|
That's not the way the forums work. Presenting information like that would require a fair bit of code and description, and that's something that should be done through an article. The way it works is, you try something; if it doesn't work, you come here for help with the code that you've put together and we help fix that code.
|
Ok, I've been doing a lot of reading on these forums lately. I've noticed that a lot of people are making a lot of noise over various coding practices, calling them a "code smell". For example, "never have more than 5 fields in a class, or it's a code smell and should be refactored", as if such advice were "Bible-tastic Goodness". I honestly don't understand why the whole of development has shifted into extremely compact functions/methods as a seemingly "unbreakable rule", where if a class (or God forbid a single method) does more than one thing, it's "bad".
Granted, if you're dealing with inexperienced coders on a team, it's not ideal to let them run amok with hundreds of lines of code without encouraging them to keep things manageable... so in that context, I do see some value in keeping methods and classes compact. However, if you assume a project must meet the same objectives either way, then this requires that functionality be broken up into an extremely complex tree of various classes, etc., where methods call methods call methods, ad infinitum. My point is that engendering such practices comes at the price of performance.
I've been programming now for over 30 years, and have used a huge variety of languages - everything from GW-BASIC to Assembler to C++ to VB to C#, and everything in between. I've designed systems that run manufacturing facilities, and I've written games that have done well enough to garner millions of downloads. I've been doing this a LONG time doing a LOT of different things with it... and the singular most important lesson I've learned throughout all of this is as follows:
Never, ever, make the computer do more than is required to achieve the desired results; and accomplish this with code that is clear to someone unfamiliar with it via consistent styling, clearly thought out comments, and self-explanatory naming.
This means never refactoring functionality into a separate method just because one method is getting a bit lengthy, assuming that functionality is NEVER needed anywhere else. (And the minute it is actually needed elsewhere, then it's time to move it into a separate procedure.) Why make the computer perform a CALL instruction (with all the associated stack management for proc address and arguments, etc.) when it doesn't need to? What for? To make it allegedly more readable?? I honestly don't understand the logic behind such refactorings or design methodologies. It seems so wasteful to design so many layers into a project just because some people have a hard time reading longer segments of code.
Maybe I'm just too "old school" for my own good, but it seems to me that good design starts with making a system perform as well as possible with code that is as readable as possible, and in that order; not by following a series of "laws" that result in code that may be more readable to the masses, but runs orders of magnitude slower.
When I was coding on XT machines with a scant couple of MHz at my disposal, every single instruction mattered... A LOT. It just seems that so many of the current "Best Practices" don't really give much consideration, if any at all, to the performance of the end product... and I honestly just don't get why. Sure, one could always just throw more hardware at a design that is more complex than it needs to be, but why should we?
I know this has turned into a bit of a rant, and for my first CodeProject post, that's probably not the most appropriate... but I sincerely would like to know why so many "best coding practices" strike me as focusing on the wrong things... like trying to make programmers' jobs as easy as possible? After all, it will never truly be easy to be truly skilled at programming - it takes practice, no matter what policies we adopt. I guess I'd like to understand why the quest for performance seems to have been nearly completely abandoned in favor of making code as readable as possible... and I'd quite sincerely like to hear some really good reasons to accept such policies as even remotely reasonable.
|
Robb Ryniak wrote: I've noticed that a lot of people are making a lot of noise over various coding practices, calling them a "code smell". For example, "never have more than 5 fields in a class, or it's a code smell and should be refactored"
Most of those have learned the 'rule of thumb', without understanding what they were learning. And yes, I got a feeling that it's getting worse every year.
Robb Ryniak wrote: and I'd quite sincerely like to hear some really good reasons to accept such policies as even remotely reasonable.
Hehe, not from me; any policy that cannot be explained is redundant. If it does not help me in doing my job, the policy is ignored - I'm paid to work, not to follow policies.
It'd be as useful as having a yearly meeting to discuss whether the current code-standard should be dropped in favor of Systems Hungarian. Meet and discuss all you want, just don't bother me with the drivel.
Good rant, enjoyed the read.
|
Eddy Vluggen wrote: Good rant, enjoyed the read.
lol... thanks.
|
Robb Ryniak wrote: I've been programming now for over 30 years, and have used a huge variety of languages - everything from GW-BASIC to Assembler to C++ to VB to C#, and everything in between.
I have been doing it for 40 years and have used Fortran, Basic(various flavors), Pascal, assembly (different flavors), C, C++, C#, Java, Perl, various SQL dialects and various scripting languages.
Robb Ryniak wrote: and the singular most important lesson I've learned throughout all of this is as follows:
You failed to mention maintenance costs and total life cycle costs in anything you said.
Robb Ryniak wrote: Why make the computer perform a CALL instruction (with all the associated stack management for proc address and arguments, etc.) when it doesn't need to? What for? To make it allegedly more readable?? I honestly don't understand the logic behind such refactorings or design methodologies. It seems so wasteful to design so many layers into a project just because some people have a hard time reading longer segments of code.
Which might be because you don't understand maintenance costs and life cycle costs.
The fact that you understand the code means nothing in terms of whether someone else can understand it. And in the vast majority of professional programming, software that actually reaches production will require that someone besides the original programmer understand it at some point.
Software development is not a rigorous discipline, and practices are acquired based on many factors; but because the community is so broad, practices that work become the norm over time.
Your large-method idea is one that even structured programmers rejected long ago, and that rejection is further demonstrated by the wide and rapid acceptance of object-oriented programming.
As with all things, this is not an absolute mandate that everything must be broken into smaller pieces, but the vast majority should be. And at least in my experience, code that fails to do this is often obviously not based on an OO design.
Robb Ryniak wrote: When I was coding on XT machines with a scant couple of Mhz at my disposal, every single instruction mattered... ALOT.
And when I used punch cards, every single key press mattered, since it could take half an hour of turnaround to find even syntax errors.
But those days are past and are irrelevant for most business domains. The most relevant business domain that is even close to that these days is embedded programming and even those are getting away from small space constraints (over the entire field, some areas still require it.)
Robb Ryniak wrote: like trying to make programmer's jobs as easy as possible?
Huh? Because it costs money to produce software. It costs money to fix bugs in production. And it costs money if one loses market share due to long lead times.
Robb Ryniak wrote: I guess I'd like to understand why the quest for performance seems to have been nearly completely abandonded in favor of making code as readable as possible..
Because performance is most significantly impacted by requirements and design, not implementation. Excluding implementation design errors, which are still design errors, performance improvements at the implementation level are almost always in the small percentages, whereas design/requirements problems can result in orders-of-magnitude differences.
|
jschell wrote: I have been doing it for 40 years and have used Fortran, Basic (various flavors), Pascal, assembly (different flavors), C, C++, C#, Java, Perl, various SQL dialects and various scripting languages.
Awesome. Same here, pretty much. You just have 8 years on me
jschell wrote: You failed to mention maintenance costs and total life cycle costs in anything you said.
You're right... I didn't say anything about that.
jschell wrote: Which might be because you don't understand maintenance costs and life cycle costs. The fact that you understand the code means nothing in terms of whether someone else can understand it. And in the vast majority of professional programming software that actually reaches production will require that someone else besides the original programmer must understand it at some time.
While I didn't mention anything about total cost for maintenance, it doesn't mean I don't understand or value the need for maintainable code from a cost perspective, nor does it mean that I don't value having code that is readable by others. On the contrary, maintainability, TCO and readability are all extremely high on my list of what I value. I didn't mention it in my OP simply because it is... in my opinion... secondary to design and implementation that considers performance (accuracy and stability assumed) as the higher priority. If I am faced with the choice between code that executes twice as fast and code that is twice as readable, I choose performance every time... but then comment appropriately.
For example, I had a project about 10 years ago in C++ that required some software-based image processing. Performance was really critical, and the math could have been done step by step, calling various other functions to make the code really, really readable... but putting a particular transformation on ONE LINE helped with compiler optimization to the point where it ran literally 10x faster. So, I wrote it BOTH ways... the "easy to read" way that perfectly expressed the logic of the algorithm, and the ugly but fast-executing way, and left the "easy to read" way as a comment block in order to aid future coders in seeing what was actually going on.
In contrast, there's a fairly popular game out right now that is written (AFAIK) in a highly tiered OO approach, and despite its charming 8-bit graphics and blocky environments, comparable to the technology of the classic DOOM/DOOM II era in the 90s, it is so inconsiderate of performance issues that it can bring a Core i7 with dual top-end graphics cards to its knees... that one's a real head-scratcher.
jschell wrote: Your large method idea was one that even structured programmers rejected long ago and that rejection is further demonstrated by the wide and rapid acceptance of Object Oriented programming. As with all things this is not an absolute mandate in that every thing must be broken into smaller pieces but the vast majority should. And at least in my experience code that fails to do this generally is often obviously not based on an OO design.
I wasn't trying to say that large methods were specifically desirable, only that they shouldn't be shunned as "don't ever ever do this" kind of practices. IMO, a given construct should be used when it's appropriate... always. If a piece of functionality would benefit from a monolithic approach, it should be done that way. If a piece of functionality benefits from a highly tiered OO approach, then it should be done that way. I'm advocating a more liberal approach to design and implementation, not a specifically monolithic approach. The right approach for the right situation. Period.
jschell wrote: Because performance is most significantly impacted by requirements and design. Not implementation.
I don't agree at all. I think any design (good or bad) can be implemented with either fast or slow executing code, and my original point was that it seems the practices being widely adopted in the past 5 to 10 years are definitely not in favor of speed of execution, and I don't understand that.
jschell wrote: Huh? Because it costs money to produce software. It costs money to fix bugs in production. Because it costs money if one losses market share due to long lead times.
Now here, you make a good point that I agree with. I just don't know that a more liberal approach (as I'm advocating) is necessarily the antecedent of time efficiency... though it can be, of course. I just don't think it needs to be that way.
modified 10-Dec-12 8:27am.
|