I don't think WinRT/Metro was ever envisioned by Microsoft as a total replacement API for full .NET. After all, WinRT is a subset of .NET. I think the motivation and drive behind WinRT is the iPad. Microsoft saw (as did we all) a huge shift in consumer computing to include lightweight tablet computing (as opposed to "heavy" tablets, which have been around for a very long time, e.g. Windows XP Tablet Edition.)
Microsoft knew that if it didn't want to lose market share to Apple (considering the whole of the computing ecosystem), it needed to compete on the same grounds with the same range of offerings, rather than sticking to "desktop/laptop only". I think that's the real motivation behind WinRT. Smaller, lighter, runs on ARM.
Microsoft also wanted to make sure that, unlike the iPad, their offering had the rather novel convenience of having your portable apps run on both the "WinPad" and the desktop... something that can't be done with iOS. Since they couldn't support a full-blown traditional application model on the tablet (given the storage, CPU, and battery-life differences), they needed to create something new that would work on both platforms... hence, WinRT was born. And sure, Microsoft is pushing WinRT right now, and with good reason: they desperately want the WinPad to succeed. It has nothing to do with abandoning the desktop experience, or abandoning the full .NET framework.
In other words, I am very much inclined to believe that full .NET will continue to be developed by Microsoft, and WinRT is really for lightweight apps that need to be run on both "WinPad" and desktop. (Is anyone else calling it "WinPad", or did I just coin the term?? lol)
Let's say, for "giggles", that Microsoft was actually stupid enough to completely abandon the full .NET framework in favor of the lighter WinRT. What then? Nothing. Fortunately for us, Microsoft isn't the only game in town as far as .NET development goes. I've been fairly impressed by Mono, which offers a variation on .NET that's even cross platform... running on Windows, Linux, Mac, and even Android & iOS. Since Microsoft isn't the only one offering the full .NET "experience", if Microsoft jumps ship for an all-WinRT future (very unlikely IMO), all you'd need to do is switch to Mono, where your C# code should work pretty much as-is.
So, don't worry about it. Continue writing .NET apps until you know you'll need an app to run on both platforms. Then use the WinRT subset for that particular app. I think (and apparently Microsoft does as well) that there's room in the computing ecosystem for both full-on .NET desktop apps and lightweight WinRT apps. In short, WinRT is an addition to our computing ecosystem, not a replacement for full .NET.
Bottom line is that Metro is for "Windows 8 Apps", not "Windows Apps"... the latter being "real" Windows apps as we all know and love, and the former being lightweight apps that run on BOTH the desktop and WinRT. For applications appropriate for the portable space, that's probably the way to go... the *only* way to go if you want to target WinPad (Windows 8 tablets). For Line of Business applications that require more of a desktop experience, the full .NET is probably the way to go. (If portability is needed in such cases, I could see making a "light" version of an app with simpler controls and fewer features with Metro as a side-by-side addition to the main desktop app written against full .NET... that's pretty common these days for road warriors with iPhone apps that let them do some things for work while on the road, while having full features at their desk.)
Regarding WinRT appearing on the desktop, it does indeed, but only for Windows 8. (AFAIK there will never be a WinRT mode for Windows 7, Vista, or XP.) This is done so that apps written for WinPad will also run on the Windows 8 desktop... it doesn't mean that WinRT is the "real core" of the Windows 8 desktop, or even the most important aspect. It's just something added to the desktop experience to let you use your Metro apps on both types of devices. This would be analogous to Apple creating a means to let iPhone apps run natively on Mac OS X.
So, again, I figure it's not a replacement API, but a new additional API that has its purpose, just as the full .NET experience also has its purpose.
No. Of course it's not true. I often like to lie to people in answers because I'm a sadist who wants them to waste time.
Of course it's true. I know it because I'm doing development with WPF on Windows 8 desktop using WinRT functionality where I can, and I've run up against the limitations. For instance, you can't use the WinRT camera APIs. You can't access the contracts. There are many more things you can't do. I believe MS has documentation somewhere on what APIs can be used.
*pre-emptive celebratory nipple tassle jiggle* - Sean Ewington
I am trying to create a UML class diagram editor and a generator of class files based on the diagram (in PHP, but open to supporting other languages) as a school project, and I am having difficulty designing my classes to fit the project.
For example, suppose I create a UMLEntity base class (these are the items that can be added to the diagram), which is extended by UMLClass and UMLInterface, both of which contain UMLMember items (UMLProperty, UMLMethod). The problem is that UMLEntity and UMLMember both have modifiers (public, protected, private, static, abstract, final, etc.), but which modifiers are allowed depends on the type (e.g. an interface can only have public members, its members cannot be static, and an interface itself cannot be static). So maybe I should automatically set those members as public, but how?
Can anyone help me figure out a better way to handle this scenario?
Or, if you were the one developing this project, could you share the approach you would take?
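To make the question concrete, here is one way the hierarchy described above might look. This is a minimal Python sketch of one possible approach (the implementation language is still open, and everything beyond the names UMLEntity, UMLClass, UMLInterface, and UMLMember from the question is an assumption): each entity subclass declares which member modifiers it accepts and which it applies by default, so the per-type constraint lives in one place.

```python
class UMLMember:
    """A class/interface member; UMLProperty and UMLMethod would subclass this."""
    def __init__(self, name, modifiers=()):
        self.name = name
        self.modifiers = set(modifiers)

class UMLEntity:
    """Base class for anything that can be placed on the diagram.

    Subclasses override MEMBER_MODIFIERS / DEFAULT_MEMBER_MODIFIERS to
    express what their members are allowed to be."""
    MEMBER_MODIFIERS = {"public", "protected", "private",
                        "static", "abstract", "final"}
    DEFAULT_MEMBER_MODIFIERS = {"public"}

    def __init__(self, name):
        self.name = name
        self.members = []

    def add_member(self, member):
        # No explicit modifiers: apply this entity type's defaults.
        if not member.modifiers:
            member.modifiers = set(self.DEFAULT_MEMBER_MODIFIERS)
        bad = member.modifiers - self.MEMBER_MODIFIERS
        if bad:
            raise ValueError(
                f"{type(self).__name__} members cannot be {sorted(bad)}")
        self.members.append(member)

class UMLClass(UMLEntity):
    pass                                        # accepts the full modifier set

class UMLInterface(UMLEntity):
    MEMBER_MODIFIERS = {"public", "abstract"}   # public only, never static

# An interface member with no explicit modifiers defaults to public:
iface = UMLInterface("Drawable")
iface.add_member(UMLMember("draw"))
print(sorted(iface.members[0].modifiers))       # ['public']
```

With this shape, "interface members are automatically public" falls out of the defaults rather than being special-cased in the editor.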
I am trying to create a UML Class Diagram Editor and Generator of Class
That is a big project, so hopefully you have severely limited the scope so you actually have a chance of succeeding at something.
The 'descriptor' class defines the entity but it doesn't validate it. Validation occurs only at input, so in your case as part of the GUI itself.
Each descriptor has a type ('class', 'interface') and a collection of methods. Methods, which also have a descriptor (with a type of 'method'), have an access modifier that specifies what the access is.
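A minimal sketch of that idea, for contrast with a validating class hierarchy (Python used purely for illustration since the project language is still open; Descriptor, ALLOWED_MODIFIERS, and validate_member_input are invented names, not part of any framework):

```python
# Which modifiers each kind of parent may accept on its members. The
# descriptor itself never consults this table; the input layer does.
ALLOWED_MODIFIERS = {
    "class":     {"public", "protected", "private", "static", "abstract", "final"},
    "interface": {"public", "abstract"},   # interface members: public only
    "method":    {"public", "protected", "private", "static", "abstract", "final"},
}

class Descriptor:
    """Plain data holder: it describes the entity, it does not validate it."""
    def __init__(self, kind, name, modifiers=()):
        self.kind = kind             # 'class', 'interface', 'method', ...
        self.name = name
        self.modifiers = set(modifiers)
        self.members = []            # child descriptors, e.g. methods

def validate_member_input(parent_kind, modifiers):
    """Called at the GUI/input boundary before a member is attached."""
    allowed = ALLOWED_MODIFIERS[parent_kind]
    bad = set(modifiers) - allowed
    if bad:
        raise ValueError(f"{parent_kind} members cannot be {sorted(bad)}")
    return set(modifiers) or {"public"}   # default unspecified members to public

# Usage: the editor validates at input time, then stores a plain descriptor.
iface = Descriptor("interface", "Drawable")
mods = validate_member_input("interface", [])   # defaults to {'public'}
iface.members.append(Descriptor("method", "draw", mods))
```

The design choice here is that the descriptor graph stays dumb and serializable (handy when generating PHP or any other language from it later), while the "interface members must be public" rules live in one table consulted by the GUI.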
That's not the way the forums work. Presenting information like that would require a fair bit of code and description, and that's something that should be done through an article. The way it works is, you try something; if it doesn't work, you come here for help with the code that you've put together and we help fix that code.
Ok, I've been doing a lot of reading on these forums lately. I've noticed that a lot of people are making a lot of noise over various coding practices, calling them a "code smell". For example, "never have more than 5 fields in a class, or it's a code smell and should be refactored", as if such advice were "Bible-tastic Goodness". I honestly don't understand why the whole of development has shifted into extremely compact functions/methods as a seemingly "unbreakable rule", where if a class (or God forbid a single method) does more than one thing, it's "bad".
Granted, if you're dealing with inexperienced coders on a team, it's not ideal to let them run amok with hundreds of lines of code without encouraging them to keep things manageable... so in that context, I do see some value in keeping methods and classes compact. However, if you assume a project must meet the same objectives either way, then this requires that functionality must get broken up into an extremely complex tree of various classes, etc., where methods call methods call methods, ad infinitum. My point is that engendering such practices comes at the price of performance.
I've been programming now for over 30 years, and have used a huge variety of languages - everything from GW-BASIC to Assembler to C++ to VB to C#, and everything in between. I've designed systems that run manufacturing facilities, and I've written games that have done well enough to garner millions of downloads. I've been doing this a LONG time doing a LOT of different things with it... and the singular most important lesson I've learned throughout all of this is as follows:
Never, ever, make the computer do more than is required to achieve the desired results; and accomplish this with code that is clear to someone unfamiliar with it via consistent styling, clearly thought out comments, and self-explanatory naming.
This means never refactoring functionality into a separate method just because one method is getting a bit lengthy, assuming that functionality is NEVER needed anywhere else. (And the minute it is actually needed elsewhere, then it's time to move it into a separate procedure.) Why make the computer perform a CALL instruction (with all the associated stack management for proc address and arguments, etc.) when it doesn't need to? What for? To make it allegedly more readable?? I honestly don't understand the logic behind such refactorings or design methodologies. It seems so wasteful to design so many layers into a project just because some people have a hard time reading longer segments of code.
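The CALL instruction the paragraph above objects to really does show up as soon as code is split into a helper; here is a small sketch of that using CPython's `dis` module (Python is just a convenient way to inspect compiled code; the function names are made up, and whether that call actually costs anything measurable depends on the language and compiler, since many compilers inline small functions):

```python
import dis

def add_inline(a, b):
    # Arithmetic done directly in the body: no call emitted.
    return a + b

def helper(a, b):
    return a + b

def add_via_call(a, b):
    # The same arithmetic, routed through a helper function.
    return helper(a, b)

def has_call(func):
    """True if the function's bytecode contains a CALL* instruction."""
    return any(ins.opname.startswith("CALL")
               for ins in dis.get_instructions(func))

print(has_call(add_inline))    # False
print(has_call(add_via_call))  # True: the extra hop appears in the bytecode
```

This only demonstrates that the call exists at the instruction level, not how much it costs; measuring that would take an actual benchmark.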
Maybe I'm just too "old school" for my own good, but it seems to me that good design starts with making a system perform as well as possible with code that is as readable as possible, and in that order; not by following a series of "laws" that result in code that may be more readable to the masses, but runs orders of magnitude slower.
When I was coding on XT machines with a scant couple of MHz at my disposal, every single instruction mattered... a LOT. It just seems that so many of the current "Best Practices" don't really give much consideration, if any at all, to the performance of the end product... and I honestly just don't get why. Sure, one could always just throw more hardware at a design that is more complex than it needs to be, but why should we?
I know this has turned into a bit of a rant, and for my first CodeProject post, that's probably not the most appropriate... but I sincerely would like to know why so many "best coding practices" strike me as focusing on the wrong things... like trying to make programmers' jobs as easy as possible? After all, it will never truly be easy to be truly skilled at programming - it takes practice, no matter what policies we adopt. I guess I'd like to understand why the quest for performance seems to have been nearly completely abandoned in favor of making code as readable as possible... and I'd quite sincerely like to hear some really good reasons to accept such policies as even remotely reasonable.
I've noticed that a lot of people are making a lot of noise over various coding practices, calling them a "code smell". For example, "never have more than 5 fields in a class, or it's a code smell and should be refactored"
Most of those have learned the 'rule of thumb' without understanding what they were learning. And yes, I get the feeling that it's getting worse every year.
Robb Ryniak wrote:
and I'd quite sincerely like to hear some really good reasons to accept such policies as even remotely reasonable.
Hehe, not from me; any policy that cannot be explained is redundant. If it does not help me in doing my job, the policy is ignored - I'm paid to work, not to follow policies.
It'd be as useful as having a yearly meeting to discuss whether the current code-standard should be dropped in favor of Systems Hungarian. Meet and discuss all you want, just don't bother me with the drivel.