This is the final post in the series on the Ultimate Coder Challenge: Going Perceptual. The applications are in and now it's do or die. The biggest challenge to our contestants at this point isn't the code, or the SDK, or their idea. It's our hardware. It's the installers. It's our ability to understand what they were trying to do, which has proven difficult in some cases.
Perceptual computing is a form of interacting with a computer in a more natural, human way. You speak or you gesture, or you may even want to just swipe. It's not about the keyboard and mouse. The biggest problem is that what we think makes a good method of interaction (cue Minority Report) is actually a terrible way to interact. It's tiring, there's no feedback, and gestures are in 3D, not 2D, and not constrained to a box or button, and so can be ambiguous. Further, a touch gesture has a definite start and stop. It starts when you touch and ends when you stop touching. When does a gesture start and stop?
Further, and as has been mentioned elsewhere, there's no standard. We all know how to pinch-zoom, or swipe to scroll, or even push a button. How do you push a button in 3D space? How do you pinch to zoom without the initial contraction of the fingers being misinterpreted as a shrink action first?
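The missing start/stop is usually solved in code with hysteresis: a gesture only "begins" once hand movement exceeds a high threshold, and only "ends" once it drops back below a lower one, so noise in between doesn't flicker the state. A minimal sketch, assuming a per-frame hand-speed value (the class and thresholds are illustrative, not from any contestant's code):

```python
class GestureGate:
    """Hysteresis gate: avoids flickering between 'gesturing' and 'idle'."""

    def __init__(self, start_thresh=0.5, stop_thresh=0.2):
        self.start_thresh = start_thresh  # speed needed to begin a gesture
        self.stop_thresh = stop_thresh    # speed below which the gesture ends
        self.active = False

    def update(self, hand_speed):
        if not self.active and hand_speed > self.start_thresh:
            self.active = True   # gesture starts
        elif self.active and hand_speed < self.stop_thresh:
            self.active = False  # gesture ends
        return self.active

gate = GestureGate()
# Mid-range speeds (0.4, 0.3) keep the gesture alive once it has started
states = [gate.update(s) for s in [0.1, 0.6, 0.4, 0.3, 0.1]]
```

Using two thresholds rather than one is the whole trick: a single cut-off would toggle the gesture on and off every time hand speed hovered near it.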
So on to the challengers:
Lee's work I've written about extensively, usually while ruefully shaking my head at the madman. He shamelessly bites off way more than he can chew and, after endless dead ends, delays, setbacks and possibly a broken keyboard or two, he comes out with something a little special. His virtual conference is, for me, an excellent example of what's possible with perceptual computing: a UI that revolves around you. You're in a virtual 3D environment talking to others and as you turn, the viewport turns with you.
Except there's one huge problem with this approach: in a video conference you never turn. If you turn then you're no longer looking at the screen in front of you. Further, as you turn, the mass-tracking (not head tracking) turns the virtual camera in the direction that you're turning, meaning the image on the screen pans, meaning, well, that you don't actually need to turn to view it. Kind of like an anti-Catch-22.
It's very cool though. Very cool.
Sixense's puppetry demo is complete. It's the ultimate kids' toy and allows two people (or one talented person) to host and record a virtual puppet show. It works, but I can't help but feel that so much effort went into things like scenes and a story that the creators forgot what puppet shows are about. Dialogue. Well, and violence, for those who fondly remember the Punch and Judy days, but mostly dialogue and often costumes. So why not ditch the fancy scenes and instead allow quick changes of clothes? Or have amusing sound effects when one puppet bonks another puppet on the head? That would focus the players on thinking about their puppet rather than trying to control a puppet that keeps flying all over the countryside.
It's an excellent example of 3D free form gesture based interaction with a computer, and for that they score highly. Most importantly their user feedback on interpreting hand gestures is by far the best of any contestant.
Code Monkeys' Stargate application shows promise in (for me) the ultimate goal of a head-tracking video game. Unfortunately the reality never lived up to the dream, and the sheer effort involved in trying to control the targeting quickly took away the luster. I did get it to work, but the headache afterwards was not worth it.
Infrared5 followed a similar path to Code Monkeys with their Kiwi challenge, however they introduced an extra variable by having the control of the application be done via a smartphone application. The connection between the game and the smartphone was seamless and slick. I could not, however, control anything but the fire button from my iPhone, which made the game unplayable for me. Restarting didn't solve the issue. Further, I often ended up with a black landscape in my viewport that no amount of yelling, tilting, clicking or swiping would get me out of, and closing the app via alt-tab (there's no close button I could find) was near impossible because no sooner had you popped out of the app than you were thrust back into it, black soulless void and all.
Pete has created an image processing application that uses a series of gestures to activate filters. His biggest problem is that there is no defined, standard set of gestures one can call on to immediately dive into his app. There's no help button, so you're left guessing at gestures. Thumb up and down, swiping up and down, and in my case swearing like a sailor. An excellent attempt at making gesture input a natural part of the interaction with the application, but let down, I think, by the immaturity of the platform.
Eskil has demonstrated an abstraction layer that allows developers to take advantage of perceptual computing without needing to write the boilerplate code. In fact, without needing to know anything about the nuts and bolts at all. This is a tremendous achievement given the short time available.
A short note on the perceptual computing camera: I'm typing this review on my Lenovo Yoga with the camera perched on the top of the screen. The camera is heavy, as I've mentioned before, but it's only now, after hours of clenching my fist, twiddling my fingers, bobbing my head frantically and yelling 'Engage' in various accents to try and interact with the game, that I've realised I'm getting tired of always hiking the screen back up to the vertical position. The camera makes the lid sag due to the weight, and when it's not sagging it's trying to tip the laptop base over apex. It needs to be lighter and it needs to be way smaller.
Another general comment on the use of the camera as an input device is that onscreen feedback is critical. If you don't get feedback on what the computer thinks you're doing you go nowhere. You can't debug. I cannot overstate how important feedback is.
Also, I need to comment on the Lenovo itself. I love the feel of the keyboard and most especially the palm rest. So very comfortable. But please, to anyone who is thinking of manufacturing a laptop keyboard, DO NOT reduce the width of the right shift key and squeeze in the up arrow in the space created. It means I'm constantly hitting the up arrow. Constantly. It's doing my head in. Apart from that the screen is crisp, the battery life good, and the flip-back keyboard weirdly useful. Propping this thing on a table with the keyboard folded back to watch a movie or flick through the news is brilliant.
As to perceptual computing, my overwhelming feeling is "it's coming". But it's not here yet. We are trying to make a computer be like the real world. You turn or look or speak or grab at something that isn't there in the hope that the computer will mimic or replicate your intent virtually. This, to me, is akin to skeuomorphism: changing something to be like something else, like Apple making its calendar application leather-bound, or having ebook readers show an animated page turn. A computer is not the real world, and it doesn't have the limitations of the real world (so to speak), so why mimic it?
I think the future of perceptual computing will most likely be subtle. The computer may recognise you or your voice, and will recognise when you are in front of the computer, when you're looking at it, and when you're not. The end of screen savers, really, since it would just go to sleep. Gestures such as brushing away an app to close it, or flicking it to move it to another screen, would be intuitive, and voice recognition would allow utility commands such as searching or bookmarking to be carried out without needing to take your hands off the keyboard or, indeed, leave the current application.
Gaze tracking is another incredibly important, yet not currently functioning (at least on the hardware I have in front of me) feature that has a myriad of uses. My immediate use would be for eye tracking in UI testing, but even simpler could be things like auto-scrolling, or even auto-hiding of elements when they are not being looked at directly. The eye does, however, jump around an awful lot, so the smoothing algorithm will need to be heavily weighted.
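"Heavily weighted" here could be as simple as an exponentially weighted moving average that leans hard on the previous estimate, so a one-frame saccade barely moves the output. A sketch of the idea (the 0.9 weight is an illustrative guess, not a tuned value from any SDK):

```python
def smooth_gaze(samples, weight=0.9):
    """Exponentially weighted moving average over raw gaze positions.

    A high weight on the previous estimate damps the eye's rapid,
    jittery saccades; only sustained fixations move the output."""
    current = samples[0]
    smoothed = []
    for s in samples:
        current = weight * current + (1 - weight) * s
        smoothed.append(current)
    return smoothed

# A one-frame jump from 0.0 to 1.0 nudges the estimate to only 0.1,
# and it decays back towards 0 on the following frames.
trace = smooth_gaze([0.0, 0.0, 1.0, 0.0, 0.0])
```

The trade-off is latency: the same weighting that swallows saccades also makes the cursor lag a deliberate glance, so the weight would need tuning per use case.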
The contestants achieved some incredible feats of patience, innovation, creativity and problem solving. I take my hat off to them for their perseverance and outright foolishness in taking on a challenge that many of them were so unprepared for, yet so willing to have a go and rise to the occasion. Rise they did, so well done, guys. Get some rest and have a beer. You've earned it.
A thoughtful and interesting conclusion to the series Chris. The one thing I really felt I was missing was having a decent graphic artist so that I could do some decent on screen tutorials. This is an area that really did need someone with more artistic ability than me.
I was brought up to respect my elders. I don't respect many people nowadays.
This week is the final week of blogs by the challengers. I won't go through individual entries since pretty much all of them are at the wrap up / polish stage.
The overwhelming feeling you get when reading the blogs is one of a challenge accepted and, almost, tamed. This is uncharted territory for the contestants and that territory is not paved smoothly: the SDK is in beta and the capabilities of the hardware are still limited. There are a lot of things that would be great if they worked, but they don't. Sixense wanted to have their Big Bad Wolf puppet blow the little piggy puppet's house down by blowing on the mic. Blowing on a mic is not a recognised input, so it can't happen. Code-Monkeys (and many others) wanted head tracking - or even gaze tracking - but the hardware simply isn't up to it at the moment. Infrared5 simply want more grunt from the Lenovo.
It's close. Really, really close, and while the contestants were not always able to achieve their first order approximation of what they wanted, they have done exactly what good developers do: focus on the outcome they want, then work backwards. Instead of finger tracking you use thumb tracking, instead of eye tracking you use head tracking, instead of head tracking you use body mass tracking. Instead of tracking everything, just do your tracking work on the object (e.g. a hand) you want to track and ignore all other input data. Speed improvements came quickly, as did a usable (but maybe not perfect) solution.
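The "only track the object you care about" trick can be as blunt as cropping each frame to a region of interest around the last known position before running any expensive detection. A rough sketch, assuming a frame is a plain 2D grid of pixel values (this is not an SDK call, just the shape of the idea):

```python
def crop_to_roi(frame, roi):
    """Restrict expensive tracking work to a region of interest.

    frame: 2D list of pixel values (rows of columns)
    roi:   (top, left, height, width) around the last known position"""
    top, left, h, w = roi
    return [row[left:left + w] for row in frame[top:top + h]]

# Tracking a 100x100 patch around the last known hand position instead
# of a full 640x480 frame means roughly 30x less data per update.
frame = [[col for col in range(6)] for _ in range(6)]
patch = crop_to_roi(frame, (1, 2, 3, 2))  # 3 rows, 2 columns
```

The usual refinement is to grow the window again whenever the tracked object escapes it, which is exactly the graceful-degradation spirit of the paragraph above.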
The point is computing power is always increasing, the SDK will only improve, the hardware will become more refined and more responsive (and offload much of the software based processing) and we will get there. Quickly.
We get the final (final) versions of the apps soon and it will be then that we, the judges, have to dive in and ask ourselves two basic questions:
1. What is perceptual computing?
2. Who knocked it out of the park?
Thanks for all your encouragement over the last few weeks Chris. Reading the judges' comments has been a real highlight over the weeks and I would have to say that last week's was one of the funniest posts I've ever read, so kudos for that.
With the competition, I tried to stay with what was in the SDK, and I stand in awe of what the other competitors produced. As for point 2, the answer (to me) is easy - Lee. Clean out the park, and 3 counties of clearance.
Week 6 and we're almost done. GDC[^] has been and gone and unfortunately I was not able to attend due to a small matter of our CodeProject.TV[^] launch (where's the blink tag when you need it?). While missing GDC itself almost brought a tear to my eye, the knowledge that I was missing out on the beers Pete and Chip have threatened me with made it particularly painful.
I received my Interactive Gesture Camera from Intel last week and have hooked it up. Dual cameras for depth perception, dual microphones for voice recognition, and an SDK that ties it together.
And it's heavy. Really weirdly heavy for such a small camera. This isn't a bad thing though, because it means when it's sitting on the top of your monitor it's very stable, and the picture quality compared to my old Logitech is much better, with far fewer in/out refocussing issues than I had with my old webcam. It is kind of weird having it sitting there, staring at me with those two dead eyes. Evaluating me. Scanning me in the infrared. Knowing where I am, where I'm looking, what I'm saying. Intel itself does absolutely nothing to make me feel more comfortable with their disclaimer stating:
The Camera may not be used in any “mission critical application” in which the failure of the Camera could result, directly or indirectly, in personal injury or death
Injury? Death? This thing is going to sleep in the garage from now on.
Anyway, to the challengers, or those that have not been taken hostage, injured or possibly killed by their cameras. I say this because 2 of the challengers have not submitted blog postings and I've not heard anything from them. Their muffled screams are probably still echoing against the backdrop of a small, blinking green light coming from the tiny black dense camera on their bloodstained laptop.
It sleeps outside tonight. I don't want it talking to my car, whispering to it. Subverting it.
Lee[^] enjoyed GDC and ensured Intel got their hotel bill's worth by spending an inordinate amount of time in his room cranking code. Lee's understandably at the point of polishing, and at the point of taking stock of the reality of gestures. They all sound great, but how do you provide feedback for a gesture driven UI? How do you let the user know the difference between a gesture that does something, a gesture that does nothing, and a gesture that was not understood? And how do you educate your users on gestures? He's basically done, so on to testing.
Sixense[^] demo'd their puppet show at GDC and they too are at the point of polishing and introducing a little realism. Not much more to say on them.
Code-Monkeys[^] are getting desperate and are quoting Gene Simmons and resorting to tongue tracking. I'm not going there. I'll just quote the man himself:
Life is too short to have anything but delusional notions about yourself.
Infrared5[^] used GDC as their own private beta testing ground which is perfect. There must have been something in the beer at GDC though because they've left the reservation and are now focussing on foot tracking. I'm a bare-feet kinda guy myself so I'm looking forward to testing next week.
Pete[^] is in lock-down mode, that time in any application where you just have to say "no more". He's introduced some very nice gesture and voice UI - voice control to set filters, shake to add a blur effect (very cute) and gestures such as swiping your entire hand right to left to smooth. I love it - very, very intuitive, almost natural. AC/DC and some Twisted Sister. Nice.
Eskil[^] obviously enjoyed GDC and his update this week is primarily about the details behind head tracking.
Overall the contestants seem to be ready. There's been a lot of collaboration and sharing of ideas and code. It's a contest, but they're all in it together and definitely enjoying themselves.
As to us judges? There isn't going to be a lot of enjoyment in the judging. There's some quality work here and it will not be easy.
Week 5 and we're starting to see some rounding out of the finished creations. For us judges it's also the week we start getting the hardware to test, and my Lenovo Yoga is in my hot little hands, getting belted around and abused, as happens to all my toys. It's a very, very solid, though uninspiring unit. It is a Lenovo, after all, but what it does it does well. Great screen, lovely tactile feel on the keyboard, excellent battery life, but boring as bat-poo. It's the Toyota Camry of laptops - solid, reliable, no nonsense without offending anyone, but you're not going to scare anyone with it.
I do, however, want to slap the person responsible for the trackpad. It's awful.
Danny at Sixense[^] has shown his hand-puppet wolf wandering around a 3D backdrop. In my mind they've completed their task and the rest is polish. Using only a camera and an Ultrabook you can buy off the shelf they've created a method of interacting and controlling software using complex gestures. Sure, we've had this on the Kinect for years, but this is new to laptops and beats some other gesture based controls[^] that the media seems to be going nuts over lately. Nice one.
Lee[^], too, is at the polish stage and has some words of wisdom about voice recognition: it doesn't work all that well but be a little clever and it'll work just fine given some context. This is the story of every developer's life, I think.
Soma[^] is deep into the task of rethinking their UI. They've tried the Minority Report style UI but it really is a little tiring and, well, unemotional. They continue a theme on the performance issues they have faced, specifically voice control and speed of recognition. There's a reason Siri needs to be connected to a server to do voice recognition: it's a heavy workload. So they are getting there, but we're now seeing the compromises and trade-offs coming into play.
Infrared5[^] get an automatic 2 point bonus for including two references to AC/DC. They have implemented a face tracking solution by handling perspective correction and depth analysis themselves, in C++, using actual mathematics. Bonus 5 points right there. They are tackling the immediate problems at hand with crafted solutions, and focussing on perceptual computing rather than using perceptual computing as a bit of gravy.
Pete[^] has posted a video of his app's progress and I need to ask him one small favour: show us you in the video, or more specifically, show the gestures you're using to control the app. He's also struggling through the Dark Forest Of Feature Trade-offs and is feeling that his app is becoming less PC-focussed and more of a touch app with gestures.
This is not a bad thing at all. Samsung have implemented gesture controls not to save wear and tear on finger tips, but because sometimes you can't touch-swipe. If you're wearing gloves (medical, outside work, it's cold, etc) or have dirty hands (cooking, your 2 year old, you're a messy eater etc) then touch won't cut it, but Perceptual Computing provides that small push that gets over that barrier to interaction. You can again use your computer in a manner very similar to touch, without touching. Not a big thing, and something that you would quickly forget you were doing. And this, in my mind, is the perfect interface: you forget that you're doing it.
I think you're on the right track, Pete.
Eskil[^] has articulated this perfectly: "The goal of any user interface is to disappear" and he's not in the Dark Forest Of Feature Trade-offs, he's in the Swamp of Broken Promises. For him the SDK isn't there yet, not by a long shot. So he's doing what any programmer does and is rewriting chunks. I'm looking forward to seeing how he ties all of this up at the end.
Last but not least, Simian Squared[^] have also reached the epiphany about what gestures promise: a lazy interface that extends gestures. Perceptual computing promises way, WAY more than this, but at its core it also offers very simple things that can be very powerful and helpful. There are no wads of virtual clay splattering the walls of their pottery room - in fact it looks remarkably clean - so I'm taking that as a sign of excellent progress.
Thanks for the thoughts Chris. You're right - the next video will feature me swiping round to demonstrate. One small thing - there's not one Australian band this week, there are three. Bonus points from me for anyone other than CG who's heard all three of them.
We're at week 4 of the Ultimate Coder Challenge[^] and at this point we're starting to see the light at the end of the tunnel. For some that's a scary sight.
Sixense[^] are well on their way to creating a virtual sock-puppet, but one that doesn't have the usual awful connotations of an online sock-puppet. This one is, actually, a sock puppet. To be brutally frank, what they have also done is shone a light onto some of the limitations inherent in the depth camera's abilities, which have forced them to use slightly nonstandard sock puppet hand gestures (see IEEE Std 4802.01 - Sock Puppet Hand Control Standard 1104). It would be a win if they could get past this limitation.
Lee[^] has gone ahead and written Yet Another Video Conferencing system, 'cause, y'know, he has nothing better to do. I know - I just know - that he's hacked his DVR at home to Just Work Better, and his microwave is probably cowering behind the fridge screaming "Make it go away!". He has, however, produced a prototype of a conference system with his 3D avatar injected. I can't help but wonder why he didn't test his virtual teleportation on an assistant[^] first.
Simian[^] focussed mostly on their demo environment. A 3D Japanese themed pottery wheel. Probably best just to think about that for a while.
Pete[^] has switched from Aussie pub rock to Canadian Top 40 with a little Creedence thrown in. I'm of two minds about this. He's also apologising for providing detailed coding explanations, and I'm sorry Pete but you just lost points on this. I want details. I want code. This is a coding challenge by coders for a large coding audience braying for blood. Well, a large coding audience, at least.
Pete's also hit the inevitable Voice Control Brick Wall. I'm guessing, being on the wrong end of voice control far too often, that it could be an accent issue, so I'd be interested to hear what sort of success those with a (reasonably neutral) US accent have had. Accent, to me, is the 21st century equivalent of the Date format. What, exactly, does 6/7/2013 represent without locale context? The same happens with voice. So if Pete can't talk to his app he's going to have his app talk to him. Just please include a Mute button.
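The date analogy is easy to make concrete: the very same string parses to two different days depending on which locale convention you assume, just as the same utterance lands differently on a recogniser trained for another accent. A small Python illustration:

```python
from datetime import datetime

raw = "6/7/2013"
as_us = datetime.strptime(raw, "%m/%d/%Y")  # US reading: June 7th
as_au = datetime.strptime(raw, "%d/%m/%Y")  # Australian reading: July 6th

# Same input, two valid interpretations. Like the date parser, the
# voice recogniser needs the speaker's "locale" (accent) up front,
# or it will confidently pick the wrong one.
```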
Eskil[^] doesn't provide much in the way of concrete progress on the framework he's building, but does provide a walkthrough of his non-OO approach to creating and rendering UI elements. I'll be honest and say I'm not a fan of his approach. OO development helps separate who is responsible for what, and while that may not result in the tersest of code, it does promote maintainability.
Code Monkeys[^] have touched upon something that you can be sure that the likes of Apple, Google and the Kinect team at Microsoft all know: gesture based UIs are tiring. You know why Tom Cruise's character in Minority Report was so ripped? It's because he was doing 12hr days of shoulder and ab work while using those gesture gloves of his. 12 hours? Try 4 minutes.
Infrared5[^] have revealed another little worm in the Apple: gaze tracking has not been implemented in the PC SDK. It will be added later. So what did the guys do? They slammed their foot on the clutch, dropped from C# down to C++, dropped the clutch and left billowing smoke in their wake. This is exactly what I want to see from a contestant: a dammit-I'll-do-it-myself approach to dealing with issues. Now if only they had a little AC/DC playing in the background...
Thanks for the update Chris, and I'll keep the code coming - although I think you'll find that I was apologising that non coders aren't as awesome as we are. My antipodean "rock" last week was Men At Work - damn, but it was real earworm music.
Week 3 and we're halfway through the challenge. Hump week, so to speak. I missed the Google hangout due to jetlag and general mayhem.
Pete[^] is motoring along and getting the gesture control working. This seems an odd statement to write, but a timely one: Pete is writing an application you control through waving your hands and there's no magic, no secret incantations. He's using the same tools we use day in and day out and that, to me, is amazing. There are also no fires or explosions, very little swearing, no tantrums or hissy fits, just constant, solid, back-breaking slogging through the code and getting it done. By himself. Much respect.
Infrared5[^] are bucking a trend of the previous contest with crazy statements like "We were pleased to see that all the tasks we set for ourselves wasn’t too big of a bite to take". Regardless, they too are moving on rapidly and have a demo of their Kiwi Catapult Revenge game available. The biggest challenge for them? Eye tracking, it seems. I'm praying they crack this because I have my own nefarious needs for decent and cheap eye tracking.
Eskil has also released a beta version of his Betray game using his (I'm assuming) framework. His post focusses mainly on UI and some exquisite rendering which screams, to me, too much spare time. If he has the luxury to make the UI as stunning as his examples then he's hiding something up his sleeve. Interesting.
Code-Monkeys[^] are focussing on input control and, to that extent, focussing on simplification. And their demo code is simple. Crazy simple. Work continues.
Simian Squared[^] have threatened to play Unchained Melody[^] which is an automatic failure in my book. Careful lads. Their clay modeller is progressing and while they mention piles of misshapen virtual clay there are no pics. Show us the carnage.
The Sixense guys[^] have their puppets moving! This is wicked. They are moving on to actual story telling next. Serious progress.
Lee[^] continues to bravely and foolishly attempt to change one of the biggest online industries single handedly. Or with two hands, depending. He's not only pushing perceptual computing to the limit but has decided to rewrite the conferencing network code too. He's also showing some vampire tendencies with the rising sun causing him serious damage. I worry, Lee. I really do.
Overall the contestants are plowing ahead and it's amazing to see the progress made. This offers the chance for some really polished presentations at the end and judging is going to be soul searching.
Thanks for that Chris. Have you watched the video from Nicole, Sascha and Steve yet? Worth viewing if you haven't - especially around the 5 minute mark. I'm sorry to say, but I'm going to keep the verbose blog posts coming.
Week 2 in the Ultimate Coder challenge sees the teams settling down to the cold harsh light of reality mixed in with a wonderful dose of reckless abandon.
Sixense Studios[^] had the wind knocked out of them a little after watching Media Molecule[^] demo a PS4 app that mimics their idea. However, they have since realised that their 6 weeks of work can still beat the two years of work, and who knows how many billions, invested by Media Molecule, because while Media Molecule's demo is wicked cool, it's based on pre-recorded movements and not the full physics-based hand puppets they are building.
Lee[^] is continuing his work on transporting you, via the depth perception camera, into a virtual world. I really hope he's watched this movie[^] before he goes too far down that rabbit hole. Watch his video to get a little weirded out by it all.
The guys at Code-Monkeys[^] have totally nailed another issue with the PS4 demo of Media Molecule. The PS4 demo relied on using a wand, and this is akin to using a stylus on a touchscreen. While they demoed an initial cut at their "looks can kill" eye tracking shooter I get the impression these guys are along more to help add as many stepping stones as possible to allow those who come next to reach the lofty goals of the ultimate UI, rather than assume they can create it by themselves.
Simian Squared[^] raise another interesting point that follows on from Code-Monkeys' points: the advent of the touchscreen interface has heralded a new era in user experience and programming is now, more than ever, an art. The programming tools available to us today make the task of development more and more mechanised. Drag and drop, ORMs, do-everything frameworks and convention over configuration mean writing an app is easier than ever. However, writing an app that is a pleasure to use is now harder than ever because we, as users, no longer accept substandard interfaces or a poor experience. Simian Squared are producing a virtual potter's wheel. More than simply creating a system that responds to the position of a few digits, they want to transport you to a new world. They sum up the challenge but also the potential in their application: "a great concept artist will sometimes bend the rules of perspective or light and shadow for impact". The new interfaces available to us today make programming, more than ever, an art.
Eskil[^] continues on his quest to write a hardware abstraction API that's pluggable. Another step along the path to better UIs and (potentially) better hardware. As he writes: it's hard to get someone to buy your hardware if there are no applications that run on it. Abstracting out the API for hardware should mean that writing apps for new hardware is a snap.
Infrared5[^] continue on their quest for an eye motion interface. Whereas Eskil had serious issues with his camera, these guys are waxing lyrical about how well it's performing for them. The joys of pre-production hardware. They also add to the idea that collaboration is the key to success in this challenge. I am getting a little worried at the lack of any actual attacks on anyone's jugular, but it's early days yet and the prize pool is, I'm sure, sufficient to get the red haze settling over the contestants.
Pete[^] is attacking his task methodically and systematically and with an eclectic mix of music. The Angels? Very nice. While others are focussing on the camera Pete's started with voice recognition. Sure, over 65% of human communication is non-verbal (depends on which study you refer to), but I'm not expecting Pete to include emotion detection (yet). Gesture and touch are great for items you can see or touch, but what about those things you can't see or touch? You can ask for something, and then once you have it you can manipulate it via gestures. Voice is important.
The challenge here is to showcase perceptual computing and this means to rethink how we interact with a system at a fundamental level. Sticking to familiar paradigms may make it easier for a person to approach a technology, but it doesn't help them take full advantage of a technology. It holds them back. Touchscreen interfaces never caught on until the hardware and user interface advanced sufficiently to make it intuitively natural to swipe and pinch. The hardware had to be fast and reactive enough that a gentle swipe would achieve a result, and just as importantly the UI presented to the user had to be obvious enough to encourage and respond to these gentle swipes. A stylus retards the use of a touch interface, and a wand retards the progression of a gesture based interface.
What the gesture- and voice-based UI looks like, and how this can be presented to the user in an obvious and natural manner, is what this challenge is about.
Last year saw the Ultimate Coder Challenge pit 6 teams against each other to create the Ultimate App for the Ultimate personal computer - the Ultrabook. The sadists at Intel are back at it with a new twist: create an application that shows off a convertible Ultrabook[^] and/or takes advantage of the Intel Perceptual Computing SDK 2013 Beta[^].
Let me say from the outset that I'm ignoring the "or" in the "and/or" above. The contestants must create an app that shows off the hardware and uses the perceptual computing SDK to have a chance. This means:
1. The application needs to take advantage of the Ultrabook's specific features such as the sensors, the touchscreen, always on/always connected, power management and/or graphics.
2. The application must make sense for a laptop form factor and a tablet form factor.
3. The application must make use of gesture controls, or eye tracking, or voice control, or anything else hidden in that magical SDK.
I'll add a fourth requirement:
The application must make sense as an Ultrabook application
What I mean by this is that an existing application shoehorned onto an Ultrabook, with Ultrabook support tacked on in a way that doesn't harmonise with the original application, will not get my vote.
So, on to the challengers.
Sixense Studios[^] (I keep wanting to hand them a "p") are old hands at the perceptual computing stuff. They've demo'd at Intel keynotes and are developing a virtual puppet application. I will be interested to see how this works in the tablet form factor.
Lee Bamber[^] refuses to back down from a challenge, and this is the third contest I've had the honour of judging him in. His entry will be a virtual conference that will allow you to transport yourself into a 3D world. "ambitious to the point of foolishness" is what he writes. He's mad. I love it.
Simian Squared[^] will be creating a virtual potter's wheel complete with virtual clay. Please note that points will be deducted for any "Ghost" moments that appear in any videos demonstrating the application.
Code-Monkeys[^] continue the primate theme and will be taking their existing Stargate Gunship game and making it fully immersive. Gestures for firing, voice commands to control weaponry and gaze capture for targeting. Gaze targeting is something I feel is going to totally and utterly change the nature of video games and I'm very keen to see how this works. A shooter game that reacts as fast as you can look is going to get crazy. I can feel the headaches already.
Infrared5/Brass Monkey[^]. Again with the Monkeys. This feels weird. They will be creating a 3D FPS using head tracking, facial recognition and voice. This will be a little different in that the angle of your head will change the view on the screen to make it more immersive. Interesting idea, and their art looks killer.
Quel Solaar[^] has decided to make it simple and reinvent the entire PC interface. He will create a game, a data visualizer and a creative tool that will make use of his open source software layer in order to make it "easy for any developer to make use of the diverse hardware available to us". Any input (voice, gaze, gesture), any display (phones, tablets, laptops, workstations) and any hardware configuration. And I thought Lee was nuts.
Our very own Pete O'Hanlon[^] is taking the safe path and creating a voice and gesture enabled image editing application. This seems specifically an effort to show off the perceptual computing SDK rather than show off an application, and I like that. Further, he's using touch as an input, thus being inclusive of the traditional Ultrabook features rather than just plowing on with the sexy, younger, more nubile features of the PerC SDK.
Each week I'll post an update of how the teams are progressing. May the best team win.
Thank you for the reality check on the application needing to work on an Ultrabook to get your vote. I wish more competitions were forthcoming about their real judging criteria; I've wasted time on competitions that weren't. I was going to enter the Perceptual Coding contest but I don't have an Ultrabook. You just saved me a ton of time.
In other words he is only talking about the Perceptual SDK in regards to the Ultrabook challenge, but the inverse, using an Ultrabook with the Perceptual challenge, is not true?
I see the distinction now that you point it out. My worry would still be though that as a judge he'd still be significantly biased in favor of an Ultrabook compatible entry in the Perceptual challenge seeing as that he is admitting that bias. Again, I see from your comment he does not specifically state that for the Perceptual challenge, only for the Ultrabook challenge, but I'd like to hear from Chris himself that he wouldn't favor an Ultrabook entry.
I'm not being pedantic about this. I spent a great deal of time on an entry for another challenge only to find out afterwards that it never had a chance of winning, due to the judge's bias towards a particular class of app. Several judges even told me in an unsolicited manner how much they liked my entry, but from the finalists chosen it became obvious that an app like mine could not win, despite the fact it was in a vertical that was even proposed by one of the judges for the contest in a forum post for suggested entries.
Unfortunately it was one of those releases where, if no one noticed anything different then it was a stunning success.
Under the hood we're working to expand our notion of what a member's account means. For most people it means nothing, but for those who write articles or post messages or who want to actively participate - and this is a lot - then your account is your spot, your area, your personality.
The question we've been asking ourselves ever since we launched RootAdmin[^] is: do we keep separate accounts for separate sites, or do we combine them? Initially the answer was a clear "separate accounts", since what someone says about themselves on one site may not be relevant for another site, or conversely: someone may choose not to say something on one site that they would say about themselves on another.
However, counter arguments were that you are who you are, and biographies don't have to always be about the site. They should be about you. Your picture is your picture, and your display name should be unique across sites, not just on one site. Otherwise your persona may be spoofed on another site without your knowledge.
Further, we've now added CodeProject.TV (currently in Beta) and we very much want what someone does on CodeProject.TV to appear on CodeProject, and for their reputation and expertise on CodeProject to be reflected on CodeProject.TV.
So we're steadily moving towards having your Account live in the network of sites, not within a site itself. Each site will continue to have a site specific profile that talks about the number of posts or articles you've posted, but you will be you across all sites.
In working towards this we've embarked on a plan to throw away large chunks of code. Recklessly, joyously, we cut the code loose and bind the ends up with electrical tape, like any good Engineer. What we'll end up with is a CodeProject made of services, not of modules and DLLs. A CodeProject whose parts can be mixed and matched and used in many places for many different things by many different systems. We started this process back in October (yes, the time that we temporarily disabled voting in the forums) and today's code drop represents the next major step in that migration.
We turned off voting a few weeks ago because of a load issue. Things have been a little hectic so fixing the issue has taken some time, but it also allowed us to see how the community fared without voting.
Quite nicely, as it turns out.
There are, however, two exceptions to this.
1. It drove me crazy that I could not upvote someone in The Lounge[^].
2. It drove me crazy that there was no way to warn people away from poor discussions in the discussion forums other than via the hammer called the reporting flag.
In doing this I had the opportunity to rework things a little, so I added a few options to the voting, two of which are up/down-only voting (we had this, but in a different form) and up-vote-only voting.
We'll see how it goes and continue to season to taste.
2. It drove me crazy that there was no way to warn people away from poor discussions in the discussion forums other than via the hammer called the reporting flag.
Since downvoting is not available in The Lounge and The Soapbox (and I support the decision to keep it out of both), how are you going to achieve that?
Those two forums are typically the ones that have poor discussions.
"When you don't know what you're doing it's best to do it quickly" - Jase #DuckDynasty
I've added the ability to turn off email notifications for articles and forums. You already have the ability to set your defaults to not allow private email replies to your messages, but this extends that so that at any time you can turn email notifications on or off globally.
Lee, John, George & Suresh, Sagar, Shailesh and Andreas have submitted their works, their creations, their results of endless sleepless nights and possibly a fair bit of cursing and we, the judges, have the task of picking the apps to pieces with a small pair of tweezers. Metaphorically.
The original task for the contestants is to "create apps that take full advantage of the performance advances, graphic excellence, touch and sensor technologies of the latest Ultrabook™ computers". That's fairly broad, and I would add that a critical component of the challenge is to showcase the Ultrabook.
The Ultrabook is a new device, the love-child of an ultra-light laptop and a tablet. The operating system of choice, and in fact the only one that currently takes full advantage of the hardware, is Windows 8, and Windows 8 fully reflects the Dr Jekyll and Mr Hyde nature of the unit. It's a laptop. Though if you ignore the keyboard and hold it awkwardly it's a tablet. Yet it's a PC. A fast, light, energy efficient, peripherally rich and accommodating computer that does everything you expect from a laptop, and oh so much more.
To showcase an Ultrabook, then, one needs to showcase the operating system to allow the operating system to showcase the Ultrabook, and when I think of something being showcased I expect to see something unexpected, maybe contrived, but above all, something entertaining and possibly educational.
So I want to be entertained and educated by these applications. I want to run the applications and, from them, understand what an Ultrabook is.
LoveHearts is a social message game with a couple of games within the game. Lee went to extraordinary lengths to port his OpenGL based framework to DirectX, and succeeded, give or take having to downgrade a video driver. His application takes advantage of the touchscreen, light sensor, NFC, the compass and features such as notifications.
It's a technical marvel. It's a triumph of sheer bloody mindedness over common sense. It's a monument to perseverance. It is not, however, an application that makes any sense to me. You swipe the wrapper, you get a token, and a small piece of candy appears. You touch that (for want of anything else to do) and it floats to the top of the screen. Touch an item at the top of the screen and various actions can be taken such as sending a message, reading (and sending) jokes and poems, or playing a game. There is a bug in the app and sometimes, no matter what item up top I press, the train game appears which, after trying a dozen times and watching Lee's video, I still have no idea how to play. No matter what I do the train careens forward with a mind of its own.
The idea behind Shufflr is an interesting one. You are presented with a series of videos potentially of interest to you. The Ultrabook twist is that it would be touchscreen enabled and would use the tilt sensors to shuffle backwards and forwards between videos. Add to it the potential for transferring information via NFC, using the ambient light sensor to make it easier on the eyes, and maybe WiDi to throw the video onto your TV and you have a neat app.
In judging this application I came across several serious glitches: launching it would show the launch screen, then I'd be thrown back onto the Start screen. Launch again and it would tell me it was logging me in, and then I'm thrown out again. Rinse, repeat, and eventually after a few restarts I'm in. The first screen provides an overlay with the various gestures. This is incredibly important, and the #1 issue I have is that once you dismiss this screen you are unable to find it again. I was, frankly, lost trying to control the app. Shuffling the videos works fine, though pinch to zoom doesn't. Shaking works to reshuffle, but care must be taken when holding the Ultrabook on your lap because leaning even slightly will trigger a video swap. Too bad if you were enjoying the show. The two modes - DailyFix and Flipside - could be highlighted far more than they currently are. This, to me, is a failing of minimalist design: it took me a good half dozen uses of the app to realise that the " DailyFix FlipSide " words at the top left were actually links that, when clicked, changed the app mode.
One final niggle: when viewing the start screen, Shufflr displays video caps on the live tile. However, it doesn't brand the live tile with the Shufflr name so, among the dozens of tiles I have on my start screen, it's extremely difficult to spot the Shufflr tile.
BioIQ is a simple teaching game where you label the parts of, well, parts. A plant cell, the heart, eye and other internal gooey bits. It keeps its live tile updated but its primary nod to the Ultrabook is its touchscreen capability. For this app, that's really all that makes sense (unless they wanted to make it really hard and force you to slide the labels to the organs using tilt). It's an app that, when you use it, you don't even realise you're using a touchscreen laptop. That's not a bad thing.
Wind up football is an extremely simple, graphics heavy game with the rules "grab the ball, keep away from the mobs". Instructions are minimal, but as you play around you realise you can touch one of your team members on the screen, draw a line to that unit's destination and in a manner of speaking direct the play. However, the goal seems to be to avoid the other team while, at the same time, beating the daylights out of the other team by tapping on an icon when one of your units gets close enough. It uses touch, it uses the GPU, and uses the communications APIs to enable multiplayer action. It's extremely polished and solid, but the jury is still out for me as to whether this is the application that I would fire up first when showing off a new Ultrabook to a friend.
MoneyBags is an expense tracking application that focuses on being seriously productive rather than seriously fun. My initial experience with it was great - it's the only entry that's self packaged with an installer - but on activating the application with the supplied product key the application stalled on the activation screen. Restarting got me past this, and then I was presented with a basic tour - always a nice touch.
The application takes advantage of the touchscreen, power states, GPU and the horsepower under the hood. Again, however, it's not an app that I would showcase as a prime example of what makes an Ultrabook exciting. It does, however, have a trick up its sleeve: NFC communication so you can transfer transactions from your smartphone to the application. I am, however, one of the faceless mass of iPhone users who must put up with an NFC free device so I'm unable to test this capability.
A second issue that struck me was that, even though the application was touch screen enabled, it was most definitely not touch screen optimised. On the left hand side is a scrollable list of categories. There is a scrollbar, but one would expect that simply swiping on the list would scroll it. Unfortunately you need to touch and move the scrollbar which, on my screen, is about 2mm wide - significantly smaller than my big fat thumb. Scrolling often resulted in nothing happening, or worse, one of the categories being accidentally opened. Further touch issues were evident in the lower nav bar: the home and settings icons were way too small to be easily touched, and the other option labels, while bigger, were still on the uncomfortably small side. This is, unfortunately, an app better suited to a mouse than a touchscreen.
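The scrollbar problem above comes down to physical size, not pixel size: a control that is comfortable with a mouse can be hopeless under a fingertip. As a rough sketch (hypothetical helper, illustrative numbers only, not anything from the MoneyBags code), you can sanity-check a control's touchability by converting a recommended physical target size into pixels for a given screen density:

```typescript
// Convert a physical touch-target size (mm) into pixels for a given screen
// density, to check whether a control like a 2mm-wide scrollbar is
// realistically touchable. A hedged sketch; the 7mm figure is a commonly
// cited finger-sized minimum, not a number from this contest.

function mmToPx(mm: number, dpi: number): number {
  return Math.round((mm / 25.4) * dpi); // 25.4 mm per inch
}

// On a 13.3" 1600x900 panel (~138 DPI), a ~7mm finger-sized target needs:
console.log(mmToPx(7, 138)); // → 38 (px)
// ...while a 2mm-wide scrollbar is only about:
console.log(mmToPx(2, 138)); // → 11 (px) - far too thin for a thumb
```

Run the arithmetic for your target hardware before committing to a layout; a control that passes on a 24" desktop monitor can easily fail on a 13" touchscreen.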
The language trainer, which I thought was a web-based HTML5 application, is in fact a Metro app written in HTML5. A standard PowerShell based install and a Start screen tile, and in you go. I chose the French lesson, since in Canada we're meant to be fluent in French and English, but evidently my French is not up to par with "rue" not being the correct translation of "street" and none of "siège, banc, or selle" being enough to satisfy "seat". You only get one try, and there are no hints, so it's a little frustrating to work out what it thinks the answer should be. The app uses touch screen input, but, as far as I can tell, no other Ultrabook features.
Judging finished this week and the points will be tallied and a winner announced. Good luck to all, and I take my hat off to all participants for dedicating their time and energy to entertaining us judges.
The final round of updates has come so it's time to see what the developers have produced. Actual judging starts this week and until I sit down with the score card I'll keep my comments light and breezy.
Lee is done and his app takes advantage of an extraordinary array of Ultrabook functionality. Messaging, movement sensors, location, light sensors, the webcam, multi touch, graphics (to an insane degree), parallel coding, communications as well as a nice foray into InApp purchases. This is a man possessed. This is a man who needs sleep.
George and Suresh have summed up their 6 week journey with a few demos of their app. 6 weeks, day in, day out, and they are done, with the added benefit that they get to demo via recorded video, rather than the traditional live demo that worked 100 times during rehearsal and failed in front of a studio audience. They have also covered the thorny issue of packaging and distribution. The standard dev way of distributing a Metro, sorry, Windows Store, app is via an installer powered by PowerShell. It's very, very clunky so improvements in this area get them brownie points. They too have hit the gamut of Ultrabook features, so testing will be fun.
Shailesh at Clef Software is likewise done and their app is currently going through the store verification process. Ah, gotta love red tape. Although, you gotta love apps that are certified to be virus and malware free, too.
John has wrapped up with a plea to us hard, unforgiving and downright cynical judges that it's all about the experience, and not about the technical excellence of the code. As a coder I'm immediately offended. Technical Excellence or Die! As a user, and as a coder who has 9 million other coders constantly, unrelentingly, passionately picking apart my application, I totally and utterly agree. The pursuit of technical excellence can lead to a truly awful solution, because devs often forget that users are an integral part of the requirements.
Sagar provides a brief discussion of their use of always-on / always-connected. Again we're hearing of driver issues, and again the guys, like others, have spelunked into territory angels fear to tread and done a little driver hacking. I live for the day that drivers are a thing of the past.
Andreas has posted his final post on his efforts to convert an HTML5 app to the new Windows 8 UI design. No code or sample apps for now, so full judging will have to wait until next week.
So no more contestant blogs, and one final round of judging to go.
The ultimate coder challenge is winding down and the contestants have made their penultimate post. Were it me doing the coding I'd still be at the planning stage, but well and truly ready to pull 7 all-nighters to get the thing done by next week's deadline. The six contestants are, however, made of sterner, or at least more organised stuff than myself.
Last week was the Intel Developer Forum so no blogs to review. Contestants and judges were too busy running around exhibit halls and consuming whatever freebies were available to do anything serious, though from the sounds of it secret elves back at home base kept the cauldrons bubbling. Nothing like a bit of tag team development.
So on to the challengers:
Lee looks like his app is fully baked. Actually Lee himself looked pretty baked in some of his IDF trip photos.
George & Suresh also seem to be at a good point with their app and they have added what I'd consider a killer Ultrabook feature to their app: NFC exchange of transactions from mobile devices to their MoneyBags Ultrabook application. This is the essence of what the Ultrabook enables: a completely new way of interacting with the device. It's not a computer that sits on your desk to do spreadsheets. It's a seamless part of your day and you interact with it in ways not possible with other devices. Well done, guys.
Shailesh discuss their experience in submitting their (desktop) app to the Intel AppUp store. One of the great features here is the in-App Unlocking API which enables unlocking additional game levels within an app. It's great to know these things are baked into the core.
John has waxed lyrical about what Ultrabooks mean. It seems like the week at IDF has enabled the contestants to understand completely the vision of Greg Welch, the father of the Ultrabook. Again, it's about providing an application that understands the context of the user. Where are they? How bright is it? Are they moving? What devices are near them? How is the user touching the device, and is he, as my hope has always been, about to try and kick the Ultrabook between a set of uprights on the footy field?
Sagar discuss their addition of GPS sensor info and multi touch. It's icing on the cake time for them.
Andreas discusses some touch additions to his app. He's using click events, but I can't help but wonder if touch and drag events would be more appropriate in this case. A click is an up/down event pair, whereas when you interact with a screen using touch it's often a down/hold/drag sequence. There are endless possibilities here but I guess I'll have to wait until next week.
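The click-versus-drag distinction above can be modelled as a tiny classifier fed by raw pointer samples rather than a synthesised click. This is a hedged sketch (hypothetical names and an illustrative threshold, not Andreas's code): track how far the finger strays from the touch-down point, and only treat the interaction as a tap if it stays within a small radius.

```typescript
// Classify a touch interaction as a "tap" or a "drag" from raw pointer
// samples, instead of waiting for a single click event. Hypothetical
// sketch; the threshold is illustrative, not from the app under review.

type Point = { x: number; y: number };

const DRAG_THRESHOLD_PX = 10; // movement beyond this radius is a drag

function classifyGesture(down: Point, moves: Point[], up: Point): "tap" | "drag" {
  // Find the maximum distance the finger strayed from the down point.
  let maxDist = 0;
  for (const p of [...moves, up]) {
    const dist = Math.hypot(p.x - down.x, p.y - down.y);
    if (dist > maxDist) maxDist = dist;
  }
  return maxDist > DRAG_THRESHOLD_PX ? "drag" : "tap";
}

// A small finger wiggle under the threshold still counts as a tap:
console.log(classifyGesture({ x: 0, y: 0 }, [{ x: 3, y: 2 }], { x: 1, y: 1 })); // → "tap"
// A sustained move past the threshold is a drag:
console.log(classifyGesture({ x: 0, y: 0 }, [{ x: 40, y: 5 }], { x: 60, y: 5 })); // → "drag"
```

In a real app the down/move/up samples would come from pointer or touch events; the point is that touch interaction is a sequence, not a single instant, and the UI should respond accordingly.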
They are close. They all seem pretty wrecked and/or extremely excited and wound up after IDF. Who can blame them.
One point I should make is that the units the developers, and us judges, are using are prototypes. They will never appear on a store shelf and are not what you would consider fully polished. Driver issues have been the biggest hurdle, as well as small issues between the versions of Windows 8 installed. Our units came with a version slightly earlier than the version available now, so the slight OS differences have also added to the excitement. This is truly living on the bleeding edge, but it's a very comfortable, well-crafted bleeding edge with a really nice rubbery cover on top. We really don't do bleeding edges like we used to.
We're past hump week, if such a thing were actually possible in this challenge[^], and we're starting to see the applications come to life. Overall the contestants' apps are coming together and the guys are focusing more on showcasing the Ultrabook and Windows 8 API than merely grinding out the framework code for their application. The contest has gone from "how am I going to get this done" to "how can I make it rock their socks?", and has expanded to more philosophical and design discussions on the nature of the Ultrabook and what it means for user interaction.
Lee is powering ahead. As a reminder he is working on an application that combines social messaging with random pot-luck. His OpenGL-to-DirectX 11 translation engine is working. He has NFC happening (after a little device foreplay - really, too much information) as well as a bunch of other Ultrabook specific support that is truly keeping with the spirit of showing off the Ultrabook hardware and Windows 8 API. Check out his videos if you like a little time-lapse craziness.
George and Suresh at Blue innovations look like they're close to being done with their MoneyBags 2.0. They are, methodically, working through their list of Ultrabook features they wish to support (I think it's all of them, at last check) and have gone as far as to provide an eBook detailing their progress and their earned wisdom. Grab yourself a copy of A Simple Guide to Ultrabook Development. An important UI issue these lads have discussed is ease of use of the touchscreen. Grab a tablet or iPad and think about how easy it is to reach various parts of the screen. On a 3.5" screen it's all accessible. On a 7" the centre bits take a little wiggling. On a 13" touchscreen laptop there are definitely parts of a screen that are easier to hit than others and an application's design should take this into account. Their use of hidden menus, though, would draw a serious, horizontal-brow'd frown from Jakob Nielsen. Don't make your application an adventure game. It should all be obvious.
John and Gavin are also in a great personal space. They are feature complete. Complete with bugs and with optimisations to be done, but complete. They mirror comments made by others that hover is dead. Anyone who's written an app or a website optimised for touch knows that you don't have a mouse or cursor. Unless you insist on stylus based devices, you crazy cat, you. Touchscreens may, in future, have the ability to detect your finger from a centimetre away and provide hover events, but for the moment it's binary: you're touching or you're not.
Sagar loves a little drama and did what any red blooded developer would do when given a pre-release piece of hardware running a pre-release OS with pre-release drivers: he tried upgrading to RTM bits. You can picture how that went. Regardless, their Shufflr video sampler application is fully bean-bag enabled using the inclination and accelerometers. Quick tilt-and-back to flip to the next video. Tilt-and-hold to scan through a video. Very nice.
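The tilt-and-back versus tilt-and-hold distinction can be sketched as a classifier over inclinometer samples: a short excursion past a tilt angle is a flip, a sustained one is a scan. This is a hypothetical illustration with made-up thresholds, not Shufflr's actual code:

```typescript
// Distinguish a quick "tilt-and-back" flip from a sustained "tilt-and-hold"
// scan, given inclinometer readings (degrees) sampled at a fixed rate.
// Hedged sketch with illustrative thresholds, not the contestants' code.

const TILT_DEG = 20;    // past this angle the device counts as tilted
const HOLD_SAMPLES = 5; // tilted for this many consecutive samples => hold

function classifyTilt(samples: number[]): "flip" | "hold" | "none" {
  let run = 0;     // current consecutive run of tilted samples
  let longest = 0; // longest tilted run seen so far
  for (const deg of samples) {
    run = Math.abs(deg) > TILT_DEG ? run + 1 : 0;
    if (run > longest) longest = run;
  }
  if (longest === 0) return "none";
  return longest >= HOLD_SAMPLES ? "hold" : "flip";
}

// A quick tilt past 20 degrees and back is a flip:
console.log(classifyTilt([0, 25, 30, 5, 0])); // → "flip"
// Holding the tilt across many samples is a scan:
console.log(classifyTilt([0, 25, 30, 28, 26, 27, 5])); // → "hold"
```

Requiring a sustained run before treating a tilt as deliberate is also one way to tame the lap problem noted earlier in the Shufflr review: a single sample of accidental lean never reaches the hold threshold.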
Andreas is continuing to work on his language trainer. It's coming along, but as I've said in previous posts: I'd like to see something that more fully showcases the Ultrabook. At the very least a discussion on the ins-and-outs of developing touch screen UIs for web applications would be valuable.
Overall, we're close. The Intel Developer Forum is next week so contestants will have a week off to booze, I mean, discuss strategy with peers in informal round-tables, so there will be a break in the regular scheduling.
Initially I thought my vote for the top app was sewn up early. However, as we see how the contestants think, and how they approach the application development process, I'm now torn in 3 different directions. Pushing the boundary hard and far always gets points from me, but sitting down and methodically working through the issues to produce an app that makes sense, rather than only being a showcase, shows a deeper commitment to me.
We only have two more weeks for each to finalise their offerings. This will be interesting.
The Ultimate Coder challenge continues and it looks like the contestants are getting down and dirty. To add to the spice I now have in my hot little hands a prototype next-gen Ultrabook loaded with Windows 8 with which to test actual applications. The unit is a pre-production test unit, so it will never actually be on the market, nor is it meant to be a perfectly polished example of the genre. The fact that they provided the sort of power cable you'd find on power tools, and the lid is rubberised so that (a) you can get a good grip on the thing, and (b) it probably bounces when dropped, speaks volumes about the kind of torture to which they are expecting it to be subject. It makes me want to look at them with big, wide, innocent eyes and reassure them that I won't break it. I promise.
I am deliberately not investigating the new Ultrabook features on the demo unit. In fact I'm deliberately not even trying to find out which features it has because I want the contestants, through their apps, to teach me. I want to discover the features, and I want to be amazed. I am, however, re-familiarising myself with Windows 8 and The Design Formerly Known As Metro (DFNAM) UI. I'll say outright I'm not a fan of the schizophrenic Desktop/The DFNAM UI split personality.
As a reminder: central to this quest is the requirement that contestants "create apps that take full advantage of the performance advances, graphic excellence, touch and sensor technologies of the latest Ultrabook™ computers". This is a competition and while an awesome application that blows me away gets points, only an application that takes full advantage of the unique abilities of an Ultrabook running Windows 8 will win. Showcase the platform, not your application, and prepare to get out of your comfort zone.
As has been a theme, Lee is tromping through with steel shod boots where angels fear to tread. The man is crazy, and I dig that about him. If you are looking to develop Windows 8 applications, follow Lee's blog. He's starting from basics - commenting out windows.h, building (and failing to get working) static libraries, multi-core development, sensors, DirectX 11 and everything that comes with that.
George and Suresh are continuing with their MoneyBag rewrite. They have progressed to the point where a preview is available, but the download requires registration and no registration email made its way to my inbox. So, as much as I'd love to comment on what they've done hands on, I can't. However, they have continued to provide extensive details on how they are progressing and the challenges they are facing, and have provided a checklist of Ultrabook features they are targeting. Not all Ultrabook features centre around sensors and jet-packs. These guys are focusing on the subtler things such as instant on, touchscreens and Smart Connect.
John at Soma Games is discussing what they are using more than how they are supporting Ultrabooks. In particular they posted that they will be using the Unity 3D 4.0 engine - which is an interesting gamble since it hasn't been released yet. While it's great that the Unity 3D 4.0 engine will support the DFNAM UI, I would like to have heard more on how this ties in with their application really showcasing the Ultrabook experience. They demoed touchscreen, DFNAM support means it plays nice with Windows 8, and Unity 3D should push the GPU hard, but I'm hoping there will be some other sensor or power management or notification based component of the game that makes you think "this is a great game on a PC, but it's killer on a Ultrabook".
Sagar have hit another seemingly unnecessary roadblock: ambient light sensor support needs the DFNAM UI. They are also having NFC sensor issues on their Ultrabook, which just continues their run of bad luck. Part of the challenge in this competition is that it's being run on pre-production hardware and so driver support may be immature. I know I ran into a wall trying to get a new touchpad driver, so I'm hoping their contacts at Microsoft and Intel will come through with the goods. At least the accelerometer is working for them.
Andreas is plugging away at his language training app. Since he's chosen to use HTML, issues with building libraries, using native code, and all the fun with sensors is a non-issue. Although, that's a double-edged sword since it limits his ability to really show off what an Ultrabook can do.
We'll see what everyone has up their sleeve next week.
The Ultimate Coder Challenge[^] continues into its second week. 6 developers, 6 svelte 3rd generation Ultrabooks, 6 apps that will wow and amaze us. In 6 weeks.
Most of you are thinking "and if it were my boss, he'd demand I do it in 3, and the specs would change on day 20". When I was a lad...
Lee[^] continues to work on Love Hearts, a social app for sugar addicts that utilises the Always On Notifications system to send alerts to your co-addicts and have their machine, which they thought was safely asleep, respond to your message by 'pinging' you. At 2am, presumably. Are they talking a polite "pip", or an actual 140dB full sonar ping? I'm sure there's a setting for it somewhere.
At the core, Lee is looking to create an app that showcases the abilities of the Ultrabook and the Windows 8 OS. Pick up the Ultrabook and the game responds. The app is playable using only the touchscreen, and his fundamental philosophy is that the game should be discoverable in an enjoyable way.
I am a little worried, but also grinning a big grin, when he discusses a major issue with Windows 8: a metro style app cannot use OpenGL. So he's going to write his own OpenGL library in DirectX 11. That's so awesome.
George and Suresh[^] continue to discuss MoneyBag, an expense tracking application based on (but a complete rewrite of) their 1.0 version of the same name. The standard challenges of screen resolution and touch vs mouse are discussed, as well as the use of power saving APIs in Windows 8.
I'll be honest with this one: I want to see more use of Ultrabook specific features. I hear rumours of an NFC chip on the units, and what better way to drive adoption of a financial planning application than by offering your users the means to bankrupt themselves by whipping out their Ultrabook from their jeans pocket, tapping it at their local supermarket, and spending themselves broke in a frenzy of Ultrabook NFC tapping madness. I'd do it, simply for the looks on the cashier's face. And then, obviously, I'd need to do some serious self-reflection, probably with the aid of a financial management app, and work out where all the money went.
Shailesh[^] continues to work on his BioIO teaching app. Again, the theme of variable screen resolutions and touch enabled interfaces came up, which is not enough of a differentiator among this group of contestants. I'm hoping they have a killer feature up their sleeves that showcases the Ultrabooks. One niggling comment is that as a dev I read code better than prose, so their screenshots of code at an equivalent of 3px font are a little painful to read. Guys - any chance of posting code as, well, code, instead of images?
Soma games[^] are writing Wind Up football. While I was hoping for a fully immersive experience involving the use of the gyro, accelerometer and touch screen capabilities, combined with your boot and a run-up kick, it turns out they had something far more prosaic, and potentially more sustainable in mind: a touch-screen football game. And it looks awesome. These guys are developing an iPad version in parallel, so have a headstart in terms of game design, graphics and audio, but are already identifying challenges such as variable screen resolution, and, well, a keyboard.
Sagar[^] have entered the second week and hit a brick wall. Metro apps can't use WiDi to stream Metro live tiles to a TV. This is crazy, and invites the inevitable comparison with iOS and AirPlay. They have also hit hurdles trying out the sample code for Ultrabook sensors, and have generously provided info on how to fix them. 5 weeks to go and they've done a reset. This is a pity, but the bigger pity seems to be the Metro/Desktop dichotomy. We all understand that Metro is for phones, tablets, and touch-enabled Ultrabooks, and Desktop for those times you need a desktop, but splitting support for hardware between WinRT and native will make no sense to users. Why would you have NFC support in Desktop apps but not Metro apps? Are you more likely to tap your desktop or your phone at a checkout?
Andreas[^] seems also to have stepped into the minefield that is the Microsoft samples. Once he had the exceptions under control, he found a neat little issue: push notifications will throw an exception if you have no internet connection. Obvious, really, and I understand that samples are merely snippets to get you started, but no exception handling? Guys... In any case, Andreas is on his way.
I see two common themes from the contestants. The first is that the UI for their apps should be easily discoverable. Time and effort is being spent ensuring that the actions you need to take to achieve an outcome are obvious. This, to me, signals a serious maturity in Windows application development. Engineers are realising that users don't appreciate their technical excellence: they appreciate an app so easy to use that they forget they are using an app. Hallelujah.
The second is that the samples for Win8 sensor support are a mess, the support for sensors is uneven, and the library support between Metro and Desktop apps is not consistent. This is a challenge. An unnecessary challenge, in my opinion. It does, however, make my life as a judge far easier because the harder the challenge the more the field is split.
A small disclaimer: in this post I use the term "Metro" to refer to the name of the Design Language Formerly Known As Metro simply because that's what the contestants are using. It has another name that I keep forgetting, so for now, when you see the now-defunct term "Metro", simply replace it with "The Design Language Formerly Known As Metro" and all will become clear.
There have been many arguments on whether code should be commented. Here's my experience.
Comments fall into two buckets: Object and method decorations - those that explain what a file, object or class does - and in-code explanatory comments that appear inside methods or blocks of code to add explanations, notes, or to explain the non-intuitive.
Anyone who says that there is no place for comments inside methods is, to me, misguided at best. Code is not a literary work of fiction open to various interpretations. It's a precise series of instructions, and sparing, sensible, well-placed notes on what's going on inside a method can prevent disasters.
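As a sketch of what I mean by a sparing, well-placed in-method comment, here's a hypothetical example (the function is invented for illustration, not taken from any of the apps above). The comment explains the one non-intuitive line; everything else is left to speak for itself:

```python
def next_power_of_two(n: int) -> int:
    """Return the smallest power of two that is >= n (minimum 1)."""
    if n <= 1:
        return 1
    # (n - 1).bit_length() is the number of bits needed to represent n - 1,
    # so shifting 1 left by that count rounds n up to the next power of two.
    # The "- 1" is the non-obvious part: without it, an exact power of two
    # such as 8 would be doubled to 16 instead of returned unchanged.
    return 1 << (n - 1).bit_length()
```

One comment, a few seconds to write, and the next developer doesn't have to mentally execute the bit twiddling to trust it.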
There are many, many, many developers and purveyors of dogma who insist that decorative comments are also unnecessary. The standard argument is that names should be clear, descriptive, unambiguous, and as long as necessary.
If we all spoke the same language, had the same cultural background, same experiences, same literary ability, and all wrote code at exactly the same time, using the same, precise naming conventions, then yes, good naming will solve most ills and decorative comments are not that essential.
However, we don't work in this environment and it's extremely short-sighted, and costly in the long run, to think we do.
A term used in one context may mean something different in another. A trivial example is "Create" which could mean create a new object in memory, or store an existing object in a row in a database.
A term used in one culture may mean something different or, in fact, the opposite in another. To "table" something in North America means to postpone its consideration. In the UK, Australia and the rest of the English-speaking world, "to table" means to begin consideration of the topic.
While it's straightforward to use names that are more descriptive it's important to understand that ambiguity is often difficult for a single developer to spot. They know what they mean, but it's only after other developers look at their code that it becomes apparent that other developers may not. Do not fall into the trap of assuming everyone understands what you mean.
One solution is to mandate that names be fully descriptive: CacheObject, UploadToCloudStorage, DiscussIssue. This helps a little, but very soon you hit the point where providing an unambiguous descriptive name stretches the limits of acceptable name lengths. Steve McConnell writes that method names should be between 9 and 15 characters. Good luck.
Still, this doesn't help. No matter how well you name something, how consistent you try to be, how dire your threats are to other devs, you'll always have situations where you just don't know, with absolute certainty, what a method does. With no comments the developer needs to go and read the method to understand what's happening. This is a monumental waste of time, and worse: it's fraught with peril when code is read but the intent not understood.
Another issue is parameters. While the same arguments for tight and descriptive method names should be applied to parameters, it's almost impossible to encode in a parameter name things such as restrictions on acceptable input values or notes on special value handling. Comments on parameters allow you to understand the results of supplying null, 0 or empty values, and to understand the limits of what you can supply.
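A quick, hypothetical sketch of what parameter comments buy you (the function and its behaviour are invented for this example): the docstring records exactly the restrictions and special-value handling that no parameter name could carry on its own:

```python
def truncate(text, max_length, suffix="..."):
    """Shorten text so that the result fits within max_length characters.

    text:       the string to shorten. None is treated as an empty string.
    max_length: maximum length of the result, including the suffix.
                Values <= 0 always produce an empty string.
    suffix:     appended when truncation actually occurs; pass "" to
                disable it. Assumed to be shorter than max_length.
    """
    if text is None:
        text = ""
    if max_length <= 0:
        return ""
    if len(text) <= max_length:
        return text
    return text[:max_length - len(suffix)] + suffix
```

Without those three lines of parameter notes, the only way to learn that `None` is safe and that `max_length` includes the suffix is to read the body - exactly the monumental waste of time described above.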
My approach is you should be very, very careful with object and method names, and strive to be descriptive and unambiguous and have as your goal a 95% clarity on naming. That is, 95% of the time a developer reads a method name, that name is clear and unambiguous. However, the list of ambiguous names - that 5% - will vary per developer. That list of ambiguous names may even vary over time for yourself. A simple, clear, well-written, and up-to-date comment will solve this ambiguity.
The "up-to-date" specifier raises the issue of drift. The purpose of a given method may drift slightly from its original intent. The comment attached to that method may then be slightly (or seriously) out of sync with the intent. So too may the method name. To use the argument that comments are useless, and at worst, dangerous because they may not represent what the method does can, and should be applied to method naming as well. When a developer updates a method is it easier for them to make a note of any provisos in the method comment, or is it easier for them to rename the method, and hence the object's API? The method name and the comment should both be kept up to date. Developers get tired and cut corners though.
The way I approach software development is to assume the worst. I assume the inputs to my methods will be bogus. I assume methods will return null. I assume the database will explode in a searing ball of plasma when I run a query. I also assume that my wetware will also have issues and that, at one time or another there will be confusion.
This means that all methods and parameters are commented. This adds approximately a minute of development time to each method. It also adds a small amount of time each time a method is changed to scan the comment and ensure it's consistent. It also means we have a ton of comments that, 95% of the time, add no value. However, since the set of methods that raise ambiguity or clarification issues is non-fixed, it's not practical to simply comment 5% of the code.
While it's tempting to say "just comment the methods that need it", this leads to a slippery slope that we've seen in practice again and again. The test of "what needs it" is carried out by the coder, who almost by definition finds their code clear and unambiguous. One by one "obvious" methods are created without comments and soon we have devs interrupting their work and that of the author to discuss what's happening.
The application of under a minute of effort saves 5 minutes of conversation and the inherent costs involved in task switching productive developers.
Comments aren't things that hang around code like bad groupies. They are code, and when the code is updated, so too must the comment.
The Intel Ultimate Code: Ultrabook challenge[^] is an interesting experiment. On the surface it’s a coding challenge: Six developers compete for six weeks to create apps that take full advantage of the performance advances, graphic excellence, touch and sensor technologies of the latest Ultrabook computers. Scratch a little deeper and you realise that this is a 1 part coding and 5 parts hair-tearing game of strategy combined with your worst mid-term practical, ever.
Six Developers (well, eight actually - you can see the rules are already being tested at this early stage) get 6 weeks to develop the ultimate app for the Ultrabook that makes use of Windows 8, touchscreen capabilities, sensors such as gyroscope, GPS and NFC (to name a few), and the raw power of a 3rd gen Ivy Bridge i7 CPU.
For the next 6 weeks I'll be posting updates on the progress of the challengers. These are seasoned developers. They have been around the block and have a full shed of tools and tricks at their disposal. They are not to be trifled with. I am expecting, and maybe hoping, the veneer of genteel competitiveness to fall away quickly and settle down to a nice exciting game of psych.
Lee[^], for example already has an app-in-a-box application that will enable him to write his apps in Basic and target 7 different platforms. He’s using Basic. To write the ultimate app on the ultimate notebook in front of millions of developers. You can see the sorts of mind games that have already started.
George and Suresh[^] have reportedly tried out over 30 design concepts and more than 200 assets to arrive at their final design. In less than 3 days. They also already have mockups of their final app and will be building on an existing app. In this day and age merely changing the font is enough to warrant a major release, so I’m going to be watching these guys closely. And it should be noted that any use of Comic Sans in an application leads to immediate disqualification.
Shailesh[^] from clemsoftware will be creating an Ultrabook tuned version of their BioIQ picture puzzle game. Basically: label the parts of the organisms and you win. I’m guessing touch will be a large part of the ultrabookification of this app, but what I’d really like to see is something far more immersive such as a modern day version of the children's “Doctor” game. Either through touch, or by tilting and moving the entire ultrabook you control a surgeon's knife and perform something simple like a coronary bypass. I breezed over the specs of the ultrabooks sent to the devs, but I’m sure there’s something that would add a little je ne sais quoi to it all. An electrified touchpad or the NFC chip wiping your credit cards in a “simulated” malpractice suit would add a little spice, no?
John[^] and Gavin from Soma games (I think this makes it 8+ devs, right?) are creating an app called wind up football that takes advantage of the touchscreen and accelerometer. I will be satisfied with nothing less than an app that requires you to actually kick the Ultrabook in the same way virtual golf courses have you hit a golf ball into a sheet. The touch screen can measure the location and, potentially, vector of your foot, the accelerometer can then calculate the projected path, and the gyroscope would be used to measure rotation. Their challenge will be accurately simulating the aerodynamics of a flying, spinning Ultrabook, but I assume that’s why they also mentioned the new CPUs as being integral to their app. Nice one, boys. I’m definitely looking forward to this one.
Sagar[^] and his crew made much mention of the trials of actually getting their hands on their Ultrabook, which is actually a step further than us judges have managed to get because we’re evidently embargoed from getting our greasy, cynical paws on the shiny new ‘books until the challengers have completed their penultimate post. Sagar did mention in passing that the judges’ pics were way cooler than the challengers’ pics so he gets 2 points this week. However, I do need to subtract 2 points for dropping the “e” in his product’s name. Ever since auto-correct was invented spelling has gone to hell in a hand-bascet.
The short version is we have a new article submission wizard (and updated systems) that provides
- An all-new, single-page article editor
- An auto-save facility in case of crashes
- The ability for members to safely edit "edited" articles, with no more need to send in updates manually
- Simplified references to uploaded files
- A new "Alternative article" option that allows you to create alternate versions of existing articles
- An update to Tips n' Tricks so that they now use the standard article UI
- The ability to upload images and downloads for blog and tip articles
- The ability to easily switch article types (make an article a tip, promote a technical blog to a full article, etc.)
The longer version:
About 6 months ago we finally had the time to revamp the aging submission wizard. I wanted a single page editor that allowed in-page (i.e. Ajax) file uploads and that looked very much like what the final article would look like. The idea is that it would feel like you were editing the article in-place. Click on the title to edit it, upload a file and add the file to the content with a single click, etc. And, of course, auto-save with a simple recovery model for those bad times.
I also wanted to address the need to allow our authors more access to their articles. Currently what we do is we pick the top articles and edit them. This editing corrects formatting, spelling, cleans the downloads and generally ensures that the article conforms to our standards. However, once an article is edited by an editor it is inviolate: it can no longer be updated online by the original author.
The reason for this is that, after spending so much time fixing articles, we were getting a little frustrated when members would go and re-edit the articles we edited and re-introduce all the errors we had fixed. This is understandable because they would often simply take the copy of the article they had originally written, make corrections to it, then copy and paste it over whatever we had done. So we put an end to that for our own sanity and made a pact with ourselves (and with you) that we would be as fast as possible in posting updates you sent in.
However, this punishes those who are good authors for the sake of protecting the few that are bad, so we've come up with a compromise, and also a solution to a subtle problem.
Previously when you posted an article using the wizard, the article would be placed in a Pending queue and would be reviewed by other members who would then approve, disapprove, and/or comment on the article. After approval the article became public and everyone was happy. Except that the author could now edit their new article, upload a bunch of inappropriate material, and have it available immediately. The solution was to modify our system so that all edits of articles create a new pending version of the article. After editing, the old version will still be seen by most members, but moderators will be able to see (and approve) the new version. Once approved the new version replaces the old version and goes live.
In doing this we had to tackle a few issues with files. We chose not to store files as database BLOBs, but as system files, so where do we store your uploaded files while you're editing? When you start the submission wizard you haven't chosen a section, yet you can upload files. When editing an existing article you may need to upload new versions of files (updated zips or images) but we need to ensure the old versions of those files and images are still available for the current article.
We ended up introducing a "Working" directory for your new uploads in order to separate out the old and the new, but this then made life difficult for those looking to reference files in their article's HTML. Previously we had the concept of a "Basename" for an article, which was effectively the name of the article's directory, and which authors used to reference an uploaded file (e.g. src="basename/myfile.zip"). We've abandoned that since it causes problems with name uniqueness, and in fact abandoned the whole concept of asking members to worry about directories. Now you simply reference an uploaded file by its filename, and we make sure we track things like which file (old or new) you're talking about, as well as ensuring we adjust the references in your articles during the various stages (composing to pending to available).
We've also introduced the concept of Alternative Articles. There are many, many articles that are no longer being maintained and this is a first step to allow other members to take over abandoned articles, or to simply provide different implementations, such as in a different language.
To provide a symmetric article experience we've now upgraded the Tips n Tricks articles to be displayed in the same manner as traditional articles (as well as their alternatives), and now make it very simple to convert a tip to a standard article, or to any other article type. No more complaint about short articles or long tips. We can quickly recategorise as needed.
This also brings a nice benefit: you can now upload images and zips to your blog and tips articles.
With regards to moving tips to the new UI - you might notice something a little weird with your rep. We moved all the comments that were associated with tips into their own separate forum for each tip instead of having the confusing comments-per-tip-plus-bonus-forum-at-the-bottom.
This release should be considered a Beta release, so please send in all feedback and bug reports to the Bugs and Suggestions forum.
As far as we know the kinks are gone, but we have seen a couple of issues from bugs in our old system that only manifested once we moved to the new system. It's amazing how these bugs hibernate - like cicadas.
What is the URL of the article? I'll take a look and sort out whatever the issue is.
I fixed up the links and it's all good now, though I think I may have inadvertently published a version of your article you were still working on. If you wish to rollback, go to the Revisions tab on your article, choose the version you wish to revert back to, and hit revert.