This morning I had an experience that provided such a classic picture of the entire IT industry for me right now:
I went into the Microsoft store and was looking at an Acer Aspire S7. It looked nice and the blurb said "128GB SSD". So I took a peek at the computer's properties and saw "57.9GB free of 79.8GB" on drive C - the only drive visible.
I asked the sales guy where the 128 - 80 = 48GB was. He told me the missing space was used by the OS, which I politely disagreed with because the OS was currently on drive C and was using about 22GB of space. He then tells me it's the demo software they have installed that's using up the space (I again disagree), and then tells me it's the recovery partition that's using the space, so I ask him to show me this 48GB recovery partition. He hits Windows+C, the (HD) screen totally fills with Control Panel applets and he types in "Disk management", but nothing appears. He scans the list of applets briefly, gives up, right-swipes to get the settings, gives up again, and after fumbling around finds a list of partitions but is unable to get me the size of any of them. He then turns to me and says "this is really outside of a sales thing - I need to get you my tech guy".
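For what it's worth, part of any such gap is pure arithmetic: drive makers quote decimal gigabytes (10^9 bytes) while Windows reports binary gibibytes (2^30 bytes) yet labels them "GB". A quick sketch of the conversion:

```python
# Drive makers sell decimal gigabytes (10^9 bytes); Windows reports
# binary gibibytes (2^30 bytes) but still labels them "GB".
def marketing_gb_to_reported_gb(gb: float) -> float:
    return gb * 10**9 / 2**30

print(round(marketing_gb_to_reported_gb(128), 1))  # 119.2
```

So roughly 9GB of the "missing" 48GB is just units; the remaining ~39GB really does have to be sitting in hidden partitions such as recovery.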
He clearly didn't know what he was doing, but he had enough of a clue to navigate around better than most people I've seen use Win8. Yet he couldn't answer a simple question relating to what the tag says and what's actually on sale, and said it was a technical, not a sales, question. I left the store feeling the same way you feel when you leave a mechanic's after being told you need the air in your tyres exchanged at the beginning and end of winter and that'll be $149.99, please.
I felt lost as he went all over the place trying to answer the question (and I've used Win8 an awful lot), and then I felt like my question was unimportant to them, that I shouldn't be asking it, and that the answers I got were made up (which they were).
It felt complicated, it felt confusing, and it was impossible to make a choice between laptops because there were no answers, and the answers I did get I couldn't trust anyway.
I wander 3 doors down to the Apple store, look at the properties of a 1TB iMac and ask to see the actual size of the HDD. The sales dude does a single right-click, Get info and shows me that of 999.4GB, there is 978.7GB free. We're done.
There's one keyboard layout. You can have light (11" or 13") and medium powered with OK screens, or heavier, thicker and more powerful with retina displays (13" or 15"). It's easy - except that I want a retina display on an Air. Not because any other laptop I've ever seen has a retina display: only because the MacBook Pros have one. I don't actually, in isolation, want a retina display; I just don't want to feel like I'm missing out on something.
When I look at tablets I see the iPad, Android or Surface devices, and they are all fairly simple to use. Phones, be it Android, Win Phone 8, iPhone or BlackBerry, are all simple to use. They are in fact simpler to use than ever, with only feature phones being simpler (but many of them were tear-your-hair-out annoying).
Yet laptops and PCs seem to have increased in complexity, and the choice and confusion make the buying decision complicated and intimidating. Windows 8 has made actually using a laptop confusing and complicated. Put these together and you have a sales nightmare: you don't know which one to buy, and while trying to decide you don't know how to actually use the thing you think you need to buy.
And then you wander over to Apple and you think "My God this is so simple" and you have limited choice, and you feel you have a chance at making a decision.
Previously, however, the decision would come down to "Do I pay a 30%-50% premium on essentially the same hardware just to get an Apple?" For me this has always been game over - I'm simply not willing to pay that much. Yet today I'm looking at a complicated Windows 8 machine that costs more than the simple Apple machine.
Buying a PC or laptop/Ultrabook is no longer easy or as cheap as it was a year or so ago. Win8 is (to me anyway) a technically better and more secure operating system than MacOS, ruined by an awful UI. Apple has a still-maturing OS that is starting to acknowledge that security is important but still crashes, still locks up and still can't seem to work out how to handle network calls on a background thread. But it's simple, the machines will never offend anyone with their looks, you get what you pay for, and they are now in the same price bracket as (or below) many of the Ultrabooks.
I can understand why PC sales have fallen, and for me it's not just tablets. What I don't understand is why Apple hasn't gone for the jugular like they did on Windows Vista.
This is the final post in the series on the Ultimate Coder Challenge: Going Perceptual. The applications are in and now it's do or die. The biggest challenge to our contestants at this point isn't the code, or the SDK, or their idea. It's our hardware. It's the installers. It's our ability to understand what they were trying to do and that has proven to be difficult in some cases.
Perceptual computing is a form of interacting with a computer in a more natural, human way. You speak or you gesture, or you may even want to just swipe. It's not about the keyboard and mouse. The biggest problem is that what we think makes a good method of interaction (cue Minority Report) is actually a terrible way to interact. It's tiring, there's no feedback, and gestures are in 3D, not 2D, and not constrained to a box or button, and so can be ambiguous. Further, a touch gesture has a definite start and stop. It starts when you touch and ends when you stop touching. When does a free-space gesture start and stop?
Further, and as has been mentioned elsewhere, there's no standard. We all know how to pinch-zoom, or swipe to scroll, or even push a button. How do you push a button in 3D space? How do you pinch to zoom, without the initial contraction of the fingers being misinterpreted as a shrink action first?
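One common mitigation (a sketch of my own, not any contestant's code) is hysteresis: a gesture only starts when the tracked signal crosses one threshold, and only ends when it crosses a second, looser one, so jitter around a single boundary can't rapidly toggle the state. The thresholds and units below are purely illustrative:

```python
class PinchDetector:
    """Toggle pinch state with hysteresis so sensor jitter around a
    single threshold doesn't flicker the gesture on and off.
    Distances are in arbitrary camera units; thresholds are illustrative."""

    def __init__(self, start_below: float = 30.0, stop_above: float = 45.0):
        self.start_below = start_below  # finger distance that begins a pinch
        self.stop_above = stop_above    # larger distance that ends it
        self.pinching = False

    def update(self, finger_distance: float) -> bool:
        if not self.pinching and finger_distance < self.start_below:
            self.pinching = True
        elif self.pinching and finger_distance > self.stop_above:
            self.pinching = False
        return self.pinching
```

The gap between the two thresholds is what absorbs the ambiguity: a hand hovering at the boundary stays in whichever state it was already in.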
So on to the challengers:
Lee's work I've written about extensively, usually while ruefully shaking my head at the madman. He shamelessly bites off way more than he can chew and, after endless dead ends, delays, setbacks and possibly a broken keyboard or two, he comes out with something a little special. His virtual conference is, for me, an excellent example of what's possible with perceptual computing: a UI that revolves around you. You're in a virtual 3D environment talking to others and as you turn, the viewport turns with you.
Except there's one huge problem with this approach: in a video conference you never turn. If you turn then you're no longer looking at the screen in front of you. Further, as you turn, the mass-tracking (not head tracking) turns the virtual camera in the direction that you're turning, meaning the image on the screen pans, meaning, well, that you don't actually need to turn to view it. Kind of like an anti-Catch-22.
It's very cool though. Very cool.
Sixense's puppetry demo is complete. It's the ultimate kids' toy and allows two people (or one talented person) to host and record a virtual puppet show. It works, but I can't help but feel that so much effort went into things like scenes and a story that the creators forgot what puppet shows are about: dialogue. Well, and violence, for those who fondly remember the Punch and Judy days, but mostly dialogue and often costumes. So why not ditch the fancy scenes and instead allow quick changes of clothes? Or have amusing sound effects when one puppet bonks another puppet on the head? That would focus the players on thinking about their puppet rather than trying to control a puppet that keeps flying all over the countryside.
It's an excellent example of 3D free form gesture based interaction with a computer, and for that they score highly. Most importantly their user feedback on interpreting hand gestures is by far the best of any contestant.
Code Monkeys' Stargate application shows promise in (for me) the ultimate goal of a head-tracking video game. Unfortunately reality never lived up to the dream, and the sheer effort involved in trying to control the targeting quickly took away the luster. I did get it to work, but the headache afterwards was not worth it.
Infrared5 followed a similar path to Code Monkeys with their Kiwi challenge; however, they introduced an extra variable by having the control of the application done via a smartphone application. The connection between the game and the smartphone was seamless and slick. I could not, however, control anything but the fire button from my iPhone, which made the game unplayable for me. Restarting didn't solve the issue. Further, I often ended up with a black landscape in my viewport that no amount of yelling, tilting, clicking or swiping would get me out of, and closing the app via alt-tab (there's no close button I could find) was near impossible because no sooner had you popped out of the app than you were thrust back into it, black soulless void and all.
Pete has created an image processing application that uses a series of gestures to activate filters. His biggest problem is that there is no defined, standard set of gestures one can call on to immediately dive into his app. There's no help button, so you're left guessing at gestures. Thumbs up and down, swiping up and down, and in my case swearing like a sailor. An excellent attempt at making gesture input a natural part of the interaction with the application, but let down, I think, by the immaturity of the platform.
Eskil has demonstrated an abstraction layer that allows developers to take advantage of perceptual computing without needing to write the boilerplate code. In fact, without needing to know anything about the nuts and bolts at all. This is a tremendous achievement given the short time available.
A short note on the perceptual computing camera: I'm typing this review on my Lenovo Yoga with the camera perched on the top of the screen. The camera is heavy, as I've mentioned before, but it's only now, after hours of clenching my fist, twiddling my fingers, bobbing my head frantically and yelling 'Engage' in various accents to try and interact with the game, that I've realised I'm getting tired of always hiking the screen back up to the vertical position. The camera's weight makes the lid sag, and when it's not sagging it's trying to tip the laptop base over apex. It needs to be lighter and it needs to be way smaller.
Another general comment on the use of the camera as an input device is that onscreen feedback is critical. If you don't get feedback on what the computer thinks you're doing you go nowhere. You can't debug. I cannot overstate how important feedback is.
Also, I need to comment on the Lenovo itself. I love the feel of the keyboard and most especially the palm rest. So very comfortable. But please, to anyone who is thinking of manufacturing a laptop keyboard: DO NOT reduce the width of the right shift key and squeeze the up arrow into the space created. It means I'm constantly hitting the up arrow. Constantly. It's doing my head in. Apart from that the screen is crisp, the battery life good, and the flip-back keyboard weirdly useful. Propping this thing on a table with the keyboard folded back to watch a movie or flick through the news is brilliant.
As to perceptual computing, my overwhelming feeling is "it's coming". But it's not here yet. We are trying to make a computer be like the real world. You turn or look or speak or grab at something that isn't there in the hope that the computer will mimic or replicate your intent virtually. This, to me, is akin to skeuomorphism: changing something to be like something else, like Apple making its calendar application leather-bound, or having ebook readers show an animated page turn. A computer is not the real world, and it doesn't have the limitations of the real world (so to speak), so why mimic it?
I think the future of perceptual computing will most likely be subtle. The computer may recognise you or your voice, and will recognise when you are in front of the computer, when you're looking at it, and when you're not. The end of screen savers, really, since it would just go to sleep. Gestures such as brushing away an app to close it, or flicking it to move it to another screen, would be intuitive, and voice recognition would allow utility commands such as searching or bookmarking to be carried out without needing to take your hands off the keyboard or, indeed, leave the current application.
Gaze tracking is another incredibly important, yet not currently functioning (at least on the hardware in front of me), feature with a myriad of uses. My immediate use would be eye tracking for UI testing, but even simpler could be things like auto-scrolling, or auto-hiding of elements when they are not being looked at directly. The eye does, however, jump around an awful lot, so the smoothing algorithm will need to be heavily weighted.
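As an illustration of what "heavily weighted" might mean here (my own sketch, nothing from the SDK), an exponential moving average with a small blending factor damps saccadic jumps at the cost of some lag:

```python
def smooth_gaze(samples, alpha=0.1):
    """Exponentially weighted moving average of gaze positions.
    A small alpha weights history heavily, so saccadic jumps are
    damped at the cost of some lag behind the true gaze point."""
    smoothed = []
    estimate = None
    for x in samples:
        estimate = x if estimate is None else alpha * x + (1 - alpha) * estimate
        smoothed.append(estimate)
    return smoothed
```

With alpha at 0.1 a sudden 100-unit jump in gaze position only moves the smoothed estimate about 10 units on the first frame, which is exactly the kind of heavy weighting a jittery eye signal needs.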
The contestants achieved some incredible feats of patience, innovation, creativity and problem solving. I take my hat off to them for their perseverance and outright foolishness in taking on a challenge that many of them were so unprepared for, yet so willing to have a go and rise to the occasion. Rise they did, so well done, guys. Get some rest and have a beer. You've earned it.
A thoughtful and interesting conclusion to the series Chris. The one thing I really felt I was missing was having a decent graphic artist so that I could do some decent on screen tutorials. This is an area that really did need someone with more artistic ability than me.
I was brought up to respect my elders. I don't respect many people nowadays.
This week is the final week of blogs by the challengers. I won't go through individual entries since pretty much all of them are at the wrap up / polish stage.
The overwhelming feeling you get when reading the blogs is one of a challenge accepted and, almost, tamed. This is uncharted territory for the contestants and that territory is not paved smoothly: the SDK is in beta and the capabilities of the hardware are still limited. There are a lot of things that would be great if they worked, but they don't. Sixense wanted to have their Big Bad Wolf puppet blow the little piggy puppet's house down by blowing on the mic. "Blowing on a mic" is not a recognised input, so it can't happen. Code-Monkeys (and many others) wanted head tracking - or even gaze tracking - but the hardware simply isn't up to it at the moment. Infrared5 simply want more grunt from the Lenovo.
It's close. Really, really close, and while the contestants were not always able to achieve their first-order approximation of what they wanted, they have done exactly what good developers do: focus on the outcome they want, then work backwards. Instead of finger tracking you use thumb tracking, instead of eye tracking you use head tracking, instead of head tracking you use body mass tracking. Instead of tracking everything, just do your tracking work on the object (e.g. a hand) you want to track and ignore all other input data. Speed improvements came quickly, as did a usable (but maybe not perfect) solution.
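The "track only the object you care about" trick can be sketched as cropping each frame to a padded region of interest around the last known position before running the detector, so per-frame work scales with the ROI rather than the whole image. The function name and box format here are my own illustration, not any contestant's code:

```python
import numpy as np

def track_in_roi(frame, last_box, pad=20):
    """Crop the frame to a padded region of interest around the last
    known (x, y, w, h) bounding box, so the (hypothetical) detector
    only processes a small patch instead of the full image.
    Returns the ROI and the (left, top) offset needed to map ROI
    coordinates back into full-frame coordinates."""
    x, y, w, h = last_box
    top, left = max(0, y - pad), max(0, x - pad)
    roi = frame[top:y + h + pad, left:x + w + pad]
    # run_detector(roi) would go here; add (left, top) to its results
    return roi, (left, top)
```

For a 640x480 frame and a 50x50 hand box, the detector now sees a 90x90 patch - roughly a 38x reduction in pixels to process, which is where those quick speed wins come from.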
The point is computing power is always increasing, the SDK will only improve, the hardware will become more refined and more responsive (and offload much of the software based processing) and we will get there. Quickly.
We get the final (final) versions of the apps soon, and it will be then that we, the judges, have to dive in and ask ourselves two basic questions:
1. What is perceptual computing?
2. Who knocked it out of the park?
Thanks for all your encouragement over the last few weeks Chris. Reading the judges' comments has been a real highlight over the weeks, and I'd have to say that last week's was one of the funniest posts I've ever read, so kudos for that.
With the competition, I tried to stay with what was in the SDK, and I stand in awe of what the other competitors produced. As for point 2, the answer (to me) is easy - Lee. Clean out the park, and 3 counties of clearance.
Week 6 and we're almost done. GDC[^] has been and gone and unfortunately I was not able to attend due to a small matter of our CodeProject.TV[^] launch (where's the blink tag when you need it?). While missing GDC itself almost brought a tear to my eye, the knowledge that I was missing out on the beers Pete and Chip have threatened me with made it particularly painful.
I received my Interactive Gesture Camera from Intel last week and have hooked it up. Dual cameras for depth perception, dual microphones for voice recognition, and an SDK that ties it together.
And it's heavy. Really weirdly heavy for such a small camera. This isn't a bad thing though, because it means when it's sitting on the top of your monitor it's very stable, and the picture quality compared to my old Logitech is much better, with far fewer in/out refocussing issues than I had with my old webcam. It is kind of weird having it sitting there, staring at me with those two dead eyes. Evaluating me. Scanning me in the infrared. Knowing where I am, where I'm looking, what I'm saying. Intel itself does absolutely nothing to make me feel more comfortable, with their disclaimer stating:
The Camera may not be used in any “mission critical application” in which the failure of the Camera could result, directly or indirectly, in personal injury or death
Injury? Death? This thing is going to sleep in the garage from now on.
Anyway, to the challengers, or those that have not been taken hostage, injured or possibly killed by their cameras. I say this because 2 of the challengers have not submitted blog postings and I've not heard anything from them. Their muffled screams are probably still echoing against the backdrop of a small, blinking green light coming from the tiny black dense camera on their bloodstained laptop.
It sleeps outside tonight. I don't want it talking to my car, whispering to it. Subverting it.
Lee[^] enjoyed GDC and ensured Intel got their hotel bill's worth by spending an inordinate amount of time in his room cranking code. Lee's understandably at the point of polishing, and at the point of taking stock of the reality of gestures. They all sound great, but how do you provide feedback for a gesture-driven UI? How do you let the user know the difference between a gesture that does something, a gesture that does nothing, and a gesture that was not understood? And how do you educate your users on gestures? He's basically done, so on to testing.
Sixense[^] demo'd their puppet show at GDC and they too are at the point of polishing and introducing a little realism. Not much more to say on them.
Code-Monkeys[^] are getting desperate and are quoting Gene Simmons and resorting to tongue tracking. I'm not going there. I'll just quote the man himself:
Life is too short to have anything but delusional notions about yourself.
Infrared5[^] used GDC as their own private beta testing ground which is perfect. There must have been something in the beer at GDC though because they've left the reservation and are now focussing on foot tracking. I'm a bare-feet kinda guy myself so I'm looking forward to testing next week.
Pete[^] is in lock-down mode, that time in any application where you just have to say "no more". He's introduced some very nice gesture and voice UI - voice control to set filters, shake to add a blur effect (very cute) and gestures such as swiping your entire hand right to left to smooth. I love it - very, very intuitive, almost natural. AC/DC and some Twisted Sister. Nice.
Eskil[^] obviously enjoyed GDC and his update this week is primarily about the details behind head tracking.
Overall the contestants seem to be ready. There's been a lot of collaboration and sharing of ideas and code. It's a contest, but they're all in it together and definitely enjoying themselves.
As to us judges? There isn't going to be a lot of enjoyment in the judging. There's some quality work here and it will not be easy.
Week 5 and we're starting to see some rounding out of the finished creations. For us judges it's also the week we start getting the hardware to test, and my Lenovo Yoga is in my hot little hands, getting belted around and abused, as happens to all my toys. It's a very, very solid, though uninspiring, unit. It is a Lenovo, after all, but what it does it does well. Great screen, lovely tactile feel on the keyboard, excellent battery life, but boring as bat-poo. It's the Toyota Camry of laptops - solid, reliable, no-nonsense, without offending anyone, but you're not going to scare anyone with it.
I do, however, want to slap the person responsible for the trackpad. It's awful.
Danny at Sixense[^] has shown his hand-puppet wolf wandering around a 3D backdrop. In my mind they've completed their task and the rest is polish. Using only a camera and an Ultrabook you can buy off the shelf, they've created a method of interacting with and controlling software using complex gestures. Sure, we've had this on the Kinect for years, but this is new to laptops and beats some other gesture based controls[^] that the media seems to be going nuts over lately. Nice one.
Lee[^], too, is at the polish stage and has some words of wisdom about voice recognition: it doesn't work all that well but be a little clever and it'll work just fine given some context. This is the story of every developer's life, I think.
Soma[^] is deep into the task of rethinking their UI. They've tried the Minority Report style UI but it really is a little tiring and, well, unemotional. They continue a theme on the performance issues they have faced, specifically voice control and speed of recognition. There's a reason Siri needs to be connected to a server to do voice recognition: it's a heavy workload. So they are getting there, but we're now seeing the compromises and trade-offs coming into play.
Infrared5[^] get an automatic 2-point bonus for including two references to AC/DC. They have implemented a face tracking solution by handling perspective correction and depth analysis themselves, in C++, using actual mathematics. Bonus 5 points right there. They are tackling the immediate problems at hand with crafted solutions, and focussing on perceptual computing rather than using perceptual computing as a bit of gravy.
Pete[^] has posted a video of his app's progress and I need to ask him one small favour: show us you in the video or, more specifically, show the gestures you're using to control the app. He's also struggling through the Dark Forest Of Feature Trade-offs and is feeling that his app is becoming less PC-focussed and more of a touch app with gestures.
This is not a bad thing at all. Samsung have implemented gesture controls not to save wear and tear on fingertips, but because sometimes you can't touch-swipe. If you're wearing gloves (medical, working outside, it's cold, etc.) or have dirty hands (cooking, your 2-year-old, you're a messy eater, etc.) then touch won't cut it, but perceptual computing provides that small push that gets over that barrier to interaction. You can again use your computer in a manner very similar to touch, without touching. Not a big thing, and something that you would quickly forget you were doing. And this, in my mind, is the perfect interface: you forget that you're doing it.
I think you're on the right track, Pete.
Eskil[^] has articulated this perfectly: "The goal of any user interface is to disappear" and he's not in the Dark Forest Of Feature Trade-offs, he's in the Swamp of Broken Promises. For him the SDK isn't there yet, not by a long shot. So he's doing what any programmer does and is rewriting chunks. I'm looking forward to seeing how he ties all of this up at the end.
Last but not least, Simian Squared[^] have also reached the epiphany about what gestures promise: a lazy interface that extends gestures. Perceptual computing promises way, WAY more than this, but at its core it also offers very simple things that can be very powerful and helpful. There are no wads of virtual clay splattering the walls of their pottery room - in fact it looks remarkably clean - so I'm taking that as a sign of excellent progress.
Thanks for the thoughts Chris. You're right - the next video will feature me swiping round to demonstrate. One small thing - there's not one Australian band this week, there are three. Bonus points from me for anyone other than CG who's heard all three of them.
We're at week 4 of the Ultimate Coder Challenge[^] and at this point we're starting to see the light at the end of the tunnel. For some that's a scary sight.
Sixense[^] are well on their way to creating a virtual sock-puppet, but one that doesn't have the usual awful connotations of an online sock-puppet. This one is, actually, a sock puppet. To be brutally frank, what they have also done is shone a light on some of the limitations inherent in the depth camera's abilities, which have forced them to use slightly nonstandard sock puppet hand gestures (see IEEE Std 4802.01 - Sock Puppet Hand Control Standard 1104). It would be a win if they could get past this limitation.
Lee[^] has gone ahead and written Yet Another Video Conferencing system, 'cause, y'know, he has nothing better to do. I know - I just know - that he's hacked his DVR at home to Just Work Better, and his microwave is probably cowering behind the fridge screaming "Make it go away!". He has, however, produced a prototype of a conference system with his 3D avatar injected. I can't help but wonder why he didn't test his virtual teleportation on an assistant[^] first.
Simian[^] focussed mostly on their demo environment. A 3D Japanese themed pottery wheel. Probably best just to think about that for a while.
Pete[^] has switched from Aussie pub rock to Canadian Top 40 with a little Creedence thrown in. I'm of two minds about this. He's also apologising for providing detailed coding explanations, and I'm sorry Pete but you just lost points on this. I want details. I want code. This is a coding challenge by coders for a large coding audience braying for blood. Well, a large coding audience, at least.
Pete's also hit the inevitable Voice Control Brick Wall. I'm guessing, being on the wrong end of voice control far too often, that it could be an accent issue, so I'd be interested to hear what sort of success those with a (reasonably neutral) US accent have had. Accent, to me, is the 21st-century equivalent of the date format. What, exactly, does 6/7/2013 represent without locale context? The same happens with voice. So if Pete can't talk to his app, he's going to have his app talk to him. Just please include a Mute button.
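The date analogy is easy to demonstrate: the very same string parses to two different dates depending on which locale convention you assume. A small sketch:

```python
from datetime import datetime

raw = "6/7/2013"
us = datetime.strptime(raw, "%m/%d/%Y")    # US reading: June 7th
intl = datetime.strptime(raw, "%d/%m/%Y")  # most-of-the-world reading: 6th of July
print(us.strftime("%B %d"), "vs", intl.strftime("%B %d"))
```

Neither parse is "wrong" in isolation; only the missing locale context decides. Accents hand a speech recogniser exactly the same problem.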
Eskil[^] doesn't provide much in the way of concrete progress on the framework he's building, but does provide a walkthrough of his non-OO approach to creating and rendering UI elements. I'll be honest and say I'm not a fan of his approach. OO development helps separate who is responsible for what, and while that may not result in the tersest of code, it does promote maintainability.
Code Monkeys[^] have touched upon something that you can be sure that the likes of Apple, Google and the Kinect team at Microsoft all know: gesture based UIs are tiring. You know why Tom Cruise's character in Minority Report was so ripped? It's because he was doing 12hr days of shoulder and ab work while using those gesture gloves of his. 12 hours? Try 4 minutes.
Infrared5[^] have revealed another little worm in the Apple: gaze tracking has not been implemented in the PC SDK. It will be added later. So what did the guys do? They slammed their foot on the clutch, dropped from C# down to C++, dropped the clutch and left billowing smoke in their wake. This is exactly what I want to see from a contestant: a dammit-I'll-do-it-myself approach to dealing with issues. Now if only they had a little AC/DC playing in the background...
Thanks for the update Chris, and I'll keep the code coming - although I think you'll find that I was apologising that non-coders aren't as awesome as we are. My antipodean "rock" last week was Men At Work - damn, but it was real earworm music.
Week 3 and we're halfway through the challenge. Hump week, so to speak. I missed the Google hangout due to jetlag and general mayhem.
Pete [^] is motoring along and getting the gesture control working. This seems an odd statement to write, but a timely one: Pete is writing an application you control through waving your hands and there's no magic, no secret incantations. He's using the same tools we use day in and day out and that, to me, is amazing. There are also no fires or explosions, very little swearing, no tantrums or hissy fits, just constant, solid, back breaking slogging through the code and getting it done. By himself. Much respect.
Infrared5[^] are bucking a trend of the previous contest with crazy statements like "We were pleased to see that all the tasks we set for ourselves wasn’t too big of a bite to take". Regardless, they too are moving on rapidly and have a demo of their Kiwi Catapult Revenge game available. The biggest challenge for them? Eye tracking, it seems. I'm praying they crack this because I have my own nefarious needs for decent and cheap eye tracking.
Eskil has also released a beta version of his Betray game using his (I'm assuming) framework. His post focusses mainly on UI and some exquisite rendering, which screams, to me, too much spare time. If he has the luxury to make the UI as stunning as his examples then he's hiding something up his sleeve. Interesting.
Code-Monkeys[^] are focussing on input control and, to that extent, focussing on simplification. And their demo code is simple. Crazy simple. Work continues.
Simian Squared[^] have threatened to play Unchained Melody[^] which is an automatic failure in my book. Careful lads. Their clay modeller is progressing and while they mention piles of misshapen virtual clay there are no pics. Show us the carnage.
The Sixense guys[^] have their puppets moving! This is wicked. They are moving on to actual story telling next. Serious progress.
Lee[^] continues to bravely and foolishly attempt to change one of the biggest online industries single handedly. Or with two hands, depending. He's not only pushing perceptual computing to the limit but has decided to rewrite the conferencing network code too. He's also showing some vampire tendencies with the rising sun causing him serious damage. I worry, Lee. I really do.
Overall the contestants are plowing ahead and it's amazing to see the progress made. This offers the chance for some really polished presentations at the end, and the judging is going to be soul-searching.
Thanks for that Chris. Have you watched the video from Nicole, Sascha and Steve yet? Worth viewing if you haven't - especially around the 5 minute mark. I'm sorry to say, but I'm going to keep the verbose blog posts coming.
Week 2 in the Ultimate Coder challenge sees the teams settling down to the cold harsh light of reality mixed in with a wonderful dose of reckless abandon.
Sixense Studios[^] had the wind knocked out of them a little after watching Media Molecule[^] demo a PS4 app that mimics their idea. However, they have since realised that their six weeks of work can still beat the two years' work, and who knows how many billions, invested by Media Molecule, because while Media Molecule's demo is wicked cool, it's based on pre-recorded movements and not the full physics-based hand puppets they are building.
Lee[^] is continuing his work on transporting you, via the depth perception camera, into a virtual world. I really hope he's watched this movie[^] before he goes too far down that rabbit hole. Watch his video to get a little weirded out by it all.
The guys at Code-Monkeys[^] have totally nailed another issue with the PS4 demo of Media Molecule. The PS4 demo relied on using a wand, and this is akin to using a stylus on a touchscreen. While they demoed an initial cut at their "looks can kill" eye tracking shooter, I get the impression these guys are here more to lay as many stepping stones as possible for those who come next to reach the lofty goal of the ultimate UI, rather than assuming they can create it by themselves.
Simian Squared[^] raise another interesting point that follows on from Code-Monkeys' points: the advent of the touchscreen interface has heralded a new era in user experience, and programming is now, more than ever, an art. The programming tools available to us today make the task of development more and more mechanised. Drag and drop, ORMs, do-everything frameworks and convention over configuration mean writing an app is easier than ever. However, writing an app that is a pleasure to use is now harder than ever because we, as users, no longer accept substandard interfaces or a poor experience. Simian Squared are producing a virtual potter's wheel. More than simply creating a system that responds to the position of a few digits, they want to transport you to a new world. They sum up both the challenge and the potential in their application: "a great concept artist will sometimes bend the rules of perspective or light and shadow for impact". The new interfaces available to us today make programming, more than ever, an art.
Eskil[^] continues on his quest to write a hardware abstraction API that's pluggable. Another step along the path to better UIs and (potentially) better hardware. As he writes: it's hard to get someone to buy your hardware if there are no applications that run on it. Abstracting out the API for hardware should mean that writing apps for new hardware is a snap.
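Eskil's actual API isn't shown in his posts, but the shape of a pluggable hardware abstraction layer can be sketched. Everything below is illustrative: the class and method names are my own invention, not his API.

```python
# A minimal sketch of a pluggable hardware-abstraction layer.
# All names here are illustrative; this is not Eskil's actual API.

class InputDevice:
    """Common interface every hardware plugin must implement."""
    def poll(self):
        """Return a list of (event_type, value) tuples."""
        raise NotImplementedError

class GestureCamera(InputDevice):
    """Hypothetical plugin wrapping a depth/gesture camera."""
    def poll(self):
        return [("gesture", "swipe_left")]

class VoiceInput(InputDevice):
    """Hypothetical plugin wrapping a speech recogniser."""
    def poll(self):
        return [("voice", "open file")]

class HardwareHub:
    """Apps talk only to the hub; new hardware just registers a plugin,
    so applications run unchanged when a new device appears."""
    def __init__(self):
        self._devices = []

    def register(self, device):
        self._devices.append(device)

    def poll_all(self):
        events = []
        for device in self._devices:
            events.extend(device.poll())
        return events

hub = HardwareHub()
hub.register(GestureCamera())
hub.register(VoiceInput())
print(hub.poll_all())  # [('gesture', 'swipe_left'), ('voice', 'open file')]
```

The point of the abstraction is the last three lines: the application only ever calls `poll_all()`, so a brand-new device ships as a plugin rather than as a reason to rewrite every app.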
Infrared5[^] continue on their quest for an eye motion interface. Whereas Eskil had serious issues with his camera, these guys are waxing lyrical about how well it's performing for them. The joys of pre-production hardware. They also add to the idea that collaboration is the key to success in this challenge. I am getting a little worried at the lack of any actual attacks on anyone's jugular, but it's early days yet and the prize pool is, I'm sure, sufficient to get the red haze settling over the contestants.
Pete[^] is attacking his task methodically and systematically and with an eclectic mix of music. The Angels? Very nice. While others are focussing on the camera Pete's started with voice recognition. Sure, over 65% of human communication is non-verbal (depends on which study you refer to), but I'm not expecting Pete to include emotion detection (yet). Gesture and touch are great for items you can see or touch, but what about those things you can't see or touch? You can ask for something, and then once you have it you can manipulate it via gestures. Voice is important.
The challenge here is to showcase perceptual computing and this means to rethink how we interact with a system at a fundamental level. Sticking to familiar paradigms may make it easier for a person to approach a technology, but it doesn't help them take full advantage of a technology. It holds them back. Touchscreen interfaces never caught on until the hardware and user interface advanced sufficiently to make it intuitively natural to swipe and pinch. The hardware had to be fast and reactive enough that a gentle swipe would achieve a result, and just as importantly the UI presented to the user had to be obvious enough to encourage and respond to these gentle swipes. A stylus retards the use of a touch interface, and a wand retards the progression of a gesture based interface.
What the gesture and voice based UI looks like, and how this can be presented to the user in an obvious and natural manner, is what this challenge is about.
Last year saw the Ultimate Coder Challenge pit 6 teams against each other to create the Ultimate App for the Ultimate personal computer - the Ultrabook. The sadists at Intel are back at it with a new twist: create an application that shows off a convertible Ultrabook[^] and/or takes advantage of the Intel Perceptual Computing SDK 2013 Beta[^].
Let me say from the outset that I'm ignoring the "or" in the "and/or" above. The contestants must create an app that shows off the hardware and uses the perceptual computing SDK to have a chance. This means:
1. The application needs to take advantage of the Ultrabook's specific features such as the sensors, the touchscreen, always on/always connected, power management and/or graphics.
2. The application must make sense for a laptop form factor and a tablet form factor.
3. The application must make use of gesture controls, eye tracking, voice control, or anything else hidden in that magical SDK.
I'll add a fourth requirement:
4. The application must make sense as an Ultrabook application.
What I mean by this is that an existing application shoehorned onto an Ultrabook, with Ultrabook support tacked on in a way that doesn't harmonise with the original application, will not get my vote.
So, on to the challengers.
Sixence Studios[^] (I keep wanting to hand them a "p") are old hands at the perceptual computing stuff. They've demo'd at Intel keynotes and are developing a virtual puppet application. I will be interested to see how this works in the tablet form factor.
Lee Bamber[^] refuses to back down from a challenge, and this is the third contest I've had the honour of judging him in. His entry will be a virtual conference that will allow you to transport yourself into a 3D world. "ambitious to the point of foolishness" is what he writes. He's mad. I love it.
Simian Squared[^] will be creating a virtual potter's wheel complete with virtual clay. Please note that points will be deducted for any "Ghost" moments that appear in any videos demonstrating the application.
Code-Monkeys[^] continue the primate theme and will be taking their existing Stargate Gunship game and making it fully immersive. Gestures for firing, voice commands to control weaponry and gaze capture for targeting. Gaze targeting is something I feel is going to totally and utterly change the nature of video games and I'm very keen to see how this works. A shooter game that reacts as fast as you can look is going to get crazy. I can feel the headaches already.
Infrared5/Brass Monkey[^]. Again with the Monkeys. This feels weird. They will be creating a 3D FPS using head tracking, facial recognition and voice. This will be a little different in that the angle of your head will change the view on the screen to make it more immersive. Interesting idea, and their art looks killer.
Quel Solaar[^] has decided to make it simple and reinvent the entire PC interface. He will create a game, a data visualizer and a creative tool that will make use of his open source software layer in order to make it "easy for any developer to make use of the diverse hardware available to us". Any input (voice, gaze, gesture), any display (phones, tablets, laptops, workstations) and any hardware configuration. And I thought Lee was nuts.
Our very own Pete O'Hanlon[^] is taking the safe path and creating a voice and gesture enabled image editing application. This seems specifically an effort to show off the perceptual computing SDK rather than show off an application, and I like that. Further, he's using touch as an input, thus being inclusive of the traditional Ultrabook features rather than just plowing on with the sexy, younger, more nubile features of the PerC SDK.
Each week I'll post an update of how the teams are progressing. May the best team win.
Thank you for pointing out the reality check on the application needing to work on an Ultrabook to get your vote. I wish more competitions were forthcoming on what the real judging criteria are - I've wasted time on competitions that weren't. I was going to enter the Perceptual Computing contest but I don't have an Ultrabook. You just saved me a ton of time.
In other words he is only talking about the Perceptual SDK in regards to the Ultrabook challenge, but the inverse, using an Ultrabook with the Perceptual challenge, is not true?
I see the distinction now that you point it out. My worry would still be, though, that as a judge he'd be significantly biased in favor of an Ultrabook-compatible entry in the Perceptual challenge, seeing as he is admitting that bias. Again, I see from your comment he does not specifically state that for the Perceptual challenge, only for the Ultrabook challenge, but I'd like to hear from Chris himself that he wouldn't favor an Ultrabook entry.
I'm not being pedantic about this. I spent a great deal of time on an entry for another challenge only to find out afterwards that it never had a chance of winning, due to the judge's bias towards a particular class of app. Several judges even told me in an unsolicited manner how much they liked my entry, but from the finalists chosen it became obvious that an app like mine could not win, despite the fact it was in a vertical that was even proposed by one of the judges for the contest in a forum post for suggested entries.
Unfortunately it was one of those releases where, if no one noticed anything different then it was a stunning success.
Under the hood we're working to expand our notion of what a member's account means. For most people it means nothing, but for those who write articles or post messages or who want to actively participate - and this is a lot - then your account is your spot, your area, your personality.
The question we've been asking ourselves ever since we launched RootAdmin[^] is: do we have separate accounts for separate sites, or do we combine them? Initially the answer was a clear "separate accounts", since what someone says about themselves on one site may not be relevant for another site, or conversely: someone may choose not to say something on one site that they would say about themselves on another.
However, counter arguments were that you are who you are, and biographies don't always have to be about the site. They should be about you. Your picture is your picture, and your display name should be unique across sites, not just on one site. Otherwise your persona may be spoofed on another site without your knowledge.
Further, we've now added CodeProject.TV (currently in Beta) and we very much want what someone does on CodeProject.TV to appear on CodeProject, and for their reputation and expertise on CodeProject to be reflected on CodeProject.TV.
So we're steadily moving towards having your Account live in the network of sites, not within a site itself. Each site will continue to have a site specific profile that talks about the number of posts or articles you've posted, but you will be you across all sites.
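The split described above - one identity shared across the network, with activity tracked per site - can be sketched as a data model. This is purely illustrative; the class names and fields are my assumptions, not CodeProject's actual schema.

```python
# Illustrative sketch of one network-wide account holding
# per-site profiles. Not CodeProject's actual data model.
from dataclasses import dataclass, field

@dataclass
class SiteProfile:
    """Site-specific activity: stays local to one site."""
    articles: int = 0
    posts: int = 0
    reputation: int = 0

@dataclass
class Account:
    """Network-wide identity: one per member, shared by all sites."""
    display_name: str                 # unique across the whole network
    biography: str = ""
    avatar_url: str = ""
    profiles: dict = field(default_factory=dict)  # site name -> SiteProfile

member = Account(display_name="CoderOne")
member.profiles["codeproject.com"] = SiteProfile(articles=3, reputation=120)
member.profiles["codeproject.tv"] = SiteProfile(posts=5)

# The same identity is visible on every site...
assert member.display_name == "CoderOne"
# ...while activity remains site-specific.
assert member.profiles["codeproject.com"].articles == 3
```

The design choice is in where each field lives: anything that identifies *you* (name, picture, biography) sits on the `Account`, while anything that counts what you *did* sits on the per-site `SiteProfile`, so reputation can be surfaced across sites without merging the activity records themselves.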
In working towards this we've embarked on a plan to throw away large chunks of code. Recklessly, joyously, we cut the code loose and bind the ends up with electrical tape, like any good Engineer. What we'll end up with is a CodeProject made of services, not of modules and DLLs. A CodeProject whose parts can be mixed and matched and used in many places for many different things by many different systems. We started this process back in October (yes, the time that we temporarily disabled voting in the forums) and today's code drop represents the next major step in that migration.
We turned off voting a few weeks ago because of a load issue. Things have been a little hectic so fixing the issue has taken some time, but it also allowed us to see how the community fared without voting.
Quite nicely, as it turns out.
There are, however, two exceptions to this.
1. It drove me crazy that I could not upvote someone in The Lounge[^].
2. It drove me crazy that there was no way to warn people away from poor discussions in the discussion forums other than via the hammer called the reporting flag.
In doing this I had the opportunity to rework things a little, so I added a few options to the voting, two of which are to only allow up/down voting (we had this, but in a different form) and to only allow up-voting.
We'll see how it goes and continue to season to taste.
The Code Project | Co-founder
Microsoft C++ MVP