We skipped VS2012 / .NET 4.5 and jumped straight over to VS2013 / .NET 4.5.1 because, y'know, it's far more exciting running your production servers on beta software rather than on the boring "tested" stuff.
We're not using any of the fun stuff explicitly, yet, but Matthew has already been eyeing off a bunch of code that could do with some async action. The thing that's most immediate to me is the multi-core JIT and startup time; all cores are actually getting used, CPU usage is up where it should be, and the site spools up much, much nicer than it ever has. Simply getting the advantages of the framework improvements is (almost) enough for me.
The other obvious timesaver is build time: much, much faster than VS2010, even when bogged down with all the other stuff I have open. I'm developing, testing, and running the site on my MacBook Air on Win7. There's the VS IDE, SQL Server Management Studio, IIS running the actual site, Outlook groaning under the weight of a 23GB PST, various Word docs and spreadsheets, 6 remote desktop windows and half a dozen browser windows, and it's all humming along nicely.
Although sometimes (especially during compile time) the humming sounds suspiciously like the Mac's fan is about to attempt takeoff. It gets disturbingly loud.
I'm still not taken with the new VS look - a little harsh, a little lacking in warmth - but it's way faster and, so far, more stable than my old creaking install of VS2010.
As to my experiment with moving my developer life onto a tiny, ultralight laptop: the jury's still out. 7/10 so far.
I got sick of typing URLs for members and so, well, I coded.
To provide a link to another member just use the tried and true @username syntax, where the username is the one generated from their name (or manually modified) in the form first-last. Everyone's profile shows their username just under the profile image.
So if I want to shout out to a ray of sunshine I can just go @Michael-Martin (no link - just type that literally) and when the message is saved the link is generated.
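The @username-to-link step can be sketched with a simple regex substitution. This is a hypothetical illustration, not the actual CodeProject code: the `/Members/` profile URL scheme and the `linkify_mentions` helper are assumptions.

```python
import re

# Matches @first-last style mentions: a leading letter, then word chars,
# with at least one hyphen separating the name parts.
MENTION = re.compile(r"@([A-Za-z][\w-]*-[\w-]+)")

def linkify_mentions(text: str) -> str:
    """Replace @first-last mentions with profile links when a message is saved."""
    def repl(m: re.Match) -> str:
        username = m.group(1)
        # Profile URL format is an assumption for illustration only.
        return f'<a href="/Members/{username}">{username}</a>'
    return MENTION.sub(repl, text)
```

Running the substitution once at save time (rather than on every page view) keeps rendering cheap, which is presumably why the link is generated "when the message is saved".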
Obviously this is opening a can of worms and I know the next two requests. Yes, soon.
The mobile version of CodeProject has been updated a little to help those with fat fingers (ie me). It's by no means perfect, but we've aimed for a simplified UI, easy to read fonts and easy to touch buttons while still maintaining as much browsing and reading functionality as we can.
We have, however, limited some actions (eg voting) for a subsequent rev. We'd rather focus on providing a nice UI to read articles than worry that fat fingers (looking in the mirror again) will accidentally hit the down (or up) vote button while trying to read the next article.
The Essential Guide to Mobile App Testing[^] should be required reading for devs and those who purport to manage devs who are involved in mobile app development. It's a rare, rare day that I promote a specific whitepaper but as part of our new Research Library[^] we've been working incredibly hard to find companies that have spent the time to create research material that helps you make decisions instead of simply showing you PowerPoint slides of their product.
In the spirit of avoiding real work I've been playing around with an idea that is ridiculously simple but may provide a little entertainment for our members: stylable member profiles.
Go to your settings page[^] and hit the Customisation tab and you'll see a text area for entering styles that augment or override our basic styles.
This is fraught with peril on so many levels. Firstly, you might break our page. Secondly, when we update our styles or page layout, we may break your styling. Thirdly, things could just get messy. Really messy.
But that's what life's all about, isn't it? So enjoy.
Secondly, we've introduced a new article type called "Reference". This will be fleshed out a little more soon, but for now we wanted to provide a place for things that I've wanted to post for eons: tables and reference sheets. What's the ASCII value of X? What's the HTML entity for Y? Stuff like that. Let's start simple and work our way up.
This morning I had an experience that provided such a classic picture of the entire IT industry for me right now:
I went into the Microsoft store and was looking at an Acer Aspire S7. It looked nice and the blurb said "128GB SSD". So I took a peek at the computer's properties and saw "57.9GB free of 79.8GB" on drive C - the only drive visible.
I asked the sales guy where the 128 - 80 = 48GB was. He told me the missing space was used by the OS, which I politely disagreed with because the OS was currently on drive C and was using about 22GB of space. He then tells me it's the demo software they have installed that's using up the space (I again disagree), and then tells me it's the recovery partition that's using the space, so I ask him to show me this 48GB recovery partition. He hits Win-C, the (HD) screen totally fills with Control Panel applets and he types in "Disk management" but nothing appears. He scans the list of applets briefly then gives up, then right-swipes to get the settings but again gives up, and after fumbling around finds a list of partitions, but is unable to get me the size of any of them. He then turns to me and says "this is really outside of a sales thing - I need to get you my tech guy".
He clearly didn't know what he was doing, but he had a good enough clue to be able to navigate around better than most people I've seen who have used Win8. Yet he couldn't answer a simple question relating to what the tag says and what's actually on sale, and said it was a technical, not a sales question. I left the store feeling the same way you feel when you leave a mechanic who tells you you need to get the air in your tyres exchanged at the beginning and end of Winter and that'll be $149.99, please.
I felt lost when he was going all over the place trying to answer the question (and I've used Win8 an awful lot) and then I felt like my question was unimportant to them, that I shouldn't be asking it, and that the answers I got were made up (which they were).
It felt complicated, it felt confusing, and it was impossible to make a choice on laptops because there were no answers, and the answers I did get I couldn't trust anyway.
I wander 3 doors down to the Apple store, look at the properties of a 1TB iMac and ask to see the actual size of the HDD. The sales dude does a single right-click, Get info and shows me that of 999.4GB, there is 978.7GB free. We're done.
There's 1 keyboard layout. You can have light (11" or 13") and medium powered with OK screens, or heavier, thicker, more powerful with retina displays (13" or 15"). It's easy - except that I want a retina display on an Air. Not because any other laptop I've ever seen has a retina display: only because the MacBook Pros have a retina display. I don't actually, in isolation, want a retina display, I just don't want to feel like I'm missing out on something.
When I look at tablets I see the iPad, Android or Surface devices and they are all fairly simple to use. Phones, be it Android, Win Phone 8, iPhone or BlackBerry, are all simple to use. They are in fact simpler to use than ever, with only feature phones being simpler (but many of them were tear-the-hair-out annoying).
Yet laptops and PCs seem to have increased in complexity, choice and confusion, making the buying decision complicated and intimidating. Windows 8 has made actually using a laptop confusing and complicated. Put these together and you have a sales nightmare: you don't know which one to buy and while trying to decide you don't know how to actually use the thing you think you need to buy.
And then you wander over to Apple and you think "My God this is so simple" and you have limited choice, and you feel you have a chance at making a decision.
Previously, however, the decision would come down to "Do I pay a 30%-50% premium on essentially the same hardware just to get an Apple". For me this has always been game over - I'm simply not willing to pay that much. Yet today I'm looking at a complicated Windows 8 machine that was more expensive than the simple Apple machine.
Buying a PC or laptop/Ultrabook is no longer easy or as cheap as it was a year or so ago. Win8 is (to me anyway) a technically better and more secure operating system than MacOS, ruined by an awful UI. Apple has a still-maturing OS that is starting to acknowledge that security is important but still crashes, still locks up and still can't seem to work out how to handle network calls on a background thread. But it's simple, the machines will never offend anyone with their looks, you get what you pay for, and they are now in the same price bracket (or below) many of the Ultrabooks.
I can understand why PC sales have fallen, and for me it's not just tablets. What I don't understand is why Apple hasn't gone for the jugular like they did on Windows Vista.
This is the final post in the series on the Ultimate Coder Challenge: Going Perceptual. The applications are in and now it's do or die. The biggest challenge to our contestants at this point isn't the code, or the SDK, or their idea. It's our hardware. It's the installers. It's our ability to understand what they were trying to do and that has proven to be difficult in some cases.
Perceptual computing is a form of interacting with a computer in a more natural human way. You speak or you gesture, or you may even want to just swipe. It's not about the keyboard and mouse. The biggest problem is that what we think makes a good method of interaction (cue Minority Report) is actually a terrible way to interact. It's tiring, there's no feedback, and gestures are in 3D, not 2D, and not constrained to a box or button, and so can be ambiguous. Further, a touch gesture has a definite start and stop. It starts when you touch and ends when you stop touching. When does a gesture start and stop?
Further, and as has been mentioned elsewhere, there's no standard. We all know how to pinch-zoom, or swipe to scroll, or even push a button. How do you push a button in 3D space? How do you pinch to zoom, without the initial contraction of the fingers being misinterpreted as a shrink action first?
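One common answer to the "when does a gesture start and stop?" problem is hysteresis: only begin a gesture when tracking confidence climbs above a high threshold, and only end it when it falls below a lower one, so jitter near a single threshold doesn't flicker the gesture on and off. This is a minimal sketch of that idea; the thresholds and the `segment_gesture` helper are illustrative assumptions, not anything from the Perceptual SDK.

```python
def segment_gesture(confidences, start_thresh=0.8, stop_thresh=0.4):
    """Yield True while a gesture is considered 'active' for each sample.

    Uses two thresholds (hysteresis) so noisy confidence values near a
    single cutoff don't rapidly toggle the gesture on and off.
    """
    active = False
    for c in confidences:
        if not active and c >= start_thresh:
            active = True          # confident enough: gesture starts
        elif active and c < stop_thresh:
            active = False         # dropped well below: gesture ends
        yield active
```

Note how a mid-range sample (say 0.6) keeps an active gesture alive but never starts one - that asymmetry is the whole point.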
So on to the challengers:
Lee's work I've written about extensively, usually while ruefully shaking my head at the madman. He shamelessly takes on way more than he can chew and after endless dead ends, delays, setbacks and possibly a broken keyboard or two he comes out with something a little special. His virtual conference is, for me, an excellent example of what's possible with perceptual computing: a UI that revolves around you. You're in a virtual 3D environment talking to others and as you turn, the viewport turns with you.
Except there's one huge problem with this approach: in a video conference you never turn. If you turn then you're no longer looking at the screen in front of you. Further, as you turn, the mass-tracking (not head tracking) turns the virtual camera in the direction that you're turning, meaning the image on the screen pans, meaning, well, that you don't actually need to turn to view it. Kind of like an anti-Catch-22.
It's very cool though. Very cool.
Sixense's puppetry demo is complete. It's the ultimate kids' toy and allows two people (or one talented person) to host and record a virtual puppet show. It works, but I can't help but feel that so much effort went into things like scenes and a story that the creators forgot what puppet shows are about. Dialogue. Well, and violence, for those who fondly remember the Punch and Judy days, but mostly dialogue and often costumes. So why not ditch the fancy scenes and instead allow quick changes of clothes? Or have amusing sound effects when one puppet bonks another puppet on the head? That would focus the players on thinking about their puppet rather than trying to control a puppet that keeps flying all over the countryside.
It's an excellent example of 3D free form gesture based interaction with a computer, and for that they score highly. Most importantly their user feedback on interpreting hand gestures is by far the best of any contestant.
Code Monkeys' Stargate application shows promise in (for me) the ultimate goal of a head tracking video game. Unfortunately the reality never lived up to the dream, and the sheer effort involved in trying to control the targeting quickly took the luster away. I did get it to work, but the headache afterwards was not worth it.
Infrared5 followed a similar path to Code Monkeys with their Kiwi challenge, however they introduced an extra variable by having the control of the application be done via a smartphone application. The connection between the game and the smartphone was seamless and slick. I could not, however, control anything but the fire button from my iPhone, which made the game unplayable for me. Restarting didn't solve the issue. Further, I often ended up with a black landscape in my viewport that no amount of yelling, tilting, clicking or swiping would get me out of, and closing the app via alt-tab (there's no close button I could find) was near impossible because no sooner had you popped out of the app than you were thrust back into it, black soulless void and all.
Pete has created an image processing application that uses a series of gestures to activate filters. His biggest problem is there is no defined, standard set of gestures one can call on to immediately dive into his app. There's no help button, so you're left guessing at gestures. Thumb up and down, swiping up and down, and in my case swearing like a sailor. An excellent attempt at making gesture input a natural part of the interaction with the application, but let down, I think, by the maturity of the platform.
Eskil has demonstrated an abstraction layer that allows developers to take advantage of perceptual computing without needing to write the boilerplate code. In fact, without needing to know anything about the nuts and bolts at all. This is a tremendous achievement given the short time available.
A short note on the perceptual computing camera: I'm typing this review on my Lenovo Yoga with the camera perched on the top of the screen. The camera is heavy, as I've mentioned before, but it's only now, after hours of clenching my fist, twiddling my fingers, bobbing my head frantically and yelling 'Engage' in various accents to try and interact with the game, that I've realised I'm getting tired of always hiking the screen back up to the vertical position. The camera makes the lid sag due to the weight, and when it's not sagging it's trying to tip the laptop base over apex. It needs to be lighter and it needs to be way smaller.
Another general comment on the use of the camera as an input device is that onscreen feedback is critical. If you don't get feedback on what the computer thinks you're doing you go nowhere. You can't debug. I cannot overstate how important feedback is.
Also, I need to comment on the Lenovo itself. I love the feel of the keyboard and most especially the palm rest. So very comfortable. But please, to anyone who is thinking of manufacturing a laptop keyboard, DO NOT reduce the width of the right shift key and squeeze in the up arrow in the space created. It means I'm constantly hitting the up arrow. Constantly. It's doing my head in. Apart from that the screen is crisp, the battery life good, and the flip-back keyboard weirdly useful. Propping this thing on a table with the keyboard folded back to watch a movie or flick through the news is brilliant.
As to perceptual computing, my overwhelming feeling is "it's coming". But it's not here yet. We are trying to make a computer be like the real world. You turn or look or speak or grab at something that isn't there in the hope that the computer will mimic or replicate your intent virtually. This, to me, is akin to skeuomorphism: changing something to be like something else, like Apple making its calendar application leather bound, or having ebook readers show an animated page turn. A computer is not the real world, and it doesn't have the limitations of the real world (so to speak), so why mimic it?
I think the future of perceptual computing will most likely be subtle. The computer may recognise you or your voice, and will recognise when you are in front of the computer, when you're looking at it, and when you're not. The end of screen savers, really, since it would just go to sleep. Gestures such as brushing away an app to close it, or flicking it to move it to another screen would be intuitive, and voice recognition would allow utility commands such as searching or bookmarking to be carried out without needing to take your hands off the keyboard or, indeed, outside of the current application.
Gaze tracking is another incredibly important, yet not currently functioning (at least on the hardware I have in front of me) feature that has a myriad of uses. My immediate use would be for eye tracking in UI testing, but even simpler could be things like auto-scrolling, or even auto-hiding of elements when they are not being looked at directly. The eye does, however, jump around an awful lot, so the smoothing algorithm will need to be heavily weighted.
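The kind of heavily weighted smoothing suggested above can be sketched with an exponential moving average over the gaze samples. The weighting constant here is illustrative only; real gaze data would need tuning, and probably saccade detection layered on top so deliberate jumps aren't smoothed away.

```python
def smooth_gaze(points, alpha=0.1):
    """Smooth a stream of (x, y) gaze samples with an exponential moving average.

    A low alpha weights history heavily, damping the eye's constant jitter;
    a high alpha tracks the raw samples more closely.
    """
    smoothed = []
    sx, sy = points[0]            # seed the average with the first sample
    for x, y in points:
        sx = alpha * x + (1 - alpha) * sx
        sy = alpha * y + (1 - alpha) * sy
        smoothed.append((sx, sy))
    return smoothed
```

The trade-off is the usual one: the heavier the weighting, the more the smoothed gaze point lags behind a genuine glance across the screen.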
The contestants achieved some incredible feats of patience, innovation, creativity and problem solving. I take my hat off to them for their perseverance and outright foolishness in taking on a challenge that many of them were so unprepared for, yet so willing to have a go and rise to the occasion. Rise they did, so well done, guys. Get some rest and have a beer. You've earned it.
A thoughtful and interesting conclusion to the series, Chris. The one thing I really felt I was missing was having a decent graphic artist so that I could do some decent on-screen tutorials. This is an area that really did need someone with more artistic ability than me.
I was brought up to respect my elders. I don't respect many people nowadays.
This week is the final week of blogs by the challengers. I won't go through individual entries since pretty much all of them are at the wrap up / polish stage.
The overwhelming feeling you get when reading the blogs is one of a challenge accepted and, almost, tamed. This is uncharted territory for the contestants and that territory is not paved smoothly: the SDK is in beta and the capabilities of the hardware are still limited. There are a lot of things that would be great if they worked, but they don't. Sixense wanted to have their Big Bad Wolf puppet blow the little piggy puppet's house down by blowing on the mic. "Blowing on a mic" is not a recognised word so it can't happen. Code-Monkeys (and many others) wanted head tracking - or even gaze tracking - but the hardware simply isn't up to it at the moment. Infrared5 simply want more grunt from the Lenovo.
It's close. Really, really close, and while the contestants were not always able to achieve their first order approximation of what they wanted, they have done exactly what good developers do: focus on the outcome they want, then work backwards. Instead of finger tracking you use thumb tracking, instead of eye tracking you use head tracking, instead of head tracking you use body mass tracking. Instead of tracking everything, just do your tracking work on the object (eg hand) you want to track and ignore all other input data. Speed improvements came quickly, as did a usable (but maybe not perfect) solution.
The point is computing power is always increasing, the SDK will only improve, the hardware will become more refined and more responsive (and offload much of the software based processing) and we will get there. Quickly.
We get the final (final) versions of the apps soon and it will be then that we, the judges, have to dive in and ask ourselves two basic questions:
1. What is perceptual computing?
2. Who knocked it out of the park?
Thanks for all your encouragement over the last few weeks, Chris. Reading the judges' comments has been a real highlight over the weeks and I would have to say that last week's was one of the funniest posts I've ever read, so kudos for that.
With the competition, I tried to stay with what was in the SDK, and I stand in awe of what the other competitors produced. As for point 2, the answer (to me) is easy - Lee. Clean out the park, and 3 counties of clearance.
Week 6 and we're almost done. GDC[^] has been and gone and unfortunately I was not able to attend due to a small matter of our CodeProject.TV[^] launch (where's the blink tag when you need it?). While missing GDC itself almost brought a tear to my eye, the knowledge that I was missing out on the beers Pete and Chip have threatened me with made it particularly painful.
I received my Interactive Gesture Camera from Intel last week and have hooked it up. Dual cameras for depth perception, dual microphones for voice recognition, and an SDK that ties it together.
And it's heavy. Really weirdly heavy for such a small camera. This isn't a bad thing though, because it means when it's sitting on the top of your monitor it's very stable, and the picture quality compared to my old Logitech is much better, with far fewer in/out refocussing issues than I had with my old webcam. It is kind of weird having it sitting there, staring at me with those two dead eyes. Evaluating me. Scanning me in the infrared. Knowing where I am, where I'm looking, what I'm saying. Intel itself does absolutely nothing to make me feel more comfortable with their disclaimer stating:
The Camera may not be used in any “mission critical application” in which the failure of the Camera could result, directly or indirectly, in personal injury or death
Injury? Death? This thing is going to sleep in the garage from now on.
Anyway, to the challengers, or those that have not been taken hostage, injured or possibly killed by their cameras. I say this because 2 of the challengers have not submitted blog postings and I've not heard anything from them. Their muffled screams are probably still echoing against the backdrop of a small, blinking green light coming from the tiny black dense camera on their bloodstained laptop.
It sleeps outside tonight. I don't want it talking to my car, whispering to it. Subverting it.
Lee[^] enjoyed GDC and ensured Intel got their hotel bill's worth by spending an inordinate amount of time in his room cranking code. Lee's understandably at the point of polishing, and at the point of taking stock of the reality of gestures. They all sound great, but how do you provide feedback for a gesture driven UI? How do you let the user know the difference between a gesture that does something, a gesture that does nothing, and a gesture that was not understood? And how do you educate your users on gestures? He's basically done, so on to testing.
Sixense[^] demo'd their puppet show at GDC and they too are at the point of polishing and introducing a little realism. Not much more to say on them.
Code-Monkeys[^] are getting desperate and are quoting Gene Simmons and resorting to tongue tracking. I'm not going there. I'll just quote the man himself:
Life is too short to have anything but delusional notions about yourself.
Infrared5[^] used GDC as their own private beta testing ground which is perfect. There must have been something in the beer at GDC though because they've left the reservation and are now focussing on foot tracking. I'm a bare-feet kinda guy myself so I'm looking forward to testing next week.
Pete[^] is in lock-down mode, that time in any application where you just have to say "no more". He's introduced some very nice gesture and voice UI - voice control to set filters, shake to add a blur effect (very cute) and gestures such as swiping your entire hand right to left to smooth. I love it - very, very intuitive, almost natural. AC/DC and some Twisted Sister. Nice.
Eskil[^] obviously enjoyed GDC and his update this week is primarily about the details behind head tracking.
Overall the contestants seem to be ready. There's been a lot of collaboration and sharing of ideas and code. It's a contest, but they're all in it together and definitely enjoying themselves.
As to us judges? There isn't going to be a lot of enjoyment in the judging. There's some quality work here and it will not be easy.
Week 5 and we're starting to see some rounding out of the finished creations. For us judges it's also the week we start getting the hardware to test, and my Lenovo Yoga is in my hot little hands, getting belted around and abused, as happens to all my toys. It's a very, very solid, though uninspiring, unit. It is a Lenovo, after all, but what it does it does well. Great screen, lovely tactile feel on the keyboard, excellent battery life, but boring as bat-poo. It's the Toyota Camry of laptops - solid, reliable, no nonsense without offending anyone, but you're not going to scare anyone with it.
I do, however, want to slap the person responsible for the trackpad. It's awful.
Danny at Sixense[^] has shown his handpuppet wolf wandering around a 3D backdrop. In my mind they've completed their task and the rest is polish. Using only a camera and an Ultrabook you can buy off the shelf they've created a method of interacting with and controlling software using complex gestures. Sure, we've had this on the Kinect for years, but this is new to laptops and beats some other gesture based controls[^] that the media seems to be going nuts over lately. Nice one.
Lee[^], too, is at the polish stage and has some words of wisdom about voice recognition: it doesn't work all that well but be a little clever and it'll work just fine given some context. This is the story of every developer's life, I think.
Soma[^] is deep into the task of rethinking their UI. They've tried the Minority Report style UI but it really is a little tiring and, well, unemotional. They continue a theme on performance issues they have faced, specifically voice control and speed of recognition. There's a reason Siri needs to be connected to a server to do voice recognition: it's a heavy workload. So they are getting there, but we're now seeing the compromises and trade-offs coming into play.
Infrared5[^] get an automatic 2 point bonus for including two references to AC/DC. They have implemented a face tracking solution by handling perspective correction and depth analysis themselves, in C++, using actual mathematics. Bonus 5 points right there. They are tackling the immediate problems at hand with crafted solutions, and focussing on perceptual computing rather than using perceptual computing as a bit of gravy.
Pete[^] has posted a video of his app's progress and I need to ask him one small favour: show us you in the video, or more specifically, show the gestures you're using to control the app. He's also struggling through the Dark Forest Of Feature Trade-offs and is feeling that his app is becoming less PC focussed and more of a touch app with gestures.
This is not a bad thing at all. Samsung have implemented gesture controls not to save wear and tear on finger tips, but because sometimes you can't touch-swipe. If you're wearing gloves (medical, outside work, it's cold, etc) or have dirty hands (cooking, your 2 year old, you're a messy eater etc) then touch won't cut it, but Perceptual Computing provides that small push that gets over that barrier to interaction. You can again use your computer in a manner very similar to touch, without touching. Not a big thing, and something that you would quickly forget you were doing. And this, in my mind, is the perfect interface: you forget that you're doing it.
I think you're on the right track, Pete.
Eskil[^] has articulated this perfectly: "The goal of any user interface is to disappear" and he's not in the Dark Forest Of Feature Trade-offs, he's in the Swamp of Broken Promises. For him the SDK isn't there yet, not by a long shot. So he's doing what any programmer does and is rewriting chunks. I'm looking forward to seeing how he ties all of this up at the end.
Last but not least, Simian Squared[^] have also reached the epiphany about what gestures promise: a lazy interface that extends touch. Perceptual computing promises way, WAY more than this, but at its core it also offers very simple things that can be very powerful and helpful. There's no wads of virtual clay splattering the walls of their pottery room - in fact it looks remarkably clean - so I'm taking that as a sign of excellent progress.
Thanks for the thoughts Chris. You're right - the next video will feature me swiping round to demonstrate. One small thing - there's not one Australian band this week, there are three. Bonus points from me for anyone other than CG who's heard all three of them.
We're at week 4 of the Ultimate Coder Challenge[^] and at this point we're starting to see the light at the end of the tunnel. For some that's a scary sight.
Sixense[^] are well on their way to creating a virtual sock-puppet, but one that doesn't have the usual awful connotations of an online sock-puppet. This one is, actually, a sock puppet. To be brutally frank, what they have also done is shone a light onto some of the limitations inherent in the depth camera's abilities that have forced them to use slightly nonstandard sock puppet hand gestures (See IEEE Std 4802.01 - Sock Puppet Hand Control Standard 1104). It would be a win if they could get past this limitation.
Lee[^] has gone ahead and written Yet Another Video Conferencing app, 'cause, y'know, he has nothing better to do. I know - I just know - that he's hacked his DVR at home to Just Work Better, and his microwave is probably cowering behind the fridge screaming "Make it go away!". He has, however, produced a prototype of a conference system with his 3D avatar injected. I can't help but wonder why he didn't test his virtual teleportation on an assistant[^] first.
Simian[^] focussed mostly on their demo environment. A 3D Japanese themed pottery wheel. Probably best just to think about that for a while.
Pete[^] has switched from Aussie pub rock to Canadian Top 40 with a little Creedence thrown in. I'm of two minds about this. He's also apologising for providing detailed coding explanation, and I'm sorry Pete but you just lost points on this. I want details. I want code. This is a coding challenge by coders for a large coding audience braying for blood. Well, a large coding audience, at least.
Pete's also hit the inevitable Voice Control Brick Wall. I'm guessing, being on the wrong end of voice control far too often, that it could be an accent issue, so I'd be interested to hear what sort of success those with a (reasonably neutral) US accent have had. Accent, to me, is the 21st century equivalent of the Date format. What, exactly, does 6/7/2013 represent without locale context? The same happens with voice. So if Pete can't talk to his app he's going to have his app talk to him. Just please include a Mute button.
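The date-format analogy above can be made concrete: the same string parses to two different dates depending on which locale convention you assume.

```python
# Illustration of the 6/7/2013 ambiguity: month-first (US) vs day-first (AU/UK).
from datetime import datetime

s = "6/7/2013"
us = datetime.strptime(s, "%m/%d/%Y")   # US reading: June 7th
au = datetime.strptime(s, "%d/%m/%Y")   # AU/UK reading: 6th of July
print(us.date(), au.date())             # 2013-06-07 2013-07-06
```

Without locale context both parses succeed silently, which is exactly the problem: the data doesn't tell you which reading was intended, just as a voice sample doesn't tell a recogniser which accent produced it.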
Eskil[^] doesn't provide much in the way of concrete progress on the framework he's building, but does provide a walkthrough of his non-OO approach to creating and rendering UI elements. I'll be honest and say I'm not a fan of his approach. OO development helps separate who is responsible for what, and while that may not result in the tersest of code, it does promote maintainability.
Code Monkeys[^] have touched upon something that you can be sure that the likes of Apple, Google and the Kinect team at Microsoft all know: gesture based UIs are tiring. You know why Tom Cruise's character in Minority Report was so ripped? It's because he was doing 12hr days of shoulder and ab work while using those gesture gloves of his. 12 hours? Try 4 minutes.
Infrared5[^] have revealed another little worm in the Apple: gaze tracking has not been implemented in the PC SDK. It will be added later. So what did the guys do? They slammed their foot on the clutch, dropped from C# down to C++, dropped the clutch and left billowing smoke in their wake. This is exactly what I want to see from a contestant: a dammit-I'll-do-it-myself approach to dealing with issues. Now if only they had a little AC/DC playing in the background...
The Code Project | Co-founder
Microsoft C++ MVP