We have no active subscriptions in our PayPal account (all cancelled). You can confirm this on your end by looking in your PayPal account to see if you have any recurring subscriptions set up. Look on the "My Pre-approved Payments" page in your PayPal profile.
CodeProject is great for writing and publishing articles; GitHub/git is great for hosting code.
Have you ever evaluated some sort of integration?
For example, instead of manually uploading a zip file you could provide a way to pull it from a git repo. That way, updating the source code would be really fast and easy.
A similar solution could be provided for the article text (a markdown file inside the repository?).
I say this because I hate having to keep my articles up to date on CodeProject (cleaning the project, zipping it, uploading it, modifying the link, ...). Usually I simply put a header that points to the official GitHub repository and never update the article on CodeProject.
One of the big issues with having 10M members is names. Everyone has a name, and most want to use their name, or at least something vaguely resembling it. The issue is that Real Names are messy, human things meant for messy human purposes, and are terrible as a way to label things so that it's easy (for a programmer) to reference that name within text or in a URI. We can't have //www.codeproject.com/members/Chris Maunder because the HTTP spec doesn't allow spaces in URIs, nor can we confidently say Chris Maunder refers to me in text, because it could also refer to someone named Chris who is rambling incoherently[^].
So we have Display names as a way to label your content such as posts and articles, and we have usernames as a way to provide a human readable and programmer parsable handle to your account. //www.codeproject.com/members/chris-maunder as a link to you and @chris-maunder as a reference to you in messages.
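Conceptually the display-name-to-username mapping is just a slug transform. Here's a rough Python sketch of the idea; the exact rules CodeProject applies (accent handling, collision resolution, and so on) are my assumptions, not the real implementation:

```python
import re
import unicodedata

def make_username(display_name):
    """Collapse a messy display name into a URI-safe first-last handle:
    strip accents, keep only alphanumeric runs, join with hyphens."""
    # Decompose accented characters and drop the non-ASCII marks
    # (e.g. 'é' becomes 'e').
    ascii_name = (unicodedata.normalize("NFKD", display_name)
                  .encode("ascii", "ignore").decode("ascii"))
    # Split into letter/digit runs so spaces and punctuation vanish.
    parts = re.findall(r"[A-Za-z0-9]+", ascii_name)
    return "-".join(part.lower() for part in parts)
```

So "Chris Maunder" comes out as chris-maunder, which is safe in both a URI and running text.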
You don't need to use @username and you can safely ignore the feature if it bugs you. However, if you like the convenience, a member's username can be found on their profile page or in the popup that appears when you hover over their name in the forums (assuming you have "Profile Popups" enabled in the forums).
We've finished reworking our caching of forums and articles and are happy to see load times for forums go from half a second to 6 milliseconds. That's beyond what we thought we'd get. Start diving deep into messages from the days of yore and load times don't appreciably change. We've essentially opened up the entire corpus of forum postings for instant retrieval, slain a number of bugs, thrown out pages of code in the process and reduced our database load by a factor of three. It's almost idling now.
On the article side of things we've improved performance even more and have one final push, after which time we'll hunt down and nuke any remaining load issues.
For a decade we've been working against a local cache on each of the webservers. This meant that we either had to keep the time-to-live short, or we had to work out a sensible way of ensuring that when a member changes an article on one server, and is then directed to another server, they see their updated information - even though the local caches didn't talk to one another.
Yes: distributed caching is a solved problem, but there weren't many canned solutions when we started, and we did end up doing some clever things to ensure it all looked sensible, give or take some "expected" caching issues such as a deleted article still occasionally being around for 10 or so minutes. "Expected" really comes down to what is forgivable, and in this day and age even stuff like that stretches the friendship, so we've finally had a chance to bite the bullet, plug up the local cache and add a couple of Redis[^] servers. We're using the ServiceStack Redis client[^] and implemented - fairly easily - a distributed cache that not only solves our cache-sync issues but speeds up application spool-up time, since the cache is off-server and independent of the webservers themselves. No need to recache on startup - the data's already there.
Although we are, obviously, seeing our cache load times go up since it's no longer a local cache but requires a network round trip plus serialisation, the overall database load is nicely down and our code is far cleaner.
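For the curious, the pattern here is plain cache-aside with explicit invalidation on write. A minimal Python sketch of the idea follows; our actual code is C# on the ServiceStack client, and the `FakeRedis` class below is just an in-memory stand-in for a real Redis server so the example is self-contained:

```python
import json
import time

class FakeRedis:
    """In-memory stand-in for a Redis server so this sketch is
    self-contained; the real thing would be a network client."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        value, expires = self._store.get(key, (None, 0.0))
        return value if value is not None and time.time() < expires else None

    def setex(self, key, ttl, value):
        self._store[key] = (value, time.time() + ttl)

    def delete(self, key):
        self._store.pop(key, None)

cache = FakeRedis()  # in production: one shared cache for all webservers

def get_article(article_id, load_from_db, ttl=600):
    """Cache-aside read: try the shared cache first, fall back to the DB."""
    key = f"article:{article_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    article = load_from_db(article_id)
    cache.setex(key, ttl, json.dumps(article))
    return article

def update_article(article_id, article, save_to_db):
    """On write, invalidate the shared key so every webserver - not just
    the one the member hit - serves the fresh copy on the next read."""
    save_to_db(article_id, article)
    cache.delete(f"article:{article_id}")
```

Because the cache lives off-server, deleting the key on one webserver is enough; the others simply miss and reload, which is exactly the sync problem the local caches couldn't solve.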
I was sick of moving between computers, sick of the power outages in our building knocking me off my machine, and sick of having to Remote Desktop from home to my office machine to be productive. I'd set up various machines that would allow me to do the basics when I had to use them (eg for travel) but it's never the same. Like sleeping in someone's spare room - no matter how comfortable, it's never quite the same.
I figured that laptops these days were pretty damn powerful and after road testing[^] a couple of Ultrabooks I decided that anything Core i7 with 8GB RAM would be more than enough for me. All I needed was something that would let me install Windows 7, something that was light, and something that had a big, fast SSD.
Enter the mid-2013 Macbook Air. Core i7, 8GB RAM, and the fastest 256GB SSD around.
To cut to the chase: it's an excellent dev machine and is faster than my 4 year old quad i7 desktop. I'm seriously impressed. I'm now able to work on a single machine anywhere in the world without having to compromise by switching to a slower machine for travelling, and I have the added bonus that I no longer need a desktop for the office and a laptop for travel. A single unit does the trick.
The annoying bits
It's a Mac. Apple did not go out of their way to make the Bootcamp experience exceptional. The trackpad sucks in Windows, yet it's by far my favourite trackpad when in MacOS. It's brilliant. Trackpad++ sort of fixes this, though.
I tried using Parallels to create a VM from my Bootcamp partition in order to run VS while in the Mac environment. This was great, and you get the proper trackpad experience, but the big glaring issue was that I needed to use a USB DisplayLink adapter to hook up to an external monitor, and installing DisplayLink drivers in Bootcamp and then running it under Parallels causes the Windows VM to bluescreen. Parallels is aware of the issue and had no plans at the time to do anything about it.
So I stick to Bootcamp or MacOS and never the twain shall meet.
Docking stations became a big issue because I need a lot of screen real estate. I hate cable spaghetti, though, and tried a number of options before settling on one that gives me almost everything for the (ironically) cheapest price: a thunderbolt display.
Thunderbolt displays are expensive. However, they come with a split thunderbolt / power adapter that plugs into the thunderbolt port on one side and provides a power cable to the laptop on the other. Within the thunderbolt display are a pair of excellent speakers, a webcam, USB 3.0 ports, and gigabit ethernet. It's essentially a fully self-contained docking station built into one of the nicest monitors I've ever used, and with the 27" running 2560 x 1440, it allows me to run VS on one half and SQL MS or Chrome or anything else on the other half in the same manner that I'd previously been using two separate screens.
So factor in the cost of two 19" screens, a docking station ($250 - $300) plus speakers / external webcam + cables and you'll find that a refurbished 27" thunderbolt display is way cheaper, far more convenient and (for me at least) a much nicer experience.
The drawback is that Windows doesn't play well with thunderbolt, and you may have to physically shut down your machine before unplugging the monitor if you have it set as your primary display. Further, you need to plug the monitor in before you boot up a Windows box because Windows only scans for thunderbolt on bootup. This is really, really annoying.
The only other annoying bit is fan noise. I hammer that poor little laptop, and in a quiet room at 2AM when you're building code and running a zillion unit tests the thing really winds up and gets a bit rowdy. I'm still waiting to see what Apple does with the 13" Macbook Pro, since a quad core Haswell unit could have a little more headroom before it starts to get hot and bothered - or at the very least it'll be done with its tasks sooner, meaning noise for a shorter time. A retina display would be nice, but totally not needed, and the added weight is a real issue. A touchscreen - while something I've grown to love with the Ultrabooks - is a complete waste for me. The laptop sits by my monitor, closed, while I work. I have no desire to put fingerprints all over my big display, and after my experiences with the Perceptual Computing Challenge I know how tired arms get after spending even short periods trying to navigate with your arms up.
Overall a 7/10.
- single machine wherever I am in the world
- excellent setup with the external thunderbolt display
- built-in UPS. Love it.
- Totally fast enough.
- Windows issues with thunderbolt connections
- Noisy when hot and bothered
- Did I really say a computer was "fast enough"? I lied. No such thing.
We skipped VS2012 / .NET 4.5 and jumped straight to VS2013 / .NET 4.5.1 because, y'know, it's far more exciting running your production servers on beta software than on the boring "tested" stuff.
We're not using any of the fun stuff explicitly, yet, but Matthew has already been eyeing off a bunch of code that could do with some async action. The thing that's most immediate to me is the multi-core JIT and startup time; all cores are actually getting used, CPU usage is up where it should be, and the site spools up much, much nicer than it ever has. Simply getting the advantages of the framework improvements is (almost) enough for me.
The other obvious timesaver is build time: much, much faster than VS2010, even when bogged down with all the other stuff I have open. I'm developing, testing, and running the site on my Macbook Air on Win7. There's the VS IDE, SQL Server Management Studio, IIS running the actual site, Outlook groaning under the weight of a 23GB PST, various Word docs and spreadsheets, 6 remote desktop windows and half a dozen browser windows, and it's all humming along nicely.
Although sometimes (especially during compile time) the humming sounds suspiciously like the Mac's fan is about to attempt takeoff. It gets disturbingly loud.
I'm still not taken with the new VS look - a little harsh, a little lacking in warmth - but it's way faster and, so far, more stable than my old creaking install of VS2010.
As to my experiment with moving my developer life onto a tiny, ultralight laptop: the jury's still out. 7/10 so far.
I got sick of typing URLs for members and so, well, I coded.
To provide a link to another member just use the tried and true @username syntax, where the username is the one generated from their name (or manually modified) in the form first-last. Everyone's profile shows the username just under their profile image.
So if I want to shout out to a ray of sunshine I can just go @Michael-Martin (no link - just type that literally) and when the message is saved the link is generated.
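Under the hood this is presumably just a regex substitution run when the message is saved. A rough Python sketch of the idea - the generated markup and the edge-case handling (code blocks, email addresses) are guesses on my part:

```python
import re

# A handle is alphanumeric runs joined by single hyphens (first-last).
MENTION = re.compile(r"@([A-Za-z0-9]+(?:-[A-Za-z0-9]+)*)")

def linkify_mentions(text):
    """Replace each @first-last handle with a member profile link."""
    return MENTION.sub(
        r'<a href="//www.codeproject.com/members/\1">@\1</a>', text)
```

So "thanks @chris-maunder" becomes a link to that member's profile, and text without handles passes through untouched.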
Obviously this is opening a can of worms and I know the next two requests. Yes, soon.
The mobile version of CodeProject has been updated a little to help those with fat fingers (ie me). It's by no means perfect, but we've aimed for a simplified UI, easy to read fonts and easy to touch buttons while still maintaining as much browsing and reading functionality as we can.
We have, however, limited some actions (eg voting) for a subsequent rev. We'd rather focus on providing a nice UI to read articles than worry that fat fingers (looking in the mirror again) will accidentally hit the down (or up) vote button while trying to read the next article.
The Essential Guide to Mobile App Testing[^] should be required reading for devs and those who purport to manage devs who are involved in mobile app development. It's a rare, rare day that I promote a specific whitepaper but as part of our new Research Library[^] we've been working incredibly hard to find companies that have spent the time to create research material that helps you make decisions instead of simply showing you powerpoint slides of their product.
In the spirit of avoiding real work I've been playing around with an idea that is ridiculously simple but may provide a little entertainment for our members: stylable member profiles.
Go to your settings page[^] and hit the Customisation tab and you'll see a text area for entering styles that augment or override our basic styles.
This is fraught with peril on so many levels. Firstly, you might break our page. Secondly, when we update our styles or page layout, we may break your styling. Thirdly, things could just get messy. Really messy.
But that's what life's all about, isn't it. So enjoy.
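As a starting point, something like this - though note the selectors here are purely illustrative; inspect your profile page to find the real class names before overriding anything:

```css
/* Illustrative only: these selector names are guesses, not
   CodeProject's real ones - check the page source first. */
.profile-header {
    background: #1e2a38;   /* darker banner */
    color: #eee;
}
.profile-header h1 {
    font-family: Consolas, monospace;   /* code-style member name */
}
```

Keep your overrides small and scoped; the more of our layout you restyle, the more likely our next update breaks it.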
Secondly, we've introduced a new article type called "Reference". This will be fleshed out a little more soon, but for now we wanted to provide a place for things I've wanted to post for eons: tables and reference sheets. What's the ASCII value of X? What's the HTML entity for Y? Stuff like that. Let's start simple and work our way up.
This morning I had an experience that provided such a classic picture of the entire IT industry for me right now:
I went into the Microsoft store and was looking at an Acer Aspire S7. It looked nice, and the blurb said "128GB SSD". So I took a peek at the computer's properties and saw "57.9GB free of 79.8GB" on drive C - the only drive visible.
I asked the sales guy where the 128 - 80 = 48GB was. He told me the missing space was used by the OS, which I politely disagreed with because the OS was currently on drive C and was using about 22GB of space. He then tells me it's the demo software they have installed that's using up the space (I again disagree), and then tells me it's the recovery partition that's using the space, so I ask him to show me this 48GB recovery partition. He hits Win-C, the (HD) screen totally fills with Control Panel applets and he types in "Disk management" but nothing appears. He scans the list of applets briefly then gives up, then right-swipes to get the settings but again gives up, and after fumbling around finds a list of partitions, but is unable to get me the size of any of them. He then turns to me and says "this is really outside of a sales thing - I need to get you my tech guy".
He clearly didn't know what he was doing, but he had a good enough clue to be able to navigate around better than most people I've seen use Win8. Yet he couldn't answer a simple question relating to what the tag says and what's actually on sale, and said it was a technical, not a sales question. I left the store feeling the same way you feel when you leave a mechanic who tells you you need to get the air in your tyres exchanged at the beginning and end of Winter and that'll be $149.99, please.
I felt lost when he was going all over the place trying to answer the question (and I've used Win8 an awful lot), and then I felt like my question was unimportant to them, that I shouldn't be asking it, and that the answers I got were made up (which they were).
It felt complicated, it felt confusing, and it was impossible to make a choice on laptops because there were no answers - and the answers I did get I couldn't trust anyway.
I wander 3 doors down to the Apple store, look at the properties of a 1TB iMac and ask to see the actual size of the HDD. The sales dude does a single right-click, Get info and shows me that of 999.4GB, there is 978.7GB free. We're done.
There's 1 keyboard layout. You can have light (11" or 13") and medium powered with OK screens, or heavier, thicker and more powerful with retina displays (13" or 15"). It's easy - except that I want a retina display on an Air. Not because any other laptop I've ever seen has a retina display: only because the Macbook Pros have one. I don't actually, in isolation, want a retina display, I just don't want to feel like I'm missing out on something.
When I look at tablets I see the iPad, Android or Surface devices and they are all fairly simple to use. Phones, be they Android, Win Phone 8, iPhone or BlackBerry, are all simple to use. They are in fact simpler to use than ever, with only feature phones being simpler (but many of them were tear-your-hair-out annoying).
Yet laptops and PCs seem to have increased in complexity, choice and confusion, making the buying decision complicated and intimidating. Windows 8 has made actually using a laptop confusing and complicated. Put these together and you have a sales nightmare: you don't know which one to buy, and while trying to decide you don't know how to actually use the thing you think you need to buy.
And then you wander over to Apple and you think "My God this is so simple" and you have limited choice, and you feel you have a chance at making a decision.
Previously, however, the decision would come down to "Do I pay a 30%-50% premium on essentially the same hardware just to get an Apple?". For me this has always been game over - I'm simply not willing to pay that much. Yet today I'm looking at a complicated Windows 8 machine that was more expensive than the simple Apple machine.
Buying a PC or laptop/Ultrabook is no longer easy, or as cheap as it was a year or so ago. Win8 is (to me anyway) a technically better and more secure operating system than MacOS, ruined by an awful UI. Apple has a still-maturing OS that is starting to acknowledge that security is important but still crashes, still locks up and still can't seem to work out how to handle network calls on a background thread. But it's simple, the machines will never offend anyone with their looks, you get what you pay for, and they are now in the same price bracket as (or below) many of the Ultrabooks.
I can understand why PC sales have fallen, and for me it's not just tablets. What I don't understand is why Apple hasn't gone for the jugular like they did on Windows Vista.
This is the final post in the series on the Ultimate Coder Challenge: Going Perceptual. The applications are in and now it's do or die. The biggest challenge for our contestants at this point isn't the code, or the SDK, or their idea. It's our hardware. It's the installers. It's our ability to understand what they were trying to do - and that has proven difficult in some cases.
Perceptual computing is a form of interacting with a computer in a more natural, human way. You speak or you gesture, or you may even want to just swipe. It's not about the keyboard and mouse. The biggest problem is that what we think makes a good method of interaction (cue Minority Report) is actually a terrible way to interact. It's tiring, there's no feedback, and gestures are in 3D, not 2D, and not constrained to a box or button, and so can be ambiguous. Further, a touch gesture has a definite start and stop. It starts when you touch and ends when you stop touching. When does a 3D gesture start and stop?
Further, and as has been mentioned elsewhere, there's no standard. We all know how to pinch-zoom, or swipe to scroll, or even push a button. How do you push a button in 3D space? How do you pinch to zoom without the initial contraction of the fingers being misinterpreted as a shrink action first?
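One common answer to the segmentation problem is hysteresis: the pinch only becomes "live" once the fingers close past an engage threshold, and only ends once they open past a wider release threshold, so the initial closing motion produces no zoom at all. A toy Python sketch of the idea - the thresholds and units here are made up for illustration:

```python
class PinchDetector:
    """Segment a free-space pinch using hysteresis so the initial
    closing of the fingers isn't misread as a zoom-out command."""

    def __init__(self, engage=0.30, release=0.45):
        # Normalised thumb-to-index separation thresholds; the gap
        # between engage and release is the hysteresis band.
        self.engage = engage
        self.release = release
        self.active = False
        self.start_sep = None

    def update(self, separation):
        """Feed one frame's finger separation. Returns a zoom factor
        relative to where the gesture began, or None when idle."""
        if not self.active:
            if separation <= self.engage:
                # Fingers closed past the engage threshold: the gesture
                # begins, but this frame itself produces no zoom.
                self.active = True
                self.start_sep = separation
            return None
        if separation >= self.release:
            # Fingers opened past the wider release threshold: gesture over.
            self.active = False
            return None
        return separation / self.start_sep  # >1 spread (zoom in), <1 pinch
```

The two thresholds do the work a touchscreen's physical contact does for free: they give the gesture an unambiguous start and stop.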
So on to the challengers:
Lee's work I've written about extensively, usually while ruefully shaking my head at the madman. He shamelessly takes on way more than he can chew and after endless dead ends, delays, setbacks and possibly a broken keyboard or two he comes out with something a little special. His virtual conference is, for me, an excellent example of what's possible with perceptual computing: a UI that revolves around you. You're in a virtual 3D environment talking to others and as you turn, the viewport turns with you.
Except there's one huge problem with this approach: in a video conference you never turn. If you turn then you're no longer looking at the screen in front of you. Further, as you turn, the mass-tracking (not head tracking) turns the virtual camera in the direction that you're turning, meaning the image on the screen pans, meaning, well, that you don't actually need to turn to view it. Kind of like an anti-Catch-22.
It's very cool though. Very cool.
Sixense's puppetry demo is complete. It's the ultimate kids' toy and allows two people (or one talented person) to host and record a virtual puppet show. It works, but I can't help but feel that so much effort went into things like scenes and a story that the creators forgot what puppet shows are about: dialogue. Well, and violence, for those who fondly remember the Punch and Judy days, but mostly dialogue, and often costumes. So why not ditch the fancy scenes and instead allow quick changes of clothes? Or have amusing sound effects when one puppet bonks another puppet on the head? That would focus the players on thinking about their puppet rather than trying to control a puppet that keeps flying all over the countryside.
It's an excellent example of 3D free form gesture based interaction with a computer, and for that they score highly. Most importantly their user feedback on interpreting hand gestures is by far the best of any contestant.
Code Monkeys' Stargate application shows promise in (for me) the ultimate goal of a head-tracking video game. Unfortunately reality never lived up to the dream, and the sheer effort involved in trying to control the targeting quickly took away the luster. I did get it to work, but the headache afterwards was not worth it.
Infrared5 followed a similar path to Code Monkeys with their Kiwi challenge, however they introduced an extra variable by having the control of the application done via a smartphone application. The connection between the game and the smartphone was seamless and slick. I could not, however, control anything but the fire button from my iPhone, which made the game unplayable for me. Restarting didn't solve the issue. Further, I often ended up with a black landscape in my viewport that no amount of yelling, tilting, clicking or swiping would get me out of, and closing the app via alt-tab (there's no close button I could find) was near impossible because no sooner had you popped out of the app than you were thrust back into it, black soulless void and all.
Pete has created an image processing application that uses a series of gestures to activate filters. His biggest problem is that there is no defined, standard set of gestures one can call on to immediately dive into his app. There's no help button, so you're left guessing at gestures. Thumb up and down, swiping up and down, and in my case swearing like a sailor. An excellent attempt at making gesture input a natural part of the interaction with the application, but let down, I think, by the maturity of the platform.
Eskil has demonstrated an abstraction layer that allows developers to take advantage of perceptual computing without needing to write the boilerplate code. In fact, without needing to know anything about the nuts and bolts at all. This is a tremendous achievement given the short time available.
A short note on the perceptual computing camera: I'm typing this review on my Lenovo Yoga with the camera perched on the top of the screen. The camera is heavy, as I've mentioned before, but it's only now, after hours of clenching my fist, twiddling my fingers, bobbing my head frantically and yelling 'Engage' in various accents to try and interact with the game, that I've realised I'm getting tired of always hiking the screen back up to the vertical position. The camera makes the lid sag due to the weight, and when it's not sagging it's trying to tip the laptop base over apex. It needs to be lighter and it needs to be way smaller.
Another general comment on the use of the camera as an input device is that onscreen feedback is critical. If you don't get feedback on what the computer thinks you're doing you go nowhere. You can't debug. I cannot overstate how important feedback is.
I also need to comment on the Lenovo itself. I love the feel of the keyboard and most especially the palm rest. So very comfortable. But please, to anyone who is thinking of manufacturing a laptop keyboard, DO NOT reduce the width of the right shift key and squeeze in the up arrow in the space created. It means I'm constantly hitting the up arrow. Constantly. It's doing my head in. Apart from that the screen is crisp, the battery life good, and the flip-back keyboard weirdly useful. Propping this thing on a table with the keyboard folded back to watch a movie or flick through the news is brilliant.
As to perceptual computing, my overwhelming feeling is "it's coming". But it's not here yet. We are trying to make a computer be like the real world. You turn or look or speak or grab at something that isn't there in the hope that the computer will mimic or replicate your intent virtually. This, to me, is akin to skeuomorphism: changing something to be like something else, like Apple making its calendar application leather bound, or having ebook readers show an animated page turn. A computer is not the real world, and it doesn't have the limitations of the real world (so to speak), so why mimic it?
I think the future of perceptual computing will most likely be subtle. The computer may recognise you or your voice, and will recognise when you are in front of the computer, when you're looking at it, and when you're not. The end of screen savers, really, since it would just go to sleep. Gestures such as brushing away an app to close it, or flicking it to move it to another screen, would be intuitive, and voice recognition would allow utility commands such as searching or bookmarking to be carried out without needing to take your hands off the keyboard or, indeed, leave the current application.
Gaze tracking is another incredibly important, yet not currently functioning (at least on the hardware I have in front of me), feature that has a myriad of uses. My immediate use would be eye tracking for UI testing, but even simpler could be things like auto-scrolling, or auto-hiding of elements when they are not being looked at directly. The eye does, however, jump around an awful lot, so the smoothing algorithm will need to be heavily weighted.
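The classic heavily-weighted smoother is an exponential moving average with a small blending factor, so a saccade only drags the reported gaze point a fraction of the way per frame. A quick Python sketch of the idea (the alpha value is an arbitrary choice, not a recommendation):

```python
class GazeSmoother:
    """Exponential moving average over raw gaze samples. A small alpha
    weights history heavily, damping saccadic jumps in the signal."""

    def __init__(self, alpha=0.15):
        self.alpha = alpha  # 0 < alpha <= 1; smaller = heavier smoothing
        self.x = None
        self.y = None

    def update(self, raw_x, raw_y):
        """Blend one raw gaze sample into the smoothed estimate."""
        if self.x is None:
            self.x, self.y = float(raw_x), float(raw_y)  # seed on first sample
        else:
            self.x += self.alpha * (raw_x - self.x)
            self.y += self.alpha * (raw_y - self.y)
        return self.x, self.y
```

With alpha at 0.15, a sudden 200-pixel jump only moves the smoothed point 30 pixels on the first frame; the trade-off, as with any EMA, is added lag when the eye genuinely moves and stays there.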
The contestants achieved some incredible feats of patience, innovation, creativity and problem solving. I take my hat off to them for their perseverance and outright foolishness in taking on a challenge that many of them were so unprepared for, yet so willing to have a go and rise to the occasion. Rise they did, so well done, guys. Get some rest and have a beer. You've earned it.
A thoughtful and interesting conclusion to the series Chris. The one thing I really felt I was missing was having a decent graphic artist so that I could do some decent on screen tutorials. This is an area that really did need someone with more artistic ability than me.
I was brought up to respect my elders. I don't respect many people nowadays.
This week is the final week of blogs by the challengers. I won't go through individual entries since pretty much all of them are at the wrap up / polish stage.
The overwhelming feeling you get when reading the blogs is one of a challenge accepted and, almost, tamed. This is uncharted territory for the contestants and that territory is not paved smoothly: the SDK is in beta and the capabilities of the hardware are still limited. There are a lot of things that would be great if they worked, but they don't. Sixense wanted to have their Big Bad Wolf puppet blow the little piggy puppet's house down by blowing on the mic, but blowing on a mic is not a recognised input so it can't happen. Code-Monkeys (and many others) wanted head tracking - or even gaze tracking - but the hardware simply isn't up to it at the moment. Infrared5 simply want more grunt from the Lenovo.
It's close. Really, really close, and while the contestants were not always able to achieve their first order approximation of what they wanted, they have done exactly what good developers do: focus on the outcome they want, then work backwards. Instead of finger tracking you use thumb tracking, instead of eye tracking you use head tracking, instead of head tracking you use body mass tracking. Instead of tracking everything, just do your tracking work on the object (eg a hand) you want to track and ignore all other input data. Speed improvements came quickly, as did a usable (but maybe not perfect) solution.
The point is computing power is always increasing, the SDK will only improve, the hardware will become more refined and more responsive (and offload much of the software based processing) and we will get there. Quickly.
We get the final (final) versions of the apps soon and it will be then that we, the judges, have to dive in and ask ourselves two basic questions:
1. What is perceptual computing?
2. Who knocked it out of the park?
Thanks for all your encouragement over the last few weeks Chris. Reading the judges' comments has been a real highlight over the weeks and I would have to say that last week's was one of the funniest posts I've ever read, so kudos for that.
With the competition, I tried to stay with what was in the SDK, and I stand in awe of what the other competitors produced. As for point 2, the answer (to me) is easy - Lee. Clean out the park, and 3 counties of clearance.
I was brought up to respect my elders. I don't respect many people nowadays.
Week 6 and we're almost done. GDC[^] has been and gone and unfortunately I was not able to attend due to a small matter of our CodeProject.TV[^] launch (where's the blink tag when you need it?). While missing GDC itself almost brought a tear to my eye, the knowledge that I was missing out on the beers Pete and Chip have threatened me with made it particularly painful.
I received my Interactive Gesture Camera from Intel last week and have hooked it up. Dual cameras for depth perception, dual microphones for voice recognition, and an SDK that ties it together.
And it's heavy. Really weirdly heavy for such a small camera. This isn't a bad thing, though, because it means when it's sitting on the top of your monitor it's very stable, and the picture quality compared to my old Logitech is much better, with far fewer in/out refocussing issues than I had with my old webcam. It is kind of weird having it sitting there, staring at me with those two dead eyes. Evaluating me. Scanning me in the infrared. Knowing where I am, where I'm looking, what I'm saying. Intel itself does absolutely nothing to make me feel more comfortable with their disclaimer stating:
The Camera may not be used in any “mission critical application” in which the failure of the Camera could result, directly or indirectly, in personal injury or death
Injury? Death? This thing is going to sleep in the garage from now on.
Anyway, to the challengers, or those that have not been taken hostage, injured or possibly killed by their cameras. I say this because 2 of the challengers have not submitted blog postings and I've not heard anything from them. Their muffled screams are probably still echoing against the backdrop of a small, blinking green light coming from the tiny black dense camera on their bloodstained laptop.
It sleeps outside tonight. I don't want it talking to my car, whispering to it. Subverting it.
Lee[^] enjoyed GDC and ensured Intel got their hotel bill's worth by spending an inordinate amount of time in his room cranking code. Lee's understandably at the point of polishing, and at the point of taking stock of the reality of gestures. They all sound great, but how do you provide feedback for a gesture driven UI? How do you let the user know the difference between a gesture that does something, a gesture that does nothing, and a gesture that was not understood? And how do you educate your users on gestures? He's basically done, so on to testing.
Sixense[^] demo'd their puppet show at GDC and they too are at the point of polishing and introducing a little realism. Not much more to say on them.
Code-Monkeys[^] are getting desperate and are quoting Gene Simmons and resorting to tongue tracking. I'm not going there. I'll just quote the man himself:
Life is too short to have anything but delusional notions about yourself.
Infrared5[^] used GDC as their own private beta testing ground which is perfect. There must have been something in the beer at GDC though because they've left the reservation and are now focussing on foot tracking. I'm a bare-feet kinda guy myself so I'm looking forward to testing next week.
Pete[^] is in lock-down mode, that time in any application where you just have to say "no more". He's introduced some very nice gesture and voice UI - voice control to set filters, shake to add a blur effect (very cute) and gestures such as swiping your entire hand right to left to smooth. I love it - very, very intuitive, almost natural. AC/DC2 and some Twisted Sister. Nice.
Eskil[^] obviously enjoyed GDC and his update this week is primarily about the details behind head tracking.
Overall the contestants seem to be ready. There's been a lot of collaboration and sharing of ideas and code. It's a contest, but they're all in it together and definitely enjoying themselves.
As to us judges? There isn't going to be a lot of enjoyment in the judging. There's some quality work here and it will not be easy.
Week 5 and we're starting to see some rounding out of the finished creations. For us judges it's also the week we start getting the hardware to test, and my Lenovo Yoga is in my hot little hands, getting belted around and abused, as happens to all my toys. It's a very, very solid, though uninspiring unit. It is a Lenovo, after all, but what it does it does well. Great screen, lovely tactile feel on the keyboard, excellent battery life, but boring as bat-poo. It's the Toyota Camry of laptops - solid, reliable, no nonsense without offending anyone, but you're not going to scare anyone with it.
I do, however, want to slap the person responsible for the trackpad. It's awful.
Danny at Sixense[^] has shown his handpuppet wolf wandering around a 3D backdrop. In my mind they've completed their task and the rest is polish. Using only a camera and an Ultrabook you can buy off the shelf they've created a method of interacting and controlling software using complex gestures. Sure, we've had this on the Kinect for years, but this is new to laptops and beats some other gesture based controls[^] that the media seems to be going nuts over lately. Nice one.
Lee[^], too, is at the polish stage and has some words of wisdom about voice recognition: it doesn't work all that well but be a little clever and it'll work just fine given some context. This is the story of every developer's life, I think.
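Lee's point about context generalises nicely: a recogniser that only has to pick from a handful of known commands can tolerate a lot of transcription noise. A minimal sketch of the idea, using only Python's standard library and a hypothetical command vocabulary (this is an illustration, not Lee's actual code):

```python
import difflib

# A hypothetical command vocabulary - the "context" in question.
# Matching raw recogniser output against a small, known set of
# commands is far more forgiving than open-ended dictation.
COMMANDS = ["open file", "save file", "zoom in", "zoom out", "undo"]

def interpret(raw_text, commands=COMMANDS, cutoff=0.6):
    """Map noisy recogniser output to the closest known command."""
    matches = difflib.get_close_matches(raw_text.lower(), commands,
                                        n=1, cutoff=cutoff)
    return matches[0] if matches else None

# Even a badly mangled transcription snaps to the right command:
print(interpret("zoo min"))    # → zoom in
print(interpret("sav fill"))   # → save file
print(interpret("purple"))     # → None (no plausible match)
```

The design point is the vocabulary, not the matcher: the smaller and more distinct the set of accepted phrases, the sloppier the raw recognition can be while still "working just fine".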
Soma[^] is deep into the task of rethinking their UI. They've tried the Minority Report style UI but it really is a little tiring and, well, unemotional. They continue a theme on performance issues they have faced, specifically voice control and speed of recognition. There's a reason Siri needs to be connected to a server to do voice recognition: it's a heavy workload. So they are getting there, but we're now seeing the compromises and trade-offs coming into play.
Infrared5[^] get an automatic 2 point bonus for including two references to AC/DC. They have implemented a face tracking solution by handling perspective correction and depth analysis themselves, in C++, using actual mathematics. Bonus 5 points right there. They are tackling the immediate problems at hand with crafted solutions, and focussing on perceptual computing rather than using perceptual computing as a bit of gravy.
Pete[^] has posted a video of his app's progress and I need to ask him one small favour: show us you in the video, or more specifically, show the gestures you're using to control the app. He's also struggling through the Dark Forest Of Feature Trade-offs and is feeling that his app is becoming less PC focussed and more of a touch app with gestures.
This is not a bad thing at all. Samsung have implemented gesture controls not to save wear and tear on finger tips, but because sometimes you can't touch-swipe. If you're wearing gloves (medical, outside work, it's cold, etc) or have dirty hands (cooking, your 2 year old, you're a messy eater etc) then touch won't cut it, but Perceptual Computing provides that small push that gets over that barrier to interaction. You can again use your computer in a manner very similar to touch, without touching. Not a big thing, and something that you would quickly forget you were doing. And this, in my mind, is the perfect interface: you forget that you're doing it.
I think you're on the right track, Pete.
Eskil[^] has articulated this perfectly: "The goal of any user interface is to disappear" and he's not in the Dark Forest Of Feature Trade-offs, he's in the Swamp of Broken Promises. For him the SDK isn't there yet, not by a long shot. So he's doing what any programmer does and is rewriting chunks. I'm looking forward to seeing how he ties all of this up at the end.
Last but not least, Simian Squared[^] have also reached the epiphany about what gestures promise: a lazy interface that extends gestures. Perceptual computing promises way, WAY more than this, but at its core it also offers very simple things that can be very powerful and helpful. There are no wads of virtual clay splattering the walls of their pottery room - in fact it looks remarkably clean - so I'm taking that as a sign of excellent progress.
Thanks for the thoughts Chris. You're right - the next video will feature me swiping round to demonstrate. One small thing - there's not one Australian band this week, there are three. Bonus points from me for anyone other than CG who's heard all three of them.
I was brought up to respect my elders. I don't respect many people nowadays.
We're at week 4 of the Ultimate Coder Challenge[^] and at this point we're starting to see the light at the end of the tunnel. For some that's a scary sight.
Sixense[^] are well on their way to creating a virtual sock-puppet, but one that doesn't have the usual awful connotations of an online sock-puppet. This one is, actually, a sock puppet. To be brutally frank, what they have also done is shone a light onto some of the limitations inherent in the depth camera's abilities that have forced them to use slightly nonstandard sock puppet hand gestures (see IEEE Std 4802.01 - Sock Puppet Hand Control Standard 1104). It would be a win if they could get past this limitation.
Lee[^] has gone ahead and written Yet Another Video Conferencing app, 'cause, y'know, he has nothing better to do. I know - I just know - that he's hacked his DVR at home to Just Work Better, and his microwave is probably cowering behind the fridge screaming "Make it go away!". He has, however, produced a prototype of a conference system with his 3D avatar injected. I can't help but wonder why he didn't test his virtual teleportation on an assistant[^] first.
Simian[^] focussed mostly on their demo environment. A 3D Japanese themed pottery wheel. Probably best just to think about that for a while.
Pete[^] has switched from Aussie Pub rock to Canadian Top 40 with a little Creedence thrown in. I'm of two minds about this. He's also apologising for providing detailed coding explanations, and I'm sorry Pete but you just lost points on this. I want details. I want code. This is a coding challenge by coders for a large coding audience braying for blood. Well, a large coding audience, at least.
Pete's also hit the inevitable Voice Control Brick Wall. I'm guessing, being on the wrong end of voice control far too often, that it could be an accent issue, so I'd be interested to hear what sort of success those with a (reasonably neutral) US accent have had. Accent, to me, is the 21st century equivalent of the Date format. What, exactly, does 6/7/2013 represent without locale context? The same happens with voice. So if Pete can't talk to his app he's going to have his app talk to him. Just please include a Mute button.
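The date analogy is easy to make concrete: the same eight characters yield two different dates depending on which locale convention you assume. A quick sketch using Python's standard library:

```python
from datetime import datetime

# The same string, two locale conventions, two different dates.
raw = "6/7/2013"

us = datetime.strptime(raw, "%m/%d/%Y")   # US reading: June 7th
uk = datetime.strptime(raw, "%d/%m/%Y")   # UK/AU reading: 6th of July

print(us.strftime("%B %d"))  # → June 07
print(uk.strftime("%B %d"))  # → July 06
```

Both parses succeed silently, which is exactly the problem: without the locale context there is no error to catch, just a quietly wrong answer - the same failure mode as an accent that the voice recogniser confidently mishears.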
Eskil[^] doesn't provide much in the way of concrete progress on the framework he's building, but does provide a walkthrough of his non-OO approach to creating and rendering UI elements. I'll be honest and say I'm not a fan of his approach. OO development helps separate who is responsible for what, and while that may not result in the tersest of code, it does promote maintainability.
Code Monkeys[^] have touched upon something that you can be sure that the likes of Apple, Google and the Kinect team at Microsoft all know: gesture based UIs are tiring. You know why Tom Cruise's character in Minority Report was so ripped? It's because he was doing 12hr days of shoulder and ab work while using those gesture gloves of his. 12 hours? Try 4 minutes.
Infrared5[^] have revealed another little worm in the Apple: gaze tracking has not been implemented in the PC SDK. It will be added later. So what did the guys do? They slammed their foot on the clutch, dropped from C# down to C++, dropped the clutch and left billowing smoke in their wake. This is exactly what I want to see from a contestant: a dammit-I'll-do-it-myself approach to dealing with issues. Now if only they had a little AC/DC playing in the background...
Thanks for the update Chris, and I'll keep the code coming - although I think you'll find that I was apologising that non coders aren't as awesome as we are. My antipodean "rock" last week was Men At Work - damn, but it was real earworm music.
I was brought up to respect my elders. I don't respect many people nowadays.
Week 3 and we're halfway through the challenge. Hump week, so to speak. I missed the Google hangout due to jetlag and general mayhem.
Pete[^] is motoring along and getting the gesture control working. This seems an odd statement to write, but a timely one: Pete is writing an application you control through waving your hands and there's no magic, no secret incantations. He's using the same tools we use day in and day out and that, to me, is amazing. There are also no fires or explosions, very little swearing, no tantrums or hissy fits, just constant, solid, back breaking slogging through the code and getting it done. By himself. Much respect.
Infrared5[^] are bucking a trend of the previous contest with crazy statements like "We were pleased to see that all the tasks we set for ourselves wasn’t too big of a bite to take". Regardless, they too are moving on rapidly and have a demo of their Kiwi Catapult Revenge game available. The biggest challenge for them? Eye tracking, it seems. I'm praying they crack this because I have my own nefarious needs for decent and cheap eye tracking.
Eskil has also released a beta version of his Betray game using his (I'm assuming) framework. His post focusses mainly on UI and some exquisite rendering which screams, to me, too much spare time. If he has the luxury to make the UI as stunning as his examples then he's hiding something up his sleeve. Interesting.
Code-Monkeys[^] are focussing on input control and, to that extent, focussing on simplification. And their demo code is simple. Crazy simple. Work continues.
Simian Squared[^] have threatened to play Unchained Melody[^] which is an automatic failure in my book. Careful lads. Their clay modeller is progressing and while they mention piles of misshapen virtual clay there are no pics. Show us the carnage.
The Sixense guys[^] have their puppets moving! This is wicked. They are moving on to actual story telling next. Serious progress.
Lee[^] continues to bravely and foolishly attempt to change one of the biggest online industries single handedly. Or with two hands, depending. He's not only pushing perceptual computing to the limit but has decided to rewrite the conferencing network code too. He's also showing some vampire tendencies with the rising sun causing him serious damage. I worry, Lee. I really do.
Overall the contestants are plowing ahead and it's amazing to see the progress made. This offers the chance for some really polished presentations at the end and judging is going to be soul-searching.
Thanks for that Chris. Have you watched the video from Nicole, Sascha and Steve yet? Worth viewing if you haven't - especially around the 5 minute mark. I'm sorry to say, but I'm going to keep the verbose blog posts coming.
I was brought up to respect my elders. I don't respect many people nowadays.
Week 2 in the Ultimate Coder challenge sees the teams settling down to the cold harsh light of reality mixed in with a wonderful dose of reckless abandon.
Sixense Studios[^] had the wind knocked out of them a little after watching Media Molecule[^] demo a PS4 app that mimics their idea. However, they have since realised that their 6 weeks of work can still beat the two years' work, and who knows how many billions, invested by Media Molecule because while Media Molecule's demo is wicked cool, it's based on pre-recorded movements and not the full physics-based hand puppets they are building.
Lee[^] is continuing his work on transporting you, via the depth perception camera, into a virtual world. I really hope he's watched this movie[^] before he goes too far down that rabbit hole. Watch his video to get a little weirded out by it all.
The guys at Code-Monkeys[^] have totally nailed another issue with the PS4 demo of Media Molecule. The PS4 demo relied on using a wand, and this is akin to using a stylus on a touchscreen. While they demoed an initial cut at their "looks can kill" eye tracking shooter I get the impression these guys are along more to help add as many stepping stones as possible to allow those who come next to reach the lofty goals of the ultimate UI, rather than assume they can create it by themselves.
Simian Squared[^] raise another interesting point that follows on from Code-Monkeys' points: the advent of the touchscreen interface has heralded a new era in user experience and programming is now, more than ever, an art. The programming tools available to us today make the task of development more and more mechanised. Drag and drop, ORMs, do-everything frameworks and convention over configuration mean writing an app is easier than ever. However, writing an app that is a pleasure to use is now harder than ever because we, as users, no longer accept substandard interfaces or a poor experience. Simian Squared are producing a virtual potter's wheel. More than simply creating a system that responds to the position of a few digits, they want to transport you to a new world. They sum up the challenge but also the potential in their application: "a great concept artist will sometimes bend the rules of perspective or light and shadow for impact". The new interfaces available to us today make programming, more than ever, an art.
Eskil[^] continues on his quest to write a hardware abstraction API that's pluggable. Another step along the path to better UIs and (potentially) better hardware. As he writes: it's hard to get someone to buy your hardware if there are no applications that run on it. Abstracting out the API for hardware should mean that writing apps for new hardware is a snap.
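The shape of such a pluggable abstraction is worth sketching. The following is purely illustrative - the names and registry scheme are my own invention, not Eskil's actual framework - but it shows the idea: applications code against one interface, and supporting new hardware means registering a new plugin rather than rewriting the app.

```python
from abc import ABC, abstractmethod

class InputDevice(ABC):
    """The abstraction the application codes against."""
    @abstractmethod
    def poll(self):
        """Return the device's current input state as a dict."""

# Two hypothetical plugins with hard-coded states for illustration.
class MouseDevice(InputDevice):
    def poll(self):
        return {"kind": "pointer", "x": 100, "y": 200}

class GazeDevice(InputDevice):
    def poll(self):
        return {"kind": "pointer", "x": 640, "y": 360}

# New hardware plugs in by registering, not by changing app code.
REGISTRY = {}

def register(name, device_cls):
    REGISTRY[name] = device_cls

register("mouse", MouseDevice)
register("gaze", GazeDevice)

def pointer_position(device_name):
    """Application code: works unchanged for any registered device."""
    state = REGISTRY[device_name]().poll()
    return state["x"], state["y"]

print(pointer_position("mouse"))  # → (100, 200)
print(pointer_position("gaze"))   # → (640, 360)
```

This is exactly the economics Eskil describes: once the app targets `InputDevice` rather than a specific camera or mouse, a hardware vendor only has to ship a plugin to instantly have applications that run on their device.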
Infrared5[^] continue on their quest for an eye motion interface. Whereas Eskil had serious issues with his camera, these guys are waxing lyrical about how well it's performing for them. The joys of pre-production hardware. They also add to the idea that collaboration is the key to success in this challenge. I am getting a little worried at the lack of any actual attacks on anyone's jugular, but it's early days yet and the prize pool is, I'm sure, sufficient to get the red haze settling over the contestants.
Pete[^] is attacking his task methodically and systematically and with an eclectic mix of music. The Angels? Very nice. While others are focussing on the camera Pete's started with voice recognition. Sure, over 65% of human communication is non-verbal (depends on which study you refer to), but I'm not expecting Pete to include emotion detection (yet). Gesture and touch are great for items you can see or touch, but what about those things you can't see or touch? You can ask for something, and then once you have it you can manipulate it via gestures. Voice is important.
The challenge here is to showcase perceptual computing and this means to rethink how we interact with a system at a fundamental level. Sticking to familiar paradigms may make it easier for a person to approach a technology, but it doesn't help them take full advantage of a technology. It holds them back. Touchscreen interfaces never caught on until the hardware and user interface advanced sufficiently to make it intuitively natural to swipe and pinch. The hardware had to be fast and reactive enough that a gentle swipe would achieve a result, and just as importantly the UI presented to the user had to be obvious enough to encourage and respond to these gentle swipes. A stylus retards the use of a touch interface, and a wand retards the progression of a gesture based interface.
What the gesture and voice based UI looks like, and how this can be presented to the user in an obvious and natural manner, is what this challenge is about.
Last year saw the Ultimate Coder Challenge pit 6 teams against each other to create the Ultimate App for the Ultimate personal computer - the Ultrabook. The sadists at Intel are back at it with a new twist: create an application that shows off a convertible Ultrabook[^] and/or takes advantage of the Intel Perceptual Computing SDK 2013 Beta[^].
Let me say from the outset that I'm ignoring the "or" in the "and/or" above. The contestants must create an app that shows off the hardware and uses the perceptual computing SDK to have a chance. This means:
The application needs to take advantage of the Ultrabook's specific features such as the sensors, the touchscreen, always on/always connected, power management and/or graphics.
The application must make sense for a laptop form-factor and a tablet form-factor
The application must make use of gesture controls, or eye tracking, or voice control, or anything else hidden in that magical SDK.
I'll add a fourth requirement:
The application must make sense as an Ultrabook application
What I mean by this is that an application that is an existing application shoehorned into an Ultrabook with support for an Ultrabook tacked on in a way that doesn't harmonise with the original application will not get my vote.
So, on to the challengers.
Sixense Studios[^] (I keep wanting to hand them a "p") are old hands at the perceptual computing stuff. They've demo'd at Intel keynotes and are developing a virtual puppet application. I will be interested to see how this works in the tablet form factor.
Lee Bamber[^] refuses to back down from a challenge, and this is the third contest I've had the honour of judging him in. His entry will be a virtual conference that will allow you to transport yourself into a 3D world. "ambitious to the point of foolishness" is what he writes. He's mad. I love it.
Simian Squared[^] will be creating a virtual potter's wheel complete with virtual clay. Please note that points will be deducted for any "Ghost" moments that appear in any videos demonstrating the application.
Code-Monkeys[^] continue the primate theme and will be taking their existing Stargate Gunship game and making it fully immersive. Gestures for firing, voice commands to control weaponry and gaze capture for targeting. Gaze targeting is something I feel is going to totally and utterly change the nature of video games and I'm very keen to see how this works. A shooter game that reacts as fast as you can look is going to get crazy. I can feel the headaches already.
Infrared5/Brass Monkey[^]. Again with the Monkeys. This feels weird. They will be creating a 3D FPS using head tracking, facial recognition and voice. This will be a little different in that the angle of your head will change the view on the screen to make it more immersive. Interesting idea, and their art looks killer.
Quel Solaar[^] has decided to make it simple and reinvent the entire PC interface. He will create a game, a data visualizer and a creative tool that will make use of his open source software layer in order to make it "easy for any developer to make use of the diverse hardware available to us". Any input (voice, gaze, gesture), any display (phones, tablets, laptops, workstations) and any hardware configuration. And I thought Lee was nuts.
Our very own Pete O'Hanlon[^] is taking the safe path and creating a voice and gesture enabled image editing application. This seems specifically an effort to show off the perceptual computing SDK rather than show off an application, and I like that. Further, he's using touch as an input, thus being inclusive of the traditional Ultrabook features rather than just plowing on with the sexy, younger, more nubile features of the PerC SDK.
Each week I'll post an update of how the teams are progressing. May the best team win.