One of the big issues with having 10M members is names. Everyone has a name, and most want to use their name, or at least something vaguely resembling it. The trouble is that real names are messy, human things meant for messy human purposes, and they're terrible as labels when you need an easy way (for a programmer) to reference that name within text or in a URI. We can't have //www.codeproject.com/members/Chris Maunder because the HTTP spec doesn't allow spaces in URIs, nor can we confidently say Chris Maunder refers to me in text, because it could also refer to someone named Chris who is rambling incoherently[^].
So we have Display names as a way to label your content, such as posts and articles, and we have usernames as a way to provide a human-readable, programmer-parsable handle to your account: //www.codeproject.com/members/chris-maunder as a link to you, and @chris-maunder as a reference to you in messages.
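The mapping from display name to username is simple in spirit: lowercase the name, turn spaces into hyphens, and drop anything that isn't URL-safe. A rough sketch in Python (the `slugify` function and its exact rules are my illustration, not CodeProject's actual implementation):

```python
import re

def slugify(display_name: str) -> str:
    """Turn a display name into a URL-safe, first-last style username."""
    name = display_name.strip().lower()
    name = re.sub(r"[^a-z0-9\s-]", "", name)  # drop non-URL-safe characters
    name = re.sub(r"\s+", "-", name)          # spaces become hyphens
    return re.sub(r"-{2,}", "-", name)        # collapse repeated hyphens

print(slugify("Chris Maunder"))  # chris-maunder
```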
You don't need to use @username, and you can safely ignore the feature if it bugs you. However, if you like the convenience, a member's username can be found on their profile page or in the popup that appears when you hover over their name in the forums (assuming you have "Profile Popups" enabled in the forums).
We've finished reworking our caching of forums and articles and are happy to see load times for forums go from half a second to 6 milliseconds. That's beyond what we thought we'd get. Dive deep into messages from the days of yore and load times don't appreciably change. We've essentially opened up the entire corpus of forum postings for instant retrieval, slain a number of bugs, thrown out pages of code in the process and reduced our database load by a factor of three. It's almost idling now.
On the article side of things we've improved performance even more and have one final push, after which time we'll hunt down and nuke any remaining load issues.
For a decade we've been working against a local cache on each of the webservers. This meant that we either had to keep the time-to-live short, or we had to work out a sensible way of ensuring that when a member changes an article on one server, and is then directed to another server, they see their updated information - even though the local caches didn't talk to one another.
Yes: distributed caching is a solved problem, but there weren't many canned solutions when we started, and we did end up doing some clever things to ensure it all looked sensible, give or take some "expected" caching issues such as a deleted article still occasionally being around for 10 or so minutes. "Expected" really comes down to what is forgivable, and in this day and age even stuff like that stretches the friendship, so we've finally had a chance to bite the bullet, plug up the local cache and add a couple of Redis[^] servers. We're using the ServiceStack Redis client[^] and implemented - fairly easily - a distributed cache that not only solves our cache-sync issues but speeds up application spool-up time, since the cache is off-server and independent of the webservers themselves. No need to recache on startup - the data's already there.
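The core of a shared cache like this is tiny: every webserver reads and writes the same keyed store, so an update on one server is immediately visible to the rest. Here's the shape of the cache-aside pattern, sketched in Python rather than the ServiceStack C# client we actually use, with an injectable stand-in client so it runs without a live Redis server (key names and TTL are illustrative):

```python
import json

class SharedCache:
    """Cache-aside over any Redis-like client (get/set with a TTL).
    In production the client would be a real Redis connection."""

    def __init__(self, client, ttl_seconds: int = 600):
        self.client = client
        self.ttl = ttl_seconds

    def get_or_load(self, key: str, loader):
        """Try the shared cache first; fall back to the loader on a miss."""
        raw = self.client.get(key)
        if raw is not None:
            return json.loads(raw)
        value = loader()  # e.g. read the article from the database
        self.client.set(key, json.dumps(value), ex=self.ttl)
        return value

    def put(self, key: str, value) -> None:
        """Write the fresh copy so every webserver sees it on its next read."""
        self.client.set(key, json.dumps(value), ex=self.ttl)

# Stand-in client so the sketch runs without a Redis server.
class FakeRedis:
    def __init__(self):
        self.store = {}
    def get(self, key):
        return self.store.get(key)
    def set(self, key, value, ex=None):
        self.store[key] = value

cache = SharedCache(FakeRedis())
cache.get_or_load("article:42", lambda: {"id": 42, "title": "Caching"})
cache.put("article:42", {"id": 42, "title": "Caching, revised"})
print(cache.get_or_load("article:42", lambda: None))
```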
We are, obviously, seeing individual cache load times go up, since it's no longer a local cache but requires a network round trip plus serialisation. Even so, the overall database load is nicely down and our code is far cleaner.
I was sick of moving between computers, sick of the power outages in our building knocking me off my machine, and sick of having to Remote Desktop from home to my office machine to be productive. I'd set up various machines that would allow me to do the basics when I had to use them (eg for travel) but it's never the same. Like sleeping in someone's spare room - no matter how comfortable, it's never quite home.
I figured that laptops these days were pretty damn powerful and after road testing[^] a couple of Ultrabooks I decided that anything Core i7 with 8GB RAM would be more than enough for me. All I needed was something that would let me install Windows 7, something that was light, and something that had a big, fast SSD.
Enter the mid-2013 Macbook Air. Core i7, 8GB RAM, and the fastest 256GB SSD around.
To cut to the chase: it's an excellent dev machine and is faster than my 4 year old quad i7 desktop. I'm seriously impressed. I'm now able to work on a single machine anywhere in the world without having to compromise by switching to a slower machine for travelling, and I have the added bonus that I no longer need a desktop for the office and a laptop for travel. A single unit does the trick.
The annoying bits
It's a Mac. Apple did not go out of their way to make the Bootcamp experience exceptional. The trackpad sucks in Windows, yet it's by far my favourite trackpad when in MacOS. It's brilliant. Trackpad++ sort of fixes this, though. [Update: in Win10 the trackpad works perfectly]
I tried using Parallels to create a VM from my Bootcamp partition in order to run VS while in the Mac environment. This was great, and you get the proper trackpad experience, but the big glaring issue was that I needed a USB DisplayLink adapter to hook up an external monitor, and installing DisplayLink drivers in Bootcamp and then running it under Parallels causes the Windows VM to bluescreen. Parallels is aware of the issue and had no plans at the time to do anything about it.
So I stick to Bootcamp or MacOS and never the twain shall meet.
Docking stations became a big issue because I need a lot of screen real estate. I hate cable spaghetti, though, and tried a number of options before settling on the one that gives me almost everything for (ironically) the cheapest price: a Thunderbolt display.
Thunderbolt displays are expensive. However, they come with a split Thunderbolt / power adapter that plugs into the Thunderbolt port on one side and provides a power cable to the laptop on the other side. Within the Thunderbolt display are a pair of excellent speakers, a webcam, USB 3.0 ports, and gigabit ethernet. It's essentially a fully self-contained docking station built into one of the nicest monitors I've ever used, and with the 27" running 2560 x 1440, it allows me to run VS on one half and SQL MS or Chrome or anything else on the other half in the same manner that I'd previously been using two separate screens.
So factor in the cost of two 19" screens, a docking station ($250 - $300), plus speakers, an external webcam and cables, and you'll find that a refurbished 27" Thunderbolt display is way cheaper, far more convenient and (for me at least) a much nicer experience.
The drawback is that Windows doesn't play well with Thunderbolt, and you may have to physically shut down your machine before unplugging the monitor if you have the monitor set as your primary display. Further, you need to plug the monitor in before you boot up a Windows box, because Windows only scans for Thunderbolt on bootup. This is really, really annoying. [Update: in Win10 the Thunderbolt display works nicely]
The only other annoying bit is fan noise. I hammer that poor little laptop, and in a quiet room at 2AM when you're building code and running a zillion unit tests the thing really winds up and gets a bit rowdy. I'm still waiting to see what Apple does with the 13" Macbook Pro, since a quad core Haswell unit could have a little more headroom before it starts to get hot and bothered - or at the very least it'll be done with its tasks sooner, meaning noise for a shorter time. A retina display would be nice but totally not needed, and the added weight is a real issue. A touchscreen - while something I've grown to love with the Ultrabooks - is a complete waste for me. The laptop sits by my monitor, closed, while I work. I have no desire to put fingerprints all over my big display, and after my experiences with the Perceptual Computing Challenge I know how tired arms get after spending even short periods trying to navigate with your arms up.
Overall a 7/10. [Update 8.5/10 when used with Windows 10]
The good:
- a single machine wherever I am in the world
- an excellent setup with the external Thunderbolt display
- built-in UPS. Love it.
- totally fast enough.
The bad:
- Windows issues with Thunderbolt connections
- noisy when hot and bothered
Did I really say a computer was "fast enough"? I lied. No such thing.
We skipped VS2012 / .NET 4.5 and jumped straight to VS2013 / .NET 4.5.1 because, y'know, it's far more exciting running your production servers on beta software rather than on the boring "tested" stuff.
We're not using any of the fun stuff explicitly yet, but Matthew has already been eyeing off a bunch of code that could do with some async action. The thing that's most immediate to me is the multi-core JIT and startup time: all cores are actually getting used, CPU usage is up where it should be, and the site spools up much, much more nicely than it ever has. Simply getting the advantages of the framework improvements is (almost) enough for me.
The other obvious timesaver is build time: much, much faster than VS2010, even when bogged down with all the other stuff I have open. I'm developing, testing, and running the site on my Macbook Air on Win7. There's the VS IDE, SQL Server Management Studio, IIS running the actual site, Outlook groaning under the weight of a 23GB PST, various Word docs and spreadsheets, 6 Remote Desktop windows and half a dozen browser windows, and it's all humming along nicely.
Although sometimes (especially during compile time) the humming sounds suspiciously like the Mac's fan is about to attempt takeoff. It gets disturbingly loud.
I'm still not taken with the new VS look - a little harsh, a little lacking in warmth - but it's way faster and, so far, more stable than my old creaking install of VS2010.
As to my experiment with moving my developer life onto a tiny, ultralight laptop: the jury's still out. 7/10 so far.
I got sick of typing URLs for members and so, well, I coded.
To provide a link to another member, just use the tried and true @username syntax, where the username is the handle generated from their name (or manually modified), in the form first-last. Everyone's profile shows their username just under their profile image.
So if I want to shout out to a ray of sunshine I can just go @Michael-Martin (no link - just type that literally) and when the message is saved the link is generated.
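Turning @username into a link at save time is essentially one regex substitution. A sketch of the idea in Python (the pattern and the profile URL format are inferred from the examples above; the real parser will differ):

```python
import re

# first-last style handles: letters/digits, hyphen-separated
MENTION = re.compile(r"@([A-Za-z0-9]+(?:-[A-Za-z0-9]+)*)")

def link_mentions(text: str) -> str:
    """Replace @first-last mentions with profile links when a message is saved."""
    def to_link(match: re.Match) -> str:
        username = match.group(1)
        return f'<a href="//www.codeproject.com/members/{username.lower()}">@{username}</a>'
    return MENTION.sub(to_link, text)

print(link_mentions("Thanks @Michael-Martin!"))
```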
Obviously this is opening a can of worms and I know the next two requests. Yes, soon.
The mobile version of CodeProject has been updated a little to help those with fat fingers (ie me). It's by no means perfect, but we've aimed for a simplified UI, easy to read fonts and easy to touch buttons while still maintaining as much browsing and reading functionality as we can.
We have, however, limited some actions (eg voting) for a subsequent rev. We'd rather focus on providing a nice UI to read articles than worry that fat fingers (looking in the mirror again) will accidentally hit the down (or up) vote button while trying to read the next article.
The Essential Guide to Mobile App Testing[^] should be required reading for devs, and for those who purport to manage devs, involved in mobile app development. It's a rare, rare day that I promote a specific whitepaper, but as part of our new Research Library[^] we've been working incredibly hard to find companies that have spent the time to create research material that helps you make decisions instead of simply showing you PowerPoint slides of their product.
In the spirit of avoiding real work, I've been playing around with an idea that is ridiculously simple but may provide a little entertainment for our members: stylable member profiles.
Go to your settings page[^], hit the Customisation tab, and you'll see a text area for entering styles that augment or override our basic styles.
This is fraught with peril on so many levels. Firstly, you might break our page. Secondly, when we update our styles or page layout, we may break your styling. Thirdly, things could just get messy. Really messy.
But that's what life's all about, isn't it? So enjoy.
Secondly, we've introduced a new article type called "Reference". This will be fleshed out a little more soon, but for now we wanted to provide a place for things that I've wanted to post for eons: tables and reference sheets. What's the ASCII value of X? What's the HTML entity for Y? Stuff like that. Let's start simple and work our way up.
This morning I had an experience that provided such a classic picture of the entire IT industry for me right now:
I went into the Microsoft Store and was looking at an Acer Aspire S7. It looked nice, and the blurb said "128GB SSD". So I took a peek at the computer's properties and saw "57.9GB free of 79.8GB" on drive C - the only drive visible.
I asked the sales guy where the 128 - 80 = 48GB was. He told me the missing space was used by the OS, which I politely disagreed with because the OS was currently on drive C and was using about 22GB of space. He then told me it was the demo software they have installed that's using up the space (I again disagreed), and then that it was the recovery partition using the space, so I asked him to show me this 48GB recovery partition. He hit Windows+C, the (HD) screen totally filled with Control Panel applets, and he typed in "Disk management", but nothing appeared. He scanned the list of applets briefly, then gave up and right-swiped to get the settings, but again gave up, and after fumbling around found a list of partitions but was unable to get me the size of any of them. He then turned to me and said "this is really outside of a sales thing - I need to get you my tech guy".
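For what it's worth, part of that gap is just units: drive makers count gigabytes in powers of ten, while Windows reports powers of two (GiB, though it labels them GB). A quick back-of-the-envelope check:

```python
# A "128GB" SSD as marketed: 128 * 10^9 bytes.
marketed_bytes = 128 * 10**9

# Windows reports sizes in binary units (2^30 bytes per displayed "GB").
windows_gb = marketed_bytes / 2**30
print(f"{windows_gb:.1f}")  # roughly 119.2

# So only ~9GB of the "missing" 48GB is unit accounting; the remaining
# ~39GB really was sitting in partitions the sales guy couldn't find.
print(f"{windows_gb - 79.8:.1f}")  # roughly 39.4
```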
He clearly didn't know what he was doing, but he had enough of a clue to navigate around better than most people I've seen use Win8. Yet he couldn't answer a simple question relating to what the tag says and what's actually on sale, and claimed it was a technical question, not a sales question. I left the store feeling the way you feel when you leave a mechanic's having been told you need to get the air in your tyres exchanged at the beginning and end of Winter, and that'll be $149.99, please.
I felt lost as he went all over the place trying to answer the question (and I've used Win8 an awful lot), and then I felt like my question was unimportant to them, that I shouldn't be asking it, and that the answers I got were made up (which they were).
It felt complicated, it felt confusing, and it was impossible to make a choice on laptops because there were no answers, and I couldn't trust the answers I did get anyway.
I wandered 3 doors down to the Apple store, looked at the properties of a 1TB iMac and asked to see the actual size of the HDD. The sales dude did a single right-click, Get Info, and showed me that of 999.4GB, 978.7GB was free. We're done.
There's one keyboard layout. You can have light (11" or 13") and medium-powered with OK screens, or heavier, thicker and more powerful with retina displays (13" or 15"). It's easy - except that I want a retina display on an Air. Not because any other laptop I've ever seen has a retina display: only because the Macbook Pros have one. I don't actually, in isolation, want a retina display; I just don't want to feel like I'm missing out on something.
When I look at tablets I see the iPad, Android or Surface devices, and they are all fairly simple to use. Phones, be it Android, Win Phone 8, iPhone or BlackBerry, are all simple to use. They are in fact simpler to use than ever, with only feature phones being simpler (but many of them were tear-your-hair-out annoying).
Yet laptops and PCs seem to have increased in complexity and choice and confusion, making the buying decision complicated and intimidating. Windows 8 has made actually using a laptop confusing and complicated. Put these together and you have a sales nightmare: you don't know which one to buy, and while trying to decide you don't know how to actually use the thing you think you need to buy.
And then you wander over to Apple and you think "My God this is so simple" and you have limited choice, and you feel you have a chance at making a decision.
Previously, however, the decision would come down to "Do I pay a 30%-50% premium on essentially the same hardware just to get an Apple?" For me this has always been game over - I'm simply not willing to pay that much. Yet today I'm looking at a complicated Windows 8 machine that's more expensive than the simple Apple machine.
Buying a PC or laptop/Ultrabook is no longer easy, or as cheap as it was a year or so ago. Win8 is (to me, anyway) a technically better and more secure operating system than MacOS, ruined by an awful UI. Apple has a still-maturing OS that is starting to acknowledge that security is important, but still crashes, still locks up and still can't seem to work out how to handle network calls on a background thread. But it's simple, the machines will never offend anyone with their looks, you get what you pay for, and they are now in the same price bracket as (or below) many of the Ultrabooks.
I can understand why PC sales have fallen, and for me it's not just tablets. What I don't understand is why Apple hasn't gone for the jugular like they did on Windows Vista.
This is the final post in the series on the Ultimate Coder Challenge: Going Perceptual. The applications are in and now it's do or die. The biggest challenge to our contestants at this point isn't the code, or the SDK, or their idea. It's our hardware. It's the installers. It's our ability to understand what they were trying to do, and that has proven difficult in some cases.
Perceptual computing is a form of interacting with a computer in a more natural, human way. You speak or you gesture, or you may even want to just swipe. It's not about the keyboard and mouse. The biggest problem is that what we think makes a good method of interaction (cue Minority Report) is actually a terrible way to interact. It's tiring, there's no feedback, and gestures are in 3D, not 2D, and not constrained to a box or button, and so can be ambiguous. Further, a touch gesture has a definite start and stop. It starts when you touch and ends when you stop touching. When does a gesture start and stop?
Further, and as has been mentioned elsewhere, there's no standard. We all know how to pinch-zoom, or swipe to scroll, or even push a button. How do you push a button in 3D space? How do you pinch to zoom without the initial contraction of the fingers being misinterpreted as a shrink action first?
So on to the challengers:
Lee's work I've written about extensively, usually while ruefully shaking my head at the madman. He shamelessly bites off way more than he can chew, and after endless dead ends, delays, setbacks and possibly a broken keyboard or two he comes out with something a little special. His virtual conference is, for me, an excellent example of what's possible with perceptual computing: a UI that revolves around you. You're in a virtual 3D environment talking to others, and as you turn, the viewport turns with you.
Except there's one huge problem with this approach: in a video conference you never turn. If you turn then you're no longer looking at the screen in front of you. Further, as you turn, the mass-tracking (not head tracking) turns the virtual camera in the direction that you're turning, meaning the image on the screen pans, meaning, well, that you don't actually need to turn to view it. Kind of like an anti-Catch-22.
It's very cool though. Very cool.
Sixense's puppetry demo is complete. It's the ultimate kids' toy and allows two people (or one talented person) to host and record a virtual puppet show. It works, but I can't help but feel that so much effort went into things like scenes and a story that the creators forgot what puppet shows are about. Dialogue. Well, and violence, for those who fondly remember the Punch and Judy days, but mostly dialogue, and often costumes. So why not ditch the fancy scenes and instead allow quick changes of clothes? Or have amusing sound effects when one puppet bonks another on the head? That would focus the players on thinking about their puppet rather than trying to control a puppet that keeps flying all over the countryside.
It's an excellent example of 3D free form gesture based interaction with a computer, and for that they score highly. Most importantly their user feedback on interpreting hand gestures is by far the best of any contestant.
Code-Monkeys' Stargate application shows promise in (for me) the ultimate goal of a head-tracking video game. Unfortunately reality never lived up to the dream, and the sheer effort involved in trying to control the targeting quickly took away the luster. I did get it to work, but the headache afterwards was not worth it.
Infrared5 followed a similar path to Code-Monkeys with their Kiwi challenge; however, they introduced an extra variable by having the application controlled via a smartphone app. The connection between the game and the smartphone was seamless and slick. I could not, however, control anything but the fire button from my iPhone, which made the game unplayable for me. Restarting didn't solve the issue. Further, I often ended up with a black landscape in my viewport that no amount of yelling, tilting, clicking or swiping would get me out of, and closing the app via alt-tab (there's no close button I could find) was near impossible, because no sooner had you popped out of the app than you were thrust back into it, black soulless void and all.
Pete has created an image processing application that uses a series of gestures to activate filters. His biggest problem is that there is no defined, standard set of gestures one can call on to immediately dive into his app. There's no help button, so you're left guessing at gestures. Thumb up and down, swiping up and down, and in my case swearing like a sailor. An excellent attempt at making gesture input a natural part of the interaction with the application, but let down, I think, by the immaturity of the platform.
Eskil has demonstrated an abstraction layer that allows developers to take advantage of perceptual computing without needing to write the boilerplate code. In fact, without needing to know anything about the nuts and bolts at all. This is a tremendous achievement given the short time available.
A short note on the perceptual computing camera: I'm typing this review on my Lenovo Yoga with the camera perched on the top of the screen. The camera is heavy, as I've mentioned before, and it's only now, after hours of clenching my fist, twiddling my fingers, bobbing my head frantically and yelling 'Engage' in various accents to try and interact with the game, that I've realised I'm getting tired of always hiking the screen back up to the vertical position. The camera makes the lid sag under its weight, and when it's not sagging it's trying to tip the laptop base over apex. It needs to be lighter and it needs to be way smaller.
Another general comment on the use of the camera as an input device is that onscreen feedback is critical. If you don't get feedback on what the computer thinks you're doing you go nowhere. You can't debug. I cannot overstate how important feedback is.
Also, I need to comment on the Lenovo itself. I love the feel of the keyboard, and most especially the palm rest. So very comfortable. But please, to anyone who is thinking of manufacturing a laptop keyboard: DO NOT reduce the width of the right shift key and squeeze the up arrow into the space created. It means I'm constantly hitting the up arrow. Constantly. It's doing my head in. Apart from that, the screen is crisp, the battery life good, and the flip-back keyboard weirdly useful. Propping this thing on a table with the keyboard folded back to watch a movie or flick through the news is brilliant.
As to perceptual computing, my overwhelming feeling is "it's coming". But it's not here yet. We are trying to make a computer be like the real world. You turn or look or speak or grab at something that isn't there, in the hope that the computer will mimic or replicate your intent virtually. This, to me, is akin to skeuomorphism: changing something to be like something else, like Apple making its calendar application leather-bound, or ebook readers showing an animated page turn. A computer is not the real world, and it doesn't have the limitations of the real world (so to speak), so why mimic it?
I think the future of perceptual computing will most likely be subtle. The computer may recognise you or your voice, and will recognise when you are in front of the computer, when you're looking at it, and when you're not. The end of screen savers, really, since it would just go to sleep. Gestures such as brushing away an app to close it, or flicking it to move it to another screen, would be intuitive, and voice recognition would allow utility commands such as searching or bookmarking to be carried out without needing to take your hands off the keyboard or, indeed, leave the current application.
Gaze tracking is another incredibly important, yet not currently functioning (at least on the hardware I have in front of me), feature that has a myriad of uses. My immediate use would be eye tracking in UI testing, but even simpler uses could include things like auto-scrolling, or auto-hiding of elements when they are not being looked at directly. The eye does, however, jump around an awful lot, so the smoothing algorithm will need to be heavily weighted.
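By "heavily weighted" I mean the filter should mostly trust where the gaze has been and only slowly follow where it appears to jump. A simple exponential moving average does exactly that; the sketch below (the alpha value is my guess, not from any SDK) damps the eye's rapid saccades while still tracking a sustained shift in gaze:

```python
def make_gaze_smoother(alpha: float = 0.1):
    """Exponentially smooth noisy (x, y) gaze samples.
    A small alpha weights history heavily, damping rapid jumps."""
    state = {"pos": None}

    def smooth(sample):
        if state["pos"] is None:
            state["pos"] = sample  # first sample initialises the filter
        else:
            x, y = state["pos"]
            sx, sy = sample
            state["pos"] = (x + alpha * (sx - x), y + alpha * (sy - y))
        return state["pos"]

    return smooth

smooth = make_gaze_smoother(alpha=0.1)
smooth((100.0, 100.0))
print(smooth((200.0, 100.0)))  # a sudden 100px jump moves the output only 10px
```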
The contestants achieved some incredible feats of patience, innovation, creativity and problem solving. I take my hat off to them for their perseverance and outright foolishness in taking on a challenge that many of them were so unprepared for, yet so willing to have a go and rise to the occasion. Rise they did, so well done, guys. Get some rest and have a beer. You've earned it.
A thoughtful and interesting conclusion to the series, Chris. The one thing I really felt I was missing was a decent graphic artist so that I could do some decent on-screen tutorials. This is an area that really did need someone with more artistic ability than me.
I was brought up to respect my elders. I don't respect many people nowadays.
This week is the final week of blogs by the challengers. I won't go through individual entries since pretty much all of them are at the wrap up / polish stage.
The overwhelming feeling you get when reading the blogs is one of a challenge accepted and, almost, tamed. This is uncharted territory for the contestants, and that territory is not paved smoothly: the SDK is in beta and the capabilities of the hardware are still limited. There are a lot of things that would be great if they worked, but they don't. Sixense wanted to have their Big Bad Wolf puppet blow the little piggy puppet's house down when you blow on the mic, but blowing on a mic is not a recognised input, so it can't happen. Code-Monkeys (and many others) wanted head tracking - or even gaze tracking - but the hardware simply isn't up to it at the moment. Infrared5 simply want more grunt from the Lenovo.
It's close. Really, really close, and while the contestants were not always able to achieve their first order approximation of what they wanted, they have done exactly what good developers do: focus on the outcome they want, then work backwards. Instead of finger tracking you use thumb tracking, instead of eye tracking you use head tracking, instead of head tracking you use body mass tracking. Instead of tracking everything, just do your tracking work on the object (eg hand) you want to track and ignore all other input data. Speed improvements came quickly, as did a usable (but maybe not perfect) solution.
The point is computing power is always increasing, the SDK will only improve, the hardware will become more refined and more responsive (and offload much of the software based processing) and we will get there. Quickly.
We get the final (final) versions of the apps soon, and it will be then that we, the judges, have to dive in and ask ourselves two basic questions:
1. What is perceptual computing?
2. Who knocked it out of the park?
Thanks for all your encouragement over the last few weeks, Chris. Reading the judges' comments has been a real highlight over the weeks, and I would have to say that last week's was one of the funniest posts I've ever read, so kudos for that.
With the competition, I tried to stay with what was in the SDK, and I stand in awe of what the other competitors produced. As for point 2, the answer (to me) is easy - Lee. Clean out of the park, and 3 counties of clearance.