Intel has launched a new Ultimate Coder Challenge[^] that follows the same structure as previous years: multiple teams, 8 weeks, 1 challenge.
This year's challenge is IoT: who will create the next great commercial solution? Using the Intel commercial IoT development kit, the challengers must come up with an idea - and an implementation - that truly defines what IoT is and means to us all.
Out of the gate I will admit I'm already heavily biased to an idea I've been spouting to all and sundry for years: a local wireless network for cars that would reduce to near zero the chance of collisions.
Team One[^], aka Team Whirlwind, are working with Intel Edison and Dell Wyse[^] to create a highly distributed ad hoc network modelled after MongoDB's master-slave model.
They will need to overcome issues such as network latency, basic network issues, interference, speed, and ultimately interfacing with a car so that the system actually does something. I hope they succeed. It's about time we had this.
Team Two[^], aka Team Geras, has the goal of diving deep into the behavioural patterns of those in their twilight years to look for correlations between behaviour and undesirable incidents. Do a hot day and a bout of lawn bowls result in more falls? Do high humidity and certain social interactions result in a case of the vapours?
This is a lofty goal and I worry that gathering and analysing enough data within the time of the contest will be difficult.
Code will be written in ClojureScript running on JerryScript. Because they want to. I write way too much JavaScript (badly) and I swear we're all going to look back on the twenty-tweens and think "what sort of drunken haze were we in to think JavaScript was a good idea for everything?"
Team Three[^], aka Team IoT Vaidya, are looking to create a standalone solution meant specifically for people living in remote places where there is a shortage of doctors. The idea is that there's a dearth of medical support in many rural communities, so why not automate some of the more pedestrian tests that can be done to get an initial good/bad diagnosis? Team Three will focus on a person's ECG and pulse rate plus other vitals such as temperature and galvanic skin response. "A lot can be said by proper analysis of these parameters."
As a bit of a chronic cyclist this sort of stuff is right up my alley. I ride with my eyes glued on my heart rate, left/right pedal stroke balance, power output, cadence, calories burned and occasionally oxygen saturation and haemoglobin recruitment. Sometimes I even watch where I'm going. Having access to a wearable that would include things such as ECG, temperature and galvanic skin response would be brilliant. Selfish, in that it's all about my cycling and not about saving lives (directly), but the applications for a solution such as the one Team Three is proposing go far and wide.
Team Four[^], or Team Proximarket, have waxed lyrical about the physical web, but their introduction scares me a little. Smartcart is a proximity-based technology for retailers that, among other things, will reduce the impact of customer oversight. Oversight meaning "Excuse me: it looks like you forgot to pick up toilet paper. It's in aisle 3, 7.5m to your left." or "Excuse me: it looks like you're trying to leave the store without spending enough. We've talked to your car's infotainment system and it agrees you're not going anywhere until that shopping cart is nice and full."
If they win then I'll be the first to welcome our new robot overlords.
But again maybe not. Team Agro Hacker is working to deliver a solution to crop disease and pest management using image processing and machine learning. You get the computer to take a peek at the leaves of a crop and have a good hard think about what could be wrong.
Admirable work, and increasingly important in a rapidly growing world. They have not, however, made it clear how this is an IoT solution. They'll need to clarify this to move ahead.
We've had a number of complaints that a member will spend a great deal of time crafting a response to a question in Quick Answers[^] only to hit the post button and find the question was deleted or closed while they were answering it.
In a perfect world everyone would agree on what's suitable for answering and what's suitable for closing, but our world is far from perfect. Effective today we've added a feature that will re-open a question that's been closed if someone posts an answer to that question.
We could have implemented question locks, but locks are messy.
We've launched a small change to Quick Answers[^]. When posting a question you get the usual "Subject" and "describe the problem" boxes, but we've also added a "What have you tried?" box that must be filled in before you can post your question.
It makes it harder to post a question (barely) by forcing the poster to think a little about what they've done so far. It allows those answering to avoid things already tried and suggest new ideas. It will also, possibly, act as a self-identifier of those too lazy to explain their problem to those eager to help.
Thanks Chris, it's a really good move. It helps questions become more self-explanatory and avoids suggesting solutions the OP has already tried. It also cuts down on spam posts and lazy questions. Most of the time on CP I have seen questions titled "Please solve it urgently" or "error in c# application" with the same description as the title (though I tried to "IMPROVE" them, with so little description they remained unclear).
Members should take advantage of this and improve their question quality so it gets resolved quickly.
Finally: "More time spent explaining a question helps reduce the time to resolve it."
As far as the links are concerned (copy/pasting the link), I would recommend that, instead of sending a request to get the title, you leave it to the server to update the content of that link to a title (if from CodeProject) once the poster is done editing the post. Like Markdown! It would make it a lot better. Plus, it would give you an opportunity to add link titles for posts from other sites by reading their <title> tag.
For example, just to edit this message for you, I have sent something like 15 requests of 1kb+ each. For me it doesn't matter, but for someone with a metered connection it does. The size also increases as the character count grows. (Right now it is 2.3kb and growing for each request.)
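The server-side approach the comment describes - read the fetched page's <title> tag and use it as the link text - can be sketched as follows. This is a hypothetical helper, not CodeProject's actual code, and the network fetch itself is omitted:

```python
import re

def extract_title(html):
    """Pull the contents of the first <title> tag out of a fetched page,
    the way a server-side link-titler might. Sketch only - a real
    implementation would use a proper HTML parser and handle encodings."""
    match = re.search(r"<title[^>]*>(.*?)</title>", html,
                      re.IGNORECASE | re.DOTALL)
    return match.group(1).strip() if match else None

page = "<html><head><title>CodeProject - For those who code</title></head></html>"
title = extract_title(page)  # "CodeProject - For those who code"
```

Doing this once on the server when the post is saved, rather than per keystroke on the client, is exactly what keeps the request count down.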
The reason we do Ajax calls for formatting is because we want to ensure the colourisation is done properly. We could simply skip the formatting, or use something like syntaxhighlighter to do some rough colourising. It's been discussed many times.
If you're worried about postback size then uncheck the "Show a live message preview as you type (not available < IE9)" checkbox in your Settings[^] (under the Forums tab)
With regret we've abandoned using CommonMark to render our messages.
CommonMark is meant to fix the issues inherent in other Markdown implementations while being true to the core ideas of Markdown. Basically: it should just work, there should be no surprises, and it should work with existing HTML. Markdown / CommonMark handles the main gruntwork of text formatting and when you need some fine tuning just throw in some HTML and you're good to go.
Unfortunately CommonMark handles PRE (i.e. Preformatted) blocks in a manner that simply doesn't work for us. A PRE block should (at least in my book) allow you to enter text and have the formatting maintained as-is. On CodeProject we cheat a little[^] and allow things like B, EM and U tags for those who want to highlight sections of code, but beyond that what is entered is what appears.
In CommonMark, a PRE block that contains text indented 4 spaces will trigger the creation of a <pre><code> pair that wraps the indented block as if it were a code sample. Code samples are often indented, so whenever you paste code into a PRE block you'll more than likely get nested PRE blocks.
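To see why pasted code trips the rule, here's a toy sketch of just the indented-code part of CommonMark. This is a deliberate simplification, not the real parser - the spec has far more cases - but it captures the one rule that bites here:

```python
def render_indented_code(lines):
    """Toy model of CommonMark's indented-code-block rule: any run of
    lines indented by 4+ spaces becomes <pre><code>...</code></pre>.
    NOT the real parser - just the one rule discussed above."""
    out, code = [], []

    def flush():
        if code:
            out.append("<pre><code>" + "\n".join(code) + "\n</code></pre>")
            code.clear()

    for line in lines:
        if line.startswith("    "):  # 4-space indent starts/continues a code block
            code.append(line[4:])
        else:
            flush()
            if line.strip():
                out.append("<p>" + line + "</p>")
    flush()
    return "\n".join(out)

# Code you paste is usually already indented, so it gets re-wrapped as a
# code block even when you wanted it to stay verbatim inside your PRE:
html = render_indented_code(['    int length = "A string".Length;'])
```

Run against an already-indented snippet, the output is a `<pre><code>` wrapper - and when that happens inside a PRE block you typed yourself, you end up with the nested PRE blocks described above.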
This just doesn't work for us. We love what CommonMark is doing to provide consistency, but that just seems an odd decision. For now we're disabling CommonMark in Quick Answers and reverting to MarkdownSharp.
This makes me sad.
We announced the introduction of Markdown[^] into the forums and Quick Answers a while ago, but we were never truly happy with the implementation of the Markdown processor in use. Ambiguities, lack of standards, and poor performance of the Markdown transformer were niggling annoyances.
The syntax is slightly different[^] to that of Markdown, but the changes are small enough that it should, hopefully, not cause any problems. As always if you do come across issues let me know and we'll season to taste.
We now support Gravatars for your profile picture to help reduce the pain in maintaining your latest profile selfie (or professional studio shot - whichever). Just set up (or update) your Gravatar pic and then on your settings page select "Use my Gravatar", hit update, and you're done.
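Under the hood Gravatar's long-standing documented scheme is simple: the image URL is an MD5 hash of your trimmed, lower-cased email address. A minimal sketch (the size parameter is an optional extra):

```python
import hashlib

def gravatar_url(email, size=80):
    """Build a Gravatar image URL per gravatar.com's documented scheme:
    MD5 of the trimmed, lower-cased email address, plus an optional size."""
    digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    return "https://www.gravatar.com/avatar/%s?s=%d" % (digest, size)

url = gravatar_url("someone@example.com")
```

Because the hash is of the normalised address, "Someone@Example.com " and "someone@example.com" resolve to the same picture - which is exactly why one Gravatar follows you across sites.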
We'll add it to Quick Answers soon, too, but for now here's a refresher on Markdown:
We use GitHub flavoured Markdown with a couple of minor changes. Here's the gist:
Heading (or use # Heading)
=======

And a Sub-heading (or use ## Sub-heading)
-----------------

#### Use #, ##, ###, ####, ##### for H1 - H5 headings

Paragraphs are separated by a blank line. A single newline will not cause a line break. Leave 2 spaces at the end of a line to force a line break.

Text attributes *italic*, **bold**, ``code``, --strikethrough-- are supported, as is <font color=red>HTML</font>.

To insert code, use ``` before the code and then end with a closing ```:

```
int length = new string("A string").Length;
```

Hyperlinks are easy: [link to CodeProject](http://www.codeproject.com).

- pears and stuff
I've loved Ace[^] forever. It's one of those pieces of code that, when I first saw it in action, left me unable to even begin to think how they managed to do it in a manner that didn't bring the entire browser to its knees. But it works and it works very, very well.
I'm happy to announce that after a cold, lazy evening, a few Google searches, some beer[^] and a bit of swearing I've added Ace as the Source editor to our online WYSIWYG editor for articles.
Editing articles is meant to be a WYSIWYG affair but it's never the case with HTML. Us control freaks always want to dig into the markup and make it just right. With Ace we now have that markup syntax colourised which helps enormously when your article's getting a little long. On top of that we get line numbers, tag matching, and real-time validation.
Of course, if it's just not working for you there's an "ace" button next to the "Source" button that allows you to deactivate Ace if it's causing problems.
Our article voting system has evolved progressively. From one person, one vote to a weighted system, to requiring comments when down-voting, to a system that statistically removed junk votes, and then lately to a system that recognised that voting patterns are not only bell curves, but sometimes, legitimately, bimodal.
We have, to a large degree, been successful at suppressing malicious down-voting. Too successful, it seems, and the article voting system is now massively weighted towards up-votes rather than down-votes. To up-vote you merely click the 4 or 5 rating. To down-vote you need to add a comment, and if your down-vote doesn't agree with the majority then your vote may not be counted until a sufficient number of other members have likewise voted the article down.
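To make the asymmetry concrete, here's an entirely illustrative weighted rating - hypothetical, not CodeProject's actual formula, which also does junk-vote removal and bimodal handling. Each vote carries a member weight, so a handful of low-weight down-votes barely move the needle against a few full-weight up-votes:

```python
def weighted_rating(votes):
    """Weight-adjusted mean of (score, member_weight) pairs.
    Purely illustrative - CodeProject's real algorithm is more involved."""
    weight_sum = sum(weight for _, weight in votes)
    if weight_sum == 0:
        return 0.0
    return sum(score * weight for score, weight in votes) / weight_sum

# Two easy up-votes versus one gated, lower-weight down-vote:
rating = weighted_rating([(5, 2.0), (4, 1.0), (1, 0.5)])
```

With those numbers the rating stays above 4 despite the 1-vote, which is the "up-votes drown everything" effect described above.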
So while up-voting is great in that it rewards authors and gives readers a way to say thanks, up-votes are bad when the up-votes are not votes based on the technical merit of an article but instead based on being the author's friend, family or colleague. Make it 50 friends, family members or colleagues and the vote for a given article is hopelessly invalid.
Basically: you can have too much of a good thing. It's easy to up-vote, hard to down-vote, and so the average article rating goes up and the ability to sort the wheat from the chaff goes down.
Starting today we're removing a barrier on down-voting. You are no longer forced to provide a comment when down-voting. We have our historical-based expectations on what will happen but will be monitoring the results closely just in case.
The change is effective as of now. As always we're open to suggestions and ideas to make it even better.