How exactly does solving an arbitrary algorithm test coding skills? How does it demonstrate good OOP practices, or DB architecture skills, or an understanding of Linq, or really much else other than "getting the trick?"
This is a great question. Is there any way for a site to determine your level of ability without having some human intervention? Can a test be given which is graded by a computer and accurately tests someone's skills?
There are only 10 types of people in the world, those who understand binary and those who don't.
I understand your frustration... is someone being tested on their ability to write clean code that works, or their ability to implement a particular algorithm? In this web based world, information is generally readily available and we are no longer required to remember the exact syntax of an expression; we ARE expected to know how to use it and where to find information on it, however.
I remember when I started in programming, all 'tests' were math based assignments that didn't test coding ability, they tested your understanding of the underlying math problem.
And then the test results are amusing. The requirements don't state what to do with incorrect inputs into the "solution" method, and they clearly can't handle exceptions being thrown -- I noticed my score in their example test dropped dramatically when I used exceptions.
And so it should. Exceptions are just that: exceptional. You shouldn't use them for things that are expected, as they are very expensive to create. When it comes to user input, you shouldn't rely on exceptions to check whether the inputs are valid, and you shouldn't throw exceptions when the input is invalid; you should handle these things through other means.
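To illustrate the point, here is a minimal sketch (in Python, with a made-up `parse_quantity` function) of the check-first style being described: invalid user input is treated as an ordinary outcome rather than an exception to catch.

```python
def parse_quantity(raw: str):
    """Return the quantity as an int, or None if the input is invalid.

    Validation happens up front, so no exception is raised or caught
    for the perfectly expected case of bad user input.
    """
    text = raw.strip()
    if not text.isdigit():   # rejects empty strings, signs, letters
        return None
    value = int(text)
    if value == 0:           # hypothetical business rule: must be positive
        return None
    return value

# Invalid input is an ordinary return value, not a thrown exception:
print(parse_quantity("42"))   # 42
print(parse_quantity("abc"))  # None
```

The caller then branches on `None` instead of wrapping every call in a try/except, which keeps the expected failure path cheap.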
After looking at the kind of questions on that site I have to admit I wouldn't fancy being made to sit that test. It seems to be testing quite a narrow type of programming that not everyone does, and that not every job calls for. I see the merits in what they're doing though, they are often testing things you don't think are important in coding but are.
When it comes to user input you shouldn't rely on exceptions to check if the inputs are valid and you shouldn't throw exceptions when the input is invalid, you should handle these things through other means.
Of course, but this wasn't user input. It was more a case of "do the parameters meet the contract of the function?", where I expect the parameters to be prevalidated.
But the telling point was this: while they specified things like the array length being between 0 and 100,000 (inclusive) and the numbers being between 1 and 100,000 (inclusive), they didn't say what the return value should be if the inputs were wrong!
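A sketch of the dilemma, assuming a Codility-style `solution` function with the bounds quoted above. Since the spec never says what to return for out-of-contract inputs, the `-1` sentinel here is purely an assumption, and the body is a placeholder:

```python
def solution(arr):
    """Hypothetical wrapper illustrating an unspecified contract.

    The spec promised 0 <= len(arr) <= 100_000 and
    1 <= arr[i] <= 100_000, but said nothing about what to do
    otherwise. Returning -1 is this sketch's guess, not part of
    any real test's requirements.
    """
    if not (0 <= len(arr) <= 100_000):
        return -1
    if any(not (1 <= x <= 100_000) for x in arr):
        return -1
    # The actual algorithm would go here; sum() is a stand-in.
    return sum(arr)
```

Whether the automated grader rewards, ignores, or penalises those guard clauses is exactly the kind of thing a candidate can't know in advance.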
Ultimately, I have no real beef with a test like that as long as there is a follow up discussion. Which there wasn't.
Maybe TopTal is looking for people to work on their chess app for (very low memory) embedded systems. Knowing how to play chess with the least amount of moves and bits is really important in such solutions!
That's really the only logical explanation I have for such screening processes anyway. That and "company policy states we screen at least x candidates every month."
My last coding test was done via codility that had 3 problems too. I solved the first one, and Codility reported it couldn't be bettered (performance, correctness). Solved the second one as well, but it nitpicked on some edge cases (wouldn't tell me what the test cases are) and as a result the score was a little less. I simply didn't have enough time to do anything with the third one.
I sent a detailed email to the hiring manager (who is a developer as well), and included my code in it. I was asked to come for a face to face interview and coding test later in the week, which was a very well structured code interview with two top nerds. I was offered after the interview, and I now work here.
But I do understand there may be companies who use sites like codility as their sole interview (or coding test) tool, which is rather sad. I have interviewed with a few such companies, and was told by one of them that I simply didn't make the cut. A year later, the same recruiter called me up to check if I was looking for a job change (I told him off) because they still haven't "filled that position". No wonder.
Years ago I ran a group doing telecom work. We did a lot of bit twiddling.
Q1) what is the largest unsigned number in a byte?
Many a candidate with a masters degree in Comp Sci just stared at me. After 60 seconds, I walked them through 2^8-1, etc. Oh, their faces brightened.
spoiler: 255, for all you Java and VB dudes out there
Q2) what is the largest unsigned in a word?
clarification: yes, 16 bits, 2 bytes.
anyone want to guess how many times candidates said 510?
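The arithmetic behind both questions is the same: the largest unsigned value in n bits is 2^n - 1. The wrong answer 510 comes from doubling the byte maximum (2 × 255) instead of doubling the bit width. A one-liner makes the difference obvious:

```python
def max_unsigned(bits):
    """Largest unsigned integer representable in the given number of bits."""
    return 2**bits - 1

print(max_unsigned(8))   # 255   -- one byte
print(max_unsigned(16))  # 65535 -- a 16-bit word, not 2 * 255 = 510
```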
Algorithms knowledge should show some basic understanding of the tools available to you. But I've dealt with enough bullshit magic code from really smart people over the last 5 years I want to take that algorithm book and shove it some place...
If you're looking for a job, you need a network. I hope Marc learned his lesson. These questions are just stupid crap. Go look at the way Windows 7 does recursive dependency evaluation for patch updates. There's a guy that knows his algorithms - very useful for polishing a turd.
Charlie Gilley
Stuck in a dysfunctional matrix from which I must escape...
"Where liberty dwells, there is my country." B. Franklin, 1783 “They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Remember our discussion on how developers are like players on a sports team? Here we go again!
I haven't yet found an objective way to measure a developer's abilities, so the whole exercise of trying to test candidates is pointless.
Developers respond to different problems in different ways. There isn't always a 'right' answer. Developers have good days and bad days; their programming form comes and goes. Some are really good at parsing strings. Others are really good at implementing user interfaces. Some developers do well in a particular team. Others crash and burn because of a personality clash. Some enjoy being under pressure, or turning out a solution quickly. Others want to take the time to get the best solution possible. Some are good at testing. Others at finding and fixing obscure faults. Most are crap at writing good documentation. Nearly everyone has their own interpretation of what requirements mean, or of what makes good UI design...
The most corrosive aspect of testing, or trying to compare developers, is that it can erode a developer's confidence in their own skills. Just like a striker on a soccer team, confidence is a major component of success. You can't measure that.
I agree that TopTal sucks. I thought they had some kind of system or process of their own for evaluating the skill level of programmers, but relying on one automated test with no human oversight is pretty stupid.
However I see no problem regarding Codility. There are multiple things you would want to look for in a programmer. The things you mentioned, such as OOP practices, DB architecture and similar technical abilities are usually evaluated thoroughly in the technical interview. I believe TestDome may have some more technical questions than Codility, but then again both platforms have lots of test questions so they may have both kinds. But for some programming positions, evaluating the ability to think algorithmically and figure out the "trick" can be crucial. This can be necessary in AI programming for example.
The problem here is not the tests themselves, but the way they are administered, and to whom. A test and its questions may be useful for some candidates applying to certain positions but useless in other contexts. Another important factor is to not rely solely on the automatically evaluated results of these tests, but to have an actual developer look at the code. Not to mention that these tests are not meant to be the sole proof of someone's skill, but are generally intended to be one part of the screening process, where a technical interview usually follows.
I took the toptal test as well and ended up failing due to "additional" requirements not shown on the test.
Of course the test itself is, as you point out, really just a series of questions that have no meaning related to programming.
It's too bad that companies fall into these traps of thinking that tricky math questions can show whether you can program or not. I myself tried this in the past with people I hired, and found out the hard way that you end up with people who show off well, but do not have good coding skills.