First of all, interesting article. I work in the defense-contracting sector, and security (across the board, not just code, of course) is of extreme importance, so articles like these are definitely interesting to read.
I do have a few questions/remarks though.
Could you explain this 'Open Design' a bit more? I ask because, by definition, a 'design' needs to be 'open'; otherwise my analyst teams, development teams and testing teams cannot properly do their jobs. As for end-users, only the end-users who are part of the design process itself need to see the design(s); any other end-user has no business with the design of the software, they just want it to work, and work correctly (some more on end-users below). Bottom line, I don't quite get what you mean by 'the design is not a secret' (heck, in my job the designs often ARE classified, depending on the software and its use(s)).
You speak of 'Psychological acceptability' and include 'help dialogs & appealing icons' as examples. Here I do understand what you mean but, to a degree, disagree with your statement. Appealing icons are nice, but they are nothing more than visual: they do not influence security in the slightest, nor do they influence the functionality of the software, so I do not see how they would improve security. Help dialogs are a given, but only when the end-user requests them. Anything that 'distracts' an end-user from doing his or her job is wasted time. Adding a plethora of help dialogs that come and go whenever the software feels the need to 'communicate' its design to the user is only going to prevent them from doing their job. Also, if the high security causes performance issues (some of your examples can definitely cripple performance), then a whole other series of issues appears.
In fact, (most) end-users don't care how the software is designed or how it works internally; they just want it to work. Since 'software security' is an internal affair of the software, (most) end-users are simply not interested in it beyond the usual logging in and out.
Basically, I am saying that pretty much everything in your article needs to be entirely transparent to the end-user, invisible so to speak. Security matters are of interest to the analysts, the developers and, to a degree, the testers (unless it is a dedicated testing team that doesn't contain any end-users, in which case it matters a lot more, since those testers need a much more in-depth view of the software than an end-user does).
It all comes down to the usual thing in the end: money. The more time an end-user spends not working (using your example, looking at help dialogs with appealing icons is not considered 'doing work'), the less money the business makes. In the end, the money is what makes the business 'care' about its software; by that I don't mean 'how much does the software cost', but 'how much money does the software save the business'. Bogging down end-users with all sorts of security-related things that do not help them perform their duties is not going to save the business money, it is only going to cost more.
So, while I fully agree with your call for better security in software, I do not entirely agree with your approach.
End-users should not be involved in the development of software beyond the initial design (e.g., the usual 'what do you need the software to do' and 'how do you do what you need the software to do now' sort of involvement, as well as 'proof-reading' the parts of the design that matter to them, all of this pre-development, of course) and a degree of alpha/beta testing (functionality testing!) to make sure the end-user wishes/requests/dreams are implemented in a way they can deal with, and that the software does what it needs to do correctly.
Security can also be overdone. Too much security can make the software sluggish, and it can introduce a whole host of bugs that are 'uncommon' to a normal developer. Highly secured code can also hinder your ability to debug it smoothly. We've done a lot of software security research internally (we develop internal software only, since I work for a huge corporation that does not sell software but uses hundreds, probably thousands, of software applications, many internally developed, across a multitude of platforms), and finding that 'sweet spot' between security and usability, performance, functionality, etc. is not an easy task; it is a task that needs to be repeated a number of times during development and is different for every project. In my experience, it is a task better suited to analysts than developers.
Thankfully, we have to work by VERY strict guidelines, which makes the above slightly more 'predictable' during the design phase (we already know what needs to be secured and, usually, how). I can see that for a software house such 'predictability' can be entirely gone, making the design and development of secure software quite a nightmare.
In summary, and in my opinion, good security is a direct result of the software design (assuming the design process was done properly and resulted in a good design), not (directly) of the code. Step number one in developing a piece of software is designing it, and this is where the security needs to be applied (depending on the methodology, inside the technical design documentation rather than the functional design documentation, with the exception of the 'log in/log out/related' processes).
It gives the developers clear guidelines on how to develop the software without having to worry about potential security issues (although they do need to keep an eye on such things, in case something was missed during design). This can result in a shorter development time, and it can also lower the required experience level a little. In the end, it creates more secure software, with a large potential for the development process to be done cheaper and faster. Where you stress security through code, I stress security through design and organization; at the end of the day, the result is the same.
However, personally, I say leave the end-user out of it as much as possible. They have a job to do, a job that in the end pays your salary; if they don't work, you don't get paid.
All in all, I look forward to any follow up articles regarding security. You did a pretty good job on this one.
|
First of all, I would like to thank you for your thought-provoking and challenging questions.
Words cannot express how excited I am to answer them:
Starting with "Open Design", you rightly said; inherently DESIGN NEEDS TO BE OPEN but in reality most of the programmers (including me) love to live in their own land full of assumption; where they think that their OWN security standards are the most safest ones. Actually I see it more as wrestle between positive and normative principles. Open design in the context of securing means that programmers, architects & security analysts should build their security designs on the open standards used by an industry rather than casting their own. As security analyst this might be common sense for you but I have seen architects / developers ignoring controls during design. Thus the motive behind this article is to pull them out of the thinking of "nothing is wrong with my design".
Speaking about 'Psychological acceptability': in my opinion it is about embedding the classical principles of HCI (human-computer interaction) into security, which helps in overcoming threats like social engineering and phishing, because the user has a strong psychological bond with the system and any change in the visuals should make them frown.
Furthermore, I am a firm supporter of involving the business in every step of the SDLC and working shoulder to shoulder with them, and that is what methodologies like agile advocate. You might have faced very good and educated business owners, but the people whom I have come across think that the definition of security is confined to transferring money into a secure vault surrounded by guards. The first principle of any security procedure or policy in an organization is the strong commitment of the Big Boys towards security and risk. Working closely keeps them aware that security is for the business, not there to stop the business from expanding.
Please bombard me with any of your questions and queries; it was an immense pleasure to write for you.
Thanks.
|
You are correct, a lot of developers do live in their own 'box' where what they believe is correct is, indeed, correct, even if it is not. I agree entirely with the notion that open standards need to be followed, and I approve of your effort to point out some of these standards; it is, sadly, sorely needed.
My personal problems are more related to 'internal standards' for security that were created before most open standards existed. Mixing those two is a nightmare to organize and manage, but we can't just throw a switch and have hundreds of applications follow different standards. The vast majority of these applications (ranging in age from the 70s to the present; we still have some REALLY old mainframes here and there) 'talk' to their closest data center (all of which we own), which in turn propagates the information to the other data centers. Implementing today's (open) standards in applications that are 30 or 40 years old is simply not an option; in fact, we don't touch these applications at all if we can avoid it, considering that they work and that, due to their age, we probably lack the knowledge to fix issues with them (there was a time when the corporation did not document everything, sadly).
The only way of updating software under those circumstances is to phase out the old legacy system entirely. That means developing new software for new servers. Depending on the size of the application, the amount of data, the required transaction figures, etc., this could be anything from a simple Windows server to a top-of-the-line database-oriented server in the realm of AS/400s and such. The larger non-Windows systems can be tough to build today's standards into, since they usually come with limitations that an all-purpose server system like Windows Server does not have.
This is a slow process, my resources are limited and I cannot simply hire 200 extra developers. As a counter-measure to the security issues that potentially exist on a software level, we own the data centers, from the buildings to all the equipment and personnel in them. We also own all the data lines that run between the data centers, as well as the data lines that run to our facilities. Internet traffic does go through these data centers but is kept 100% separate from our internal communications. The only weak link there is the computers attached to both the internet and the internal network(s), so there are many layers of (hardware-based) security applied there. The cost is immense, but this is a requirement due to the nature of the things we produce for our clients.
This extends further than just the hardware and software; we also depend on our employees to maintain security by simply following the rules set regarding classified information.
I don't think 100% security is obtainable, no matter what you do to stay secure, on any level, from simple pen & paper to software to people talking.
On the 'end-user vs development' side of things, I do agree that end-users need to be involved; they are the ones who will eventually use the software. However, this involvement ends when the actual coding of the software starts. At that point the functional and technical designs of the software are set in stone until the first version is completed, and the end-users have no real reason to be involved with the actual code. Once the code is done for the first version (which could be a prototype, depending on the project) and the internal testers (technical testing) have done their jobs (and the developers have subsequently fixed whatever they found), the end-users, or a selection thereof, become involved again, this time to perform functional testing, usually aided by the technical testers as well as people who observe how the end-users use the software; the software itself also collects data on the 'how do they use the software' bit. This information is used to tweak the software for functional efficiency.
There is another testing step after that, but security rules prohibit me from discussing it; in any case, it doesn't add much to the above processes since it doesn't involve end-users directly.
While the above isn't exactly 'agile', it does involve the end-user quite a bit; in fact, they pretty much design the software, on paper. We deal with the implementation of not only their wishes but also how they wish to actually do their work with the new software (to a degree of course, trying to find that fine line in the middle and all that). Once we're done with that side of things, they get to use it outside of production, comment on things (the bit where we can tell them that it was their idea in the first place, so don't blame us) and come up with ways to make things more efficient (aside from our other efforts in that area). A 'little' tweaking later and we have the software in production, and it is exactly what they want, done our way. They never see/hear/notice any security measures within the software apart from 'log in' and 'log out'; they are, however, free to bring up security-related ideas and comments during design and functional testing, since after all, they are the ones who know how to do their job, not us. Nor is it up to us to dictate to them how they should do their job (albeit there are situations where that is unavoidable, but those are rare).
In the end, the software we produce for our divisions is straightforward and to the point. There is no additional 'fluff' besides what they need to do their job. Since we code modularly, any additional functionality is easily implemented, and will be implemented as long as it is not 'fluff'. As little distraction as possible within the software is one of our top 'rules' of development. Most of our newer (three years old to present) software can crash completely, servers included, and all the end-user notices is a slightly longer pause when a transaction is generated, this pause being caused by one of a set of fail-overs, usually switching to other servers that have mirrored the databases, even if that means sending the transaction half-way across the world. The lack of client software (everything is browser based) helps with this of course; crashes on the client side are either the browser, a bug in our software (extremely rare; plenty of bugs, but nothing that would crash entire systems), or the computer being used by the end-user. For any other situation, we have some sort of fail-over.
You may have noticed that I bring up a lot of topics that seem to have little to do with 'security'; however, most of the above involves security in one way or another. In the fail-over situation it is data security, but more importantly, allowing the end-user to continue working no matter the situation (you can't catch every situation, but we have most covered). While our software works decentralized on the surface, behind that is a highly centralized system solely intended to enhance security, keep the users working in almost any situation and make sure that at least three mirrors (at most twelve, depending on the importance of the data) exist of whatever data they require. We don't back up data anymore, for example; instead, the most important data is real-time mirrored onto arrays of solid-state storage that is not located at our data centers but rather at individual facilities, so it is possible (and with extremely important data, highly probable) that data from a US-based facility is stored on solid-state storage at one (or more) of our facilities in Europe.
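The fail-over pattern described above is, in essence, "try the nearest mirror, then fall back to the next one". A minimal sketch of that idea in Python (entirely mine; the endpoint names are made up and the requests library stands in for whatever transport the browser-based clients actually use):

import requests

# Hypothetical ordered list of mirrors: nearest data center first,
# then progressively more distant fail-overs, possibly on another continent.
MIRRORS = [
    "https://dc-local.example.com/api/transactions",
    "https://dc-regional.example.com/api/transactions",
    "https://dc-overseas.example.com/api/transactions",
]

def submit_transaction(payload, timeout=2.0):
    """Try each mirror in turn; the end-user only notices a slightly longer pause."""
    last_error = None
    for url in MIRRORS:
        try:
            response = requests.post(url, json=payload, timeout=timeout)
            response.raise_for_status()
            return response.json()
        except requests.RequestException as exc:
            last_error = exc  # fall through to the next mirror
    raise RuntimeError("all mirrors failed") from last_error

The end-user never sees any of this; at worst the call takes a couple of seconds longer while the client walks down the list.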
The real beauty lies in the fact that the end-user doesn't need to know this (albeit it is no secret, so if they're interested, we'll gladly explain); they also don't really notice it even if problems occur, apart of course from severe problems, for example a total failure of power on their side, which will turn off their systems if the facility does not have a UPS system powerful enough to keep everything going (most don't, but some do).
In the end, our job has become easier, the software is as secure as it can be, the backbone is as secure as it can be, and the end-user never really notices anything beyond a bit of lag. Nothing is 100% secure, but you can definitely attempt to get as secure as possible; one way to do so is what we do, creating security that secures other security, alongside following simple standards during development. On top of that, all data is transmitted with a strong, high-bit encryption algorithm.
Keep spreading the word, my friend, there is way too much insecure software out there. I can tell you stories of linking our internal networks to suppliers, and the nightmares that causes on a security level would scare your average security expert enough to change jobs.
|
I read this over a few times, and while I think it is a good start, it needs to be cleaned up a bit.
- I know not everyone here speaks English as their first language. Heck, many of the members from the US can barely speak it, let alone write it properly anymore. However, grammar, case and tense are important to the understanding of the written word. Editing in these areas is required.
- The content is very terse. Maybe this is intended to be the start of a series? If it is, then I think it would be proper to alert the reader to that fact so the expectations meet the deliverables.
If the comments seem a bit rough, I am sorry. I don't mean to vent on YOUR article, but I have seen a ton of borderline stuff lately and I guess yours is the lucky recipient of today's feedback from me. Yeah, I know, I only have ONE published article myself... But I have a lot of opinions.
One thing I will always say about a good developer is that they are always willing to accept feedback.
|
Comments and suggestions are always welcome. Actually, this article is a stepping stone for my further articles in the application security discipline. I am going to publish a revised version of this article. Thanks.
|
No problem. It is great to see people here that take suggestions in a positive manner.
If you ever want someone to read over an article before you publish it and get a different view on the wording or content, let me know and send me a message.
|
Publishing the revised draft of this article. Your comments and suggestions are most welcome.