To be classified as a good developer is no longer merely a matter of writing manageable, well-documented code; by today’s definition it also means knowing how to write secure code, and this quality has priority over all others. Last year "Web Services" was the buzzword of the technology field, but it has since been replaced with "Security". Many job descriptions have been modified to include security as a primary responsibility, and on many levels of the corporate enterprise, changes are being made to ensure that production systems are secure and hackers cannot gain control of critical business applications. Security has many layers; it is like an onion in which every layer that is peeled away reveals yet another layer. It takes great effort and much invested time to reach the core of the onion, and the same is true of security. In today’s installment, I want to peel away that initial layer of the security topic. I regret not being familiar with this technology a year ago, but I have put forth much effort this year to learn, manipulate and implement solutions with this highly demanded skill. What you will find below is the outcome of that personal endeavor: my collected thoughts and learnings. My main goal was to distill the complexity of Code Access Security (CAS for short) into simple, understandable English supplemented with colorful diagrams to reinforce the knowledge. A picture speaks a thousand words!
What is Code Access Security?
The world has drastically changed in these last few years. This week I was in Washington DC, our capital, and was shocked by the measures that have been taken throughout the city to ensure the security of the government and historical monuments. Even though some of the buildings were surrounded with huge flower pots and beautified barricades, I still felt a sense of exclusion as I walked through the streets. This is a perfect example of what security is all about. I looked up the meaning of the word "Security" and here is what I found: "Freedom from risk or danger". To be honest with you, I did not expect to see the word "freedom" anywhere in the definition of a word that, at its core, compromises freedom. After contemplating it, though, the definition began to make sense. Traveling through airports a few years back did not require extensive scanning procedures on passengers and their luggage, and friends and family members still had the ability to say their last good-byes at the boarding gates. This is no longer the case; security at airports has changed dramatically. What we see here are countermeasures taken by our government to prevent another 9/11 from happening. As developers and computer users, we’ve seen how a harmless computer virus can evolve into detrimental pieces of code targeted to steal our personal information, destroy systems and take over our computers. Of course, as our government and major software companies roll out countermeasures for computer security, we developers need to be on top of the issue as well, and this requires a proactive approach to the security of our own applications. Microsoft has launched the Trustworthy Computing initiative, which includes not only developing secure code for their own products, but also giving developers the means to write trustworthy code. In order to write secure code, though, we need to be educated, and the whole paradigm shift needs to fall into place before developers begin writing secure code.
Is Microsoft doing a good job of educating us? You can answer this from your own perspective; surely there is much more that can be done. Recently, I attended the Security Summit, where a number of tracks were presented. The recurring theme of the seminar was to engage developers to begin thinking about the actions needed to develop secure code. If security is a set of actions that ultimately prevents us from being exposed and vulnerable again and again, then we need to be aware of the techniques and tactics of the hacker. Education plays a critical role in making this fortification happen.
Role-Based security is at the heart of the Microsoft Windows 2000/XP operating systems, but it isn’t enough to depend on the code itself and neglect the skills and awareness of the user. This security model controls user access to secure resources, and any code usually runs under the credentials of the logged-on user. Here is a common scenario for Windows users: John, the accountant, needs to file some information with a partner site, so he types the partner’s URL into the web browser. The next thing he knows, a message box pops up that reads: "In order to run this application we need to install an ActiveX control on your machine. Do you trust us?" For John it means the following: "Do I want to be productive today?" Of course, he answers "Yes", which successfully installs this piece of software, but John has no clue what secure resources this application has been granted access to. All of this happened just because he agreed to one pop-up message. What is wrong with this approach? Off the top of my head, two things:
- The installed ActiveX control runs under John’s security permission set, so it can do pretty much anything with the system that John can. (Delete or update files, etc.)
- John has no idea what the ActiveX control does, and most likely the question doesn’t even cross his mind; what matters to John is being productive and his computer being secure.
The recent virus "Sasser" does not even require user interaction to infect a computer; simply plug an unprotected machine into the net and in a matter of minutes it becomes infected. So, Code Access Security picks up where Role-Based security falls short. It provides a mechanism for securing code based on who wrote it, where it came from and where it is executed (its evidence). This evidence is mapped to permissions (rights), which can be administered through four different policies, each corresponding to the role a user represents:
- Domain Administrator – Enterprise Policy
- Machine Administrator – Machine Policy
- Actual User of the machine - User Policy
- Developer - Application domain Policy
These policies are configurable after the application is deployed and can be modified at any point in time. One major concept was introduced with CAS: partially trusted code, which is code that has been granted access only to the resources it needs to execute successfully and no more. Looking at the big picture, Code Access Security and Role-Based security both support the same pattern, which I call "2AR", as demonstrated in the diagram below.
The Security Identity Pattern "2AR" includes the process of determining the identity (Authentication) and then assigning it to a group, which corresponds to the permission set (rights) that can be exercised on the secure resource (Authorization). The key to this pattern is Reinforcement of the access to the secure resource: the process of reinforcement will not allow unauthorized access to the secure resource without going through Authentication and Authorization. The Common Language Runtime (CLR) accomplishes this by means of a Stack-Walk, which can be compared to the following scenario:
"Teenager Joe (19 years of age) wanted to drink some beer with his friends, but he could not legally go to the store and purchase it himself, so he asked his older friend Bill if he could get it for him. Even though Bill (20 years of age) is older, he is still not old enough to purchase beer at the store, so he talks his dad into buying it for his friend. Bill’s dad knows he is breaking the law, but he still does it. The store clerk checks Bill’s dad’s driver’s license, finds he is over 21, and sells him a pack of beer."
The chain of requests from Joe --> Bill --> Bill’s dad --> the clerk represents the software concept of the Stack-Walk.
In the real world, the beer will be sold because there is only one ID check: the members of the request chain do not all need to present proper ID to the clerk, just Bill’s dad. Joe and Bill represent partially trusted code and Bill’s dad represents fully trusted code as far as the system is concerned. That is exactly what happens when a virus gains access to a secure resource by luring trusted code into doing its dirty work. In the .NET Framework, the CLR prevents this luring attack, because the Stack-Walk requests proper ID at every level of the call chain; if somebody in the chain does not have proper ID, the request is rejected at all levels. Thus, if the CLR reinforcement rules were applied to this real-life scenario, Joe and Bill would never get the beer: the check would fail as soon as the walk reached Bill. Realistically, though, code that is partially trusted (does not have full access to the system) sometimes needs access to fully trusted resources, and that is where modification of the Stack-Walk comes into play.
In the diagram above, the clerk denies (Deny) anybody who does not have proper ID (evidence) from buying beer, but Bill’s dad permits (PermitOnly) his son and his son’s friend to drink beer because Bill has almost reached the legal age. Outsmarting the system makes it neither right nor legal, only possible. When Bill gets his beer he shares it with his friend (Assert, or vouching) and they enjoy the beer together on a sunny day. Code Access Security provides us with this flexibility, but we need to be aware that it also introduces greater security risk; therefore, design your systems in advance. I have demonstrated how to use Deny(), PermitOnly() and Assert() to modify the Stack-Walk. This is not a complete list of modifiers; please refer to the .NET Framework SDK for more info (Overriding Security Checks). I just wanted to get your feet wet; the rest of the onion peeling is in your hands!
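To make the story concrete, here is a minimal C# sketch of two of these Stack-Walk modifiers using the .NET Framework 1.x CAS APIs. The class name and the `C:\temp` paths are just examples I chose for illustration.

```csharp
using System.Security;
using System.Security.Permissions;

public class StackWalkDemo
{
    // Deny: like the clerk, refuse a specific permission to everything
    // called below this frame, even if policy would otherwise grant it.
    public void ClerkStyleDeny()
    {
        FileIOPermission perm = new FileIOPermission(
            FileIOPermissionAccess.Write, @"C:\temp");
        perm.Deny();            // any write to C:\temp below this frame
        try                     // now fails the Stack-Walk
        {
            // ... calls that try to write C:\temp throw SecurityException
        }
        finally
        {
            CodeAccessPermission.RevertDeny();
        }
    }

    // Assert: like Bill's dad "vouching" for the boys, stop the
    // Stack-Walk here so less-trusted callers above this frame are
    // not required to hold the permission themselves.
    public void DadStyleAssert()
    {
        FileIOPermission perm = new FileIOPermission(
            FileIOPermissionAccess.Read, @"C:\temp");
        perm.Assert();          // callers above this frame are not checked
        try
        {
            // ... read C:\temp on behalf of the caller
        }
        finally
        {
            CodeAccessPermission.RevertAssert();
        }
    }
}
```

Note that Assert is the risky one, for exactly the reason in the story: it lets code vouch for callers that could not pass the check on their own.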
This is as simple as it gets when it comes to Code Access Security and its patterns. Now, the complexity comes with understanding CAS terminology and the .NET Framework implementation of it.
Learning Code Access Security through ASP.NET implementation
This learning process usually starts with discovering how to use Code Access Security with existing applications and only later applying it to your own projects. I read a great deal of resources and nothing clicked for me until I started working with Microsoft SharePoint, which extends the ASP.NET architecture and relies on CAS to secure its resources. Since not everyone is developing with SharePoint just yet (though Microsoft hopes to corner the portal market with this product soon), I chose to demonstrate and explain the main concepts of CAS through ASP.NET technology.
The most common scenario used to demonstrate Code Access Security is a smart client application: it is downloaded over the intranet and therefore runs with a limited set of permissions (rights). ASP.NET, by contrast, is installed on the machine and runs locally, so the smart client scenario alone does not fully explain CAS. Code can either execute by itself (the smart client scenario) or be hosted by a host assembly or by unmanaged code (such as the IIS filter). This is the main difference in where the evidence comes from.
Therefore, there are two ways an assembly can be loaded:
- The user clicks on an executable and the code executes
- A host assembly loads your assembly by means of Reflection, or unmanaged code initializes the CLR.
Most people do not realize that the CLR is not native to Win32; therefore, it is hosted by unmanaged code. In the smart client scenario the executable assembly runs by itself, so the default evidence-gathering process is invoked as demonstrated below:
The CLR policy evaluator gathers evidence automatically, and you cannot supply your own evidence at this point (the Application Domain policy is optional). What the policy evaluator gathers is a set of evidence about the code, and it grants a permission set each time your assembly is loaded for execution. There are 7 default evidence types, which can be split into two groups:
- Assembly Evidence – answers the question "Who is the author of the assembly?" For example, all of Microsoft’s CLR classes are signed with the same private/public key pair, which allows the CLR to determine that Microsoft developers wrote the code and grants it full trust (full control) over the system.
- Host Evidence – answers the question "Where did the assembly come from?" If you start a smart client by referencing its URL location and then at a later date move the executable to a local hard drive, the CLR does not track the history of its location.
Of course, like everything else, it is possible to write your own Evidence class and provide a custom Evidence object for your applications.
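As a sketch of what that might look like, here is a hypothetical custom evidence class supplied by a host when it creates an application domain. The name `DepartmentEvidence` and the domain setup are my own inventions for illustration; a real host would also install a matching custom membership condition to act on this evidence.

```csharp
using System;
using System.Security.Policy;

// A custom evidence type is just a serializable class that the host
// adds to the Evidence collection when loading an assembly or
// creating an AppDomain.
[Serializable]
public class DepartmentEvidence
{
    public readonly string Department;

    public DepartmentEvidence(string department)
    {
        Department = department;
    }
}

public class HostExample
{
    public static void Main()
    {
        Evidence evidence = new Evidence();
        evidence.AddHost(new DepartmentEvidence("Accounting"));

        // A custom MembershipCondition could then match this evidence
        // and map it to a Code Group in an Application Domain policy.
        AppDomain domain = AppDomain.CreateDomain("Sandbox", evidence);
        AppDomain.Unload(domain);
    }
}
```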
The unmanaged ASP.NET IIS filter hosts the managed ASP.NET process and passes the needed information from the unmanaged world into the managed process. This scenario is the exception; most likely, if you need to host an assembly you will use managed code and Reflection to load the assemblies. Rockford Lhotka’s business object framework has a nice utility (NetRun) for smart clients, which basically modifies machine policies for hosted smart client applications.
ASP.NET relies on the Application Domain policy to provide extra flexibility for configuring the applications. That is why I have ASP.NET and SharePoint on the slides above.
What is a Policy? It is a configuration file containing information about what code can do depending on the code’s evidence. There are four levels of configuration for applications, based on administration needs. They are as follows:
- Enterprise Policy – the default setting allows all code to have full trust.
- Machine Policy – configured by default to give assemblies installed in the Global Assembly Cache full trust, among other settings.
- User Policy – lets users restrict code on their own machines based on these settings.
- Application Domain Policy – security configuration for the application.
The process of evaluating permissions (rights) across the different policy levels is known as "Intersection". Intersection is a complex algorithm for determining the final, or granted, set of permissions. There are two things to remember about policies:
- Policies are based on a hierarchy; thus, if a top layer of the policy grants no permissions (rights) to the code, then the policies below it cannot grant permissions either.
- All of the policies have to agree on a permission before that permission can make it into the final grant set.
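The "everyone must agree" rule is really just set intersection. Here is a toy C# sketch of the idea; the permission names are made up, and the real CLR policy evaluator of course works on PermissionSet objects rather than strings.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Toy illustration of policy "Intersection": each policy level
// proposes a set of permissions, and only the permissions that every
// level agrees on make it into the final grant set.
public class IntersectionDemo
{
    public static void Main()
    {
        var enterprise = new HashSet<string> { "Execution", "FileIO", "Web" };
        var machine    = new HashSet<string> { "Execution", "Web" };
        var user       = new HashSet<string> { "Execution", "FileIO", "Web" };
        var appDomain  = new HashSet<string> { "Execution", "Web" };

        var grantSet = new HashSet<string>(enterprise);
        foreach (var policy in new[] { machine, user, appDomain })
            grantSet.IntersectWith(policy);

        // "Execution" and "Web" survive; "FileIO" was vetoed by the
        // machine and application domain levels.
        Console.WriteLine(string.Join(", ", grantSet.OrderBy(p => p)));
    }
}
```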
To demonstrate simply how policies work together, I diagrammed the following:
This diagram includes the ASP.NET Application Domain policy levels that a normal application does not have. There are 5 default ASP.NET application policies:
All policies correspond to a physical file, with the exception of Full, which has a built-in policy (full control). I can create my own policy file by simply adding an entry to Machine.config as follows:
<trustLevel name="Full" policyFile="internal"/>
<trustLevel name="High" policyFile="web_hightrust.config"/>
<trustLevel name="Medium" policyFile="web_mediumtrust.config"/>
<trustLevel name="Low" policyFile="web_lowtrust.config"/>
<trustLevel name="Minimal" policyFile="web_minimaltrust.config"/>
<trustLevel name="Minimal_Web" policyFile="web_minimal_Web.config"/>
<trust level="Full" originUrl=""/>
Did you know that ASP.NET runs under Full trust by default? You can change this by modifying the trust level attribute in the Machine.config file for the server or in Web.config for the virtual directory:
<trust level="Medium" originUrl=""/>
Here is a very simple scenario for you to try out and learn about CAS:
- Create a WebService on a local machine with IIS and add a simple method "SayHi" that returns the string "Hello World"
- Create an ASP.NET application and Add Web Reference to your WebService
- Add a button, wire its click event to call the WebService’s "SayHi" method and display the return value in a label.
- Build and View in the Browser. It should work with no problem.
- Add the trust level attribute and set it to Minimal
- Does it still work?
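For reference, the web method and click handler from the steps above might look roughly like this. The names `HelloService`, `btnSayHi` and `lblResult` are my own choices, and `localhost.HelloService` stands in for whatever proxy class Visual Studio generates when you Add Web Reference.

```csharp
using System;
using System.Web.Services;

// Step 1: a minimal sketch of the web service (e.g. HelloService.asmx).
public class HelloService : WebService
{
    [WebMethod]
    public string SayHi()
    {
        return "Hello World";
    }
}

// Step 3: in the ASP.NET page's code-behind, the button click calls
// the generated proxy and displays the result in the label.
public partial class DemoPage
{
    protected System.Web.UI.WebControls.Label lblResult;

    protected void btnSayHi_Click(object sender, EventArgs e)
    {
        localhost.HelloService svc = new localhost.HelloService();
        lblResult.Text = svc.SayHi();
    }
}
```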
If you get a SecurityException then you have done everything right. How can you fix it? The policy contains information about what the code can do based on its evidence. By switching the Application Domain policy for ASP.NET from Full to Minimal we changed what the code is allowed to do, and this is where the understanding of Code Groups --> Membership Conditions --> Permission Sets --> Permissions begins.
All right, first things first: Code Groups are containers for the permissions (rights) that code can have based on evidence, expressed as XML elements within the policy configuration file. There are 7 default evidence types, and they map one-to-one to the Membership Condition element of a Code Group, plus one Membership Condition ("All Code") that matches all code.
Code can belong to a Code Group based on its Membership Condition, which reflects the evidence that was collected or provided to the policy evaluator while the assembly was loading. We know that in ASP.NET applications the code executes in virtual directories; therefore, we can create a Code Group to match this condition.
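A Code Group like the following matches code running from the application’s virtual directory. This is modeled on the entries in the shipped ASP.NET policy files, where `$AppDirUrl$` is the substitution token ASP.NET replaces with the application’s directory URL:

```xml
<CodeGroup class="UnionCodeGroup" version="1" PermissionSetName="ASP.Net">
    <!-- Matches any assembly loaded from the application's directory -->
    <IMembershipCondition class="UrlMembershipCondition"
                          version="1"
                          Url="$AppDirUrl$/*" />
</CodeGroup>
```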
Code Groups can contain child Code Groups; therefore, it is possible to combine permissions across matched groups. The most commonly used Code Group types are Union (combine the permissions of all matching children) and First Match (stop when the first match is found). It is also possible to use a PolicyStatementAttribute to stop the policy evaluator from evaluating the remainder of the policy hierarchy; the options are Exclusive and LevelFinal. Please refer to the .NET Framework SDK for more info.
What is a Permission Set? A Permission Set is a combination of all the permissions (rights) that can be granted to code. There are 6 immutable (pre-built) Permission Sets that we cannot modify, meaning we cannot add permissions to, or delete permissions from, their collections:
You may have noticed that "Everything" and "ASP.Net" are Named Permission Sets; they are mutable, so I can freely add or delete permissions. I can also create my own named PermissionSet XML element and combine whatever permissions I want that Permission Set to have.
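For example, a custom Named Permission Set might look like the following sketch. The name "MyWebApps" is made up, and the particular permissions listed are only an illustration of combining them:

```xml
<!-- A hypothetical custom Named Permission Set -->
<PermissionSet class="NamedPermissionSet" version="1" Name="MyWebApps">
    <IPermission class="AspNetHostingPermission" version="1" Level="Minimal" />
    <IPermission class="SecurityPermission" version="1" Flags="Execution" />
    <IPermission class="WebPermission" version="1" Unrestricted="true" />
</PermissionSet>
```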
Now, we are back to our earlier scenario with WebService. Do you still want to fix it or have you already given up on the idea?
Of course you want to fix it; I know I did. What I found out was that in order for WebService calls to work properly, the code using the HttpWebRequest/HttpWebResponse classes needs to be granted WebPermission. If you look at the web_minimaltrust.config configuration file you will find the following entry for the ASP.Net Permission Set:
<PermissionSet class="NamedPermissionSet" version="1" Name="ASP.Net">
    <IPermission class="AspNetHostingPermission" version="1" Level="Minimal" />
    <IPermission class="SecurityPermission" version="1" Flags="Execution" />
</PermissionSet>
This Permission Set does not grant many rights to the code in my virtual directory (the ASP.NET application).
Microsoft always likes to give us many options for doing the same thing, and here that works in our favor.
Our first option is to find out which ASP.NET Application Domain policy file contains WebPermission. You can do this by simply looking through the policy files. You will find that web_mediumtrust.config includes WebPermission as part of its ASP.Net Permission Set:
<IPermission class="WebPermission" version="1">
    <ConnectAccess>
        <URI uri="$OriginHost$" />
    </ConnectAccess>
</IPermission>
So all you need to do is modify your web.config file as follows:
<trust level="Medium" originUrl="http://localhost/.*"/>
Now, when you run the application it should work without a hitch. The originUrl attribute is used to grant permission to connect to a specific web server. But what if you had references to more than one server?
Then simply add the extra server to the WebPermission entry in web_mediumtrust.config (replace SomeOtherServer with your second server):
<IPermission class="WebPermission" version="1">
    <ConnectAccess>
        <URI uri="$OriginHost$" />
        <URI uri="http://SomeOtherServer/.*" />
    </ConnectAccess>
</IPermission>
Our second option involves creating a custom policy. For example, if I really wanted to lock down my server and only allow running under the Minimal trust level, I would need to create a custom policy based on the Minimal trust level and then add the permissions (rights) required for WebService execution.
Here are the steps for creating an ASP.NET Application Domain policy that has the Minimal trust level but allows calling WebServices:
- Navigate to C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\CONFIG or the location where .NET Framework is installed on your machine.
- Open the Machine.config file and add <trustLevel name="Minimal_Web" policyFile="web_minimal_Web.config" /> below the default ASP.NET trust levels
- Create a copy of web_minimaltrust.config and name it web_minimal_Web.config (or whatever you desire your policy file to be named)
- Open web_minimal_Web.config; now we’re ready to make the modifications needed to grant WebPermission to the ASP.Net PermissionSet
- Add a reference to the WebPermission class inside the <SecurityClasses> element:
<SecurityClass Name="WebPermission"
Description="System.Net.WebPermission, System, Version=1.0.5000.0,
Culture=neutral, PublicKeyToken=b77a5c561934e089" />
- Navigate to the ASP.Net Named Permission Set and add a WebPermission entry, for example:
<IPermission class="WebPermission" version="1">
    <ConnectAccess>
        <URI uri="$OriginHost$" />
    </ConnectAccess>
</IPermission>
- Open the Web.config file of your ASP.NET application and add the following:
<trust level="Minimal_Web" originUrl="http://localhost/.*"/>
- Build and view it in the browser. It should work like a charm.
If you followed all of the steps, you have successfully created a custom policy and run an ASP.NET application with the Minimal trust level (only the permissions required to run the application, and no more).
Let me quickly summarize the whole process for you. Unmanaged code loads the ASP.NET assembly and provides it with a set of evidence. There are five ASP.NET Application Domain policies, configured by modifying the trust attribute of the Machine.config or Web.config files. The policy evaluator performs an "intersection" over the four policies (enterprise, machine, user, application domain) and maps the evidence to the membership conditions of the Code Groups that supply permissions (rights). If all the policy levels agree on a permission, it is included in the grant set.
Declarative vs. Imperative
The main difference is where the information is stored in an assembly as shown below:
The manifest stores metadata that can be read without running the assembly; therefore, if you use declarative security to enforce security, I can simply run a command-line utility (Permview.exe) to view what permissions I need in order to run your code. In comparison, imperative security is more flexible: it is stored as MSIL code, which is compiled by the JIT, and a SecurityException is thrown at run-time if a check fails.
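Side by side, the two styles look like this in C#. This is a minimal sketch using the .NET Framework 1.x CAS APIs; the class name and the `C:\data` path are examples I made up.

```csharp
using System.Security.Permissions;

public class SecurityStyles
{
    // Declarative: the demand is baked into the assembly's metadata as
    // an attribute, so a tool like Permview.exe can list it without
    // ever running the code. The path must be a compile-time constant.
    [FileIOPermission(SecurityAction.Demand, Read = @"C:\data")]
    public void ReadDeclarative()
    {
        // ... read files under C:\data
    }

    // Imperative: the demand is ordinary MSIL built at run-time, so
    // the protected path can be computed dynamically.
    public void ReadImperative(string folder)
    {
        FileIOPermission perm = new FileIOPermission(
            FileIOPermissionAccess.Read, folder);
        perm.Demand();   // throws SecurityException if a caller lacks it
        // ... read files under the requested folder
    }
}
```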
At this point, I have not covered using CAS in your own code; I tried my best to provide you with a slightly different angle on how Code Access Security is used or can be used. I have barely scratched the surface with this article; there is so much more out there as far as configuration and development techniques with CAS. In my next installment, I will talk about how to configure a sandbox for your ASP.NET applications. I will also be kicking off a one-day, hands-on workshop on CAS this coming June; for more info, contact me.
Copyright © 2003-2004 Maxim V. Karpov All rights reserved.