Introduction
There is a problem with the .NET exception handling strategy that affects the reliability of managed code. In one sentence, the question is: why does C# have no defense against the side effect of unexpected exceptions? Implicit interruption of the code flow (without an explicit throw statement) can occur when a lower layer throws an unexpected exception and an upper layer handles it as if it were expected. As a result, some data in the program may become inconsistent, and further execution in such a state can lead to abnormal behavior. Terminating the entire application is better than letting this happen and allowing the program to stay alive.
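To make the side effect concrete, here is a minimal C# illustration of the problem (the class, fields, and method names are invented for this example): an unexpected exception thrown between two related updates leaves the object in an inconsistent state, yet the upper layer that catches the exception lets the program continue running.

using System;

class TransferExample
{
    decimal balanceFrom = 100m;
    decimal balanceTo = 0m;

    void Transfer(decimal amount)
    {
        balanceFrom -= amount;            // first half of the "transaction"
        Validate(amount);                 // may throw an unexpected exception
        balanceTo += amount;              // never reached: the amount has vanished
    }

    void Validate(decimal amount)
    {
        if (amount > 50m)
            throw new InvalidOperationException("Unexpected failure.");
    }

    static void Main()
    {
        var example = new TransferExample();
        try
        {
            example.Transfer(75m);
        }
        catch (Exception)
        {
            // The upper layer "handles" the case as normal and the program stays alive,
            // but balanceFrom and balanceTo are now inconsistent.
        }
        Console.WriteLine("{0} + {1}", example.balanceFrom, example.balanceTo);
    }
}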
Background
This article assumes you are familiar with exception handling in .NET and with C# (or a similar language). Some knowledge of checked exceptions in Java is also helpful.
C# is not Java's clone, what about checked exceptions?
We can only suppose that the well-known exception handling strategy used in Java (checked exceptions) is unnaturally expensive for programming (and perhaps for performance too). That may be why C# was designed to be neutral to the problem of unexpected exceptions.
Java's compiler dictates approximately the following style: "think about all errors (checked exceptions) in every possible place". This is not easy, and it is not right to force such a coding style on a programmer. We do not want to catch exceptions that will never actually occur. Such handling also depends heavily on the exception hierarchy, and it tends to encourage very generic handlers instead of specific handling in which the caught exception types are enumerated one by one. With so many negative aspects, the reliability of such a programming style is questionable.
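As an illustration of the tendency toward overly generic handlers, here is a minimal C# sketch (C# has no checked exceptions, so this only mirrors the style; the file name is invented):

using System;
using System.IO;

static class GenericHandlerStyle
{
    static void Main()
    {
        try
        {
            string text = File.ReadAllText("settings.txt");
            Console.WriteLine(text.Length);
        }
        // Enumerating every documented exception type quickly becomes tedious,
        // so in practice one generic handler swallows them all,
        // including the ones that will never actually occur here.
        catch (Exception e)
        {
            Console.WriteLine("Failed: " + e.Message);
        }
    }
}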
It is very strange that no alternative has been invented! How can we protect programs against this side effect and guarantee the so-called clean result (when we are absolutely sure that the obtained data is trustworthy)? Let us change our customary point of view a little.
Idea of the Clean Result
(Before we begin, you should skim through the following article: The Back Side of Exceptions, but do not dive very deeply into the details.)
Now let us introduce a new abstraction, a special term related to the nature of code: the characteristic that describes a piece of code's tolerance to implicit interruption. We will call it monolithness. The radical idea is to forbid any implicit interruption by default. We will call all such code monolithic; any unexpected exception (thrown from a lower layer, or implicitly by the layer itself) causes a so-called fatal error. Thus, all "transaction"-like sections are automatically protected from incompleteness (with regard to unexpected interruptions).
On the other hand, in middle-layer code we have to mark the places that are tolerant to implicit interruption, thereby manually making gaps in the "transactions" (in the monolithic code). We will call such code non-monolithic, and will denote the corresponding blocks with a special hypothetical keyword: interruptible.
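As a minimal sketch, today's C# can only approximate this behaviour with an explicit helper; the MonolithicEmulation class and the fail-fast policy below are assumptions made for illustration, not part of the hypothetical extension itself:

using System;

static class MonolithicEmulation
{
    // Runs a "transaction"-like section; any exception that escapes the section
    // is treated as a fatal error instead of propagating implicitly.
    public static void Monolithic(Action body)
    {
        try
        {
            body();
        }
        catch (Exception e)
        {
            // The section was interrupted unexpectedly, so the program state
            // can no longer be trusted; terminate immediately.
            Environment.FailFast("Monolithic code was violated.", e);
        }
    }
}

Under the hypothetical rules no such wrapper is needed: monolithness is simply the default, and only blocks explicitly marked as interruptible may be interrupted.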
So, we gain full control over expected exceptions and an appropriate defense against all unexpected ones. If some abnormal exception occurs, it will be intercepted somewhere in the monolithic code and will cause a fatal error. Such unexpected exceptions will be handled at the so-called top level (for the default application domain, this is the unhandled exception handler or the default exception handler). The top level, however, will receive a special MonolithicCodeViolationException exception chained (via the InnerException property) with the original unexpected exception whose appearance broke the monolithic section. (The stack trace of the MonolithicCodeViolationException exception originates in the monolithic code, while the primary unexpected exception's stack trace begins at the place from which it was explicitly thrown. Thus, we can always fully analyze the cause of such a fatal error by examining these two stacks.)
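A minimal sketch of what the top level might observe, assuming only the standard AppDomain.UnhandledException event and the InnerException chain (MonolithicCodeViolationException itself is the hypothetical type described above, so the handler below inspects a plain Exception):

using System;

static class TopLevelReporting
{
    public static void Install()
    {
        AppDomain.CurrentDomain.UnhandledException += (sender, args) =>
        {
            var wrapper = args.ExceptionObject as Exception;
            if (wrapper == null)
                return;

            // This stack trace would point into the violated monolithic section.
            Console.Error.WriteLine("FATAL: " + wrapper.GetType().Name);
            Console.Error.WriteLine(wrapper.StackTrace);

            // The original unexpected exception's stack trace begins where it was thrown.
            if (wrapper.InnerException != null)
            {
                Console.Error.WriteLine("Caused by: " + wrapper.InnerException.GetType().Name);
                Console.Error.WriteLine(wrapper.InnerException.StackTrace);
            }
        };
    }
}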
Another hypothetical (opposite) keyword is used to specially denote monolithic sub-blocks of code inside an interruptible region: the monolithic keyword.
Here is an approximate syntax of the interruptible and monolithic keywords:
- Two forms of the interruptible code block (form A is freely interruptible; form B is partially interruptible, with a list of expected implicit exceptions specified):
interruptible { ... }
interruptible (Exception1,Exception2, ... ) { ... }
- The interruptible code block combined with a function declaration:
... Type Function( ... ) interruptible [...]
{
:::::
}
- The monolithic code block:
monolithic { ... }
Under the new rules, all catch and finally sections are always monolithic (even if the outer code block is marked as interruptible). We must explicitly use the interruptible keyword inside catch and finally sections in order to permit their implicit interruption. There are also some features related to threading and deterministic finalization, but we shall not touch on them here. (The previous article introduces those problems, and of course they have a solution.)
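A short sketch in the hypothetical syntax, following the rule above (the method and exception names are invented for illustration):

try
{
    ProcessRecord();                  // monolithic by default
}
catch (FormatException e)
{
    LogFailure(e);                    // the catch section itself is monolithic

    interruptible (IOException)       // explicitly allow an expected interruption
    {
        WriteErrorReport(e);          // may be interrupted by an IOException only
    }
}
finally
{
    ReleaseResources();               // the finally section is monolithic as well
}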
Below you can see a sample listing: a demonstration program written in a hypothetical extension of the C# language, where the new rules are in force and the special constructions exist. It is a simple console application responsible for some abstract XCalculation. The dominant monolithness of its code ensures that an abnormal error thrown from the depths as an ArgumentException object cannot be accidentally caught by the catch section in the Main method. (So we will not receive wrong reports about incorrect input values for the XCalculation.) A malicious bug is planted in the CoordinateCoefficients method for test purposes; it throws an unexpected ArgumentException from time to time. Under the new rules we are protected from such abnormality, because any unexpected exception will be intercepted in the middle layers (in their monolithic code); the default exception handler will then report an error and the program will be terminated. On the other hand, in each middle-layer method we specify the places where we actually expect an ArgumentException; these places are the gaps in the "transaction"-like code. (If you do not wish to allow such a gap, you should use a try-catch-finally block instead, but that would be redundant in our simple example.)
The CleanResult console application, an abstract XCalculation based on the concept of monolithic/non-monolithic code (written in a hypothetical extension of the C# language):
namespace CleanResult {
    using System;
    using System.Reflection;

    static class Program {
        static int Main(string[] astrParams)
        {
            Console.Title=string.Format( "{0} \x2014 {1}",
                Assembly.GetExecutingAssembly().GetName().Name,
                "Concept of the monolithic/non-monolithic code" );
            double
                alpha=45.1243,
                betta=-84.21,
                gamma=0.5841,
                xresult;
            if (astrParams.Length!=0)
            {
                Console.WriteLine("No parameters were expected.");
                return 1;
            }
            try
            {
                xresult=CalculateXFormula(alpha,betta,gamma);
            }
            catch (ArgumentException e)
            {
                Console.WriteLine(
                    "ERR: Incorrect input values for XCalculation.\n"+
                    "REM: {0}", e.Message );
                return 1;
            }
            if (double.IsNaN(xresult))
            {
                Console.WriteLine("XCalculation did not result in a number.");
                return 1;
            }
            Console.WriteLine("XResult = {0}",xresult);
            return 0;
        }

        static double CalculateXFormula(
            double alpha, double betta, double gamma )
        {
            double
                coeff_1, coeff_2, coeff_3, coeff_4,
                correction_1, correction_2;
            coeff_1=Math.Sin(alpha)/Math.Cos(betta);
            coeff_2=Math.Cos(betta)/Math.Sin(gamma);
            CoordinateCoefficients(ref coeff_1,ref coeff_2);
            // Freely interruptible gap: any implicit exception may escape here.
            interruptible
            {
                correction_1=CalculateFirstCorrection(
                    alpha, betta, gamma, coeff_1, coeff_2 );
            }
            coeff_3=Math.Sinh(alpha)/Math.Cosh(betta);
            coeff_4=Math.Cosh(betta)/Math.Sinh(gamma);
            // Partially interruptible gap: only ArgumentException is expected here.
            interruptible (ArgumentException)
            {
                // Monolithic sub-block: an exception thrown here is a fatal error,
                // even though the surrounding block is interruptible.
                monolithic
                {
                    CoordinateCoefficients(ref coeff_3,ref coeff_4);
                }
                correction_2=CalculateSecondCorrection(
                    alpha, betta, gamma, coeff_3, coeff_4, correction_1 );
            }
            return (coeff_1+coeff_2+coeff_3+coeff_4)/4 +
                (correction_1+correction_2)/2;
        }

        static double CalculateFirstCorrection(
            double alpha, double betta, double gamma,
            double coeff_1, double coeff_2 )
        {
            if (alpha+2*betta+3*gamma>coeff_1*coeff_2)
                throw new ArgumentException();
            return alpha+2*betta+3*gamma - coeff_1*coeff_2;
        }

        static double CalculateSecondCorrection(
            double alpha, double betta, double gamma,
            double coeff_3, double coeff_4,
            double correction_1 )
        {
            if (alpha+2*betta+3*gamma>coeff_3*coeff_4)
                throw new ArgumentException();
            return alpha+2*betta+3*(gamma*correction_1) - coeff_3*coeff_4;
        }

        // Malicious bug for test purposes: occasionally throws an unexpected exception.
        static void CoordinateCoefficients( ref double coeff_1,
            ref double coeff_2 )
        {
            if (Environment.TickCount%5==0)
                throw new ArgumentException("Malicious bug exception.");
        }
    }
}
It's not difficult to guess how such a hypothetical compiler may automatically gather lists of all possible exceptions for each method in the module (similar to how it generates an assembly's metadata or XML documentation file from source code comments).
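For comparison, today's C# compiler already collects exception information that the programmer writes by hand: exception tags in documentation comments can be emitted into the XML documentation file. A hypothetical compiler could derive such per-method lists automatically. A small illustration with invented method names:

using System;

static class ExceptionListsExample
{
    /// <summary>Converts a textual value into its numeric form.</summary>
    /// <exception cref="System.FormatException">
    /// Thrown when the value is not a valid number.
    /// </exception>
    /// <exception cref="System.OverflowException">
    /// Thrown when the value does not fit into a double.
    /// </exception>
    static double ParseValue(string value)
    {
        return double.Parse(value);
    }

    static void Main()
    {
        Console.WriteLine(ParseValue("3.14"));
    }
}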
Conclusion
This article demonstrates that the problem of unexpected exceptions may be solved in another way. (Java's checked exceptions are not the only possible variant.) The concept of monolithic/non-monolithic code introduced here does not dictate a style based on a heavy exception hierarchy. Instead, we have to consider every piece of our code with regard to its monolithness (thinking about its tolerance to implicit interruption) and decide whether to allow it to be interrupted implicitly or not. Such a program will not become greatly complicated. Moreover, its logic may only improve, because we explicitly designate all breakable places with the interruptible keyword (just as we use the unsafe keyword, for instance, to mark every unsafe method). Such a novel style may be a vital solution for reliable programming with exceptions, at least for some types of tasks. (Imagine that the new constructions are available in the real language, and that you can switch the default monolithness on and off with a C# compiler option, or control it in program code with the help of special directives: #monolithic and #endmonolithic.) Evaluate this idea and post your comments!
Related articles (by Sergei Kitaev):