When C was first invented by Dennis Ritchie in the early
1970s, his main objective was to create a language that was easy to read, fast,
powerful, easy to learn and, most of all, extendable. Before that, programming
required either knowing the machine code for the processor you were developing
for or using FORTRAN or COBOL. Now FORTRAN and COBOL are human-readable
languages, like BASIC, but BASIC is interpreted and the other two were verbose. When Dennis Ritchie derived C from B (itself a descendant of BCPL), the result was a language that was concise, fast, flexible and portable.
When you write code in machine code, you know
exactly how many instruction cycles, or ticks, it takes to execute an
instruction. Generally, it's one. Assembly language is a way to write machine
code without memorizing the numeric opcode for each instruction. So the instruction
mov ax, 3 loads the value 3 into the ax register. It costs one tick. Are
you lost? OK, let me explain.
The processor of any computer has registers: small spots
of directly accessible memory that it uses as it performs instructions. Some
registers are general purpose, others are special purpose, but the values in
them are used by the processor to aid in its work. Want to add two values
together? The processor will add two registers together. add ax, bx, for
example, adds bx to ax and stores the result in ax. Think of it (for the
C-style-language initiated) as ax += bx;. OKAY! Why is this important?
Consider the instruction idx = 5; and assume
idx is an integer.
It's fairly straightforward: set idx to 5. But what happens under the hood?
Well, the data segment and an offset register (OK, assembly language experts, I
am oversimplifying, but work with me) are set to point to the memory location of
idx. Then the value 5 is assigned to that memory location. Depending on the
processor, the "5" may first be MOVed into another register, which then gets stored into
idx. But all in all, it takes three to four processor ticks to accomplish
this instruction. That's the beauty of C: it's meant to take the power of
assembly language and make it more concise. The compiler translates the
instructions directly into machine code and the instructions run.
In an object-oriented paradigm, however, things are not so
simple. The simple assignment I have been discussing can become considerably more
complex. Because the assignment operator is overloadable (that is, for the
statement idx = 5, depending on what idx is, I can write a function that is
called when I say idx =), the amount of code increases. Add implicit copy constructors
and cast operators, and what looks like a simple statement can turn, under the hood, into
a fair amount of code. Imagine, if you will, that I have a class called
Integer. Integer has an assignment operator and copy constructor defined. It
also has a typecasting operator (operator int) defined as well. So idx = 5 can
become: typecast 5 to Integer, creating an "implicit" Integer object that
then gets passed to the assignment operator.
We say garbage collection is a good thing, but if you use a
lot of memory, you still have to plan its use properly. My argument is that while
garbage collection can help guard against memory leaks, it still is no
substitute for proper code planning and programming in the first place.
As processors have gotten faster and memory cheaper and more plentiful, the
Windows OS, for example, has become bloated! I remember (back when I was a
teenager), using Windows 286 and Windows 3.0. I was amazed at how many
programs I could run with 640k of memory, two floppy drives, and a 40 MB HDD
(yes, megabyte!). Older programmers, in the time of Ritchie, Thompson, and Stroustrup,
knew how to maximize the use of 16 bytes of memory. Today, we throw away
16K without a thought.
The other thing about the newer Java and C# is that both languages
are still interpreted. Yes, they get compiled to a bytecode, but since the
processor doesn't understand any instruction set but its own, the
bytecode has to be interpreted or recompiled to run. What do you think the Java
Virtual Machine and the .NET Just-In-Time compiler are written in? They require
native code, not managed code.
Above all else, what do you think your operating system is
written in? Major applications like Microsoft Office? Ever since DOS 3.0, C
and C++ have been the standard for writing operating systems (not discounting
C's birth with UNIX and their continued coexistence and codependence). While
assembly language is still used to a limited extent, especially when talking to
a piece of hardware that tolerates no overhead, C is the next best thing.
You can write the lowest-level code in assembly language and call the function
in question from C. You can take the address of the assembly language function through a
pointer, and declare the function naked so no prolog or epilog code is generated if it isn't needed (functions
usually have code that sets up parameters and local variables on the stack and
saves the state of the system, then reverses the stack changes after the
function runs). An operating system kernel doesn't have time to be recompiled
before execution. When you're at the heart of a preemptive multitasking
operating system, every tick counts! You cannot afford any unnecessary overhead.
When C was first developed forty years ago, it gave us just
what we needed and little of what we didn't. Now, I'm not against the newer
languages. They serve a valuable purpose in the age of Rapid Development. But
let’s not forget what these languages and their tools are written in. Combine
that with the operating system and all our major applications and you’ll see C and
C++ are still very useful indeed.