As they are all translated to another language when they are built.
That's just it... interpreted languages aren't really translated to another language; they are kept in their original form until run by the user. When the user runs the program, he's really running the interpreter, which takes the script in and interprets what is to be done.
The definition is not always as cut and dried as you might think; there are a lot of languages nowadays that produce bytecode or some other in-between by-product that will be further interpreted at run-time.
For an interpreted language, the program is run line by line when you request it to run. If there is a problem in the logic or any other 'bad thing', it will run until it hits it. Thus, if you misspelled a variable's name, nothing would react to it until you ran the program and it happened to come across the misspelled version. Likewise, if you left out a BEGIN or END statement, it would run until that caused a problem, if ever. Some feel that these languages are easier for beginners to start with because they're not all-or-nothing, but not everyone agrees with that.
Now, for a compiled language, the entire program is converted to 'machine code' (often in several steps). This conversion has to take the entire program into consideration at once. If there is a problem, the compiler cannot figure out what to do and you get an error: the program will NOT be compiled. A misspelled variable is 'undefined' and is a problem. Similarly, the compiler will look for an END for every BEGIN, in the correct order, or it is an error. Compiling a program is an all-or-nothing affair.
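To make that concrete, here is a minimal C sketch (my own, not from the post above). It compiles and runs as written; the comments show the kinds of mistake the compiler rejects outright before anything runs.

#include <stdio.h>

int main(void)
{
    int total = 0;
    total = total + 5;
    /* totl = totl + 5; */  /* uncomment: the compiler rejects the entire build
                               because 'totl' is undeclared -- nothing runs    */
    printf("%d\n", total);  /* prints 5 */
    return 0;
}                           /* delete this closing brace and, likewise, the
                               whole program fails to compile                  */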
Because of this difference in when the program is turned into real machine instructions, interpreted languages are generally much slower than compiled languages at run time.
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"As far as we know, our computer has never had an undetected error." - Weisert
"If you are searching for perfection in others, then you seek disappointment. If you are seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010
Many people say that Let Us C is old and hard, but when I read it I had no problem apart from the IDE (and I already know how to use an IDE, Code::Blocks). It was also the one book that gave me an idea of the technical things, because it is easy to understand. So what should I do: change it and go to a new book, or remain firm with it?
In programming there are no better or worse languages; they are just tools, and most of programming has nothing to do with the language, it is about the process behind the programming. In the commercial industry we refer to people who like to push a language by the derogatory term codermonkeys, and generally, at a job interview, saying xxx language is a better language will see you immediately overlooked. If you make programming a career you will probably code fluently in at least 3 or 4 languages, and probably dabble, with limited understanding, in 3 or 4 more.
C is indeed old, and it can be hard, but it and Java are also the most widely used programming languages. In some sections of industry it is in fact the only choice, and here I refer to areas like the microcontroller industry. The manufacturers of those processors don't make programming tools, so generally the only options are assembler or C.
The current IEEE Spectrum rating of the most widely used languages is (1) Java, (2) C, (3) C++, (4) Python, (5) C#... then daylight to everything else.
So the bulk of us old commercial programmers can write in two of the three (Java, C, C++).
So that is the commercial world; but, all that said, if you are just programming for fun, feel free to use whatever language works for you. The language does not change the problem at all: if I ask you to write a bubble sort algorithm, the language does not change the problem. Rosettacode.org actually lists the code for a bubble sort in 117 programming languages, and none of them is better or worse than any other; they all conform to the same pseudocode.
The ability to pseudocode is what separates programmers from codermonkeys, and you may care to read about it on Wikipedia. Any real programmer can write what they are doing in pseudocode, and that pseudocode requires no choice of language, showing ultimately just how irrelevant the choice of language is.
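For instance, here is that same bubble sort pseudocode rendered in C, as a minimal sketch; any of the other 116 languages would express exactly the same steps.

#include <stdio.h>

/* Bubble sort: repeatedly sweep the array, swapping adjacent elements
 * that are out of order, until a whole sweep makes no swaps. */
void bubble_sort(int a[], int n)
{
    int swapped = 1;
    while (swapped) {
        swapped = 0;
        for (int i = 0; i < n - 1; i++) {
            if (a[i] > a[i + 1]) {
                int tmp = a[i];
                a[i] = a[i + 1];
                a[i + 1] = tmp;
                swapped = 1;
            }
        }
        n--;   /* the largest element has bubbled to the end */
    }
}

int main(void)
{
    int a[] = { 5, 1, 4, 2, 8 };
    bubble_sort(a, 5);
    for (int i = 0; i < 5; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}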
The choice of language generally comes down to the availability and ease of use of the compiler and tools, and your familiarity with them.
You (Leon de Boer) are exactly right in that answer. Well, I am not discriminating against any programming language, and I also believe that every programming language has its own power and uses. I decided to learn C because I want to be a systems programmer; Java is not suitable for many low-level services, but C/C++ are the ones that hold that pride.
Well, there are a few considerations here. 'C' and 'CPP' are examples of languages which produce code running close to the OS and the hardware. One works there at the coal face, dealing directly with actual memory locations, the operating system, the vagaries of the hardware components, etc. This can on occasion lead to hard-to-detect bugs and crashes; a stack smash is a famous example. Another consideration is that the supplier of the OS can literally pull the rug out from under your feet by deprecating your favourite OS. Another disadvantage is that you must maintain different versions of the source code if you want to write for more than one platform.
On the other hand, precisely because you deal directly with the OS and hardware, you can do all sorts of tricks that cannot be done in 'synthetic' languages such as, say, C# or Java. These languages run on a 'virtual machine' in a 'virtual environment'. When you find yourself in such an environment, you may forget about playing even the most innocent trick: the virtual machine knows nothing about memory, but talks in variables. The advantage here is that this is a far more friendly environment to write in; it tries not to allow you to write wrong code. Also, your code will probably run from now till kingdom come on every computer and OS.
Now, it should also be remembered that, as a society, we cannot ever dispense with languages such as C and CPP. Languages such as C#, Java, and many others are actually written using 'C' and 'CPP'.
I personally think that you could do worse than learning 'C' and 'CPP', in particular if in the latter you incorporate 'MFC'.
Note: C# and Java vs. 'C' and 'CPP' are very similar in syntax. The devil is in the syntactical detail!
auxDIBImageLoad comes from the GLAUX library, which is obsolete and no longer supported by Visual Studio, which is why it can't be found. Even adding #include "glaux.h" won't help, as the library file has been removed, as well as the DLL.
If you are just playing around, the source code and a precompiled GLAUX.DLL are available on the internet, but for anything beyond that, do not use it.
All that function does is load an image file as a texture. One second, let me fashion you a replacement; it will take longer to explain how to do it than to do it. Can you tell me what lesson this is from on the NeHe site, and I will post the result to them to update it, in my next message.
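In the meantime, here is a minimal sketch of the sort of replacement I mean: a loader for plain uncompressed 24-bit BMP files, which is what the aux loader handled. The SimpleImage type and the LoadBMP24 name are just mine for illustration, and error handling is kept to the bare minimum.

/* Minimal sketch of a replacement for auxDIBImageLoad: loads an
 * uncompressed 24-bit BMP. Type and function names are my own. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int width, height;
    unsigned char *data;    /* BGR triples, bottom-up, rows padded to 4 bytes */
} SimpleImage;

static int rd32(const unsigned char *p)     /* little-endian 32-bit read */
{
    return p[0] | (p[1] << 8) | (p[2] << 16) | ((int)p[3] << 24);
}

SimpleImage *LoadBMP24(const char *filename)
{
    unsigned char header[54];
    FILE *f = fopen(filename, "rb");
    if (!f) return NULL;

    /* BITMAPFILEHEADER (14 bytes) + BITMAPINFOHEADER (40 bytes) */
    if (fread(header, 1, 54, f) != 54 || header[0] != 'B' || header[1] != 'M') {
        fclose(f); return NULL;
    }
    int offset = rd32(&header[10]);                  /* bfOffBits  */
    int width  = rd32(&header[18]);                  /* biWidth    */
    int height = rd32(&header[22]);                  /* biHeight   */
    int bpp    = header[28] | (header[29] << 8);     /* biBitCount */
    if (bpp != 24 || width <= 0 || height <= 0) { fclose(f); return NULL; }

    size_t rowSize = ((size_t)width * 3 + 3) & ~(size_t)3;  /* 4-byte padding */
    unsigned char *data = malloc(rowSize * (size_t)height);
    if (!data) { fclose(f); return NULL; }

    fseek(f, offset, SEEK_SET);
    if (fread(data, 1, rowSize * (size_t)height, f) != rowSize * (size_t)height) {
        free(data); fclose(f); return NULL;
    }
    fclose(f);

    SimpleImage *img = malloc(sizeof *img);
    if (!img) { free(data); return NULL; }
    img->width = width; img->height = height; img->data = data;
    return img;
}

The pixel rows come back bottom-up, in BGR order, with 4-byte row alignment, which happens to match what glTexImage2D expects by default on Windows (pass the GL_BGR_EXT format), so it can slot in where auxDIBImageLoad was used.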
The _T is a macro provided by TCHAR.H for Unicode/multilingual support. If you go to Project Settings -> General tab -> Character Set, you will see you have that on the "Not Set" setting. That macro allows you to use the other choices, Unicode and multi-byte character sets, making your code work in multilingual environments, like Chinese Windows.
For you, in ASCII ("Not Set") mode, the macro actually does nothing (which you worked out), but if you select the other modes you will see that you get an error on every static text string; the _T tells the compiler to build the string literal in the correct mode and removes the error.
Being commercial programmers, and as Microsoft has made it so easy to do, we have generally tried to use the multilingual calls since Visual Studio 2013. This became almost compulsory when trying to write true 64-bit applications. The default setting for an empty project is actually the Unicode character set.
Essentially, TCHAR becomes a replacement for the standard char, and its size varies with the compilation mode. TCHAR.H provides new string functions that match the old string functions but carry different code for the different modes. Let's give you an example:
strlen becomes _tcslen. Those calls work identically, the difference being that _tcslen will work in any language-mode compilation, while strlen will only work in the "Not Set" language mode like you have. Here is the link to what is going on, from MSDN: strlen, wcslen, _mbslen, _mbslen_l, _mbstrlen, _mbstrlen_l[^]
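As a rough sketch of the pattern (Windows-only; the program itself is just my example, not from the original code):

/* The same source compiles as ANSI, multi-byte, or Unicode depending
 * on the Character Set setting (whether _UNICODE or _MBCS is defined). */
#include <tchar.h>
#include <stdio.h>

int _tmain(int argc, _TCHAR *argv[])
{
    const _TCHAR *msg = _T("Hello, world");  /* _T picks a char or wchar_t literal */
    size_t len = _tcslen(msg);               /* maps to strlen, _mbslen or wcslen  */
    _tprintf(_T("length = %u\n"), (unsigned)len);  /* maps to printf or wprintf    */
    return 0;
}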
This code is designed specifically for Windows (it uses the Win32 API); it is not general in nature, like something that could also work on Linux, so there is no reason to write it generically, but we should try and cover the different Windows compilation modes, especially as it is so easy.
So for me the changes are just habit.
There is a funny side to this: so many of us write in that style that the Linux community has issues trying to port our code. So, if writing general code, I would probably try to avoid this style of programming.