As I discussed in my last blog entry, there are two main areas in Computer Engineering: Software and Hardware. I thought it would be good to start today's discussion with the area I know best, Software Engineering. I've been learning to code since the age of 9, when my parents first bought me an Atari 400 home computer (back in 1980!). The 400 was a behemoth in its day, with a flat membrane keyboard and an 8-bit CPU with 8 KB of RAM, and you had to plug in a detachable cartridge with BASIC on it to program it to do anything. Those first years gave me the basic concepts of programming that still hold true today, even in the world of declarative or functional programming. I've now been programming for over 25 years, in many different languages and for many different applications, and I can say in all honesty that once you learn one language, it's not too hard to transfer those skills to the next. So it doesn't really matter which language you learn (as long as it's one of the more popular ones); it will teach you valuable concepts that apply to most other languages. With that in mind, what are the languages to learn, and what are their strengths and weaknesses? There are many modern computing languages, most of which have a freely downloadable compiler or run-time engine, and most are also easy to learn. If you want to learn to code, there are 4 good languages to start with. Let's discuss the history of what are known as C-based programming languages.
In the 1970s, everything to do with computers and computing in general was expensive. The first machines were programmed using punched cards or keyboards, in machine code. Machine code contained instructions that the CPU could understand directly. So, for example, adding two numbers took an instruction with three parts: the add operation, the location in memory of the first number, and the location in memory of the second number. This was an effective way to avoid any wasted CPU cycles, and thus any wasted computing resources, which for a machine shared between many people or departments was invaluable. The problem with this approach was the high skill level required, and also the propensity for those programming the computer to make mistakes or errors (the birth of the software bug).
So, to make things easier, languages were developed that were simpler to use and understand. Programmers wrote code in these "higher level" languages, and a tool was used to convert this code into the underlying assembly code or machine instructions that the CPU could understand. That tool is of course the compiler (now we also have the pre-processor). The modern-day programming language was born. One of the first examples of this new type of language was C, which is still widely used today, especially for embedded programming. C was easier to understand than raw assembly instructions, meaning it was easier to program in, and it also had tools the programmer could use to monitor the program as it ran (the debugger). C soon became hugely popular, and remained the most popular industry programming language right up until the early 1990s. Programmers were developing new tools and new techniques all the time, and eventually C became too unwieldy for larger software systems. By the late 1980s, instead of individuals writing their own programs, teams of programmers would be working on a single project. Machines were becoming cheaper by the month, enabling more memory and computing resources to be used, with multiple processes running on the same machine, and soon even multiple threads running within those processes. So, a new kid arrived on the block: C++.
C++ was a much easier language to use, and it supported a modern technique based on an ancient concept: Object Orientation. OO, or classes and objects, is a concept with roots in Plato's Theory of Forms. In other words, code could be organized into classes, genres or types, and hierarchies or taxonomies could be constructed, allowing code to be re-used and shared via inheritance and composition. The language also made it easier to manage memory in the program (a constant source of bugs in C programming). C++ was a language whose tool chain was written in C; the early Cfront compiler converted the C++ code itself into C, before that was then converted to assembly language (just like the 1970s again!). So in reality it was a wrapper around C.
This wrapper effect is a common theme in computing. The machine sits in the middle of the onion, and layers or wrappers are added around it, making it a higher-level machine and therefore easier for humans to understand, but also slower, because of the layers that the data has to pass through before the machine can process it.
About 10 years after this, the prevalence of the internet meant there was an explosion of programmers, both professional and amateur, and they were crying out for a simpler way to program. So again a new language appeared: Java. This was a higher-level object-oriented programming language than C++, and it wasn't compiled to native machine code per se; instead it ran on a virtual machine (a kind of run-time engine). This meant that you could write your code once and it would run on any platform; for example, a program written on Windows could run on Linux or a Mac. Fantastic! Platform-agnostic programming. On top of this, programmers all around the world began publishing their code under open source licenses, meaning that many libraries appeared that could be leveraged free of charge, so less re-inventing the wheel. Obviously there was a trade-off: if the code is interpreted while it is running, then it will always be slightly slower. This wasn't a problem for most applications, only resource-intensive or real-time-dependent ones (imagine flying an aircraft and waiting for the busy cursor!). Java also provided really nice features such as garbage collection and object references, making memory management, and programming in general, far easier again. Seeing the threat to its user base, Microsoft soon realized that Linux (its arch enemy) would take over unless it offered some alternative to Java. So Microsoft came up with C#. The differences between C# and Java in those early days were almost non-existent, and, like Java, Microsoft talked about how C# could be run on any platform. Sadly this did not really come to fruition (how convenient for Windows!), and most programmers accept that if they want to write code for multiple platforms they stick with Java, or use C++ with a platform-agnostic SDK wrapper layer like Qt.
So we have mentioned four modern languages: C, C++, C# and Java. If time and resources are critical when running your application, then manually managing those resources is best, so the C programming language comes out on top. Then comes C++, offering OO with polymorphism (late binding), inheritance and composition, as well as better memory management via new and delete. Lastly we have the current favorite languages, C# and Java: these are marginally slower when running, and they also require run-time resources such as a run-time engine or virtual machine.
To learn a language and programming, I would suggest choosing either Java or C#. Grasp the concepts of how a program executes, how it's compiled, the programming paradigms it supports, and the basic building blocks of iteration, conditions, statements and functions, as well as how programs use resources with classes, objects, memory, references and values. Then, once you have mastered one of these languages, work backwards in time; soon you will be reveling in the low-level power of C++! With C++, expand your programming into parameterized classes (templates) and raw memory handling with pointers, and optimise your code with inline functions and direct memory access. Then, if you really need it, get deep and get your hands dirty with C, directly accessing resources, hardware and drivers.
There are hundreds of man-years of effort required to learn everything, but don't be daunted! Choose a simple language to start with, work away at it, and you will find that most languages have more in common than you think. I for one love programming computers, and every day there is something new to learn, or something to adapt to make my code more efficient.
So what’s it going to be ? Java or C# ?