prose :: and :: conz


…And then there was code

While stumbling around through my Dropbox folders the other day, I discovered that I still have ALL of my college work saved from both my undergrad and graduate degrees. It got me thinking about how I got to where I am today as an obsessive software developer. My first real exposure to writing code occurred while I was a mathematics undergrad at the University of Alabama in Huntsville. Undergrads in the College of Science had to take CS 102: Introduction to C Programming. I wasn’t really looking forward to the class. I generally had a bad attitude at that age (and it’s not gotten much better), and I felt that mathematics was more noble than computer science. In my mind, computers were this man-made thing, and it seemed silly to have a science for them. Blow up all the computers, and there is no more CS. Nonetheless, I was generally a good student even in courses I didn’t want to take, so I showed up bushy-tailed and ready to roll.

To say this was my first exposure to software development isn’t entirely true. During early high school (late 90s) I got interested in building websites. I didn’t like all of the web hosting services out there that helped you build your content. I wanted to get down into the raw HTML and really understand what I was building. Looking back, I can only imagine how terrible my product was. I only did HTML; this was in the early days of CSS, when it was hardly adopted. As for the content I wanted to put out there… Well, I was OBSESSED with music back then. I wanted to post up lyrics, sound clips, and other copyrighted material that would have gotten me into trouble if I had gotten very far. I didn’t make substantial progress with my project, but it suited me quite well. I’m very introverted. I enjoy building things. Long hours alone at the computer with my headphones on zipped by while I set my cares aside and escaped into the task at hand.

Step back a few more years, into the days of Windows 3.11 and MS-DOS 6.x, and you’ll find me tinkering with code even earlier. Back in those days, I loved playing computer games on our Packard Bell 486. I was into several games, including the original SimCity and SimFarm, with the latter being particularly appropriate for this farm-raised southerner. I enjoyed a handful of adventure games like King’s Quest, as well as war strategy games such as Dune II and Command & Conquer. First-person war games like Wolfenstein 3D and even Aces of the Pacific consumed quite a lot of my free time growing up. However, none of these games ruled like DOOM. I wasted hours and hours playing that game while listening to the mighty Pantera (angst, anyone??).

OK, so what did that little nostalgic stroll down memory lane have to do with coding? Well, there was one thing I really hated: typing the DOS commands to start my games. It wasn’t the typing or the console; to this day I prefer the keyboard to the mouse. It was the repetitiveness. I don’t recall everything that had to happen each time, other than a cd to the game’s directory and a call to the executable. But when I discovered what a .bat file was, I wrote one for everything I wanted to do and saved them all in C:\. Even better was when I learned how to run a .bat file on boot. That computer would literally boot into DOOM. I loved making that computer work for me. I gladly did the hard work of figuring out how to write a .bat file and set it up to run so I wouldn’t have to do all that typing again. This is exactly how I am today as a developer: I work as hard as I can to be as lazy as possible while the computer does all the heavy lifting for me.
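I can’t recall the exact contents of those files, but a reconstruction of the launcher would have looked something like this (the directory and filename are from fuzzy memory, not the original file):

```bat
@echo off
rem doom.bat -- reconstructed from memory; the real file is long gone
c:
cd \doom
doom.exe
```

Wire a line like `call c:\doom.bat` into the end of AUTOEXEC.BAT, and the machine boots straight into the game.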

Let’s fast-forward back to my CS 102 class. I’m sitting next to a guy who, it turns out, doesn’t need this course at all. He already contributes to open source and could probably have held down a good full-time position. Yet he felt he needed to get his degree, and so had to suffer through these otherwise pointless courses. Up front is a really dorky (even looking back on it as a professional dork, still very dorky) lady who is gonna teach us some computer programming (because at the time I thought that’s what the “C” stood for). We’re all seated at computers with Visual C++ set up with a C file. She urged us to disregard all that “magic” stuff like #include, int main(void), and return, plus whatever other ceremonious bullshit crappy languages like C need, and instead focus on what was happening in the middle. The introductory exercise was to read a character from the user, put it in a “box”, and print it back out with some frilly asterisks around it.

Behold… The first compiled program Joe Barnes ever touched.

/****************************************************************************/
/*                                                                          */
/* Name:                                                                    */
/*                                                                          */
/* Filename: intro.c                                                        */
/*                                                                          */
/* Class/Section: CS 102-01                                                 */
/*                                                                          */
/* Due Date:                                                                */
/*                                                                          */
/* Completed Date:                                                          */
/*                                                                          */
/* Purpose: This is a first program in C.                                   */
/*                                                                          */
/* Inputs: none                                                             */
/*                                                                          */
/* Outputs: A welcome message for the user.                                 */
/*                                                                          */
/* Assumptions: none                                                        */
/*                                                                          */
/****************************************************************************/

#include <stdio.h>


int main(void)
{

	char initial; /* This makes a "box" and names it "initial". */

	printf("Enter your first initial: "); /* The box is preceded by this
	                                         commentary to tell the user
	                                         what to place in the box. */
	scanf("%c", &initial);                /* This tells the computer to
	                                         grab the user's input and save
	                                         it in the box. */
	printf("\n*****\n*   *\n* %c *\n*   *\n*****\n\n", initial);
	                                      /* %c is where the box's contents
	                                         are placed. */

	return(0);
}

I scarcely edited the file myself, as I’m sure you can guess. Present-day me certainly would not write more comments than code! Here I appreciate the comments for their educational purpose, but over-commenting like this is exactly the sort of thing intro courses promote as a good habit. Anyway… this is where it all really began for me. As the course went on, I began to realize what a great power was being bestowed upon me. I had labored for years with pencil and paper, doing my own arithmetic and algebra. Now I could tell the computer to do the math for me and be MOAR LAZY!

Initially I saw programming as a handy helper for me as a mathematician. Once I figured out I didn’t want to be a poor high school math teacher in rural Alabama, I declared a minor in computer science. I enjoyed it more and more until I decided I needed to pave a path to graduate school for computer science. Shortly after that decision, I realized I would have to take so many courses after graduating to satisfy the breadth requirements that I’d practically complete the major. Hence I made computer science my major. I was already beyond a minor in my mathematics studies, so I completed that too. Glad I did, because I credit my ability to think clearly through software problems to the rigors of formal mathematics.

So that’s how it all started for me. Here I am on my 30th birthday, writing code all day at work, coming home to write more code and blog posts about writing code. I count myself incredibly blessed to have such a high-paying hobby.


Olde Comments
  1. Adam Holden says:

    Nice read, Joe. Happy birthday!

    I’m having to get back into the C/C++ mindset this semester, and it just kills me that college classes don’t properly distinguish HOW those languages should be used. Courses should teach you how to make applications that bind to C code through Python (etc.) for intensive tasks, instead of building the application in C itself. Starting students off with C++ gives the wrong impression that coding well is all about syntax, and not semantics. My wife is taking a Visual Basic (#yuck) course, and choice of language aside, I think it’s really cool that a lot of their assignments are about adding or implementing functionality to existing program shells, instead of reinventing the wheel.

    • barnesjd says:

      +1 for all of the above. I really appreciate all of my instructors, but I must say that the ones who impacted me most were the part-timers. The real-world experience translated to much coding wisdom.

  2. Ian says:

    C++ is one of my favourite languages, next to Python and Scala. It’s the language I use when I don’t want to compromise between elegance and efficiency (the other two are used when I need elegance even if it means sacrificing a little efficiency). My first program compiled in it was a simple program that calculated the radioactive decay of an isotope.

    By the way, mathematics kicks computer science’s ass every time. Not by much, though; not only is coding just fun, but I find that an algorithmic approach to advanced abstract mathematics is way easier to understand than just building layer upon layer of abstraction. Algebraic/analytic geometry and the study of varieties is the perfect example of this; the maths can get so abstract if you’re not constantly thinking about how to solve the problem algorithmically.

    • barnesjd says:

      I must admit I’ve made the full transition from love of math to love of software. However, I certainly couldn’t see myself being effective without the math love background. :)

      As for the efficiency of C/C++, I would argue that one should really only use it for drivers and junk where there simply is no getting around manipulating bits, bytes, registers, etc. Beyond that, I believe that utilizing a language that is most mentally palpable is of most importance. Where low-level interaction is unavoidable, mix in C/C++ with the higher-level languages as Adam suggested. The most inefficient resource in software development is the developer. Furthermore, while in grad school I became convinced that improvements in performance pivot on algorithmic advances far more than a little tweak here and there to the source. This is even evident in processor throughput where the greatest advancements are arguably the result of better algorithms/approaches such as pipelining rather than just making the damn thing crank out more cycles. Speaking of CPU cycles, we’ve really hit a wall in terms of clock speed, and now the number of cores is steadily increasing. We can no longer count on a faster processor to improve our application performance. Now we must write code which takes advantage of multiple cores in order to continue seeing gains. I think C/C++ is ill-equipped to remain relevant in the multi-core processor world, as this article suggests.

      • Ian says:

        Of course, I do a lot of coding for microchips and other embedded systems, which is one of my main sources of C++ use. One of the things that I really like about using C++ is that, like Scala, it’s multiparadigm. It might not combine functional and OO programming as nicely as Scala, and some FP features are missing from C++, but it’s very flexible in how to solve a problem; perhaps too flexible. In fact, arguably much of the strife about C++ comes from people “doing it wrong.” For example, even doing embedded systems work, I don’t have to get waist-deep in manual memory management half as often as the people who created Unix, with little to no performance penalty.

        As you and Adam noted, though, C++ is probably best used in combination with other, higher-level languages. This is, I think, where C++ really shines in today’s desktop computing world; use a high-level language by default, and use C++ for some of the really intensive number-crunching. I do this with Python all the time, and it makes me really wish that the JVM had a better native code interface than JNI.

      • barnesjd says:

        Very cool. The whole “doing it wrong” is what worries me. The same thing happens in JavaScript. I don’t think you can prevent all mistakes, but I really get a bad feeling when a language has a feature that is regarded as always a bad thing to do. There are always static analysis tools for scrubbing the code for these issues, but I feel that it simply shouldn’t be there if it’s a bad thing.

        I’m glad you do a lot of low-level stuff with C/C++. I hate those languages because I’m too error-prone. I really respect guys like you who are able to get it done. My precious JVM and its OS would suck without it.

  3. snemarch says:


    “As for the efficiency of C/C++, I would argue that one should really only use it for drivers and junk where there simply is no getting around manipulating bits, bytes, registers, etc. Beyond that, I believe that utilizing a language that is most mentally palpable is of most importance.”

    “Modern” C++ (avoiding new/delete, using RAII) isn’t too bad… “Really Modern” C++ (C++11 and beyond, lambdas, etc.) is almost enjoyable, even if the language/stdlib still has plenty of issues, and “a bit” too much ceremony. A lot of people seem to be way too influenced by bad books, bad teachers, or the craploads of bad C++ code out there, though.


    “we’ve really hit a wall in terms of clock speed, and now the number of cores is steadily increasing. We can no longer count on a faster processor to improve our application performance. Now we must write code which takes advantage of multiple cores in order to continue seeing gains. I think C/C++ is ill-equipped to remain relevant in the multi-core processor world”

    I believe it’s better suited to that world than a lot of other languages – unlike JVM languages, you have good control over memory layout… and locality of reference isn’t going to become less important. Good code requires a bit more thought in C++, though, and STL might not be your best bet – you definitely don’t want to pass containers around between threads. But C++ allows you some pretty interesting performance tweaks that aren’t really doable from your regular managed languages (deleting an entire heap of objects without the cost of a root-tracing GC, direct access to memory-mapped files, …)

    Haven’t read the Bartosz link yet (bed soon), so can’t comment on his thoughts – but in the right hands, C++ code can be safe, beautiful AND have really great performance. It does require an investment, though; there’s a lot of crud out there, and the language seems to attract people who enjoy blowing off their legs with arcane constructs :)

    It’s not my day-job language, but while both C and C++ are wrong for a lot of the stuff they’re being used for (hi, tons of exploits for all existing OSes), it irks me that performance is widely disregarded and lots of “professionals” seem to think GC is the only way to fly (while ignoring that their languages don’t help against leaking sockets, DB connections, and even more expensive resources).

    Ah well, the above turned out to be a bit of a ramble. I’m not trying to argue that C++ is The Only True language, because it isn’t – just that (a subset of) C++ isn’t such a Cthulhuid monster as many people seem to think… and that its performance is hard to beat. I’m just a grumpy 32-year-old polyglot who’d like his overpowered smartphone to react a bit smoother and burn battery a bit slower :-)

    • Ian says:

      I agree with snemarch. C++ isn’t nearly as bad as people make it out to be, at least if your code doesn’t look like it was written in the 1980s. I still like Scala better, but C++ is still a pretty good language.

      As for using it with multicores, I’m on the fence there. Multithreading support has gotten a lot better for C++, but for the most part I’d rather use more high-level constructs. The exception is when the added control over memory layout allows you to make safer and more elegant code that still scales. Again, I really wish that the JVM had a better native code interface.

      “C++ is my favourite GC language because there is so little garbage to collect.” — Bjarne Stroustrup

      • snemarch says:

        Ian, I wouldn’t want to do multicore programming at the level that C++ (lang, stdlib) itself supports. That’s madness – you need to move to AMP or better; high-level constructs are the name of the game indeed. But you can do that AND get memory layout and other nice things at a cheap price in C++.

    • barnesjd says:

      Are there any subsets of C++ out there? I could give that some serious thought. I greatly agree with the Bartosz post with respect to backwards compatibility all the way to C; it’s a huge anchor for the language. I would support a good subset, tho. You made a very good point regarding the pollution of the C/C++ world with bad literature. I hadn’t thought about that.

      I have to say I agree with you on performance. Perhaps I made too broad of a statement. My context is mostly web-development, so of course I think those things. :) I’m accustomed to powerful PCs and cloud resources, as opposed to smart phones and other less-powered devices.

      From my perspective, I find that the business problems alone are difficult enough. If I have to start worrying about how I’m using memory, then I’m screwed. I’d rather leave it to guys like you and Ian who can harbor all that information simultaneously while solving problems.

      Thanks for the ramble. I enjoyed it. They’re always welcome on my public ramble board!

      • Ian says:

        I found that as well. When I started programming microcontrollers and kept crashing them because I was used to desktop computing, I found that I definitely was not in Kansas anymore. It took a while before I was really able to quantify costs down to the microsecond and the byte (and, actually, I’m still not all that confident about it). One thing that helped was studying how the compiler generated machine code.

        Actually, I recommend that for all C++ programmers: not necessarily knowing assembly language, but knowing how it’s generated from C++ code can really improve one’s skills with C++. I imagine a similar principle holds with JVM programmers knowing how bytecode is generated from their respective compilers.

        But yeah, in desktop computing I usually don’t care much about performance unless something becomes noticeably slow, or while doing something really intensive. Something about Optimus Premature.

  4. barnesjd says:

    BTW, keep your eyes open for a future post I’m going to entitle “My last two C programs”. Y’all are going to enjoy that one.

  5. […] into type programming in the kiddie pool. The product of this blog post will be no more useful than the first program I ever wrote. The education gathered a long the way is the value more so than the artifact produced. If all goes […]

Tagged with: polyglot (7), mathematics (3), c (1)