Tag Archives: programming

My Favorite Vim Plugins

It’s been a long time since I wrote about Vim. In this post, I just wanted to showcase the plugins that make me love Vim as much as I do. On a foreign machine, I certainly prefer Vim over any other command-line editor, but since I’m so used to my settings, I’m usually pretty slow at getting (back) used to vanilla Vim. One of Vim’s greatest strengths is well known to be its extensibility: you can really do a lot to make your life easier with plugins. Below is a sample of the plugins that have come to quite strongly define my experience with Vim.



This is a pretty useful plugin, but I really don’t put it to good enough use. Basically, the name describes a good deal of what the plugin does: it allows you to very quickly switch between open buffers. (Remember to turn on “hidden” so that when you close a file, you don’t actually kill the buffer, which lets you quickly reopen the file with cursor position, folds, etc. still intact through the course of a session.) I know the whole thing about using buffers over tabs and yeah yeah, okay, but I still use tabs. I think in my brain it makes more sense to me and I’m going to stick to it. However, whenever I do need to switch buffers, I can just do “:b” and let it auto-complete and switch to any buffer I want quickly. It has saved me time pretty often.

Conque Shell

Another one of my favorites but still pretty underused by me. This plugin fills a huge need: the ability to open a shell session inside Vim. This can be invaluable for many things, including checking on the progress of one task while working on another, or letting you edit, compile, and run an application in one window (although this can be accomplished with “:!” commands, this is more natural and allows you to see the code at the same time). I usually work in Gnome-Terminal, so I use terminal tabs for this, but if you’re in a PuTTY or SSH session, this can be very useful. Very straightforward to use, and it works decently well, although not perfectly.


This is an extremely popular plugin from the NERD* family and I use it daily. This smart little plugin allows you to very quickly comment out a line of code or create a new comment. The kicker is that it’s the same key binding regardless of what language you’re working in, which improves muscle memory and saves a ton of keystrokes (I’m looking at you, HTML). The built-in bindings seem random and difficult, but I’m so used to typing “,cA” (with <leader> mapped to “,”) that it’s second nature to me. If you code a lot, I can’t see why you wouldn’t use this plugin.


This is possibly the most famous Vim plugin of all time. For good reason: it’s very powerful and complete and does a lot to boost the built-in file explorer (:Explore). However, I do not put it to enough use because of my use of wildmenus, which I consider faster and more intuitive coming from Bash. However, there are times when the file I want to open is a few directories away and I’ll break open NERDTree in a split and find it manually. There are many, many plugins that do this, but NERDTree is very polished and nice. I like it a lot.

Obvious Mode

I can’t live without this thing. It’s a very simple plugin that changes the color of the bottom status bar to indicate when you’re in Insert Mode. I kept making the mistake of trying to edit the text of a file in Normal Mode (tsk tsk, n00b mistake), so this was extremely useful. I set the color to a bright orange so now I’m never confused. I guess it’s a crutch of sorts, but it improves my productivity, so I like it. This is something I think could be built into Vim, seeing how useful it is (maybe it already is, not sure). Note that you don’t really need this plugin if you customize your colorscheme file, but this is colorscheme-agnostic.


So this is a very popular port of TextMate’s snippet insertion feature and it’s very handy. I was recently writing some code and came across a TON of situations where I wanted to print a status string like this: fprintf(stderr, "Some text message.");. That gets annoying to type after a while, but with this plugin, I can just do fpr<TAB><TAB>Some text message. and I’m done. So very useful. And don’t get me started on for-loops. The very nice feature of this plugin is that tabbing allows you to move between “fields” that hold particular content. The annoying part is that it conflicts with the next plugin, which makes me sad :(. Thus, another great plugin that I fail to use enough.


This is THE most used plugin out of anything I could ever do in Vim. Since moving to this editor, I’d estimate that I actually type only roughly HALF of the characters in a source file. It’s so, so much faster to code-complete partial strings that I never make the mistake of misspelling a variable or function name, and it saves a whole lot of time when you have long function names. It’s also pretty smart. Just a few days ago, I was writing code that used the AES encryption and decryption functions provided in OpenSSL. I just included the appropriate header file and I was able to tab through completions of OpenSSL function names! Awesome and so useful. To be clear, all of this functionality is available natively in Vim, except that this plugin seamlessly switches between different kinds of completions and keeps the details out of your hair. The one disadvantage is that when you have a very large file (a few thousand lines) with a ton of header files that include a lot of standard library headers, kernel headers, etc., it can be pretty slow. Apart from that, I love this thing.


This final plugin is the newest one to my collection. I just got it a few weeks ago and I use it a lot already. Basically, it provides a common feature in GUI IDEs that lists all the function prototypes, global variables, etc. This is nice because it lets you very quickly switch to a different function in the same file. It also gives you the function prototypes at a glance so you can know exactly how to call the function. It’s also pretty smart – you should see it at work in a header file with structs, extern variables, function prototypes, macros, etc. Very cool. And it works in a ton of languages too which makes it very useful.


This is just a very small sample of the kind of power that Vim (and its third-party developers) has to offer. There are a few other very popular plugins which I omitted here, like Pathogen. Surprising as it may be, I haven’t actually used that one. I guess I didn’t realize I had this many plugins that I actively use. Perhaps it’s time to look into that one too. I’m always looking to improve my workflow, so if I add many new plugins, I might write another post about this in the future.


Posted by on June 3, 2011 in Uncategorized



Tasky: A CLI Google Tasks Client in Python

I’ve been looking for a nice and fast Google Tasks client for a long time. There do exist a lot of them but none that I have encountered matched what I was looking for in terms of functionality and ease of use. Given that I have a bit of free time this summer, I decided to write one myself.

This task would have been really hard and clunky a month ago. Luckily, Google introduced the Tasks API about three weeks ago, a hugely requested feature. I hadn’t worked with Google APIs before, so I took it up as a challenge to figure out how it all works and put together my client in a few days’ time. I also decided to go with Python this time, as I’d like to get more practice with the language (which I think is really amazing, by the way). The result of the past three days is a very simple and fast client for Google Tasks that works in the command line.


First, I had to define what I wanted to accomplish with this project. I wanted a way to retrieve tasks from one or more task lists (I currently have one but have used multiple in the past) and a way to quickly add, remove, and check/uncheck tasks. I wanted the whole thing to work in the terminal, as that’s where I spend most of my time working. And I wanted the commands to be as short as possible, so as to minimize the amount of typing necessary. Specifically, I did not want to have to type an entire task’s name in order to perform some action on it. And I wanted the whole thing to take as little time as possible between subsequent operations – I didn’t mind a possibly slow startup and shutdown if I could have fast removes, adds, and toggles while the application was open. Finally, I wanted to have two modes: Normal Mode and Quick Mode. In Normal Mode, the application is opened and kept running, allowing me to make a series of changes before closing it. In Quick Mode, I perform a single operation fully specified as arguments on the command line. The changes are made (with ambiguities resolved when necessary) and the application closes immediately. And I wanted it to do authentication in order to access account information from any computer I choose to run the application on.


I was able to accomplish all of these goals by loading all task lists into memory upon startup and committing all changes to the server when closing the application. The Tasks API is still in Labs, which means it could change from the time of this writing, but as of right now, retrieving everything requires N+1 API calls, where N is the number of task lists in a given account. So that was fairly quick. I used Google’s sample code almost verbatim for OAuth 2.0 authentication (yay security!) and that’s usually the slowest part of the application. It does cache credentials, but still takes a bit of time to establish a secure connection. Once all the data is retrieved, I decided to store it in memory as a dict of dicts of lists of task objects – which are just dicts themselves. That’s a lot of dictionaries, but it made sense since it gives me fast accesses, insertions, and deletions. As part of the way I implemented the code, the dictionary of tasks had to be an OrderedDict; if you look at the code, it’ll make sense why. I then performed changes on the in-memory task lists as the user made them, but those changes were only recorded via status flags and not actually committed. Then, when the application is closed, I send back just the changes, minimizing the total number of API calls.
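The flag-and-commit idea above can be sketched as follows. This is a minimal illustration, not Tasky’s actual source: the TaskStore class, its field names, and the api object are all hypothetical stand-ins. Tasks live in an OrderedDict (insertion order matters when resolving short positional references), edits only flip flags in memory, and commit() replays just the changes.

```python
from collections import OrderedDict

class TaskStore:
    """Hypothetical sketch of an in-memory store with deferred commits."""

    def __init__(self, fetched):
        # fetched: list of (task_id, title) pairs retrieved from the API
        self.tasks = OrderedDict(
            (tid, {"title": title, "status": "needsAction",
                   "modified": False, "deleted": False})
            for tid, title in fetched
        )

    def toggle(self, tid):
        # Check/uncheck a task; only a flag changes until commit time.
        task = self.tasks[tid]
        task["status"] = ("completed" if task["status"] == "needsAction"
                          else "needsAction")
        task["modified"] = True

    def remove(self, tid):
        # Flag only; the real API delete happens at commit time.
        self.tasks[tid]["deleted"] = True

    def commit(self, api):
        # One API call per *changed* task -- unchanged tasks cost nothing.
        for tid, task in self.tasks.items():
            if task["deleted"]:
                api.delete(tid)
            elif task["modified"]:
                api.update(tid, task)
```

The payoff of this design is exactly the one described above: however many edits happen during a session, the number of network round-trips on exit is proportional to the number of changed tasks, not the size of the lists.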


The result is a little application that I’m fairly proud of. It does what I need it to do and I actually do use it. There’s currently no documentation beyond what’s in the source code and the code is still a bit rough around the edges. The link to my repository is here. As mentioned on that page, this application requires Python >=2.7, Google API client for Python, and the Fabulous library (used for having bold and strike-through text in the terminal). I have not included the Client ID, Client Secret, or Developer Key values in this code, but you can get those values by registering this application for yourself via the Google API Console. The code itself is fully open-source and you are free to use or modify it as you wish. It’s still in development, so look out for changes in the future.


Posted by on May 31, 2011 in Uncategorized



The Art of Donald Knuth

The other day I had a few extra minutes before heading off for work, so I decided to take a trip to my Engineering Library to check out a book I wanted to study over the summer. The book was Pattern Recognition and Machine Learning, by Christopher Bishop. It’s preparation for a class I’m taking next semester called Advanced Probabilistic Modeling. Anyway, they didn’t have it. So instead I figured I’d kill some time by browsing the usual sections of the library. I stumbled upon a book called ACM Turing Award Lectures: The First Twenty Years: 1966–1985. It seemed interesting enough to pick up for a quick glance, and I happened to read Knuth’s Turing Award lecture. I really enjoyed it and came across a lot of ideas that I’ve been trying to express on this blog, although not nearly as eloquently as Knuth does. To give a brief summary, the title of the lecture was Computer Programming as an Art, a theme that no doubt very closely resembles Knuth’s thoughts in writing his AoCP series. Anyway, I really should have read this a long time ago, because I found it very inspiring. I’ll just list a few nice quotations from the talk below:

Implicit in these remarks [that computer programming needs to made more into a science as opposed to an artistic, or imprecise, process] is the notion that there is something undesirable about an area of human activity that is classified as an “art”; it has to be a Science before it has any real stature. On the other hand, I have been working for more than 12 years on a series of books called “The Art of Computer Programming.” People frequently ask me why I picked such a title; and in fact some people apparently don’t believe that I really did so, since I’ve seen at least one bibliography reference to some books called “The Act of Computer Programming.”

I thought this was quite interesting. It seems pretty clear to me that Knuth was a revolutionary in his thinking regarding programming, even when the notion of telling a computer a list of instructions to execute was something very new. I guess the process that the world took to master (if you can call it that now) the techniques of programming pretty closely resembles, almost fractal-like, the learning process of a single individual picking it up for himself. That is, it starts by taking a bunch of seemingly random words, characters, and numbers and putting them together in some haphazard fashion until the compiler stops yelling at you. Then the individual learns that there is in fact more rigor to the process than just throwing everything at the wall and seeing what sticks. There is the evolution of the learning process from chaos to some order; a science, if you want to call it that. But that’s very much only the first stage. Beyond the beginning stages, everyone pretty much gets how syntax works and where to put your semicolons and whatnot. It’s the style where the fun really comes in. And you may be able to define how many spaces there are in a hard tab or where to put your braces in some document, but the approach by which one tackles a problem is much more innate. That’s how we have neat little algorithms to do our bidding instead of trying to crunch out everything the brute-force way each and every time. Herein lies the transformation from a rigorous science back into something of an art form. I would argue, therefore, that the art of computer programming is an evolution from the science of computer programming, not its predecessor. Expanding on this a bit further is another quote:

It seems to me that if the authors I studied were writing today, they would agree with the following characterization: Science is knowledge which we understand so well that we can teach it to a computer; and if we don’t fully understand something, it is an art to deal with it.

I kind of like this definition actually. It ties in with my previous post about potential functions and how there is an element of creativity necessary to actually do something of substantial value in this field (as with any!). But therein lies the fun of it. To put it shortly, there’s gotta be a reason we still have humans instead of computers writing computer programs. Related to this idea is the following quotation:

The field of “automatic programming” is one of the major areas of artificial intelligence today. Its proponents would love to be able to give a lecture entitled “Computer Programming as an Artifact” (meaning that programming has become merely a relic of bygone days), because their aim is to create machines that write programs better than we can, given only the problem specification. Personally I don’t think such a goal will ever be completely attained, but I do think that their research is extremely important, because everything we learn about programming helps us improve our own artistry.

I totally loved the above quote. I think it does a great job of highlighting exactly what this whole post, and probably this blog, is about. Reading this speech has provided me with quite a bit more inspiration. There is something quite nice about reading these words.

Finally, in closing, here’s the quote that explains, in part, the inspiration for the title of this post:

One of the first times I was ever asked about the title of my books was in 1966, during the last previous ACM national meeting held in Southern California. This was before any of the books were published, and I recall having lunch with a friend at the convention hotel. He knew how conceited I was, already at that time, so he asked if I was going to call my books “An Introduction to Don Knuth.” I replied that, on the contrary, I was naming them after him. His name: Art Evans.


Posted by on May 25, 2011 in Uncategorized



Spring Semester Update

Today marks the beginning of Spring Break for us and the midway point of the semester as well. This is most definitely the hardest semester I’ve had so far, and although it’s very stressful at times, I’m really enjoying it a lot. I guess in some ways this semester was a way for me to determine how much I actually love computer science at its core – to see how much I enjoy it when fully immersed in it. The results so far are better than what I had expected: even though the great majority of my homework and assignments are about computer science, and even though I am studying or attending class on some CS subject nearly every single day, my interest in it has only increased. I feel very at home when learning about the theoretical aspects of computation and complexity, or knee-deep in the analysis of some algorithm. Even though the workload is only just barely manageable, I’m glad I am taking what I decided to take, and I’d like, to the extent possible, to complete all six courses I am enrolled in this semester. Six courses is a bit on the high side for an engineer here, but I guess it’s more about how interesting you find them. As I’ve said before, it only feels like work if you can actually distinguish it from play. In any case, most of the work that I have fits comfortably in the category of play, so it’s basically an excuse to do something I’d probably be doing – or wishing I could be doing – during that time anyway.

I had a few goals for the year. One was to take at least one graduate class before the end of the year; to study lesser-known data structures and algorithms; to learn at least one more popular language and one more obscure language; and to get an introduction to quantum computation. Of those, I am in the process of accomplishing the first few. Thanks to my class on approximation algorithms, which is both tough and very enjoyable, I am satisfying the first one. In my Theory of Algorithms course, we are covering quite a few interesting data structures as well as algorithms. Although these are not exactly “lesser known” in the greater sense of computer science, they’re also not the ones you usually learn in an introductory course. Specifically, I was very happy to see that we covered Fibonacci heaps, rank-pairing heaps, AVL trees, and RAVL trees. These, especially the first, are among the more advanced data structures used for very large data sets in industry. If I recall correctly, I believe Google Maps uses some implementation of F-heaps in storing data. In general, self-adjusting data structures are extremely interesting, and it’s so neat to get a pretty bound on their performance regardless of how convoluted the implementation and procedures may be. As far as the other goals, I’m making (slow) progress on picking up Python, and we’re using C++ for our computer graphics course. And as far as lesser-known languages go, our compilers course is taught strictly in ML – a functional language. I have yet to fully wrap my head around those exotic things. More thought is necessary on that front.

Although I’m satisfied with the coverage of algorithms for trees and graphs so far, I’m now also interested in learning about probabilistic algorithms. I was turned in this direction after seeing the power of probabilistic rounding techniques applied to LP relaxations in approximation algorithms for very common problems such as vertex cover. Although they may not always provide an optimal bound, they do sometimes make the code and analysis much simpler and offer better worst-case performance. And they are everywhere, too. From the example I just mentioned to a randomized MST algorithm running in expected linear time (!) in the number of edges, randomized and probabilistic algorithms play a huge role. And looking at it from a purer standpoint, it’s also particularly interesting to see how the nature of random numbers can reveal insight into the structure of computation. I’m starting to develop some ideas of my own; perhaps the better word for them would be “questions” – or maybe just situations and combinations of known elements in a way I haven’t seen anywhere. Hopefully I will soon gain the mathematical and analytical machinery to actually see if these ideas make any sense at all.
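To make the rounding idea above concrete, here is a toy sketch of my own (it uses the simple deterministic threshold-rounding cousin of the randomized version, and it assumes the fractional LP solution is already given rather than solving the LP itself): for vertex cover, taking every vertex whose LP value is at least 1/2 yields a cover of cost at most twice the LP optimum, since each edge constraint x_u + x_v >= 1 forces at least one endpoint to 1/2 or more.

```python
def round_vertex_cover(x, edges):
    """Threshold rounding for the vertex-cover LP relaxation.

    x maps each vertex to its fractional LP value (assumed given, with
    x[u] + x[v] >= 1 for every edge).  Rounding up at 1/2 at most doubles
    each selected value, hence the factor-2 guarantee.
    """
    cover = {v for v, xv in x.items() if xv >= 0.5}
    # Feasibility: every edge has some endpoint with value >= 1/2.
    assert all(u in cover or v in cover for u, v in edges)
    return cover
```

For a triangle with all LP values at 1/2 (LP optimum 3/2), the rounding picks all three vertices, which meets the factor-2 bound exactly.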

In other news, my copy of Knuth’s classic Art of Computer Programming Vol. 1–4A finally arrived this week! As you may know, the fourth volume was just released after something like 38 years. I’m very excited to check it out. It’s currently sitting, still shrink-wrapped, on the table beside me. During the next week I’ll see if I can make sense of it. On another note, I am very pleasantly surprised by the quality of CLRS, another excellent algorithms book. The analysis is surprisingly clean and the writing is precise, which makes it a pleasure to read. And they tackle complicated concepts with ease, making it look like it takes hardly any effort, which is impressive. Over winter break, I had started on Michael Sipser’s Introduction to the Theory of Computation, another classic text. It turns out this gave me a very nice edge when we were covering DFAs/NFAs, regular expressions, and context-free grammars in compilers. Although I had had an introduction to these concepts in a previous course, I credit Sipser’s book with teaching them to me in a rigorous sense. Once again, I see the same ease with which Sipser explains these concepts and proofs, and it’s quite impressive. It makes it as painless as possible to cover these proofs with sufficient depth. So I will have to continue on with that text, seeing as it coincides very closely with what I am interested in. And as for the last goal, it is yet unfulfilled. I need a whole lot more knowledge before I go near that subject. Perhaps next semester or next spring. There’s just so much to learn.


Posted by on March 12, 2011 in Uncategorized



Facebook Hacker Cup Qualification Round

So. I have a final later today and I’m too bored of studying so I decided to write up a post on the latest Facebook Hacker Cup competition problems. As you may know, this past weekend, Facebook had the Qualification Round of their Hacker Cup. Basically, it’s a programming contest very similar to that of Google Code Jam that I mentioned a while back and follows a pretty regular format.

In this round, we only had to answer one question out of three correctly to qualify. I did qualify, so I know I got at least one right. Strangely, they only tell you the results after the competition is over, which is both weird and annoying. Oh, and they messed up sending out the emails to the people who qualified. I made an incredibly dumb error in that I accidentally downloaded the input file before I was done writing my code, and so the 6-minute time limit expired before I could submit my result. Fortunately, Facebook lifted this six-minute restriction due to bugs in their system, so my (eventually correct) solution was actually accepted. The problem description and my solution are listed below:

Double Squares

I can’t actually access the original question right now, so I’m paraphrasing here. Effectively, we are given an input file with the first line being N, the number of lines that follow. Each of those lines contains an integer between 0 and 2147395600, inclusive. (Not 100% sure that’s the exact number, but it’s in that range.) The solution should contain one integer per line of input: the number of ways that the number can be written (order doesn’t matter) as a sum of two squares.


There was a very nice little hint in light gray at the top of the problem page that effectively said:

too hard for brute force, switch to dp

Nice. Gladly, I saw that and immediately switched to a smarter algorithm (I wasn’t sure how hard these problems were and I wasn’t going to waste time writing a sophisticated algorithm if it was unnecessary – although my solution is pretty straightforward for this problem).

First, I used a calculator to compute the square root of that number. It is on the order of about 46,000, which is small enough that pre-computing a data structure of that size is not out of the question. Further, since that number fits comfortably within the range of integers that double-precision floats represent exactly, there was nothing special I needed to do. I started off by generating an ArrayList<Double> of the squares of each of the numbers from 0 to 46,341. This actually executed faster than I thought it would, so I continued along this line. (Note: for the membership tests I will describe, a HashSet would be a faster data structure, but since I couldn’t perform a binary search on it, I decided to use an ArrayList. Since this is performed only once per input line, I didn’t stress too much about it.) Next, having binary-searched for the input number in my giant list, I computed and stored the index of the greatest value less than or equal to the number itself.

With this index, I created a second array containing only the squares less than or equal to the number. The logic here is that since all our numbers are non-negative (indeed, the squares of all real numbers are non-negative), each of the addends must be less than or equal to the number (remember, 0 is on the list too). Having generated this sub-array, I traverse it and subtract each of its elements from the input number itself, effectively subtracting one candidate square from the number. What is the reason for this? To check whether the remaining quantity is also a perfect square and, if so, to note that the resulting pair is one that we are seeking. I used a quick and dirty counting scheme: add a half each time this happens (since we hit each commutative pair twice) and a whole count in the case where the two addends are the same square (which is only hit once). Then we simply return the accumulated count.

Very simple algorithm, and it runs pretty fast. We do a constant amount of pre-processing, so the total time for that is \mathcal{O}(1). Now, if our number is K, then binary searching for it runs in \mathcal{O}(1) time (remember, the size of the big array is fixed). Creating the sub-array, traversing it, and subtracting each entry from the original number runs in \mathcal{O}(\sqrt{K}) time, and searching the big list for each resulting remainder runs in \mathcal{O}(\sqrt{K} \cdot 1) = \mathcal{O}(\sqrt{K}) time (remember again, the big array has a fixed size!). Thus, our total running time per query (input number) is also \mathcal{O}(\sqrt{K}). One optimization would be to not search the big list for every number in the sub-array but rather only while the candidate square is at most K/2 (i.e., the first \lfloor \sqrt{K/2} \rfloor indices), counting each match once instead of by halves. In addition, if I had used a HashSet for the big array instead, we would enjoy amortized constant-time membership tests, which gives the same \mathcal{O}(\sqrt{K}) big-Oh complexity but is noticeably faster in practice, since the constant factor from the binary searches disappears.

import java.util.*;

public class Main {
    static final int SIZE = 46342;   // covers every root up to floor(sqrt(2147395600))
    ArrayList<Double> squares;

    public Main() {
        // Pre-compute the square of every possible addend root.
        squares = new ArrayList<Double>();
        for (int i = 0; i < SIZE; i++)
            squares.add(Math.pow(i, 2));
    }

    private void solve() {
        Scanner in = new Scanner(System.in);
        int N = in.nextInt();
        for (int i = 0; i < N; i++)
            output(in.nextDouble());
    }

    // Count the ways to write k as a sum of two squares (order ignored).
    private void output(double k) {
        double num = 0.0;
        for (int j = 0; j < SIZE && squares.get(j) <= k; j++) {
            double r = k - squares.get(j);
            // Each unordered pair (a, b) with a != b is hit twice, so a
            // match counts for a half; equal addends are hit only once.
            if (Collections.binarySearch(squares, r) >= 0)
                num += (r == squares.get(j)) ? 1.0 : 0.5;
        }
        System.out.println((int) num);
    }

    public static void main(String[] args) {
        Main app = new Main();
        app.solve();
    }
}
On my computer, running Ubuntu 32-bit and OpenJDK with Core 2 Duo processors at 2.53GHz, the total time taken to run this program on the input file given was 0.264s.
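For reference, the set-based variant suggested above (constant-time membership tests instead of binary search) might look like this — a quick sketch in Python rather than the contest submission:

```python
LIMIT = 46341  # floor(sqrt(2147395600)): the largest possible addend root
SQUARES = frozenset(i * i for i in range(LIMIT + 1))

def double_squares(k):
    """Count ways to write k as a*a + b*b with 0 <= a <= b."""
    count = 0
    a = 0
    while 2 * a * a <= k:          # a <= b means a*a is at most k/2
        if k - a * a in SQUARES:   # O(1) average membership test
            count += 1
        a += 1
    return count
```

Restricting the loop to a*a <= k/2 and counting each match once also sidesteps the half-counting scheme entirely, since each unordered pair is then seen exactly once.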


Posted by on January 12, 2011 in Uncategorized



Programming Puzzle #1 Solution

This is the solution to Programming Puzzle #1.

So there’s a couple things going on here. They include trigraphs, strange pointer arithmetic, casting, single and multi-line comments, and general compiler abuse. Let’s take this apart step by step:


Trigraphs were used several decades ago, when keyboards didn’t always have all of the characters that we now take for granted, such as curly braces, backslashes, etc. In order to encode these, three-character sequences beginning with “??” were used. (A later amendment to C added digraphs, two-character alternatives that drop the “??” prefix.) The ones I used in this program were ??/ (backslash), ??( (left square bracket), ??) (right square bracket), ??! (pipe), ??< (left curly brace), and ??> (right curly brace).

The first one is the most interesting. In C, the backslash character can be used to split strings that are too long to fit on a line (the only thing to note is that the continuation of the string must begin immediately at the start of the next line, without leading spacing, since any spacing would become part of the string). Since this breaks code indentation, other techniques are generally used, such as building the string up with strcpy, strncpy, or memcpy, or simply placing two string literals right next to each other (adjacent literals are concatenated by the compiler). However, since trigraph (and digraph) substitution is done in the pre-processing stage, this concept of breaking onto new lines works for anything! So, let’s take the original source code given and substitute all of our trigraphs in and fix indentation:

Caveat: GCC4 turns off trigraphs by default as they make reading code much more complicated if used obscurely. That’s the reason behind the “w/ appropriate flags” condition on the problem statement. To turn it back on, compile with -trigraphs.

#define define include
#define include define
#include <stdio.h>
#include <stdlib.h>
main(/*int, char*/)
{
   char* c;c = calloc(10, sizeof(*c))//c = "hello, world" ; #define int char
   ;int x=(sizeof(int)<<sizeof(c))+(sizeof(int)>>sizeof(*c)), i=* c;do c[++i]="0123456789abcdef"[x & 0xf];while(x>>=sizeof(i));
   for(;i;i--)putchar(c[i]|0);
}

That cleans up a lot of things. Now we see that there are actually comments in this code! In fact, I just threw in several clearly visible phrases (like the string “hello, world” and the main() arguments) to trick a casual viewer into thinking that the functionality of the program was different from what it actually is.

Macro Definitions

Interestingly, the first two lines do nothing at all. To see why, note that the pre-processor never macro-expands the name of a directive: on a line beginning with #, the word that follows is read as the directive itself, so #include stays #include no matter how the identifier include has been defined. And since neither define nor include ever appears outside a directive in this program, the two macros are simply never used. So we can just remove those lines. Below is the next revision of the source code, with comments and dead code removed and the indentation fixed further.

#include <stdio.h>
#include <stdlib.h>
main()
{
   char* c;
   c = calloc(10, sizeof(*c));
   int x=(sizeof(int)<<sizeof(c))+(sizeof(int)>>sizeof(*c)), i=* c;
   do
      c[++i]="0123456789abcdef"[x & 0xf];
   while(x>>=sizeof(i));
   for(;x|i;--i)
      putchar(c[i]);
}


Now this is looking much simpler. Let’s see if we can clean this up even more and then analyze the essential core of the program.

Sizeof Operator

This is a very simple part. The reason I had to specify a 32-bit OS is clear here: pointers are 4 bytes on a 32-bit system and 8 bytes on a 64-bit one. Now, I was sloppy in this program in assuming the sizes of int and char to be 4 bytes and 1 byte respectively (char is guaranteed to be 1 byte, but int is not guaranteed to be 4), but I don't feel too bad, since this was just a small puzzle, I specified the compiler and language specification, and the functionality of the program depends on those values in only one place.

Let’s substitute the numeric constants too and see where we arrive:

#include <stdio.h>
#include <stdlib.h>
main()
{
   char* c;
   c = calloc(10, 1);
   int x=(4<<4)+(4>>1), i=* c;
   do
      c[++i]="0123456789abcdef"[x & 0xf];
   while(x>>=4);
   for(;x|i;--i)
      putchar(c[i]);
}


Now we’re at the last step.

Pointer Arithmetic and Bit-Shifting

Now, using a piece of paper or a programmer's calculator, we can trivially see that 4<<4 is 64 (0b100 => 0b1000000) and 4>>1 is 2 (0b100 => 0b10), giving int x = 66. Nice. Next, the while condition x>>=4 replaces x with x right-shifted by 4 bits on every pass; the reason for this will become apparent shortly. Following the logic of the do-while loop, x becomes strictly smaller on each iteration, and the loop exits when its condition is false, which in C means zero, so the loop breaks when x reaches 0. So what's with that last for-loop? Its condition, x|i, is a bitwise OR with x, which is zero by this point; the OR therefore has no effect and we are really just checking whether i is 0.

Now let's deal with the main part of the program: what on earth is c[++i]="0123456789abcdef"[x & 0xf]; doing? This is where understanding the meaning behind the []-operator plays a role. To put it simply, it is often called "syntactic sugar" since it is just a short way to write an otherwise slightly more complicated expression: a[i] is defined as *(a + i) in pointer arithmetic, which is also why a[i] == i[a]. Indexing also works directly on string literals in C. So, realizing that the expression x & 0xf masks off all but the last four bits of x, we now understand that all this line does is set the next character of the c array to the hexadecimal digit of the trailing nibble (four bits) of x. The digits are stored least-significant first, so the final for-loop prints the array backwards to put them in the right order. Thus, all this program does is print the hexadecimal representation of decimal 66, which is conveniently 42. Note that no newline character is printed afterwards.


If you've been paying attention, you'll have noticed that there is no return type listed in the function header of main(), nor is any value actually returned. But wait, isn't main supposed to return 0? Well, the convention is to return 0 on success, while any nonzero number can be returned on error. This is where we have abused our benevolent compiler: in C89, a function declared with no return type implicitly has return type int, and although falling off the end of main() without a return statement technically leaves the exit status undefined in C89 (it was C99 that made this an implicit return 0), GCC compiles it without complaint. So everything is taken care of for us nicely.

This concludes the solution of the first programming puzzle. Maybe I’ll put up more of them in the future.


Posted by on December 11, 2010 in Uncategorized



Programming Puzzle #1

Consider the following program written in pure C89 on a 32-bit OS:

#define define include
#define include define
#include <stdio.h>
#include <stdlib.h>
main(/*int, char*??/
/)??<char* c;c = call??/
oc(10, sizeof(*c))//??/
c = "hello, world" ; #define int char
;int x=(sizeof(int)<<sizeof(c))+(sizeof(int)>>sizeof(*c)), i=??/
* c;do c??(++i??)="0123456789??/
abcdef"??(x & 0xf??);while(x>>=sizeof(i));for(;x??!i;--i)putchar(c??(i??));??>

Two questions (solve without computer):

  1. Does it compile (w/ appropriate flags)? [Tested on GCC 4]
  2. What is the output?

Just a few tricks, but shouldn’t be too hard to solve. I’ll give the answer in the coming days.


Posted by on December 6, 2010 in Uncategorized

