Thursday, November 15, 2012

A new adventure

(This is a long overdue post).

I remember when I used to have homework; then, for a few years in college, I had no work. Last month, after a 4-month vacation, I began ‘grown-up life’. Unlike other giant leaps, mine didn’t start with a small step. It started with a 13,000 km trip to Silicon Valley.

I am very happy to have started a new (and first!) job at Mozilla. After last year’s incredible internship, it feels just like home. I have joined the Platform team and begun working on some upcoming APIs, which I’ll blog about as they shape up. Being able to work with extremely talented people, on a class of problems that very few engineers in the world get to toy with, is very satisfying. It does come with its share of problems, like having to get to grips with a decade-old, sparsely documented code-base, but damn, when that code executes correctly after hundreds of iterations, there are so many snacks to celebrate with :) But seriously, joining Mozilla at a time when Web architecture and APIs are undergoing massive upgrades, and we get ready to launch a mobile operating system, is very exciting. Oh, and employees get to be guinea pigs and test it.

Couple this with managing your whole life for the first time. No longer do parents or the dorm automatically pay electricity bills or ensure your Internet connection is working. House hunting, paying rents, moving, buying furniture, managing finances, cooking, everything has been a first. Smooth for the most part, but far too salty sometimes.

Couple this with the constant technical, cultural and recreational melange that is the Bay Area, and the regret about not having enough hours in the day, and it’s overwhelming. In a month I’ve discovered rock climbing, seen a trapeze act, won second prize in a hackathon and gone to various meetups. Every day when I snuggle into my comforter-sleeping bag combo (you see, I still haven’t bought a mattress), I’m exhausted, but every day is also a very satisfying adventure. I can’t wait to have more of them.

P. S. The libuv book has suffered due to this, but I’ll get back to it soon.

Friday, November 09, 2012

A poor man's Notational Velocity on Linux

I use Notational Velocity on my Mac all the time. It holds all my notes, lists and any other snippet of text. I love the interface and simplicity, and most of all I love the simple use of text files in Dropbox as a store. This way I can access my notes anywhere, without needing NV to be installed. I also love the global key binding feature so that I can quickly raise it with Cmd+Shift+N.

At work (more on this soon) I started using a Thinkpad x230 running Archlinux. But I sorely missed NV. I experimented with nvpy, but it didn't cut it for me at all. The tkinter UI looks bad in a Qt/GTK desktop, notes are saved in JSON by default, and the text file option is a sort of hack which stores the title in the first line, messing up the notes in NV. So rather than write my own version, I got an almost as nice, and definitely more powerful, NV equivalent on Linux.

I am going to assume you use a standard desktop environment like KDE or that your window manager is EWMH compatible. You'll need:

  • To know how to define custom global shortcuts to run a command. For KDE this is System Settings -> Shortcuts and Gestures -> Edit -> New -> Global Shortcut -> Command/URL
  • gvim
  • wmctrl (available in Arch community repo).

Create a new shortcut that launches the following command string:

gvim --remote-silent +':lcd %:p:h | :au FocusLost * :wa' \
'/home/nikhil/Dropbox/Notational Data' && wmctrl -a 'GVIM'

You should edit the path to point to your Dropbox/NV directory. Now whenever you press the global shortcut combination you should see gvim with a list of all files (notes). Press Enter on a file to open it.

We use remote-silent to make sure that gvim uses an existing window if it is already open. The :lcd %:p:h option sets vim's current working directory to the NV directory. This will be useful later. We use the autocommand FocusLost to save the file whenever the gvim window loses focus (simulating NV's autosave feature). Finally wmctrl raises the window to the top by matching the string to the title. If you use gvim on a regular basis (I use terminal vim) and have other windows open, you'll have to tweak this.

So this setup is almost exactly like NV, with one divergence: whereas NV searches the note title and content together, our system treats them as two separate flows. To search note titles/file names, use / when in the main view. As part of my standard vim plugin set I have ctrlp and ack.vim*, which will serve us well here. To always have access to note titles use ctrlp. I map it to sf so that I get a quick fuzzy find. Similarly, to search note contents I map sd to trigger ack.vim; the mappings are sketched below. This is where setting vim's current directory is important: both plugins will use it as the base search directory.
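
For reference, the relevant vimrc lines look something like this (the exact keys and commands are just what I use; treat them as a sketch and adapt to your own setup):

" fuzzy-find note titles (file names) with ctrlp
nnoremap sf :CtrlP<CR>
" search note contents with ack.vim (type the pattern and hit Enter)
nnoremap sd :Ack<Space>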

This NV approximation is fast and works almost as well as the original, although without a slick interface. But nice fonts and a good vim colour scheme come pretty close.


* You'll need ack installed to use ack.vim. ack does not include text files by default. Put --text in ~/.ackrc to do so.

Saturday, August 25, 2012

Automatic Github Pages generation from Sphinx documentation

Sphinx is a very common documentation tool which gobbles up reStructuredText and other free-form markup formats and outputs great HTML, PDF and other formats. It is meant for reference manuals and API documentation due to its good integration with source code (especially Python). The libuv book is written using Sphinx, which is why it looks so good with minimal effort.

Sphinx uses make to generate the HTML, which is great. The only problem is that deploying this to Github Pages requires multiple commands to switch branches to gh-pages, pull in the source text, then cleanup the working copy and switch back to master. This is boring after about one time, so I automated it, and I think other projects can benefit from it as well. Once you follow the instructions, running:

make gh-pages

will take the latest commit, switch to the gh-pages branch, generate HTML, push it to Github, then clean everything up and switch back to master.

NOTE: You need to commit or revert any working copy modifications before running this.

One-time commands

These steps only need to be run the first time you want to generate Github Pages. First, set up the branch to have no parents. Let me stress again the importance of making sure all changes are committed! Otherwise they’ll be lost.

$ cd repo
$ git checkout --orphan gh-pages
$ git rm -rf .
$ echo "First commit" > index.html
$ git add .
$ git commit -m "Just to create the branch."
$ git push origin gh-pages

Now the gh-pages branch is set up. We can start generating the actual pages instead of the current index.html.

Set the source files

Edit the Sphinx Makefile. Add a variable GH_PAGES_SOURCES. This should be a list of the files/directories that contain the documentation sources. This will usually be just source, the directory which contains the Sphinx reST docs, but if you are embedding external code or images, those directories have to be listed as well. In addition, the Makefile itself has to be in the list. For the libuv book it is:

GH_PAGES_SOURCES = source code libuv Makefile

Add the target

Create a target gh-pages with the following commands (remember to use TABs in Makefiles):

gh-pages:
    git checkout gh-pages
    rm -rf build _sources _static
    git checkout master $(GH_PAGES_SOURCES)
    git reset HEAD
    make html
    mv -fv build/html/* ./
    rm -rf $(GH_PAGES_SOURCES) build
    git add -A
    git commit -m "Generated gh-pages for `git log master -1 --pretty=short --abbrev-commit`" && git push origin gh-pages ; git checkout master

Here is how it goes. The checkout simply switches branches. Then we remove all the old data to prevent any rebuilding artifacts. Since the gh-pages branch won’t have any of your original data, only the HTML output, we need to pull the sources from the master branch. Then we generate the HTML. We move the generated files from the build folder to the top level. Then we remove all the sources and the now-empty build folder. We stage all the changes. Finally, the last line commits the gh-pages changes with a message taken from the first line of the latest commit on master, and pushes to Github. The reason the git checkout master command is on the same line, semicolon-separated, is that if the push were to fail for some reason (network error, DAG inconsistency), I still want to be returned to master in my working copy. If you don’t want this to happen, feel free to move it to a new line on its own.

Done!

You can now get back to your main task - writing great documentation. Whenever you have an urge to show it to the world, simply run make gh-pages and your latest documentation is served fresh!

Friday, August 17, 2012

Installing node.js on the Raspberry Pi Archlinux

My Raspberry Pi arrived a few weeks ago and I had some problems with getting node.js to build on it. This post is specifically about building node.js from source on Archlinux.

All activity and commands in this tutorial are run on the Raspberry Pi, either via SSH or physical keyboard. It is possible to cross-compile on your laptop/desktop for ARM, but I preferred not to in this case.

Prerequisites

Have the base-devel group installed so that you have a working gcc and friends, along with openssl and zlib. Also install python2-virtualenv and git-core.
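
On Arch, something along these lines should pull everything in (treat it as a sketch; package names may have shifted since this was written, for example git-core vs. git):

# pacman -S --needed base-devel openssl zlib python2-virtualenv git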

Get the source

I usually stay on node bleeding edge so:

$ git clone git://github.com/joyent/node

Use Python2

Since Archlinux uses python3 by default, the configure and build scripts screw up. You can edit the individual files, but I find it easier to just use virtualenv, which will setup the shell environment to use python2 and its libraries.

$ virtualenv2 env
$ source env/bin/activate

Here env is the name of the directory virtualenv will use to store scripts. You can use some other name. Sourcing the activate script will set up the environment so that python will actually be python2.

Patch the source

Geoff Flarity has created a patch to tweak the build parameters a bit. You can either follow the instructions to apply the patch, or just comment out the 12 vfp3:on... lines yourself, which is what I did. Also set the environment variables to build for ARM as in the README.

export GYP_DEFINES="armv7=0"
export CCFLAGS='-march=armv6'
export CXXFLAGS='-march=armv6'

NOTE: All commands should be run in the same shell that has the virtualenv environment and the above variables set.

Use system zlib and openssl

Attempting to compile the bundled openssl failed on my Archlinux setup with some ARM assembly errors. Using the system-installed libs will also reduce the compilation time. So:

$ ./configure --shared-openssl \
    --shared-openssl-includes=/usr/include/openssl \
    --shared-openssl-libpath=/usr/lib \
    --shared-zlib \
    --shared-zlib-includes=/usr/include \
    --shared-zlib-libpath=/usr/lib

Build

$ make
$ su -c 'make install'

The build will take some time (30 minutes to an hour), so be patient. Once it is done you will have a fully working node runtime on your Raspberry Pi. Use it to power your home automation control server or whatever else. I use it to experiment with my DHT implementation.

Using node.js instead of traditional languages like C/C++ to write low-level system services or networking code is very interesting, because it provides a safer, garbage-collected runtime, efficient I/O and a less verbose language. For those on a tight budget, inexpensive devices like the Raspberry Pi make it possible to experiment with multiple devices or peer-to-peer configurations rather than being bound to localhost-only testing. So get a Pi and have fun.

Friday, July 13, 2012

Europe2012: Helsinki

I landed in Helsinki on the 26th of June from a connecting flight via Frankfurt. As you’ll see in the rest of the journey, food is a very integral part of my life, and I can’t resist describing airplane food either. On Lufthansa’s BOM-FRA flight, they served a decent croissant and an omelette with spinach and chicken for ‘breakfast’. Fruit on planes is always shit, and this was no exception.

This was my first time using Frankfurt as a transit point. They make you wander a lot to get to passport control. For a second I thought passport control wasn’t going to happen at the port of entry. Then the immigration officer gave me another scare by scrutinizing everything, checking all my credit/debit cards and my dad’s permission letter. Yay for having all my documents in order!

Finally being let through, the FRA-HEL flight was again Lufthansa. I love day flights for their take-offs and landings, where you get amazing views of the land. This was no different, with the plane descending as it left the shores of the North Sea near Germany and then approached Helsinki-Vantaa International Airport.


The in-flight meal was a delicious ‘American club sandwich’: bacon and chicken with mayo on rye bread. I’ve had rye bread every time I’ve been to Europe since 2010, and I love that stuff. It has great texture and a sweetness that develops in the mouth, and it complements meat very well.

HEL is located quite some distance from the main city as it serves the greater Helsinki region. The first thing I do when I land in a European city is buy a public transport pass for the number of days I am going to be staying. As a citizen of Mumbai, I love public transport, and passes turn out to be very cheap all over Europe. So just blindly buy one if you plan to explore cities. In this case the machine didn’t have change, so I bought a one-day pass and then bought another one the next day. Passes printed by a machine are on paper. You can buy travel cards or seasonal passes from major terminals, tourist points and, most conveniently, from R-kioski general stores. These green cards are very useful and easy to use. They are available in single-region (Helsinki only), 2-region and 3-region varieties, although the price difference isn’t much. Overall, I ended up paying EUR 36 for 4 days of unlimited travel in 3 regions. If you think that is a lot, consider that the pass allows travel on the entire HRT network: bus, tram, metro and rail. In addition, the Suomenlinna ferry operated by HRT is included. For EUR 36 I roamed about the city, visited a major tourist attraction and went all the way to Nuuksio national park in Espoo! But I’m getting ahead of myself. Back to the airport. The regional bus 615 leaves every 15 minutes and will take you to Helsinki Central Station, from where transport lines go all over the city.

Bicycles are common in Helsinki and bike lanes exist but aren’t sharply delineated, so make sure you don’t treat a bike path as the pavement. While riding bikes on the pavement is frowned upon in San Francisco, it is fairly common in Europe to ride on the pavement when a bike path does not exist. It also isn’t so much of a hindrance with the super-wide pavements that are so common in Europe.

I stayed in Hostel Erottajanpuisto, which is on Uudenmankatu (katu is Finnish for street). Take the 3B tram from Helsinki Central Station, get down at Frederikinkatu or Iso Roobertinkatu and walk to the hostel. It is a nice, clean place. There are 4 toilets and 4 showers on the hostel floor. My room was a 6-bed dorm. Everything was clean and the staff were very helpful. Prices were EUR 27 + 30 + 30 due to the European Athletics Championships, which began on June 27th.


That evening I was pretty tired and just wandered the city. First I went to Verkkokauppa, which is one of Europe’s largest electronics stores. I didn’t find much there that I could afford :P, so I just went upstairs to the viewing gallery on the 7th storey, which gives great views of Helsinki, especially the West harbour. It also has a real MiG-21 plane on display.


From there I came back to the city centre and wandered down Esplanadi, which is the city’s main promenade. Halfway down Esplanadi is a cafe, and the area across from it has a stage for performances. A seniors’ band was performing that day, though I only stopped for one song. At the end of Esplanadi is Kauppatori, an open-air market which would be the source of culinary delights. Sadly I arrived late (it closes at 18:00) and missed it the first day. Slightly disappointed, I wandered around Keskuskatu, which is the central square. The area around Helsinki Central Station is full of high-end shops and department stores and is fun to walk about. I ended up eating at some place called Chilly’s (salmon and fries with salad) near the station (I can’t find it on Google Maps now) when I got really hungry and couldn’t find something nice. Tired from the flying and walking, I came back to the room and crashed. Well, after the daily routine of backing up the photos.


On day 2 I was all eager: it was time to go to Suomenlinna! But before that I needed breakfast and a 3-day pass. R-kioski sells both. I had a nice chicken and bacon sandwich and mint-flavoured coffee sitting on a bench at South Harbour.


It isn’t every day that you get to have such a beautiful view in the morning. Now, the day pass (you can get multi-day ones too). The HRT has a green colour card for tickets. The card is coded with information, and grey readers on all modes of transport do their job. Each reader is divided into quadrants, marked 0/L, 1, 2 and 3. The numbers represent how many regions you want a ticket to be valid in. If you are a resident with a balance on the card, you have to do some button pressing for zone selection, but with a day pass we don’t need to bother with that. If you bought a 3-day, 3-region pass, then you can go anywhere for 3 days. All you have to do is hold the card to the reader when you get in. If everything is fine, the green light switches on, accompanied by a beep. If things are not right, well, I guess you’ll have to find out. It’s very simple, so don’t get confused by the numbers. But make sure you aren’t travelling in a zone where the ticket is invalid. If you just want to see Helsinki, a 1-region pass is enough; it covers Suomenlinna too. But it does not cover the airport, so you should buy a separate ticket for that. I bought a 3-region pass as I would be visiting Nuuksio.

Suomenlinna, literally the ‘Castle of Finland’, is an island about 2 km to the south of Helsinki, part of the large Finnish archipelago. The Suomenlinna ferry leaves from South Harbour every 20 minutes. On the way there are several smaller islands on which lucky people have houses.


You can see the attractive but short skyline of Helsinki as you leave the mainland and be amazed by Scandinavian cleanliness and efficiency. On Suomenlinna there are a few cafes, a few points of interest and the fort at the end. On the way to the visitor centre in the middle of the island you’ll already pass the church and its gardens. The blue path on the map (get one from the visitor centre, or follow the blue signboards) is the ‘suggested path’, but I wandered off in two directions not suggested. If you take a right as soon as you cross the bridge near the visitor centre, you’ll reach a building in which they are building a wooden gunboat using traditional tools and techniques. It’s not open yet, but you can go in through the door; just don’t creep out the workers by taking too many pictures. Continue on and you’ll reach the marina, which has a very nice collection of small boats, with the church in the background. On a sunny day it is a very nice sight. Which is why the Finns have put a cafe there as well, because in the summer you can enjoy that sight all day. Return to the blue path, but stick to the right after leaving the tomb, take the route between points 6 and 22 on the map (the little gate in the wide building) and go towards 4. Here you’ll see the cannon emplacements, ducks swimming in the pools and a small (about the size of a big truck) artificial beach tucked away. I was lucky to also spot a lone paraglider riding the nice updrafts generated by the sea breeze.


By this time you’ll have walked a lot and must be pretty hungry. But with a budget of EUR 20 for the day, we aren’t going to be dining at any of these sit-down cafes. So I just went to the restroom (probably shouldn’t be going into this much detail), then sat for a while at the King’s Gate.


There is some very nice fort design at work near the Gate to ambush ships. On the way back you can explore the little park, or not. All the good planning of Suomenlinna was due to its architect, Augustin Ehrensvärd, who cared about his soldiers. You’ll learn this if you watch the short movie screened in the visitor centre (in English at 12:30). That and the museum together will set you back EUR 6.50. There is also Wi-Fi in the visitor centre and some nice souvenirs, though I didn’t buy any. This is because most souvenirs are just Made in China these days.

So… keeping the ideology aside, I’ve managed to give a rundown of the place in one paragraph! What do I do now!? Well, the bid to save money did not stretch to passing up lemon-liquorice ice cream. So I had one at the shop near the church and got a huge scoop for EUR 2.80. Now I could hold on till I made it back to the mainland.


A note about liquorice. Liquorice is like, well, something. You either love it or hate it. I first had it in 2001 and I’ve loved it ever since. In 2010 I was unfortunate enough to try salmiakki; since then I stick to the less salty variants :) The Finns (who call it lakritsi) and the Dutch (who call it drop) love liquorice and you can find it in every decent general store. Buy some, try some. My favourite is the plain black unflavoured, slightly sweetened cylinders. I got mine in some yellow packet (400g for EUR 3 or so), but there are a lot of brands and flavours.

While you were thinking of liquorice, I finished my ice cream and the ferry made it back to South Harbour. Behold the wonderland that is Kauppatori.


Kauppatori is an open market similar to those in most European cities, with local food stalls. In Finland that means salmon, vendace, calamari, reindeer sausages and moose sausages. Kauppatori was to become my lunch spot all three days. On the first day I had salmon with blue cheese, with a side of vendace, calamari and veggies (EUR 15).


I thought that was expensive, but it was so huge that I kept half of it and had it for dinner. I also got half a litre of cherries for EUR 3. They were the best cherries I’ve ever eaten, much bigger than the cherries in India. They were also extremely sweet and I had great fun munching on them while walking the streets.

After lunch I went to Linnanmaki amusement park. Going to an amusement park is not something you’d usually do in a foreign country, except perhaps Disneyland. But India has no decent amusement parks, and this one looked good. To get there, take the 3B tram to the Alppila stop and walk about 500m in the direction the tram will go. A day pass is EUR 37 and gets you on all the rides as many times as you want. I just went on the 6-7 most extreme rides. Rain prevented me from going on the best one - Ukko - so I paid EUR 5 to extend my pass to the next day. Completely bushed by now, it was time to take the 3B back to my room.

On day 3 it was time for something completely off the tourist map. On the outskirts of Espoo is Nuuksio National Park. It has some nice trails through the wilderness of conifer forests and Finnish lakes. With a 3-region pass, you can go all the way to Nuuksio for ‘free’. Take the S, U, L or E trains to Espoo, then take bus number 85. Get down at Nuuksionpaa or Kattila (the last stop). Once in the countryside the stops don’t have their names written on them, so it’s best to just get down at Kattila. From there the trails are marked. There are two, red and blue. Red is about 4km and goes to the main park entrance at Haukkalammentie. The blue one is 8km, and I don’t know where it ends up because I didn’t take it. I had fun traipsing through the park for about 2 hours.


Finland is a pretty flat country, so there aren’t any hikes, just walking. There are silver birch, spruce, alder, aspen and lime trees and two lakes on the trail. Three-fourths of the way through, you’ll come to a fork, one path going left and the other right over a wooden bridge. The signs here have fallen into the lake and the trail is unmarked for a while, so take a left to stay on the red trail. At Haukkalamppi there is an info cabin with details about the indigenous wildlife and vegetation, if you are so inclined. There is also drinking water and, surprisingly, a gift shop. Walk 2 km along the paved road to reach Nuuksionpaa, from where you can catch the 85 back.

From Helsinki Central Station it was back to Kauppatori. This time I tried the sausage mix (EUR 8).


I particularly loved the reindeer, which is dense and similar to beef. This was topped off by fresh blueberries (EUR 5). From there I went to Linnanmaki to ride the best roller coaster and try the other rides again. The Ukko takes you up five storeys, sends you straight down vertically, all the way up the opposite face, back down in reverse and up five storeys again (in reverse!), then down again and up again, where the hydraulics kick in and slowly lower you to ground level. It is super, and there are correspondingly large lines, but it is well worth the wait and the rain.

The final day was a bit more subdued. I had seen everything I wanted to see (or so I thought), so I just wandered around Kauppatori, Kauppahalle (the gourmet product hall south of Kauppatori), Stockmann and co. I picked up some gooseberry jam, some Fazer chocolate and a pack of plain liquorice. I accidentally wandered onto Sofiakatu and Helsinki’s main square.


Just before that is the museum, which had a Finnish film exhibition. It was free, so I saw some of it and then took some photos of the main square. It was surprising that I had forgotten about this; of course, there is nothing very interesting, they are just nice buildings. Then I went into the Asematunneli complex, but there wasn’t much there and I left soon. After this I was really tired but had a while to go before my ferry, so I just spent some time sitting on a bench at Esplanadi nibbling on chocolate, had a reindeer meatball lunch and then went back to the hostel, took my bags and walked to Makasiiniterminaali, where the Linda Line runs to Tallinn in 1.5 hours. Ferries run regularly, with fares in the range of EUR 25-45 depending on season, day and time. My ferry on a Friday at 14:00 was EUR 44. Book online, since on-the-spot booking has a EUR 5 surcharge.


Helsinki had been amazing, it was the first stop on my tour and Scandinavia was well worth it! Next stop Tallinn, Estonia to attend two days of Akademy 2012…

Wednesday, April 18, 2012

Lessons in 4 years

(This article was written as a piece of advice in the final edition of Entelechy for the year, now that I will be graduating)

Never give advice — a wise man won’t need it, a fool won’t heed it.

So rather, think of what follows as a crystallization of my opinions, which may serve you well. The last four years of my life have been a period of discovery and extreme change. In that time I’ve either realised, or read and agreed with, certain qualities that will prove indispensable.

The first is self-control. In college, nothing else matters! If you want to get things done, if you want to stay on the right path (subjectively, of course), then nothing will matter more than self-control. Not your intelligence, your wealth, your environment or your friends. Only your will-power. You and I are living in a world where addiction is easier than ever to fall prey to. Information bombards us from everywhere, a plethora of online services crave our attention, and in the physical world, chatter, video, liquids and powders demand it too. These distractions will stop you from giving your time to the things that matter the most to you. Those who can stick to their course will fare better.

If you are the sort of person who compulsively says yes to everything, learn to say NO. If you are a doer, people will ask for your help for the most trivial of things, or sometimes for the most important things. Decide if you have the time to devote to it; don’t immediately leave what you are already doing. If it is not worth it, learn to say a firm, direct NO. Having your attention spread over forty different things means that all of them will remain incomplete.

The second thing that will prove invaluable is self-learning. You are in an information technology college, and being autodidacts is more relevant to us than to anyone else. Information technology moves far too fast, much faster than academia, than law, than social conventions or the speed of light. An inability to teach yourself new things will quickly put you out of the competition.

Being able to learn things on your own also opens up a universe of experiences, each of which can be used to replace the boredom that leads to addictions. Self-learning is not just watching video tutorials off the Internet; it means being able to evaluate yourself, set new challenges and goals, and keep yourself on the path to achieving them.

Learning on your own will also help guide you towards your passion(s), because anything on which you are willing to spend time without anyone else telling you to is clearly something you love doing. Choose a job you love, and you will never have to work a day in your life. 1

Stay fit. You come to college and forget all about play. You spend evenings either poring over books, playing computer games or wasting time lounging about. In these actions you are setting the pattern for how you will live your life. You might think, ‘I’ll just eat less’ or ‘I don’t mind getting a bit fat’, but few things are as joyful as sweating it out on lush green grass, day after day. Revel not only in the improvement of your mind, but in the miracle of your body. It may not get you a job, but it will ensure that when you walk into the office, heads turn. Finally, like everything else, don’t overdo it. Learn to listen to your body and your body will listen to you.

Relentlessly pursue rising above the mediocrity that is encouraged by our social system (‘our’ here applies all over the world, I’m not dissing India). Are you scared of what you have to do to achieve your dream? Start small, a little step here, another one there. Find the right level of challenge, but always aim higher than you know you can go. Have a bucket list, and an ideas list. Both lists will start to fill up like mad and you will never be able to tick all of them off. You are far too small, and the world too big, to be able to see it all, but just by breaking out of your comfort zone, you’ll have seen more of it than most people can imagine.

Finally, go and Create!

If your daily life seems poor, do not blame it; blame yourself, tell yourself that you are not poet enough to call forth its riches; for to the creator there is no poverty and no poor indifferent place

                                -- Rainer Maria Rilke

There is nothing more anti-human than the lazy consumerism that is prevalent today. I do not mean you should write a book, or create software, or write a song. The smallest creations make a big difference to your happiness. True happiness can be found in a football move created on the ground, or in the smile created on another face by your actions. Let your creations, and not your tastes, be what define you. Tastes only narrow and exclude people. So create. 2

I’ll leave you with 50 more things.


  1. Confucius said that

  2. Paraphrased from why the lucky stiff

Monday, April 16, 2012

Demystifying JSCrush

Some of you may have seen Philip Buchanan’s award-winning Autumn Evening entry for JS1K 2012 Love. While skimming the source I saw that he had used Aivo Paas’s JSCrush to compress the code. The JSCrush website is intriguing: the source code is minified and then passed through JSCrush itself (so it can be submitted to JS1K). The JSCrush version used on the page, in <script> tags, is the JSCrushed version. As a weekend project I tried to ‘reverse engineer’ JSCrush and understand it. It took me about 4 hours. What follows is a walk-through of the process.
JSCrush is a very interesting JavaScript program, liberally abusing eval(), global variables and insane levels of nesting to achieve a sort of compression.
Remember that your browser’s web development tools are indispensable for activities like this. I made extensive use of Firebug.

The de-obfuscation

I chose to start with the compressed version in the <script> tag rather than the plain text in the upper text field.
The syntax is clearly wrong for JavaScript, and it is all stuck in a string assigned to _. The part at the end is interesting though (properly formatted).
For every character in $, it splits _ on that character, using with to make the resulting array the scope, then joins the pieces using the last piece and reassigns to _. For example:
_ = "HelloRWorldRCrushedRAB"
$='R'

var temp = _.split($) // => temp = ['Hello', 'World', 'Crushed', 'AB']
var last = temp.pop() // => last = 'AB',
                      //    temp = ['Hello', 'World', 'Crushed']
_ = temp.join(last)   // => _ = 'HelloABWorldABCrushed'
Remember this step; it is key to how JSCrush works. These steps are repeated for every character in $, after which the ‘decompressed’ output is the minified source code.
You can see this for yourself by putting a console.log(_) just before the eval(_).
Now we have a fair idea of how JSCrush does decompression. Compressed scripts are stored in _, decompressed using the loop and then executed using eval().
The next thing I did was to un-minify the source (manually).
One change I’ve made is to the call to setTimeout(): I’ve converted it to a function to make it easier to read, and directly used the script tag’s innerHTML, since I had the decompressed source in the tag. The JSCrush code generates the textareas and button as part of its run and assumes body.children[9] to be the <script> tag with the JSCrush-compressed source. Hence it replaces the eval call with the program source itself, so that the inner eval() call in setTimeout() extracts the decompressed source and puts it in as the value of the first textarea. It then calls L(), the JSCrush crushing function, to compress the original code back, so that you get the compressed version of JSCrush in the lower text field. Mind-boggling.
The setTimeout() without a time simply causes the code to be executed after the script has finished evaluating completely.

Understanding

Now that we’ve decompressed the code, it still has the scars of minification – single-letter variable names and no comments. Time to start reading the code. Line 1 just sets up the HTML for the user. Lines 2-4 are the first interesting piece. The array Q is being populated with all the ASCII characters, in reverse order! The characters \n, \r, \\, ' and " are excluded, as are \0 and DEL, so that Q has 121 characters. Rather than using a readable if statement, Aivo uses the fact that && is ‘short-circuiting’ in JavaScript. Much space saving here.
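The effect is roughly equivalent to this (an illustrative reconstruction, not the actual minified source):
Q = []
for (i = 127; --i; ) {
  c = String.fromCharCode(i)
  // keep every ASCII character except the five that would break the string literal
  c != '\n' && c != '\r' && c != '\\' && c != "'" && c != '"' && Q.push(c)
}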
Next we come to the definition of L. Line 12 just removes blank lines, whitespace and single-line comments (except those following code). It also escapes backslashes so that the code is ready to be put into a string later. This is assigned to the letters i and s. Be warned: from here on, in the pursuit of smaller size, variables frequently change their meaning to promote reuse. s is always going to point to the code, but i is used as a counter all over the place.
Next, B is half the length of the program and m is the empty string. Line 15 is where it starts getting interesting. The pattern:
encodeURI(string).replace(/%../g, 'i')
occurs thrice in the code. Its task is to get the byte length of the string rather than the number of characters that string.length gives. In ASCII there is no difference, but Unicode characters may occupy 2-4 bytes. encodeURI will replace each such byte with a ‘%xx’ code, with xx being the hexadecimal byte value. Replacing each of these with the single letter ‘i’ gets us one ‘i’ for every byte, so that the length of the resulting string is the byte length. This was one of the many clever tricks present in the JSCrush code. They might be well known, but this is the first time I came across it.
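A quick illustration of the trick (the string here is just an example):
s = 'héllo'                                  // 'é' takes 2 bytes in UTF-8
s.length                                     // 5 characters
encodeURI(s)                                 // 'h%C3%A9llo'
encodeURI(s).replace(/%../g, 'i').length     // 6, the byte length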
The initialization in the for loop is only there to save a byte; it does not affect the loop itself in any way. Similarly, the m = c + m call can be moved to the end of the loop body. This construct generates the decompression sequence contained in $. This for loop is actually an infinite while loop.
Line 43 is again a trade-off of readability for size. Here it is in a cleaner form:
c = 0
i = 121
while (!c && i) {
 if (! (~s.indexOf(Q[i])))
  c = Q[i]
 --i;
}
~ is bitwise NOT. If Q[i] is not found in the source, then indexOf will return -1, ~(-1) is 0 and !0 === true, so this code is actually saying:
For every character Qi in Q in reverse:
    If the source does NOT have the character in it:
        c = Qi
Or, c is set to an ASCII character that is not present in the program. Initially it will be ASCII 1, then perhaps 2 and so on. This ‘c’ is now the character that will be used to join the pieces obtained in Lines 20-32. This is one round. When all the characters have been used up, compression stops (Line 18).
Lines 12-32 basically try to find long, repetitive strings that can be replaced with a single character, to get the best compression. JSCrush follows a brute-force approach to find these segments. With single-letter variable names the code is a mess, so here is a cleaned-up version which makes things much clearer:
Lines 9-28 try to find segments which repeat at least twice in the code. Longer segments will give better compression, so we try all of them. For segments of length 1 we try every character in the string, for segments of length 2 we try every pair, and so on. If a segment repeats, we keep track of its count.
The segmentLengthBound = longestSegmentLength (B=Z) bit is interesting, and it took me some thinking to figure it out. It relies on the following facts:
  • The longest segment in the current source is longestSegmentLength.
  • Splitting by something, and then joining by a character not in the source will not lead to creation of longer segments.
So we can restrict segmentLength of the next round to segmentLengthBound.
Lines 32-41 choose the best segment to substitute in this round. The expression (R=o[i])*j-R-j-1 may seem cryptic, until you look a little later in the code where the split and join is done, and you remember how JSCrush works. R * j is the number of bytes we remove by replacing this segment (R occurrences of a j-byte segment). In exchange we pay one join character for every occurrence, plus one more join character and the segment itself (j bytes) tacked on at the end, hence the - R - j - 1. The conditional asks whether this leads to actual, and better, compression than what we already know of. If no such segment was found, we are done compressing. Otherwise we split by the segment, join the pieces with the join character and tack the segment onto the end. One round done!
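Putting a round together, the core operation is roughly this (a simplified sketch; bestSegment and joinChar stand for the values computed as described above, and m is the decompression sequence):
// replace every occurrence of bestSegment with joinChar, and append the
// segment itself so the decompressor can pop it off and undo the substitution
source = source.split(bestSegment).join(joinChar) + joinChar + bestSegment
m = joinChar + m   // record the join character for this round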
Once multiple rounds have been done, the script is compressed, and only some trivial things remain. The value of B is now changed to store the quote character (double or single), based on which appears fewer times. Since the compressed program is stored in a string, using the quote that appears fewer times means fewer ‘\’ escapes, each of which costs a byte. We then prepare the boilerplate, setting _ to the now-compressed source, setting $ to the decompression sequence m and adding the evaluation code. The savings accomplished are announced too.
One trick I picked up in the code is forcing a certain display precision for numbers.
i/S * 100
would give a float percentage with many digits after the decimal point. Instead, multiplying by 1000 gets the digits we care about into the integral positions, bitwise OR-ing with 0 truncates to an integer, losing the remaining fractional digits, and dividing by 100 then gives us the two digits we want.
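As a standalone illustration of the pattern (the exact factors in JSCrush differ):
x = 45.678923
(x * 100 | 0) / 100   // 45.67: two digits kept after the decimal point, no toFixed() needed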

Summary

JSCrush works by:
  1. Finding the first unused ASCII character to act as the join character
  2. Finding the substring of the program text that gives the best space savings if its repetitions are all replaced by the character from step 1.
  3. Splitting the source on the segment from step 2 and joining the pieces using the character from step 1, tacking the segment onto the end. This string replaces the original source.
  4. Repeating steps 1, 2 and 3 until no more savings are possible or we’re all out of ASCII characters.
  5. Wrapping the compressed source into a string, then using the list of join characters to unroll the string.
  6. Unrolling is performed by splitting on each join character used in step 3, extracting the original repeated substring (from step 2) from the split and joining the parts.
I hope this (long) post was interesting and educational. If you have feedback, a comment would be great.

Friday, April 06, 2012

Playing with Go

With Go 1 released, I’ve been playing with the language once again. As a long-weekend hack, I created a clone of the literate-programming tool Docco in Go – quite naturally called Gocco.

This blog post chronicles my feelings about the language, and some rough spots I got stuck in.

I started out with a direct translation of the original CoffeeScript to Go. Go is remarkably unlike C and more like high-level languages. Much of the translation was mechanical, and the result is not very different from the CoffeeScript source, except for the pervasive use of []byte rather than string.

What I loved about Go was the ability to directly import and install packages from various hosting services like GitHub, Google Code and Bitbucket. This, combined with the go command, makes package management a breeze. What I would really like, though, is for Go to switch to local-to-global search path semantics by default, like npm. You can do it using GOPATH, but making it the default would be better. I am also not sure how you can specify which version of your dependencies to install.

The inbuilt templating support is also a great tool; I didn’t have to hunt around for a templating library, and the syntax is very clean and similar to Jinja2 or Mustache. There seems to be some bug in the ParseFiles method though, as it kept complaining that the template was incomplete or empty, but when I manually read the same file into a string and called Parse, it worked.
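
The workaround looked roughly like this (a minimal sketch; the file name and template data are made up for illustration):

package main

import (
    "io/ioutil"
    "os"
    "text/template"
)

func main() {
    // ParseFiles kept reporting an incomplete/empty template,
    // so read the file manually...
    raw, err := ioutil.ReadFile("docs.tmpl")
    if err != nil {
        panic(err)
    }
    // ...and hand the contents straight to Parse.
    tmpl, err := template.New("docs").Parse(string(raw))
    if err != nil {
        panic(err)
    }
    tmpl.Execute(os.Stdout, map[string]string{"Title": "Gocco"})
}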

The tooling support is excellent, with gofmt and the go tool ensuring everything is ‘standardised’, which makes other people’s code much easier to read.

Coming to the slightly wonky/undefined/undocumented/(may actually be my fault) parts…

I still haven’t really figured out Go’s package system. The go/build package says it allows introspection over packages, like figuring out the installation path (similar to Python’s __file__). But I couldn’t get this to work with my code, especially when it was not installed. I wanted it so I could keep the HTML template and CSS in separate files, and use the package path to figure out where they were kept. I finally got very frustrated and just stuck them into a Go file as multi-line strings.

Syntactically, multiple return values are a good way to signal errors without exception handling, but there really should be a way to ignore the second, error, value. Not being able to do that makes composing functions impossible. Similarly, the unused-variable and unused-import errors can get annoying while developing, when there is lots of stub code. There is a workaround, but still! :)

Still, it was a lot of fun using Go. I originally wanted to implement Docco in C, but the lacking standard library and type support scared me off. Now I will reach for Go when it comes to systems programming.

A thanks to Russ Ross, for an excellent Markdown implementation.

Monday, April 02, 2012

Great Indie-an Acts

(This was originally published in Entelechy Edition 34, March 2012)

With Synapse a few weeks old, the constant Raghu Dixit songs are starting to fade away from the hostel corridors. Except when we hear bands live, or are aware of an upcoming concert, Indian artists aren’t given much ear time. Bollywood dominates too much. But a growing independent music scene is flourishing in India, mainly due to rising economic levels and more people willing to pursue their dreams. Here are some great Indian artists that I’ve recently come to like. Vishal Shah and Indian Music Revolution were instrumental in introducing me to some of them.

Advaita

I first heard of Advaita and Swarathma in ”Hindi Hein Hum”. Advaita term themselves a ‘Rock/Eclectic/Organic Fusion’ band from Delhi and are similar to The Raghu Dixit Project. Finding their music is extremely hard, both legally and illegally, because they haven’t launched their music on Flyte, and everywhere else is too expensive. I’ve been resorting to streaming from Spotify, but finally caved in and ordered one of their CDs from Flipkart (low-tech maybe, but also the cheapest). Not much is known about them, but they are an octet (geeky?) who had their breakthrough when John Leckie selected them (and Swarathma) among the four Indian bands in the India SoundPad project.

Their debut album is Grounded in Space, released in 2009. Ghir Ghir is an amazing number on this album. Advaita make very good use of Indian classical vocals and raagas, while blending in western instruments and song structures alongside the tabla and sarangi. In fact, Ghir Ghir has parts which wouldn’t sound out of place coming from Dream Theater.

Perhaps their best song is Mo Funk from their latest album The Silent Sea (2012). Vocals (by Ujwal Nagar) and tones that would make my (stuck-up) school music teacher very happy, layered on a background score that is like trance or house music, make this 6-minute piece a must-listen. Four minutes into the song, western vocalist and acoustic guitarist Chayan Adhikari takes over and culminates the ‘Funk’ part of the song.

Other songs to start with are Tremor (SS) and Drops of Earth (GiS).

Swarathma

I think I’ve already introduced Swarathma in the previous section :) Swarathma is a 6-member ensemble from Bangalore (out of which 2 crack PJs). They’ve featured on The Dewarists Episode 7 with Shubha Mudgal.

One of their SoundPad project compositions – Yeshu, Allah aur Krishna – goes “Sant Kabir aye dharti pe, …, jo socha tha woh reh gaya sapna …” and seems to reflect the band’s philosophy: techies, B-school grads and other achievers pursuing their passion and making their dreams come true. Their only album is self-titled and was released in 2009.

Leader Jana Kahan Hai Mujhe is about “choices we face at each step in our lives, and we are frequently at a loss when called upon to make a decision”. In contrast to Advaita, Swarathma has restrained instruments, a slow, low chord guiding the vocals, and a splash of tabla here and there.

My other favourite is Ee Bhoomi. Kannada must be a happy language indeed, to produce gems like this one and Raghu Dixit’s Lokada Kalaji. This upbeat song describes the transformation of the Earth to Paradise (… bhoomi swarga …) as you let Swarathma wash over you.

Peter Cat Recording Co.

Last but not least, PCRC is the band I’ve actually been listening to the longest. They remind me of the 60s and 70s, when Rock was being born and could be happy and everyday, not having to force melancholy or abstract ideas to be appealing. I was a fan back when they had no released album, just tracks on SoundCloud, and these guys doubled that devotion by distributing both their albums (Sinema and Wall of Want) free for download. The New Delhi quartet describes their music as “Gypsy Jazz to Ballroom Waltz to Midnight Moonlight car chase music”.

Sinema is their older and better album. All the songs have a grainy distortion throughout, as if playing from vinyl, although whether this adds to their charm is a matter of personal opinion. To quote Helter Skelter:

That it screams ‘SEX SEX SEX’ right under the ‘Free Download’ link for their album Sinema on the Peter Cat Recording Co. web site should be argument enough for you to go ahead and download it.

The Clown on the 22nd Floor sets the mood for the album, with old Hindi movie clips tacked on to the end. The album crafts a series of love affairs that end badly. Suryakant Sawhney’s rounded drawl adds warmth to the songs, while clever lyrics mingle girls and philosophy. Coming from a band obsessed with humour and subtlety, the closer Tokyo Vijaya’s lyrics just go ‘AAAaaaaaaaaaaaaaaaaaaaaaaaaaaaa’ against a dense instrumentation of drums and dragged-out guitars, ending the story inconclusively. PCRC is a band that aims big and delivers funny.

Sunday, April 01, 2012

Apache Cassandra: Iterate over all columns in a row

Recently I have been using Cassandra for one of my projects, and one of my needs is to iterate over all columns of a row. Each column represents an individual piece of data, its type identified by the row id, and the set of columns keeps changing, so I can’t simply use a set of known column names. Using the setRange call on a SliceQuery and setting a large count is also not an option, since Cassandra will try to load the entire set of columns into memory. Instead I’ve written an iterator which takes a query on which the row key and column family have been set, and loads columns as they are requested. By default it loads 100 columns at a time. You could make it take the count as a parameter and all, but this works for me for now.

The one ‘problem’ with this is the removal of the last column of each batch, to ensure that there are no duplicates while still having a start point for the next query. This is because each column is independent, so you cannot ask a column who its next neighbour is and start the next query from there. If anybody has a tip to make it more elegant, I’d love to hear it.
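
The original snippet isn’t embedded here, but the shape of the iterator is roughly the following (a sketch assuming the Hector client; the class names come from Hector, the rest is illustrative):

import java.util.Iterator;
import java.util.List;

import me.prettyprint.hector.api.beans.ColumnSlice;
import me.prettyprint.hector.api.beans.HColumn;
import me.prettyprint.hector.api.query.SliceQuery;

public class ColumnIterator<N, V> implements Iterator<HColumn<N, V>> {
    private static final int PAGE_SIZE = 100;

    private final SliceQuery<?, N, V> query; // row key and column family already set
    private List<HColumn<N, V>> page;
    private int pos;
    private N start;         // column name to start the next slice from
    private boolean lastPage;

    public ColumnIterator(SliceQuery<?, N, V> query) {
        this.query = query;
        fetchPage();
    }

    private void fetchPage() {
        query.setRange(start, null, false, PAGE_SIZE);
        ColumnSlice<N, V> slice = query.execute().get();
        page = slice.getColumns();
        pos = 0;
        if (page.size() == PAGE_SIZE) {
            // Remember the last column as the start of the next query and drop
            // it from this page so it is not returned twice.
            start = page.get(page.size() - 1).getName();
            page = page.subList(0, page.size() - 1);
        } else {
            lastPage = true; // fewer columns than requested: nothing more to fetch
        }
    }

    public boolean hasNext() {
        if (pos < page.size()) return true;
        if (lastPage) return false;
        fetchPage();
        return pos < page.size();
    }

    public HColumn<N, V> next() {
        return page.get(pos++);
    }

    public void remove() {
        throw new UnsupportedOperationException();
    }
}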

Tuesday, March 20, 2012

I'm scared for the Internet

(This article was written for Entelechy Edition 33 (February 2012). The news is slightly old, but posted here so I have a public permalink to the article)

When the Internet began over 30-odd years ago, it was an ideal of democracy. Born in universities, where only meritocracy ruled, it was used by hackers whose ideals were very egalitarian. In its very protocols, the Internet encodes equality. No piece of data is considered more important or more dangerous than any other.

Even as the World Wide Web exploded some 20 years ago, it retained the ideals of the Internet, although in some sense these very ideals led to corners of the web which were just evil.

Of course, as the Internet went global and required physical infrastructure and government permissions, what you could and could not do became much more controlled. Child pornography was plainly illegal, but freedom of speech was not, often because much of the core architecture and servers were based in democratic countries that were benefiting massively from the Web.

Of course, the Web also made piracy much, much easier. As Paul Tassi states in Forbes, piracy is a 4-step process, while any conventional means of media consumption from the Old Big Media Houses is way more inconvenient. With laws differing across borders, delayed releases, abominations like DeCSS and so on, it was far easier to use a general-purpose computer to circumvent all these measures.

In the last few years, however, piracy and freedom of speech have come back to bite us. Rather, they have come to haunt the old bastions of power: governments, religions and media publishers. Faced with a series of attacks, the Web is now caught between the devil and the deep sea. On one side is the battle for intellectual property. On the other are repressive governments (even so-called democracies) where certain factions wish to curtail the one medium that is impossible to truly shut down.

There are also the silos of Facebook, Google and a thousand walled App Gardens that are eating mouthfuls of our data and keeping it behind closed doors. They aren’t the worst part though.

I’m not scared that the Internet will die, but I’m scared that it could lose its essence. The essence of freedom, of equality, of opportunity and communication that has brought hope to many and improved countless lives immeasurably. If we the common people keep lying down as our freedoms are taken from us, we will lose the one chance to truly step into something new, to fully embrace human potential, and go back to watching the shiny rainbows on Blu-ray discs whose only use is as coasters. It is not as if we are powerless; we just need to be educated. After all, in the case of SOPA/PIPA, mass protests were effective in sending the bills back for reconsideration.

It is no secret that politics is financed by capitalists. Much of this money flows in from media conglomerates, and thus politicians are puppets when the MPAA or RIAA decide to take out their battle axes against pirates. What the old media does not realize is that the Internet has levelled the playing field. It is now possible for anyone to publish high-quality content. Rather than focusing on making their content cheap and easy to access, so that most users will be willing to pay for it legally, they continue to impose draconian restrictions on sharing, copying and accessing their media. The solution – piracy. When technologists like Netflix, Hulu and Amazon try to make it ever easier, they crawl back into their shells. A perfect example is HBO’s Game of Thrones.

There are fundamental problems with the new bills that are tabled to deal with piracy. These laws are often framed in secret, in a nexus of politics and old media houses (most notably ACTA, the Anti-Counterfeiting Trade Agreement), rather than under public scrutiny where rights groups and the general public can see what is happening. It was only as ACTA started to get ratified around the world that people realized what was happening. The result – plenty of EU countries have seen protests that have led to ACTA being put on hold. What is laughable is that the United States actually rejected Right to Information requests, saying that disclosure could be a threat to national security.

The second problem is that copyright laws have always focused far too much on tightening copyright regulations: what it means for something to be copied, what requires royalties and what requires permission. Copyright was initially a way to give creators sufficient returns for their works, not a means of extracting the maximum money out of them. By tightening the noose, it is getting harder for artists to re-use others’ creativity to create even better works. In addition, every anti-piracy effort has put costs on innocent third parties, taken huge cuts from taxpayers’ money, and pushed technology that had non-infringing uses out of the market. For example, BitTorrent is a fascinating technology with huge potential for better Internet services, yet it has acquired negative connotations due to its use for piracy. In the aggregate, they reflect a disproportionate focus on the interests of a handful of large companies. It’s hard to think of a single example during this twenty-year period of copyright restrictions being repealed, relaxed, or in any meaningful way liberalized.

Finally, there is a great paradox between the Western world’s constant demands for the right to free speech on the Internet and the principles embodied in stronger copyright laws. For example, SOPA’s provision allowing the shutdown of arbitrary websites without a judicial hearing, ACTA’s removal of safe harbour protections and the outlawing of circumvention technology all lead to the Internet quickly toeing the line with new regulations. Ironically, even as SOPA seeks to outlaw technology like Tor, the US Department of Defense actively funds its development to help activists in repressive regimes like Iran.

The Internet has now reached its prime: e-commerce is commonplace, startups for media distribution are blossoming all over, and our identities are now better known by Facebook and Google than by our governments. Power and opinions flow over wires, free of the meddling of higher-ups. This is leaving politicians and other old bastions of power (cable TV, telephony) feeling lost, and so they are taking concrete steps to clamp down on what they consider a menace. They then repackage it and sell it to the public as ‘stealing’ or ‘content that can cause public unrest’. Caught between these two forces, the Internet could become a nanny state in the next few years.

Over 20 years after an international agreement that deregulated the Internet and led to a meteoric success story of capitalism and free markets above all else, the United Nations plans to establish “international control over the Internet”. In December 2012, Russia, China and others will push for:
  • Allowing ISPs to charge ‘international’ fees for Internet traffic. This is absurd: by its very nature the Internet is not supposed to have ‘boundaries’, and such fees go completely against net neutrality.
  • Subsuming under governmental bodies many of the tasks of the Internet Engineering Task Force, the Internet Society, the Internet Corporation for Assigned Names and Numbers and others. These bodies currently operate solely on merit, technical competence and democratic votes, and as private bodies they are free from governmental interference.
among others. Censorship, meanwhile, is growing stronger and stronger. Censorship across the world has been well covered, and I will not go into the details; two things are relevant here though. One is #IdiotSibal’s attempts at getting social networks to take down ‘offending’ content. It seems that in our bid to compete with China, we’ve decided to one-up them in censorship too. Shivam Vij has stated very correctly in “Kapil Sibal doesn’t understand the Internet”:
So Sibal and Tharoor think social media can cause riots, but it hasn’t actually done so yet. Now that Sibal and Tharoor are telling us there’s stuff out there that could make us kill each other, some of us will go looking for it out of curiosity and…
and
In neighbouring Pakistan, every Tom, Dick and Harry with complaints of online hate speech approaches the Lahore High Court. In India, Kapil Sibal wants to be the high court.
He wants to be judge, jury and executioner. And he wants to do it silently so we don’t get to know.
Google’s Transparency Report on India clearly identifies politicians’ inability to accept criticism:
In addition, we received a request from a local law enforcement agency to remove 236 communities and profiles from orkut that were critical of a local politician.
In the private sector, Reliance Communications has taken it upon itself to be the moral and economic guardian by obtaining ‘John Doe’ orders from a local court to ban file-sharing websites in the days around a movie release. In sheer violation of Indian law, which states that only the Department of Information Technology may request censorship, they then went and blocked websites countrywide. It was as if SOPA had already been passed in India. A John Doe order is the kind of insanity you think can only happen in movies until some lawyer actually dreams it up: multiple, unknown offenders can be acted against. Interestingly, while Reliance stated that they were within the law, no actual complaint was ever received; the court order and the ban were based on pure speculation. It was Minority Report’s Precrime in action.

To sum up, there are three things (amongst others) that threaten the future of the Internet as a ‘commons’: the erosion of net neutrality, draconian copyright laws and censorship. There is only one thing stopping them – YOU.

Saturday, March 10, 2012

The TT Incident

This post is about those moments of pure magic that sometimes happen when two individuals have light bulbs go on in their heads at the same instant, based on a series of earlier shared experiences and context. No, not the soul-mate sort. Conveying that feeling requires introducing you, the reader, to the back story. In essence this post is about Naman’s and my library adventures. It is a set of confessions which I hope will not get us kicked out of the library now that there is a change of guard.

Naman’s animosity towards the library began sometime in the fourth year, after Mr. K’s incompetence started getting on his nerves. Then, quite justifiably, he got banned from using the Internet in the library for 5 days for stealing Mr. K’s IP address for one full day and bringing his work to a halt. Although Naman claims he did not know whose IP it was, he didn’t mind the happy coincidence. That was the prick in the balloon! I have always been somewhat annoyed by the library’s arbitrary rules, how the staff are sticklers for them even under special circumstances, and how easy it actually is to circumvent them. This was also, then, an experiment in how far we could go without being caught – although all of these are just excuses to deny the fact that we wanted some childish fun and narcissistic attention. Our library violations begin with this SMS from Naman to me, dated August 31, 2011, 11am:

Who is the most fucking awesome? Bet #1. Who can get the most outrageous food items in library till they get caught. In other terms, who can smuggle food inside rc the longest before getting caught.

It didn’t take too long for Naman to start. He walked into the library a few days later, all smiles, pockets bulging. Out of one pocket came a sandwich. Mr. V. wandered in and set a crumpled plastic cup in front of Naman. I have to admit that I wasn’t too impressed yet. But then out of the second pocket came a plastic bag full of tea, which he poured into the cup before coolly having his breakfast in the library. In all his smugness he went and publicised his achievement :) Not to be outdone, I convinced certain people to go eat pizza with me that very night (this is a lie). When Naman wanted his delivered to him, I took it seriously and delivered it in the library. One Margherita, eaten right under the watchful eyes of Dr. K.

Score: TFA 1 - 0 library. (We are called Team Fucking Awesome because at that point we were inspired by Julien Smith’s The Short and Sweet Guide to Being Fucking Awesome and took it very seriously.)

In time our food escapades fizzled out and we returned to the standard supply of biscuits, chocolates and chips to keep our hands busy while our minds (sometimes) churned over (a bit of) academic knowledge. Our dream still remains incomplete though: cook a packet of instant noodles in the library and consume it there.

Another day, a group of third-year girls walked into the library after celebrating someone’s birthday. They came armed with those annoying birthday trumpets (sometimes sweetly called a bhoppu). We had a hoot blowing raspberries loudly in the library, where silence is a law of nature. Hardly content with this, we stepped outside and (amidst raucous laughter) recorded a sample.

Then we went back to the library and forgot about it. While leaving, we made sure the recording was played loudly and in full, but acted as if we didn’t know what was happening. Thus we left, taking a bit of everyone’s stress with us.

TFA 2 - 0 library.

Every one of these incidents has taken place during exams, when the library is filled with students cramming. I am not sure what it says about our character that we perform these acts when we should be hitting the books or letting others study in peace. We like to think we’re just injecting some fun into the otherwise serious atmosphere.

Towards the beginning of the final semester – Feb 7, 2012, 8:20pm to be precise – I had come for dinner and was heading directly to the library afterwards. Naman had been playing table tennis and came straight to dinner. He had been using my rackets and ball, so he handed them over, hoping I would lug them along everywhere I went. After a bit of discussion over who should keep them, he made the (rational) point that I should carry them and put them in my bag.

There was a flash moment, the kind of epiphany experienced by Einstein or Newton, when all their years of thinking collapsed into one beautiful result in an instant. I had table-tennis rackets and a ball, and I was going to the library… I had only to utter ‘Hey’ and it clicked instantly in Naman’s head too. We high-fived each other and let out a huge laugh, revelling in how the perfect next library prank had formed. The whole café was staring at us as we burst out in jubilation. The idea was set; time for execution.

The equipment was rolled in, the surveillance was set up and the stunt done that very night. Too bad the library was filled mostly with geeks at that time. Hardly anyone raised their heads to pay us any attention.

TFA 3 - 0 library.

Sunday, January 29, 2012

The IT Crowd?

(This article was originally published in Entelechy, edition 32, Jan 2012. It is being published here in full with some annotations.)

It has continued to surprise me over the last 3.5 years how few information technology students actually bother to use the innovations of information technology to improve their productivity in any manner. More importantly, they are usually unaware of the products themselves. Recently the issue was brought to the fore when Skish Champi pointed out that the Zimbra Collaboration Suite has great calendar integration (and I agree), yet we as a college are still fumbling around with sending meeting emails and reminders.

We are all experts at using DC++ to share files over our network. Yet, when it comes to sending the same files over the Internet, you still stick to (gasp) e-mail. If you send me a 50MB file over email in 2012, I’m going to knock on your door with a gun in my hand. Use Dropbox or any of a thousand other such services. You only need to upload once, your devices can continuously sync the files, and individual folders can be shared with individual people – great for working on projects and such. In addition, Dropbox keeps old versions around. Heck, using the Public folder you can even host a complete static website without paying a paisa! Once your file is on Dropbox you can go trigger-happy with the public link and send that in the e-mail instead. Less load on the e-mail servers, and the link can easily be shared via any other medium too.

Similarly, if you are in charge of planning a lot of events (looks at the committees, clubs and faculty), create a new calendar event in Zimbra and send it to all batches. That way a student simply has to accept the invitation, and he will also get occasional reminders. Synced with a desktop calendar application, your computer can even play sounds or show messages when webmail is not open. Now you have no reason to forget or be late for a meeting.

If you are working on a software project, and your way of ‘working as a team’ on the code is to ship newer versions of the files around to everyone via e-mail, your project is already dead. Why? What happens when you suddenly need an older version? Or one feature from Amrita and the other from Rajni? Spend your time copy-pasting? Let a version control system do it for you!
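
For the sceptical, here is a rough sketch of what that might look like with git (just one such system; the file and branch names below are made up for illustration):

# turn the project folder into a repository and record a snapshot
git init
git add report.tex
git commit -m "First draft"

# need last week's version of a file back? find its commit and check it out
git log --oneline
git checkout <old-commit-id> -- report.tex

# want Amrita's feature and Rajni's feature? merge their branches
git merge amrita-feature
git merge rajni-feature

No copy-pasting, and no guessing which attachment in the mail thread is the latest.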

I’m going to mangle The Unix Philosophy slightly to suit my purposes:

  • Use software that does one thing, and does it well.
  • Use software that does not lock your data in, or modify it so no other software can use it. A hyperlink should always be available.
  • Software which allows its output to be the input of another program is usually better than software that doesn’t. An API can do wonders.
  • Understand your work flow. Don’t hesitate to throw away the clumsy parts.
  • Use tools in preference to manual labour, even if you have to detour to build the tools.

So why do we continue to use legacy technology, mouse around and generally not maximise our use of the computer?

One reason is of course inertia. Our biological brains are wired for survival, and if one thing works we aren’t willing to improve on it. Another is a simple lack of curiosity about new technology.

But it is also an attitude problem. Technology continues to be treated as the poor step-daughter. If someone shows you how to skin potatoes faster, you’ll quickly adopt the technique, but as soon as the metaphor moves onto the computer there is immediate unwillingness. From the very beginning of our computer education we’re asked to treat the computer as some magic box, and we are restricted from doing anything outside of that ‘education’. “Usse haat mat lagana nahi to kharab ho jayega,” say your parents and teachers. (English: don’t touch that, you’ll break it.) We carry this belief that computers are frail creatures even into the age when we become more responsible.

The second reason is that the internals of computers remain unknowns to much of the population. Most people who drive a car have a qualitative idea of how an internal combustion engine works, and they also know how to change a tire. But the same people don’t know how to add RAM or the basics of a processor. Worldwide, computer literacy education focuses on office suites and the like. Even when you do eventually study the internals, they remain something from the textbook, so that when your C program is malfunctioning you don’t once stop to think about how that program is actually being executed, and how that knowledge could help solve the problem. This disconnect is alarming; I’ve seen computer science professors unaware of basic computer features. The funniest example I can think of is Windows users repeatedly right-clicking on the desktop and hitting Refresh when things aren’t moving.

The media is also responsible. World news is quick to highlight the policies and laws that will affect, say, retail, or corruption, or some other industry. Technology media, however, is focused on product reviews and the next revolutionary technology (which is usually some old concept re-hashed), and it sidelines the actual problems that will impact privacy, ownership and other fundamental rights in the digital age, so the notion of computing itself never gets enough attention from the general public. If car manufacturers specified which brand of fuel you could use in their cars, there would be a hue and cry over anti-competitive behaviour and denying people choice. Yet mobile phone carriers fleece people every day with locked, underpowered cell phones. The war on computation keeps going the wrong way.

I also believe that, just as in mathematics, computers require significantly more complex mental models to be manipulated in the mind. A car engine is heat and metal and chemicals, and it can be directly observed doing something. The levels of abstraction between the electrons racing through the wires and the point-and-click metaphor of daily interaction are numerous in comparison. As a user you do not, of course, need to have any inkling of them, but even at the software layer a computer desktop is much more congested and much less tangible. Most people seem to keep missing the subtle user interface cues that convey meaning. To those with the mental ability to hold these models and to think rationally and logically (the only way in which the computer can think), the paradigms are much clearer.

But as ICT students you are expected to have that mental ability. Here is where to start:

  • First, lose your fear of the computer. Modern operating systems and applications are robust enough that they won’t go down just because you clicked in the wrong place. Experiment with preferences and try buttons that have only icons; many a feature can be discovered this way.
  • Second, internalise the knowledge as much as you can by always trying out new things yourself, until they become second nature.
  • Third, think through what you are doing rather than just clicking as your friend told you to.
  • Fourth, remember that, as in all engineering disciplines, convention is implicitly followed in computing as well, so concepts you learn in one application (say, drag and drop) can be generalized and applied everywhere.
  • Fifth, start thinking of computing tasks as recipes. In cooking you apply a series of tools to transform various inputs (ingredients) into the final outcome; the output of the knife becomes the input of the frying pan. Most software normal people use tends to be monolithic – one tool that does all the steps – while good software has separate knives and separate frying pans and allows a great deal of flexibility. The Unix command line is the most pervasive and powerful example (there is a small sketch after this list).
  • Sixth, keep your eyes open for new ways of using technology better. Blogs like lifehacker are an excellent source of such tips.
  • Seventh, pay attention when governments and companies try to cripple your right to compute and your right to information.
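
To make the recipe idea concrete, here is a toy pipeline (the file name is made up): each tool does one small job and hands its output to the next, and together they answer a question no single tool was written for.

# the five most common lines in a file: group duplicates, count them, rank by count, keep the top five
sort names.txt | uniq -c | sort -rn | head -5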

With a little active effort you can streamline your computing experience, so that you can devote complete attention to the fantastic things you create.

Monday, January 09, 2012

Toggling Proxy settings quickly with Pentadactyl and Firefox

If you use the excellent Pentadactyl plugin for Firefox to bring vim super-powers to the browser, here is a quick and painless way to toggle between a direct connection to the Internet and your manually configured proxy settings. Add this to your ~/.pentadactylrc:
" define a :proxy command that sets Firefox's network.proxy.type preference
command proxy -nargs=1 :set! network.proxy.type=<args>
" up = use proxy (manual settings, type 1), np = no proxy (direct connection, type 0)
nmap up :proxy 1<CR>
nmap np :proxy 0<CR>
Now pressing up (use proxy) will enable the manual proxy settings, while pressing np (no proxy) will switch to a direct connection. network.proxy.type can take other values that might suit your setup better (for instance, 2 selects an automatic proxy configuration URL and 5 the system proxy settings). You can change the key bindings too.