Backups

I found today that Amazon S3 has a really cool one-click backup, where you can set things to back up regularly to Amazon Glacier.

And Amazon DynamoDB also has this thing where you can set it to automatically back up to a table in another region.

You can also set DynamoDB to back up to S3.
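
For the curious, the S3-to-Glacier piece boils down to a lifecycle rule on the bucket. Here's a minimal sketch using boto3 (an assumption on my part – the bucket name and the 30-day threshold are made up; the console does the same thing in one click):

```python
import boto3

s3 = boto3.client("s3")

# Lifecycle rule: transition everything in the bucket to Glacier after 30 days.
# "my-backups" and the 30-day threshold are illustrative values.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-backups",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-to-glacier",
            "Filter": {"Prefix": ""},   # apply to all objects in the bucket
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }]
    },
)
```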

Glacier is apparently a substitute for magnetic tape, without the inconveniences of tape – though, like tape, it takes a while to restore from. Pretty cool idea. I wonder what competition exists in this space currently. A cursory search suggests none.

Glad this is there. It’s pretty essential.

HCI and Swedish Medical Center

I had to go to the doctor recently. The patient sits opposite the doctor, maybe a little to the right, while the doctor sits in front of a computer, keying things into the hospital management software. The doctor has her back to the door. What I noticed this time was that there's a mirror on the back of the door. So the patient, from where she sits, can actually see the doctor's computer monitor.

A light bulb of recognition went off.

A long time ago, I’d attended a talk at UCI’s HCI seminar series. I think it was Dr. Yunan Chen’s practice talk for her presentation at CHI. Her research is mainly about device use in the medical field. This particular talk was about an ethnographic study of patients’ perceptions of doctors’ device use.

One thing the patients had an issue with was doctors typing into a computer as the patients spoke. They wondered what the doctor was typing, whether the doctor was actually listening, and whether the doctor was doing something like checking mail or Facebook. That undermined their confidence in the doctor.

Looks like Swedish has taken that research into account. A mirror on the back of the door is a simple solution. You can be sure the doctor isn't on Facebook, even if you can't read what they're typing through the mirror. Doctors also take time to show you what they've written, tell you they'll be printing it out for you, and point out that you can access this information online as well.

Pretty good, huh, to see something go from research to implementation :)

Back to being back here

I don’t remember the last time I posted here. I don’t even think anyone remembers this place exists.

Irrespective.

I've grown a lot careerwise. This blog was supposed to help me along that journey, but somehow fell by the wayside. There's also this overarching guilt of not doing enough to post here. My big plans still remain. But every time someone asks me about them, I chuckle sadly.

So what's been going on with me? I graduated from UC Irvine under Dr. Ihler in 2011. After that, I spent two years doing NLP for the finance industry. It's quite an interesting field, I must say. I had one class that covered insider trading and EBITDA and mergers and acquisitions, and I found all of it enormously interesting. Unfortunately, I didn't keep up with my financial knowledge; I didn't really need it in my day-to-day work.

And what did I do? I worked on a whole bunch of interesting things. You have a ginormous quantity of documents coming in, in so many different forms, and you need to parse them all and extract data from them. So you end up doing all these extremely basic things. You use OCR to convert image PDFs to text. You parse PDF in all its ugliness and convert it to a simpler format, while taking care to preserve some of the PDF-ey things about PDFs. And then it turns out there are 90 languages and your clients speak English. So you translate 90 languages into English. Some of it's easy, especially the European languages. A lot of it is painful. But we weren't looking for high-quality translations… just enough for the numbers in the financial documents to make sense. Then you run into a lot of unique problems. You don't want to translate yuan into dollars. And most off-the-shelf translators are built for general language, not finance-specific language, so the translations come out subtly off.

And then you do other interesting stuff with all the stuff you’ve processed so far. You try Named Entity Recognition. You try recommending similar documents. You try identifying series in document streams. You try creating summaries.

All of it was mighty interesting. On a given day, I'd code in C, C#, Perl, Java and Python, and it'd all be no big deal. I learnt what MVC and MVVM meant. I began taking a real interest in software design. I learnt how to write large, maintainable code. And the benefits of version control.

And then it was time to move.

I now work for a large online retailer's Search & Discovery division. And that's all I can say about it. Maybe some day I'll reminisce fondly on the text mining challenges I face here, the scale of what I work on, and other things that will by then have become old hat. But not now.

I've had other interesting experiences with data in the meantime. Facebook NY put together a Data Science round table. The invitee list read like Chief Scientist, Head of Data Science, Asst Prof… you get the drift… and then me, with less than two years of work experience. It was insanely interesting to meet such people and have them treat me like they had a lot to learn from me. I learnt so much that day that though I've forgotten all their names, the discussions are still etched in my mind. It isn't every day that you have MCMC sampling explained to you over beer and fries someone else is paying for.

And then I tried a hack I'm not allowed to talk about, and I learnt that POS taggers have a feature called a gazetteer: you give it a set of phrases and the POS they belong to, and bam, any occurrence of those phrases (exact matches) is tagged accordingly. It's insanely useful when you have your own new part of speech, like, say, Celeb Names or Book Titles or some such.
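
To make the idea concrete, here's a minimal sketch of what a gazetteer does – a phrase list mapped to a tag, with exact longest-first matching. The phrases and tag names are made up; real taggers expose this differently:

```python
def gazetteer_tag(tokens, gazetteer):
    """Tag exact matches of known phrases; longest match wins.

    `gazetteer` maps a lowercased phrase tuple, e.g. ('harry', 'potter'),
    to a tag like 'BOOK_TITLE'. All names here are illustrative.
    """
    tags = ['O'] * len(tokens)          # 'O' = outside any known phrase
    max_len = max(len(p) for p in gazetteer)
    i = 0
    while i < len(tokens):
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            phrase = tuple(t.lower() for t in tokens[i:i + n])
            if phrase in gazetteer:
                tags[i:i + n] = [gazetteer[phrase]] * n
                i += n
                break
        else:                           # no known phrase starts at token i
            i += 1
    return list(zip(tokens, tags))

# >>> gazetteer_tag("I read Harry Potter twice".split(),
# ...               {('harry', 'potter'): 'BOOK_TITLE'})
# [('I', 'O'), ('read', 'O'), ('Harry', 'BOOK_TITLE'),
#  ('Potter', 'BOOK_TITLE'), ('twice', 'O')]
```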

So that's what I've been up to. Let's hope I keep up this pace of blogging.

 

Getting back to machine learning

I got done with my Master of Science in Computer Science. I graduated with a thesis titled 'Graphical Models for Entity Coreference Resolution'.

Since then, it’s been a long break from all things hardcore machine learning and data mining and natural language processing. I have a nice day job which pays for my essentials and still leaves me with enough time and money to do a lot of other stuff. My team does a lot of ML, but that does not include what I’m working on at the moment. It might involve me writing some code which learns stuff from data and predicts on some other data, but I don’t know yet.

It's been a good break. I needed it. I'm a much more confident person now, especially in my ability to write and maintain large bits of code. I think it's about time for me to get back to learning all about machine learning and graphical models – with no stress of deadlines, enough opportunity to explore, and, most importantly, no feeling intimidated. Also, going through the material a second time would be a good way of absorbing all that I missed the first time.

I've been cleaning out my hard disk to make conditions ideal for this. A messy filesystem is really hard to work with, especially with no version control or anything. Things get messy, and at crunch time it only gets worse, because you can't find what you want when nothing's labelled right.

I cleared all my old backups off of my external hard drive. Then I moved all my pre-NYC-move photographs to the external HD. Going over which individual images to keep and which to delete was very cringeworthy – I had been quite camera-happy before 2009, and had clicked a lot of pictures. They say your first 10,000 pictures are your worst. Believe me, mine were. Utterly cringeworthy. More so since back then I didn't even pay attention to how I dressed or did my hair or maintained my skin. Those issues don't exist anymore, so the cringing isn't coupled with embarrassment and helplessness in my head like before.

I then uninstalled a lot of unnecessary software: multiple builds of Python, each with its own crazy set of plugins; outdated versions of Eclipse; and oh, so many datasets. I deleted what I could and shifted the rest to my external HD. I tried organizing all my music, tagging it appropriately and putting it into the right folders. It wasn't so easy, so I gave up midway. But I discovered that Mp3Tag seems to be a good app for this.

I then organized my huge collection of ebooks using Calibre. I seem to have a lot of crap I downloaded from Project Gutenberg back in my young-and-foolish days, in the infancy of the Google-powered Internet. Somehow, I just can't delete classic books, even though I've never read them. So they stay for now.

Turns out I have tons of movies stored as well, which I'd downloaded off Putlocker back when I couldn't even afford Netflix. I organized them well. I also have a small collection of stuff downloaded off Youtube – clever and rare Indian ads, rare music videos of indie Indian pop/rock/movies. I need to upload them back to Youtube someday, for the originals I downloaded from seem to have been wiped off the face of Youtube.

I even found all the original Stanford Machine Learning Class videos with Prof. Ng. Heh – with Coursera, Udacity and Khan Academy around now, you don't need any of those like I did back in 2008-09. It was a different time back then, really.

I installed Python 3.2 after that. And Eclipse Juno. Followed by PyDev and the Google App Engine plugin for Eclipse. A Windows installer for SciPy exists that is compatible with Python 3.2. However, MatPlotLib's official Windows installer releases don't yet support Python 3.2. Thankfully, unofficial ones exist here (oh yay, look, it's from UCI). I could of course build everything from source, but I want to keep this as hassle-free as possible.

I also need to get started with version control on Google Code or some such, so that I keep all my code somewhere I can access from everywhere.

Next on the agenda is to go through a machine learning textbook, or an online course, and slowly build my own machine learning libraries from scratch. Maybe I'll try building a Weka replica – a uniform interface for training and testing each algorithm.
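
To be concrete about the 'uniform interface' bit, something like the sketch below is what I have in mind: every algorithm implements the same train/predict pair, and shared evaluation comes for free. The class and method names are my own invention, of course:

```python
from abc import ABC, abstractmethod
import numpy as np

class Classifier(ABC):
    """The uniform interface every algorithm would implement."""

    @abstractmethod
    def train(self, X, y):
        """Fit the model to feature matrix X and label vector y."""

    @abstractmethod
    def predict(self, X):
        """Return predicted labels for the rows of X."""

    def evaluate(self, X, y):
        """Held-out accuracy, shared by every algorithm."""
        return float(np.mean(self.predict(X) == y))

class NearestCentroid(Classifier):
    """A toy concrete classifier: predict the class of the nearest centroid."""

    def train(self, X, y):
        self.labels = np.unique(y)
        self.centroids = np.array([X[y == c].mean(axis=0) for c in self.labels])

    def predict(self, X):
        # Squared distance from each row of X to each class centroid.
        d = ((X[:, None, :] - self.centroids[None, :, :]) ** 2).sum(axis=2)
        return self.labels[d.argmin(axis=1)]
```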

After that is to work on probabilistic graphical models and build those from scratch as well.

And in the midst of all this, I want to publish the work I’ve done in my thesis, which will mean trying to replicate those results, in a new and improved way, taking into consideration all the ideas I didn’t have time for, and those which I could have implemented better.

Let’s see how it goes :) I hope to keep updating this place with all the stuff I do :)

Update: I found an ML textbook best suited to my needs finally! Machine Learning: An Algorithmic Perspective. I’ll start tomorrow, will see how much I learn.

[Webapp Idea] Twitter Link Browser

I use Twitter quite a bit. A lot of the people I follow share a lot of links. When I browse Twitter on my mobile in the morning, I can't check out all the links, so I usually 'Favorite' the ones that seem interesting and browse them later. I'd actually prefer a better interface for this – one that lets me tag these links privately so that I can look them up later as well.

I found one such webapp whose name I now forget. The problem with it was that it had a sucky interface and didn't let me preview the links properly. Then there's Tweetree, which offers previews of shared links. I also like the Google Reader/Gmail sort of interface, which keeps track of new links and already-read links. And when multiple people share the same link, I'd like to see it collapsed into one entry, with "X, Y and Z shared this" next to it. Or something.

So this is one thing I’d like to build using Google App Engine.

The steps to do so would be as follows:

  1. Find a nice Twitter API interface for Python, preferably one that can be integrated with Google App Engine.
  2. Write code to get tweets from your Twitter timeline.
    2(a) Learn how to use Twitter OAuth.
  3. Detect tweets with links. When a tweet contains one, extract the unshortened link (see the sketch after this list).
  4. By now, you have a set of links, and can choose to display them as you wish.
  5. Use the App Engine datastore to store previously viewed links. Attributes stored along with a link can include the users who shared it, the timestamps of the tweets that shared it, viewed-or-not (this should default to 'No' when the link is dropped into the database after extraction), and the title of the linked page. Also store the time of last login.
  6. Workflow: On login, extract links from the timeline and drop them into the database until the timestamp of the tweet you're reading is earlier than the time of last login. Then display the links whose 'viewed-or-not' value is 'No' as unread items, and the rest as read items. On clicking a link, mark it as read. Also provide checkboxes to mass-mark-as-read.
  7. Basic interface: something like Gmail's plain-HTML view. Previews and stuff can come later.
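
For step 3, here's a rough sketch of the extract-and-unshorten bit, assuming the `requests` library (on App Engine you'd use its URL-fetch service instead):

```python
import re
import requests  # assumption: plain `requests`; App Engine would need urlfetch

URL_RE = re.compile(r'https?://\S+')

def extract_links(tweet_text):
    """Pull URLs out of a tweet's text (crude but serviceable)."""
    return URL_RE.findall(tweet_text)

def unshorten(url, timeout=5):
    """Follow redirects (t.co, bit.ly, ...) to the final destination URL."""
    try:
        return requests.head(url, allow_redirects=True, timeout=timeout).url
    except requests.RequestException:
        return url  # fall back to the short link if anything goes wrong
```
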
Components to build a basic version:
  • OAuth
  • Tweet-getter
  • Link-extract-and-drop-in-database. This in turn includes a link extractor, unshortener, title-getter, and database interface (a datastore model sketch follows this list).
  • Database queries to view links and mark them as read/unread.
  • User interface.
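
And a sketch of the step-5 datastore model, assuming App Engine's ndb API (the attribute names are my own):

```python
from google.appengine.ext import ndb

class SharedLink(ndb.Model):
    """One entity per unique (unshortened) link."""
    url = ndb.StringProperty(required=True)
    title = ndb.StringProperty()
    shared_by = ndb.StringProperty(repeated=True)      # users who shared it
    tweet_times = ndb.DateTimeProperty(repeated=True)  # when each share happened
    viewed = ndb.BooleanProperty(default=False)        # the 'viewed-or-not' flag

def unread_links():
    """The 'Unread items' query from the step-6 workflow."""
    return SharedLink.query(SharedLink.viewed == False).fetch()
```
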
Anything missing so far? Loose ends? Anything that could be done better? Are you working on this? Any advice on getting started, or on any of the individual components?

RIP, Reader

Yeah, this is yet another one of the funeral dirges for Google Reader. And I post it here instead of on my personal blog because I need to get into the habit of writing about technology here. Google Reader is hardly 'technology' as I intend the term… I want to use this place for research updates and paper summaries. But the anxiety about 'not being good enough' at all that is so strong that I don't want to write anything even remotely geeky. I need to snap out of that. And it's NaNoWriMo; it's about quantity more than quality. So here we go :)

So basically there are two main arguments against Google Reader's integration with Google Plus. The first is that the user interface is sucky. The second is that the removal of sharing has killed the whole spirit of Reader. A third, if I may add, is that the platform/API is so bad, and everything is so messed up at first look, that I can't wrap my mind around how to write a wrapper that makes things better. Oh wait, there's a fourth as well – the 'stream' format, as opposed to the folders-and-tags format, is the very antithesis of what Reader is supposed to be.

Let's start with the appearance. Yes, white space is good. It makes things look 'clean'. But that's only when you have very specific things you want your user to see on your page. It works great for the Google.com homepage, for instance… all you want there is a search bar. But for a feed reader, it doesn't work at all. When I log in, I don't want half my screen taken up by needless headers and whatnot. The bar with 'Refresh', 'Mark as read' and 'Feed settings' is needlessly large and prominent. Those options aren't used all that much to begin with – certainly not enough to justify the large font size. The focus shouldn't be on the options, but on the thing I'm reading. Fail.

Then, everything's gray, including links. If something's not blue or purple, my mind doesn't consider it a link. Sorry, but those are unwritten conventions on the Web. There's no reason to change them now, and gray is a horrible color for showing that something's different from the rest of the black text. The only spots of color on the page are a tiny dab of red to show the feed you're currently reading, and a large button on the top left that says 'Subscribe'. A dab of red, seriously? I much preferred having the entire line of the current feed highlighted to that little red bar. And I don't add new feeds every day, so I don't need a large 'Subscribe' button. When I do add feeds, I'm usually on the website I want to add, and I add them by clicking the RSS icon and then adding to Reader – not via google.com/reader.

Then the UI for sharing. It takes a lot more clicks to share something now. And yeah, the gripe is that whatever I share will be shared only on G+, but we'll get to that in a moment. My problem with having to pick which circles I share with, every time I share an item, is that it's too much decision-making too often. At least give me a set of checkboxes for my circles, so that all I need is two clicks instead of having to start typing my circle names.

It turns out that if you want to share something without publicly +1-ing it, you have to go to the top-right corner and click 'Share'. How is that intuitive? And why would anyone design it that way, especially when the previous way was to click 'share' right below the post? Surely the Circles picker could just appear when you click the 'share' button, with +1-ing as a separate button? And keep the top-right Share button too, if you like?

Now, about sharing. I can share something from Google Reader, yes, but my friends can only read it on Google Plus. Someone likened that to retweeting from a client like Tweetdeck, while your followers can see your RTs only on twitter.com. How backwards is that? I want a one-stop shop where I can do all my reading, instead of having it spread over a zillion other places.

Which is why one of the things I want to do is build a wrapper website that integrates links shared on your G+ stream with your Reader feeds. I can't quite wrap my mind around how exactly it would work, but it's something I certainly want to do.

The 'stream' format sucks for reading shared links. I have this problem with Twitter too, but on Twitter you can 'Favorite' tweets which contain links and then read them one by one later. In fact, I was wondering about a platform that takes links on your Twitter timeline and puts them together for easy reading, feed-reader style. Google Plus, however, has no such feature for tucking stuff away for later. If you're too busy and skip over a shared link, it's lost. I much preferred the model where your feeds would all accumulate, and if it got too much to handle, you could always mark all as read. Even better when your feeds were properly organized.

And Google Plus does a bad job of displaying shared links. It shows a small preview, but that's more often than not insufficient. Buzz was better in this respect… at least images could be expanded, and posts could be expanded so that you could read them right there. Ha, one upside is that people will get a lot more hits on their websites. And it isn't immediately apparent how inconvenient this visual format is, because people don't share that much on Google Plus yet, and don't yet use it as a primary reader, or so extensively that it gets on their nerves.

And finally about the thing that has had the largest impact. Sharing.

Previously, in 2007, before Reader had sharing, we'd all come across nice links we wanted to share with our friends, and would either ping them on IM or mail them the link. Needless to say, it was irksome – for both us and our friends. But somehow, when you shared it on Reader, the intrusiveness of sending links went away. It was just there, and if you liked it, you said so in the comments, or by resharing it, or by referencing it in conversation. It stopped feeling like you were shoving it down someone's throat, or having it shoved down yours.

Sharing was also a nice way to filter content. For example, I loved reading Mental Floss's feeds, but couldn't stand the feed-puke of TechCrunch and Reddit, whereas it was the other way around for some of my friends. So we just followed each other: I read the TechCrunch and Reddit content they deemed good enough to share, while I shared the interesting tidbits from Mental Floss.

Google Reader, I remember feeling, was a nice incubator for observing social network dynamics and introducing social features. It was my first first-hand exposure to recommender systems, before I moved to the USA and could actually shop on Amazon or watch movies on Netflix. It was interesting seeing how the recommendations incorporated stuff from your GTalk chats, your searches, stuff you 'liked'… I remember freaking out about how, after chatting often with a friend in LA, my recommended feeds included a lot of LA-related blogs. And there was a search-engine-based treasure hunt at my undergrad college, and a friend and I remember saying, "Oh man, googling stuff for this contest is so going to affect our Reader recommendations."

It was also where I was recommended tons of blogs on ML and NLP and IR, due to which I went to grad school where I did, and did my thesis in what I did.

Also fun was the ‘Share as a note in reader’ bookmarklet. That way, I could share stuff from anywhere on the Internet with people who I knew would appreciate it.

Now it seems as if the Plus team wants to prove right that ex-Amazon Googler who said Google can't do platforms well. Instead of providing services that can be used in a variety of ways to create 'just right' experiences for a variety of people, Plus is trying to do it all by itself. And failing miserably. The reason for Twitter's success is the sheer variety of ways you can tweet – from your browser, from your smartphone, from your not-so-smart phone using Snaptu, from your dumb phone via text, your tablet, your desktop… and I just don't see that happening with Plus yet.

Maybe I wouldn’t be so mad if all the folks I share with on Reader were on Plus, but actually, hardly anyone is. And I don’t check my Plus feed on a regular basis either. I wouldn’t mind going on Plus to just read what everyone’s sharing, but the user experience is so bad I wouldn’t want that.

Google should have learnt, from when it integrated Reader with Buzz and a lot of people found it irksome and simply silenced others' Reader shares in their Buzz feeds, that the Reader format doesn't mix well with the stream format.

There's so much quite obviously broken with the product that you wonder whether the folks who design and code this actually use it as extensively as you do. Dogfooding is super-important for products like Google's, where there's a wide variety of users and user surveys can't capture every single aspect.

But given that doing this to Google Reader feels just like when they cancelled Arrested Development, you begin to think they're probably aware of all this, and just don't care about you, the user, and your needs anymore.

PS: Can anyone help me get the Google Plus Python API running on Google App Engine? I want to play with it and see what it does, but I can't get it up and running.

PPS: Does a Greasemonkey script to make G+ more presentable sound like a good idea?

PPPS: Check out the folks at HiveMined. They are building a replacement for Google Reader :)

Recommender Systems Wiki

Use and contribute and link to: http://www.recsyswiki.com/wiki/Main_Page

Now if only there were one such for ML methods in NLP, my life would be great.

Convex Optimization

Course I’m taking. Need to brush up on basics before diving in. And I’ve got less than a day to do that.

Anyone know a good crash course in linear algebra? Will be grateful. Thanks.

 

How to read a research paper

If I'm completely in the groove, with a firm topic in mind, I find it relatively easy to read papers. But when I'm attempting to get started on something, or reading a paper that, say, I have to summarize for a course, I lose my footing. I procrastinate; I become reluctant to start.

I decided I wanted out of this shite, and hence googled 'How To Read A Paper'. I found S. Keshav's paper of that name, from the University of Waterloo, and I suspect it will help out greatly.

Let me summarize it for you.

Essentially, given a research paper, you go over it in three passes.

First Pass (5-10 minutes):

  • Read the Title, Abstract and Introduction.
  • Read the section/subsection headings and ignore all else.
  • Read the conclusions.
  • Glance over the references and tick off those you’ve already read.
  • By the end of this pass, you should be able to answer 5 C’s about the paper:
    • Category (What type of paper is it?)
    • Context (What papers are related? What bases are used to analyze the problem?)
    • Correctness (Are the assumptions valid?)
    • Contributions of the paper
    • Clarity (Is the paper well-written?)

Second Pass (1 hour):

  • Read the paper more carefully, while ignoring details like proofs
  • Jot down points, make comments in the margins
  • Look carefully at all figures, especially graphs
  • Mark unread references for further reading (for background information).
  • Summarize main themes of the paper to someone else.
  • You mightn’t understand the paper completely. Jot down the points you don’t understand, and why.
  • Now, either
    • Decide not to read the paper
    • Return later to the paper after reading background material
    • Or persevere on to the third pass

Third Pass (4-5 hours):

  • You need to virtually re-implement the paper: recreate the paper and its reasoning.
  • Compare your recreation with the original
  • Think of how you would present the ideas, and compare with how the ideas are presented.
  • Here, you also jot down your ideas for future work
  • Reconstruct the entire structure of the paper from memory.
  • Now you should be able to identify the paper’s strong and weak points, its implicit assumptions, any issues with its experimental or analytical techniques, and missing citations.

That’s all.

Additionally, as a form of accountability (which I so need at the moment), I think I will blog about every single paper I read, in accordance with the above structure.

Transfer Learning etc

I think this'd work best if I just updated my daily progress here, rather than trying to give comprehensive views of what I'm doing.

So you have data coming in that needs to be classified. Apparently the accuracy of most classifiers is abysmally low. We need to build a better classifier.

I took a month's worth of data, applied all possible classifiers to it, and cross-validated. Accuracy was roughly in the 85-90% range. While that's not excellent, it's not bad given the small amount of training data.

So what’s this low accuracy everyone’s talking of?

Turns out, the new data coming in is very different from the data you train on. You train on June's data, but July's data is going to be much different. The same words end up reappearing under different class labels. Hence the low accuracy.

Also, you have not just one classifier, but many. It turns out that many classifiers trained on subsets of the data perform better than one classifier trained on the entire data.

Learning will have to keep evolving. I first thought of Active Learning in this context, where you ask the user to label the instances you're not sure about. But then, what if you confidently label stuff that is patently wrong?

The many-classifiers bit of the problem helps us visualize the training data in a different way. Each category – class label – has many sub-categories. Each classifier is trained on a month's worth of data, and each month can be likened to a sub-category. You train on one sub-category, test on another, and expect the classifier to return the same class label. That's like training a classifier on data containing hockey-related terms for the Sports label, and then expecting it to recognize data containing cricket-related terms as Sports too.

Sound familiar?

This would be transfer learning/domain adaptation – you learn on one distribution and test on a different distribution. The class labels and features, however, remain the same.

This would more specifically be Transductive Transfer Learning – you have a training set, from distribution D1, and a test set, from distribution D2. You have this unlabelled test data available during training, and you somehow use this to tweak the model you’ll learn from the training data.

Many ways exist to do this. You can wrap Expectation-Maximization around a Naive Bayes classifier trained on the labelled data, to maximize the likelihood of the unlabelled test data while still doing well on the training data. Or you can train an SVM, assign pseudo-labels to the test data, add the most confident ones to the next iteration of training, and repeat until you get reasonable confidence on the test data, while still doing well on the training data.
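
Here's a hedged sketch of that pseudo-labelling loop, using scikit-learn's LogisticRegression for the confidence scores (an assumption on my part; an SVM's margins would play the same role). Dense numpy arrays assumed:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_train, y_train, X_test, rounds=5, top_k=100):
    """Transductive tweak: repeatedly train, pseudo-label the test points
    we're most confident about, and fold them into the training set."""
    X_aug, y_aug = X_train, y_train
    pool = np.arange(X_test.shape[0])   # test points not yet pseudo-labelled
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(X_aug, y_aug)
        if pool.size == 0:
            break
        proba = clf.predict_proba(X_test[pool])
        take = np.argsort(proba.max(axis=1))[-top_k:]   # most confident points
        X_aug = np.vstack([X_aug, X_test[pool[take]]])
        y_aug = np.concatenate([y_aug, clf.classes_[proba[take].argmax(axis=1)]])
        pool = np.delete(pool, take)
    return clf
```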

All these approaches to Transductive Transfer Learning are fine. They assume that you have test data available during training time.

We have a slight twist on that. It might be too expensive to store the training data. Or you might have privacy concerns and hide your data, exposing just a classifier built on top of it. So, essentially, all you have is a classifier, and you need to tweak it when test data becomes available.

Let's complicate it further. You have a set of classifiers. You pick and choose the classifiers you want to combine based on some criterion computed on the test data, create a superclassifier, and then tweak that based on the test data.

For starters, check out this paper by Dai. There, you have access to the training data. What if you don't? Can you then tweak the classifier without knowing the data underlying it?

Let’s assume it’s possible.

Then, by some criterion you pick, you choose a few classifiers from the many you have. You merge them, and then tweak that superclassifier. For example, say your test data contains data related to hockey and Indian films [labels are Sport and Film]. You have one classifier trained on cricket and Indian films, one on hockey and Persian films, and another on football and Spanish films. So C1 and C2 are the classifiers closest to your data. You combine C1 and C2 to get a classifier equivalent to one trained on hockey, Persian films, cricket and Indian films – the optimal classifier. And then tweak it. A sketch of the selection-and-merging step is below.
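
A minimal sketch, assuming each classifier exposes a scikit-learn-style predict_proba with a shared class ordering, and that we summarize each classifier's training data as a word distribution (all names here are illustrative):

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two (smoothed, normalized) word distributions."""
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def pick_and_merge(classifiers, train_dists, test_dist, k=2):
    """Pick the k classifiers whose training word distribution is closest
    (in KL divergence) to the test distribution, and return a merged
    scorer that averages their predicted class probabilities."""
    order = np.argsort([kl(test_dist, d) for d in train_dists])
    chosen = [classifiers[i] for i in order[:k]]
    def superclassifier(X):
        return np.mean([c.predict_proba(X) for c in chosen], axis=0)
    return superclassifier
```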

That’s the architecture.

The questions we seek to answer are how to choose which classifiers to merge, and how to tweak the merged classifier given test data – and whether we'd need any extra data for that.

And a more fundamental question: given that the test data will come from a distribution none of your ensemble has seen before, is merging classifiers worth the while? We could vary the KL divergence between the training and test distributions and see how much having an ensemble helps.

Nascent ideas so far.
