Ideas in machine translation for Indian languages

For a while now, I’ve been pondering the problem of machine translation for Indian languages.

Given that India has twenty-two scheduled languages, many of them pretty damn closely related, and given that there are so many enthusiastic people in the NLP domain, we should be at the forefront of machine translation. Unfortunately, that is not the case. Yet.

It also bothered me that the current leader in working machine translation systems is Google. Google, while having some of the best scientists and engineers, is American in soul and legality. There are several reasons to want homegrown machine translation systems that are made in India and have a more Indian focus.

In any case, I haven't personally worked on machine translation systems, though I have worked with colleagues who have, which gives me at least a rough understanding of how they work. From what I've seen, Google is great in the generic case. But if you have a very specific focus, say in the financial domain, or you want the translations to be conversational, or you want to restrict yourself to legal text, you need to improve and tweak whatever translations Google throws at you.

I've also seen that most of the machine translation work in Indian languages has been very academic. This is welcome, but in practice, these things don't usually make it to market. In my experience, approaching a problem like this from an academic perspective is very different from approaching it as an engineer. In academia, I have largely seen technique-based approaches: the problem is just a setting in which to explore new techniques for solving it. This works brilliantly for uncovering new approaches to a problem. When I was at UCI, this defined my approach: find a newer, more improved technique for doing something. As an engineer, however, you want to find and implement a 'good enough' solution. You want whatever works. You don't care if you need humans in the loop, or if your training data isn't perfect. I haven't seen (m)any Indian language translation systems built with this approach of using whatever works, getting humans in the loop, and tolerating imperfect outputs.

I want to try this.

There are so many cool questions I want answered. Like, how easy will it be to translate between closely related languages like Tamil-Malayalam, Hindi-Punjabi, Assamese-Oriya-Bengali, Kannada-Telugu… and so on? How well will Jason Baldridge's POS tagger, which needs only two hours of tagging, work on Indian languages? What happens if I use Sanskrit as an interlingua?

I also found that the largest cross-language translation corpus for Indian languages is the Gazette of India. It is a Govt of India publication, posted in English as well as other languages. I think Google uses this heavily for its statistical machine translation. Unsurprisingly, there is a very formal tilt to the translations. This is far more pronounced in Indian languages, where formal style is very, very different from casual conversation. Detecting the formality level of an English sentence and translating it appropriately into an Indian language seems like an interesting problem.
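
To make that concrete, here's a toy sketch of what a first stab at formality detection might look like – a bag-of-words classifier, nothing fancy. The tiny training set is made up purely for illustration; a real one would need thousands of labelled sentences:

```python
# Toy formality detector: TF-IDF features + logistic regression.
# The four training sentences are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "The undersigned is directed to convey the approval of the competent authority.",
    "Kindly ensure compliance with the aforementioned directives.",
    "hey, did you end up going to that thing yesterday?",
    "lol no way, tell me what happened",
]
labels = ["formal", "formal", "casual", "casual"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(sentences, labels)

print(model.predict(["Please find attached the minutes of the meeting."]))
```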

Use cases for translation in India are also something I think very hard about.

The obvious use case is a generic translation app. This is not something I’m inclined to go head-to-head with Google on. Not right away at least ;) But it ought to be something we keep coming back to.

The next obvious thing is an API stack of some sort that others can use to build their apps and meet other regionalization needs. The Google Translate API seems to be the clear winner here as well. It will take a while to build something with that level of reliability and generality. But not too long, I'd wager.

A good start, however, would be a niche need. Like maybe translating legal documents from one language to another. Or to English (but then English is an Indian language too ;) ). I could use Google's API to generate training data cheaply, and then tweak the resulting model for my specific use case.
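
A rough sketch of that cheap-training-data idea, assuming the public Translate v2 REST endpoint. The API key and the sentences.txt input are placeholders, and 'kn' is the language code for Kannada:

```python
# Sketch: bootstrap a parallel corpus by machine-translating monolingual text.
# YOUR_API_KEY and sentences.txt are placeholders.
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_API_KEY"
ENDPOINT = "https://translation.googleapis.com/language/translate/v2"

def translate(text, target="kn"):
    params = urllib.parse.urlencode({"q": text, "target": target, "key": API_KEY})
    with urllib.request.urlopen(ENDPOINT + "?" + params) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    return data["data"]["translations"][0]["translatedText"]

# Write tab-separated (English, translation) pairs, one per line.
with open("sentences.txt") as src, open("parallel.tsv", "w") as out:
    for line in src:
        english = line.strip()
        out.write(english + "\t" + translate(english) + "\n")
```

The resulting corpus would be noisy, of course, but noisy training data beats no training data, which is rather the point of the whatever-works approach.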

Another niche need would be translating from one Indian language to another in an app that tourists and visitors can use to navigate around town. The kicker here: how much more useful would your app be compared to a phrasebook? A more useful app in this context would be one that can read signboards and translate them for you.

Yet another is to help the diaspora and other Indians learn a language through simple translated sentences. Again, this falls into the trap of asking how much better it would be than a phrasebook, or the app version of the "Learn Kannada in 30 days" manuals.

Another idea is to make the Government of India your customer and help them with their regionalization needs. But then, the government has more bilingual people in the IAS alone than it needs, and simple translation is probably not an issue at all when you're operating at the scale of the Government of India.

The dark side of me is thinking up an exciting novel/movie, though. Two idealistic US-educated scientists get inspired by Make in India and go back to build a simple translation app. After a whole lot of failures in monetizing their work, they are suddenly approached by the Govt of India, by the same officials who laughed their idea off earlier. Picture a Paresh Rawal at his droll politician best telling these meek urban types how their idea will never work in the 'real India', and right after the interval coming back with a more serious professional look and demeanor, along with the head of R&AW. Now I want this guy to be played by someone who radiates quiet power. Maybe Atul Kulkarni, but he's got to look a decade older, and a bit better built. And they find they can instantly become rich if they sell their code to NTRO, to use in the NETRA program (kind of like PRISM). They say thanks but no thanks, but heck, the head of NTRO tells them it's an offer they can't refuse. This guy's got to be a persuasive, shades-of-grey sort of wizened spy who used to work at AT&T and NASA before he got recruited, had to fake his death and everything, and now works under a new name. The two protagonists have a 'Gasp! It's him!' moment of recognition because they've actually used a lot of his research to build their software. This NTRO guy can easily be played by Madhavan. And the rest of the plot is about how they decide what to do with their software, whether they join NTRO, and whether they can sleep at night knowing they are being used to spy on billions of little online conversations every day.

Hmm. I gotta write that.

Backups

I found today that Amazon S3 has a really cool one-click backup setting, where you can have things archived regularly to Amazon Glacier.

And Amazon DynamoDB also has this thing where you can set it to automatically back up to a table in another region.

You can also set DynamoDB to back up to S3.
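
If you'd rather not click, the S3-to-Glacier piece can also be set up programmatically. A minimal sketch with boto3, assuming a bucket named my-backup-bucket and an arbitrary 30-day cutoff:

```python
# Sketch: an S3 lifecycle rule that transitions objects to Glacier after 30 days.
# The bucket name and the 30-day threshold are placeholder values.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-to-glacier",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object in the bucket
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```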

Glacier is apparently meant as a substitute for magnetic tape, without the inconveniences of tape. Restores take a while, as with tape. Pretty cool idea. I wonder what competition exists in this space currently. A cursory search suggests none.

Glad this is there. It’s pretty essential.

HCI and Swedish Medical Center

I had to go to the doctor recently. The patient sits opposite the doctor, maybe a little to the right, and the doctor's in front of a computer, keying things into the hospital management software. The doctor has her back to the door. What I noticed this time was that there's a mirror on the back of the door. So the patient, from where she sits, can actually see the doctor's computer monitor.

A light bulb of recognition went off.

A long time ago, I’d attended a talk at UCI’s HCI seminar series. I think it was Dr. Yunan Chen’s practice talk for her presentation at CHI. Her research is mainly about device use in the medical field. This particular talk was about an ethnographic study of patients’ perceptions of doctors’ device use.

One thing the patients had an issue with was doctors typing into a computer as the patients spoke. They wondered what the doctor was typing, whether the doctor was actually listening, and whether the doctor was doing something like checking mail or Facebook. And that led to a real loss of confidence.

Looks like Swedish has taken that research into account. A mirror on the back of the door is a simple solution. You can be sure the doctor isn't on Facebook, even if you can't read what they're typing through the mirror. The doctors also take time to show you what they've written, tell you they'll print it out for you anyway, and point out that you can access this information online as well.

Pretty good, huh, to see something go from research to implementation :)

Back to being back here

I don’t remember the last time I posted here. I don’t even think anyone remembers this place exists.

Irrespective.

I've grown a lot career-wise. This blog was supposed to help me along that journey, but somehow fell by the wayside. There's also this overarching guilt of not doing enough to post here. My big plans still remain. But every time someone asks me about them, I chuckle sadly.

So what's been going on with me? I graduated from UC Irvine under Dr. Ihler in 2011. After that, I did NLP for the finance industry for two years. It's quite an interesting field, I must say. I had one class that covered insider trading and EBITDA and mergers and acquisitions, and I found all of it enormously interesting. Unfortunately, I didn't keep up with my financial knowledge; I didn't really need it in what I did day to day.

And what did I do? I worked on a whole bunch of interesting things. You have a ginormous quantity of documents coming in, in so many different forms, and you need to parse them all and extract data from them. So you end up doing all these extremely basic things. You use OCR to convert image PDFs to text. You parse PDF in all its ugliness and convert it to a simpler format, while taking care to preserve some of the PDF-ey things about PDFs. And then it turns out there are 90 languages and your clients speak English. So you translate 90 languages into English. Some of it's easy, especially the European languages. A lot of it is painful. But we weren't looking for high-quality translations… just enough for the numbers in the financial documents to make sense. Then you run into a lot of unique problems. You don't want to translate yuan to dollars, for one. And you find that most off-the-shelf translators are built for general language, not finance-specific language, so the domain terms come out all wrong.
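
The yuan-to-dollars problem, by the way, is essentially entity protection: mask the things you don't want translated, translate, then unmask. A toy sketch, where translate() is a stand-in for whatever MT system you're using:

```python
# Toy sketch: shield currency mentions from the MT system with placeholders.
# translate() here is a stand-in for a real machine translation call.
import re

CURRENCY = re.compile(r"\b(CNY|RMB|USD|EUR|INR|yuan|rupees?|dollars?)\b", re.I)

def translate(text):
    return text  # placeholder for the real MT call

def translate_preserving_currency(text):
    saved = []
    def mask(match):
        saved.append(match.group(0))
        return "__CUR{0}__".format(len(saved) - 1)
    masked = CURRENCY.sub(mask, text)   # hide currency mentions
    translated = translate(masked)      # translate everything else
    for i, token in enumerate(saved):   # put the originals back
        translated = translated.replace("__CUR{0}__".format(i), token)
    return translated

print(translate_preserving_currency("Revenue rose to 3.2 billion yuan."))
```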

And then you do other interesting things with everything you've processed so far. You try Named Entity Recognition. You try recommending similar documents. You try identifying series in document streams. You try creating summaries.

All of it was mighty interesting. On a given day, I'd code in C, C#, Perl, Java and Python, and it'd all be no big deal. I learnt what MVC and MVVM meant. I began taking a real interest in software design. I learnt how to write large, maintainable code. And the benefits of version control.

And then it was time to move.

I work now for a large online retailer’s Search&Discovery division. And that’s all I can say about it. Maybe some day I’ll reminisce fondly on what text mining challenges I face here, the scale of what I work on, and other things that would have by then become old hat. But not now.

I've had other interesting experiences with data in the meantime. Facebook NY hosted a Data Science round table. The invitee list read like Chief Scientist, Head of Data Science, Asst Prof… you get the drift… and then me, with less than two years of work experience. It was insanely interesting to meet such people and have them treat me like they had a lot to learn from me. I learnt so much that day that though I've forgotten all their names, the discussions are still etched in my mind. It isn't every day that you have MCMC sampling explained to you over beer and fries someone else is paying for.

And then I tried a hack I'm not allowed to talk about, and I learnt there's a feature in POS taggers called the gazetteer: you give it a set of phrases and the tag they belong to, and bam, any occurrence of those phrases (exact matches only) is tagged thus. It's insanely useful when you have your own new part of speech, like, say, Celeb Names or Book Titles or some such.
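
The idea fits in a few lines. A toy exact-match gazetteer, with made-up phrase lists:

```python
# Toy gazetteer: tag exact occurrences of known phrases with a custom tag.
# The phrase lists are made-up examples.
gazetteer = {
    "CELEB": {"shah rukh khan", "oprah winfrey"},
    "BOOK_TITLE": {"the god of small things", "midnight's children"},
}

def tag(text):
    lowered = text.lower()
    hits = []
    for label, phrases in gazetteer.items():
        for phrase in phrases:
            start = lowered.find(phrase)  # first occurrence only, for simplicity
            if start != -1:
                hits.append((text[start:start + len(phrase)], label))
    return hits

print(tag("I just finished Midnight's Children on the flight."))
# [("Midnight's Children", 'BOOK_TITLE')]
```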

So that's what I've been up to. Let's hope I keep up this pace of blogging.


Getting back to machine learning

I got done with my Master of Science in Computer Science. I graduated with a thesis titled 'Graphical Models for Entity Coreference Resolution'.

Since then, it's been a long break from all things hardcore machine learning, data mining and natural language processing. I have a nice day job that pays for my essentials and still leaves me with enough time and money to do a lot of other stuff. My team does a lot of ML, but that does not include what I'm working on at the moment. My work might come to involve writing code that learns from data and predicts on other data, but I don't know yet.

It's been a good break. I needed this. I'm a much more confident person now. I have more confidence in my ability to write and maintain large pieces of code. I think it's about time for me to get back to learning all about machine learning and graphical models, with no deadline stress, enough opportunity to explore, and, most importantly, no feeling intimidated. Also, going through the material a second time is a good way of absorbing all that I missed the first time.

I've been cleaning out my hard disk to make conditions ideal for this. A messy filesystem is really hard to work with, especially with no version control or anything. Things get messy, and at crunch time it only gets worse, when you can't find what you want because you haven't labelled anything right.

I cleared out all my backups from my external hard drive. Then I moved all my pre-NYC-move photographs to the external HD. Going over which individual images to keep and which to delete was very cringeworthy – I had been quite camera-happy before 2009 and had clicked a lot of pictures. They say your first 10,000 pictures are your worst. Believe me, mine were. Utterly cringeworthy. More so since back then I didn't even pay attention to how I dressed or did my hair or maintained my skin. Those issues don't exist anymore, so the cringing isn't coupled with embarrassment and helplessness in my head like before.

I then uninstalled a lot of unnecessary software: multiple builds of Python, with crazy sets of plugins on each build, and outdated versions of Eclipse. And oh, so many datasets. I deleted what I could and shifted the rest to my external HD. I tried organizing all my music, tagging tracks appropriately and attempting to put them into the right folders. It wasn't so easy, so I gave up midway. But I discovered that Mp3tag seems to be a good app for this.

I then organized my huge collection of ebooks using Calibre. I seem to have a lot of crap I downloaded from Project Gutenberg back in my young-and-foolish days, in the infancy of the Google-powered Internet. Somehow, I just can't delete classic books, even though I've never read them. So they stay for now.

Turns out I have tons of movies stored as well, which I'd downloaded off Putlocker back when I couldn't even afford Netflix. I organized them well. I also seem to have a small collection of stuff downloaded off YouTube – clever and rare Indian ads, and rare music videos of indie Indian pop/rock/movies. I need to upload them back to YouTube someday, for the originals I downloaded from seem to have been deleted off the face of YouTube.

I even found all the original Stanford Machine Learning Class videos with Prof. Ng. Heh, with Coursera and Udacity and Khan Academy now, you don't need any of those like I did back in 2008-09. It was a different time back then, really.

I installed Python 3.2 after that. And Eclipse Juno, followed by PyDev and the Google App Engine plugin for Eclipse. A Windows installer for SciPy exists that is compatible with Python 3.2. However, Matplotlib's official Windows installer releases don't yet support Python 3.2. Thankfully, unofficial ones exist here (oh yay, look, it's from UCI). I could of course build everything from source, but I want to keep this as hassle-free as possible.

I also need to get started with version control on Google Code or some such, so that I keep all my code somewhere I can access from anywhere.

Next on the agenda is to go through a machine learning textbook, or an online course, and slowly build my own machine learning libraries from scratch. Maybe I'll try building a Weka replica – a uniform interface for training and testing each algorithm.
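
Something like this, perhaps – a minimal sketch of the uniform interface I have in mind, with the dumbest possible baseline plugged in to show the shape of it. The names are mine, not Weka's:

```python
# Sketch of a Weka-style uniform interface: every learner gets train()/predict().
class Classifier:
    def train(self, X, y):
        raise NotImplementedError

    def predict(self, X):
        raise NotImplementedError

class MajorityClass(Classifier):
    """Dumbest possible baseline: always predict the most frequent label."""
    def train(self, X, y):
        self.label = max(set(y), key=y.count)
        return self

    def predict(self, X):
        return [self.label for _ in X]

def accuracy(clf, X_train, y_train, X_test, y_test):
    # Same two calls for every algorithm; that's the whole point.
    predictions = clf.train(X_train, y_train).predict(X_test)
    return sum(p == t for p, t in zip(predictions, y_test)) / float(len(y_test))
```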

After that is to work on probabilistic graphical models and build those from scratch as well.

And in the midst of all this, I want to publish the work from my thesis, which will mean replicating those results in a new and improved way, taking into consideration all the ideas I didn't have time for and those I could have implemented better.

Let’s see how it goes :) I hope to keep updating this place with all the stuff I do :)

Update: I finally found an ML textbook best suited to my needs! Machine Learning: An Algorithmic Perspective. I'll start tomorrow; we'll see how much I learn.

[Webapp Idea] Twitter Link Browser

I use Twitter quite a bit. A lot of the people I follow share a lot of links. When I browse Twitter on my mobile in the morning, I can't check out all the links. I usually 'Favorite' the links that seem interesting and browse them later. I'd actually prefer a better interface for this, one that lets me tag these links privately so that I can look for them later as well.

I found one such webapp, whose name I now forget. The problem with it was that it had a sucky interface and didn't let me preview the links properly. Then there's also Tweetree, which offers previews of shared links. I also like the Google Reader/Gmail sort of interface, which keeps track of new links and already-read links. And when multiple people share the same link, I'd like to see it collapsed into one entry with "X, Y and Z shared this" next to it. Or something.

So this is one thing I’d like to build using Google App Engine.

The steps to do so would be as follows:

  1. Find a nice Twitter API interface for Python, preferably one that can be integrated with Google App Engine.
  2. Write code to get tweets from your Twitter timeline.
    2(a) Learn how to use Twitter OAuth.
  3. Detect tweets with links. When a tweet has one, extract the unshortened link.
  4. By now, you have a set of links, and can choose to display them as you wish.
  5. Use the App Engine datastore to store previously viewed links (see the model sketch after this list). Attributes stored along with each link can include the users who shared it, the timestamps of the tweets that shared it, viewed-or-not (set to 'No' when the link is first dropped into the database after extraction), and the title of the linked page. Also store the time of last login.
  6. Workflow: on login, extract links from the timeline and drop them into the database until the timestamp of the tweet you're reading is earlier than the time of last login. Then display the links with 'viewed-or-not' set to 'No' as unread items and the rest as read items. On clicking a link, mark it as read. Also provide checkboxes to mass-mark-as-read.
  7. Basic interface: something like Gmail's basic HTML view. Previews and stuff can come later.
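
For step 5, the datastore model might look something like this – a sketch using App Engine's ndb, with field names of my own choosing:

```python
# Sketch of the step-5 datastore model, using App Engine's ndb.
# The field names here are my own; adjust to taste.
from google.appengine.ext import ndb

class SharedLink(ndb.Model):
    url = ndb.StringProperty(required=True)            # the unshortened link
    title = ndb.StringProperty()                       # title of the linked page
    shared_by = ndb.StringProperty(repeated=True)      # users who shared this link
    tweet_times = ndb.DateTimeProperty(repeated=True)  # timestamps of sharing tweets
    viewed = ndb.BooleanProperty(default=False)        # 'No' when first dropped in

class UserState(ndb.Model):
    last_login = ndb.DateTimeProperty()
```
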
Components to build a basic version:
  • OAuth
  • Tweet-getter
  • Link-extract-and-drop-in-database. This in turn includes the link extractor, unshortener, title-getter, and database interface (a rough sketch of the first two follows this list).
  • Database queries to view links and mark them as read/unread.
  • User interface.
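
For the tweet-getter, link extractor and unshortener, here's a rough sketch with tweepy. The four credential strings are placeholders, and unshortening just follows HTTP redirects:

```python
# Sketch: pull timeline tweets with tweepy, extract and unshorten their links.
# The four credential strings below are placeholders.
import requests
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

def unshorten(url):
    # Follow redirects to the final destination URL.
    return requests.head(url, allow_redirects=True, timeout=10).url

for tweet in api.home_timeline(count=50):
    for entity in tweet.entities.get("urls", []):
        print(unshorten(entity["expanded_url"]))
```
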
Anything missing so far? Loose ends? Anything that can be done better? Are you working on this? Any advice on getting started, or on any of the individual components?

RIP, Reader

Yeah, this is yet another of the funeral dirges for Google Reader. And I post it here instead of on my personal blog because I need to get into the habit of writing about technology here. Google Reader is hardly 'technology' in the sense I intend… I want to use this place for research updates and paper summaries. But the anxiety about 'not being good enough' at all that is so strong that I don't want to write anything even remotely geeky. I need to snap out of that. And it's NaNoWriMo; it's about quantity more than quality. So here we go :)

So basically there are two main arguments against Google Reader's integration with Google Plus. The first is that the user interface is sucky. The second is that the removal of sharing has killed the whole spirit of Reader. A third, if I may add, is that the platform/API is so bad, and everything so messed up at first look, that I can't seem to wrap my mind around how to write a wrapper that makes things better. Oh wait, there's a fourth as well – the 'stream' format, as opposed to the folders-and-tags format, is the very antithesis of what Reader is supposed to be.

Let's start with the appearance. Yes, white space is good. It makes things look 'clean'. But that's only when there are very specific things you want your user to see on your page. It works great for the Google.com homepage, for instance… all you want there is a search bar. But for a feed reader, it doesn't work at all. When I log in, I don't want to see half my screen space taken up by needless headers and whatnot. The bar with 'Refresh', 'Mark as read' and 'Feed settings' is needlessly large and prominent; those controls aren't used often enough to justify their size. The focus here shouldn't be on the options, but on the thing I'm reading. Fail.

Then, everything's gray, including links. If something's not blue or purple, my mind doesn't register it as a link. Sorry, but those are unwritten conventions on the Web. There's no reason to change them now, and gray is a horrible color for showing that something's different from the rest of the black text. The only spots of color on the page are a tiny dab of red to show the feed you're currently reading, and a large button on the top left that says 'Subscribe'. A dab of red, seriously? I much preferred having the entire line of the current feed highlighted over that little red bar. And I don't add new feeds every day, so I don't need a large 'Subscribe' button. When I do add feeds, it's usually not via google.com/reader… I'm on the website I want to add, and I add feeds by clicking on the RSS icon and then adding to Reader.

Then the UI for sharing. It takes a lot more clicks to share something now. And yeah, the gripe is that whatever I share will be shared only on G+, but we'll get to that in a moment. My problem with having to pick which circles I share with, each time I share a post, is that it's too much decision-making, too often. At least give me a set of checkboxes of my circles, so that all I need is two clicks instead of having to start typing my circle names.

It turned out that if you wanted to share something without publicly +1-ing it, you had to go to the top-right corner and click on 'Share'. How is that intuitive? And why would anyone design it that way, especially when the previous way to do it was clicking 'share' right below the post? Surely the Circles picker could just appear when you click the 'share' button, with +1-ing as a separate button? And keep the top-right Share button if you like.

Now about sharing. I can share something from Google Reader, yes, but people can only read it on Google Plus. Someone said that's like retweeting something on Twitter from your client, say TweetDeck, while those who follow you can see your RTs only on twitter.com. How absurd is that? I want a one-stop shop where I can do all my reading, instead of having it spread over a zillion other places.

Which is why one of the things I wanted to do was build a wrapper website that integrates links shared on your G+ stream with your Reader feeds. I can't seem to wrap my mind around how exactly it would work, but it's one thing I certainly want to do.

The 'stream' format sucks for reading shared links. I have this problem with Twitter too, but on Twitter you can 'Favorite' tweets that contain links and then read them one by one later. In fact, I was wondering about a platform that takes the links on your Twitter timeline and puts them together for easy reading, feed-reader style. Google Plus, however, has no such feature for tucking stuff away for later. If you're too busy, you skip over a shared link and it's lost. I much preferred the model where your feeds all accumulate, and if it gets too much to handle, you can always mark all as read. Even better when your feeds are properly organized.

And then Google Plus does a bad job of displaying shared links. It shows a small preview, but that's more often than not insufficient. Buzz was better in this respect… at least images could be expanded, and posts could be expanded so that you could read them right there. Ha, one positive of this, however, is that people would get a lot more hits on their websites. And it is not immediately apparent how inconvenient this visual format is, because people don't share that much on Google Plus yet, and don't yet use it as a primary reader, or so extensively that it gets on their nerves.

And finally about the thing that has had the largest impact. Sharing.

Previously, in 2007, when Reader didn't yet have sharing, we'd all come across nice links we wanted to share with our friends, and then either ping them on IM or mail them the link. Needless to say, it was irksome – for both us and our friends. But somehow, when you shared it on Reader, the intrusiveness of sending links went away. It was just there, and if you liked it, you said so in the comments, or by resharing it, or by referencing it in conversation. It stopped feeling like you were shoving it down someone's throat, or someone was shoving it down yours.

Sharing was also a nice way to filter content. For example, I loved reading Mental Floss's feeds, but couldn't stand the feed-puke that was TechCrunch and Reddit, whereas it was the other way around for some of my friends. So we just followed each other, and I read the TechCrunch and Reddit content they deemed good enough to share, while I shared the interesting tidbits from Mental Floss.

Google Reader, I remember feeling, was a nice incubator for observing social network dynamics and introducing social features. It was my first first-hand exposure to recommender systems, before I moved to the USA and could actually shop on Amazon or watch movies on Netflix. It was interesting to see how the recommendations incorporated stuff from your GTalk chats, your searches, the things you 'liked'… I remember freaking out about how, after chatting often with a friend in LA, my recommended feeds included a lot of LA-related blogs. And during a search-engine-based treasure hunt at my undergrad college, a friend and I remember saying, "Oh man, googling stuff for this contest is so going to affect our Reader recommendations."

It was also where I was recommended tons of blogs on ML and NLP and IR, due to which I went to grad school where I did, and did my thesis in what I did.

Also fun was the ‘Share as a note in reader’ bookmarklet. That way, I could share stuff from anywhere on the Internet with people who I knew would appreciate it.

Now it seems as if the Plus team wants to prove right that ex-Amazon Googler who said Google can't do platforms well. Instead of providing services that can be used in a variety of ways to provide 'just right' experiences for a variety of people, Plus is trying to do it all by itself. And failing miserably at it. The reason for Twitter's success is the sheer variety of ways you can tweet – from your browser, from your smartphone, from your not-so-smart phone using Snaptu, from your dumb phone via text, from your tablet, your desktop… and I just don't see that happening with Plus yet.

Maybe I wouldn't be so mad if all the folks I share with on Reader were on Plus, but hardly anyone is. And I don't check my Plus feed on a regular basis either. I wouldn't mind going on Plus just to read what everyone's sharing, but the user experience is so bad that I don't want to.

Google should have learnt, from when it integrated Reader with Buzz and a lot of people found that irksome and simply silenced others' Reader shares in their Buzz feeds, that the Reader format doesn't go well with the stream format.

There's so much quite obviously broken with the product that you wonder whether the folks who design and code it actually use it as extensively as you do. Dogfooding is super important in products like Google's, where there is a wide variety of users and user surveys can't capture every single aspect.

But given that doing this to Google Reader feels just like when they cancelled Arrested Development, you begin to think they are probably aware of everything, and just don't care about you, the user, and your needs anymore.

PS: Can anyone help me get the Google Plus Python API up and running on Google App Engine? I want to play with it and see what it does, but I haven't been able to get it working.

PPS: Does a Greasemonkey script to make G+ more presentable sound like a good idea?

PPPS: Check out the folks at HiveMined. They are building a replacement for Google Reader :)

Recommender Systems Wiki

Use and contribute and link to: http://www.recsyswiki.com/wiki/Main_Page

Now if only there were one such wiki for ML methods in NLP, my life would be great.

Convex Optimization

Course I’m taking. Need to brush up on basics before diving in. And I’ve got less than a day to do that.

Anyone know a good crash course in linear algebra? Will be grateful. Thanks.


How to read a research paper

If I'm completely in the groove, with a firm topic in mind, I find it relatively easy to read papers. But when I'm attempting to get started on something, or reading a paper that, say, I have to summarize for a course, I lose my footing. I procrastinate; I become reluctant to start.

I decided I wanted out of this shite, and hence googled 'How To Read A Paper'. I found this paper by someone at the University of Waterloo, and I suspect it will help out greatly.

Let me summarize it for you.

Essentially, given a research paper, you go over it in three passes.

First Pass (5-10 minutes):

  • Read the Title, Abstract and Introduction.
  • Read the section/subsection headings and ignore all else.
  • Read the conclusions.
  • Glance over the references and tick off those you’ve already read.
  • By the end of this pass, you should be able to answer 5 C’s about the paper:
    • Category
    • Context (What papers are related? What bases are used to analyze the problem?)
    • Correctness (Are the assumptions valid?)
    • Contributions of the paper
    • Clarity (Is the paper well-written?)

Second Pass (1 hour):

  • Read the paper more carefully, while ignoring details like proofs.
  • Jot down points; make comments in the margins.
  • Look carefully at all figures, especially graphs.
  • Mark unread references for further reading (for background information).
  • Summarize the main themes of the paper to someone else.
  • You might not understand the paper completely. Jot down the points you don't understand, and why.
  • Now, either
    • Decide not to read the paper
    • Return later to the paper after reading background material
    • Or persevere on to the third pass

Third Pass (4-5 hours):

  • You need to virtually reimplement the paper: recreate the paper and its reasoning.
  • Compare your recreation with the original.
  • Think of how you would present the ideas, and compare that with how the ideas are actually presented.
  • Here, you also jot down your ideas for future work.
  • Reconstruct the entire structure of the paper from memory.
  • Now you should be able to identify the strong and weak points of the paper, the implicit assumptions, any issues with the experimental or analytical techniques, and missing citations.

That’s all.

Additionally, as a form of accountability (which I so need at the moment), I think I will blog about every single paper I read, following the above structure.
