RIP, Reader

Yeah, this is yet another one of the funeral dirges for Google Reader. And I post it here instead of on my personal blog because I need to get into the habit of writing about technology here. Google Reader is hardly ‘technology’ as I intend it to be… I want to use this place for research updates and paper summaries. But the anxiety about ‘not being good enough’ at all that is strong enough that I don’t want to write anything even remotely geeky. I need to snap out of that. And it’s NaNoWriMo; it’s about quantity more than quality. So here we go 🙂

So basically there are two main arguments against Google Reader’s integration with Google Plus. The first is that the user interface is sucky. The second is that the removal of sharing has killed the whole spirit of Reader. A third, if I may add, is that the platform/API is so bad, and everything so messed up at first look, that I can’t wrap my mind around how to write a wrapper that makes things better. Oh wait, there’s a fourth as well – the ‘stream’ format, as opposed to the folders-and-tags format, is the very antithesis of what Reader is supposed to be.

Let’s start with the appearance. Yes, white space is good. It makes things look ‘clean’. But that’s only when you have very specific things you want your user to see on your page. It works great for the Google.com homepage, for instance… all you want is a search bar. But for a feed reader, it doesn’t work at all. When I log in, I don’t want half my screen space taken up by needless headers and whatnot. The bar with ‘Refresh’, ‘Mark as read’ and ‘Feed settings’ is needlessly large and prominent when it should be small and unobtrusive. Those options aren’t used all that often to begin with, so nothing justifies their large font size. The focus here shouldn’t be on the options, but on the thing I’m reading. Fail.

Then everything’s gray, including links. If something’s not blue or purple, my mind doesn’t consider it a link. Sorry, but those are unwritten conventions on the Web. There’s no reason to change that now, and gray is a horrible color for showing that something’s different from the rest of the black text. The only spots of color on the page are a tiny dab of red to show the feed you’re currently reading, and a large button on the top left that says ‘Subscribe’. A dab of red, seriously? I’d much rather have the entire line for the current feed highlighted than that little red bar. And I don’t add new feeds every day, so I don’t need a large ‘Subscribe’ button. When I do add feeds, I’m usually not on google.com/reader anyway… I’m on the website I want to add, clicking the RSS icon and then adding to Reader.

Then there’s the UI for sharing. It takes a lot more clicks to share something now. And yes, the gripe is that whatever I share will be shared only on G+, but we’ll get to that in a moment. My problem with having to pick which circles I share with every single time is that it’s too much decision-making too often. At least give me a set of checkboxes for my circles, so that all I need is two clicks instead of having to start typing my circle names.

It turned out that if you wanted to share something without publicly +1-ing it, you had to go to the top-right corner and click ‘Share’. Well, how is that intuitive? And why would anyone design it that way, especially when the previous way to do it was by clicking ‘share’ right below the post? Surely the Circles picker could just appear when you click the ‘share’ button, with +1-ing as a separate button? And keep the top-right Share button too, if you like.

Now about sharing. I can share something from Google Reader, yes, but my friends can only read it on Google Plus. Someone compared it to retweeting something on Twitter from your client, say TweetDeck, while those who follow you can see your RTs only on twitter.com. How ridiculous is that? I want a one-stop shop where I can do all my reading, instead of having it spread over a zillion other places.

Which is why one of the things I want to do is build a wrapper website that integrates links shared on your G+ stream with your Reader feeds. I can’t quite wrap my mind around how exactly it would work yet, but it’s something I certainly want to do.

The ‘stream’ format sucks for reading shared links. I have this problem with Twitter too, but on Twitter you can ‘Favorite’ tweets that contain links and read them one by one later. In fact, I’ve been wondering about a platform that takes links from your Twitter timeline and puts them together for easy reading, feed-reader style. Google Plus, however, has no such feature for tucking away stuff for later. If you’re too busy and skip over a shared link, it’s lost. I much preferred the model where your feeds would all accumulate, and if they got too much to handle, you could always mark all as read. Even better when your feeds were properly organized.

And then Google Plus does a bad job of displaying shared links. It shows a small preview, but that’s more often than not insufficient. Buzz was better in this respect… at least images could be expanded, and posts could be expanded so you could read them right there. Ha, one upside of this is that people would get a lot more hits on their websites. And it’s not immediately apparent how inconvenient this visual format is, because people don’t share that much on Google Plus yet, and they don’t yet use it as a primary reader, or so extensively that it gets on their nerves.

And finally about the thing that has had the largest impact. Sharing.

Previously, in 2007, when Reader didn’t yet have sharing, we’d all come across nice links we’d want to share with our friends, and then either ping them on IM with it, or mail them the link. Needless to say, it was irksome. For both us and our friends. But somehow, when you shared it on Reader, the intrusiveness of sending links went away. It was just there, and if you liked it, you said so on the comments or by resharing it or referencing it in conversation. It stopped feeling like you were shoving it down someone’s throat, or someone shoved it down yours.

Sharing was also a nice way to filter content. For example, I loved reading Mental Floss’s feeds, but couldn’t stand the feed-puke that were feeds like TechCrunch and Reddit, whereas it was the other way for some of my friends. So we just followed each other, and I read the TechCrunch and Reddit content they deemed good enough to share, while I shared the interesting tidbits from Mental Floss.

Google Reader, I remember feeling, was a nice incubator for observing social network dynamics and introducing social features. It was my first first-hand exposure to recommender systems, before I moved to the USA and could actually shop on Amazon or watch movies on Netflix. It was interesting seeing how the recommendations incorporated stuff from your GTalk chats, your searches, stuff you ‘liked’… I remember freaking out about how after chatting often with a friend in LA my recommended feeds included a lot of LA-related blogs. And there was a search engine based treasure hunt at my undergrad college, and a friend and I remember saying “Oh man, googling stuff for this contest is so going to affect our Reader recommendations”.

It was also where I was recommended tons of blogs on ML and NLP and IR, which is part of why I went to the grad school I did, and did my thesis on what I did.

Also fun was the ‘Share as a note in reader’ bookmarklet. That way, I could share stuff from anywhere on the Internet with people who I knew would appreciate it.

Now it seems as if the Plus team wants to go and prove right that ex-Amazon Googler who said Google can’t do platforms well. Instead of providing services which can be used in a variety of ways to provide ‘just right’ experiences for a variety of people, Plus is trying to do it right all by itself. And failing miserably at that. The reason for Twitter’s success is the sheer variety of ways you can tweet – from your browser, from your smartphone, from your not-so-smart phone using Snaptu, from your dumb phone via text, your tablet, your desktop…. and I just don’t see that happening with Plus yet.

Maybe I wouldn’t be so mad if all the folks I share with on Reader were on Plus, but actually, hardly anyone is. And I don’t check my Plus feed on a regular basis either. I wouldn’t mind going on Plus to just read what everyone’s sharing, but the user experience is so bad I wouldn’t want that.

Google should have learnt, from when it integrated Reader with Buzz and a lot of people found it irksome enough to simply silence others’ Reader shares from their Buzz feed, that the Reader format doesn’t go well with the stream format.

There’s so much quite obviously broken with the product that you wonder if the folks who design and code it actually use it as extensively as you do. Dogfooding is super-important for products like Google’s, where there’s a wide variety of users and user surveys can’t capture every single aspect.

But given that doing this to Google Reader seems just like when they cancelled Arrested Development, you begin to think they are probably aware of everything, and just don’t care about you the user and your needs anymore.

PS: Can anyone help me get the Google Plus Python API up and running on Google App Engine? I want to play with it, see what it does, and am not able to get it up and running.

PPS: Does a Greasemonkey script to make G+ more presentable sound like a good idea?

PPPS: Check out the folks at HiveMined. They are building a replacement for Google Reader 🙂


Recommender Systems Wiki

Use it, contribute to it, and link to it: http://www.recsyswiki.com/wiki/Main_Page

Now if only there were one such wiki for ML methods in NLP, my life would be great.

Convex Optimization.

Course I’m taking. Need to brush up on basics before diving in. And I’ve got less than a day to do that.

Anyone know a good crash course in linear algebra? Will be grateful. Thanks.


How to read a research paper

If I’m completely in the groove, with a firm topic in mind, I find it relatively easy to read papers. But when I’m attempting to get started on something, or reading a paper that, say, I have to summarize for a course, I lose my footing. I procrastinate; I become reluctant to start.

I decided I wanted out of this shite, and hence googled ‘How To Read A Paper’. I found this paper by S. Keshav of the University of Waterloo, and I suspect it will help greatly.

Let me summarize it for you.

Essentially, given a research paper, you go over it in three passes.

First Pass (5-10 minutes):

  • Read the Title, Abstract and Introduction.
  • Read the section/subsection headings and ignore all else
  • Read the conclusions
  • Glance over the references and tick off those you’ve already read.
  • By the end of this pass, you should be able to answer 5 C’s about the paper:
    • Category
    • Context (What papers are related? What bases are used to analyze the problem?)
    • Correctness (Are the assumptions valid?)
    • Contributions of the paper
    • Clarity (Is the paper well-written?)

Second Pass (1 hour):

  • Read the paper more carefully, while ignoring details like proofs
  • Jot down points, make comments in the margins
  • Look carefully at all figures, especially graphs
  • Mark unread references for further reading (for background information).
  • Summarize main themes of the paper to someone else.
  • You might not understand the paper completely. Jot down the points you don’t understand, and why.
  • Now, either
    • Decide not to read the paper
    • Return later to the paper after reading background material
    • Or persevere on to the third pass

Third Pass (4-5 hours):

  • You need to virtually re-implement the paper. Recreate the paper and its reasoning
  • Compare your recreation with the original
  • Think of how you would present the ideas, and compare with how the ideas are presented.
  • Here, you also jot down your ideas for future work
  • Reconstruct the entire structure of the paper from memory.
  • Now you should be able to identify the paper’s strong and weak points, its implicit assumptions, any issues with its experimental or analytical techniques, and missing citations.

That’s all.

Additionally, as a form of accountability (which I badly need at the moment), I think I will blog every single paper I read, following the structure above.
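To hold myself to that, here’s a rough note template following the three passes. This is just my own shorthand for structuring those blog posts; the field names are mine, not from the paper:

```python
# A hypothetical note-taking template for blogging papers, following the
# three-pass method. Field names are my own shorthand, not from the paper.
PAPER_NOTES = {
    "first_pass": {            # 5-10 minutes
        "category": "",
        "context": "",         # related papers, analytical bases
        "correctness": "",     # do the assumptions look valid?
        "contributions": "",
        "clarity": "",
        "references_already_read": [],
    },
    "second_pass": {           # about an hour
        "margin_notes": [],
        "references_to_read": [],
        "points_not_understood": [],
        "verdict": "",         # drop / read background first / third pass
    },
    "third_pass": {            # 4-5 hours
        "reconstruction": "",  # the paper's structure, from memory
        "strong_points": [],
        "weak_points": [],
        "implicit_assumptions": [],
        "future_work_ideas": [],
    },
}

print(sorted(PAPER_NOTES))  # → ['first_pass', 'second_pass', 'third_pass']
```

One dict per paper, filled in pass by pass; the ‘verdict’ field forces the end-of-second-pass decision the paper talks about.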

Transfer Learning etc

I think this will work best if I just update my daily progress here rather than trying to give comprehensive views of what I’m doing.

So you have data coming in that needs to be classified. Apparently the accuracy of most classifiers is abysmally low. We need to build a better classifier.

I took a month’s worth of data, applied all possible classifiers to it, and cross-validated. Accuracy was roughly in the 85-90% range. While that’s not excellent, it’s not bad, given the small amount of training data.

So what’s this low accuracy everyone’s talking of?

Turns out, the new data coming in is very different from the data you train on. You’ll train on June’s data, but July’s data will be quite different. The same words end up appearing under different class labels. Hence the low accuracy.

Also, you have not just one classifier, but many. It turns out that many classifiers trained on subsets of the data perform better than one classifier trained on all of it.

Learning will have to keep evolving. I first thought of Active Learning in this context, where you ask the user to label the instances you’re not sure about. But then, what if you confidently label something that is patently wrong?

The many-classifiers part of the problem helps us visualize the training data in a different way. Each category – class label – has many sub-categories. Each classifier is trained on a month’s worth of data, and each month can be likened to a sub-category. You train on one sub-category, test on another, and expect it to return the same class label. That’s like training a classifier on data containing hockey-related terms for the Sports label, and then expecting it to recognize data containing cricket-related terms as Sports too.

Sound familiar?

This would be transfer learning/domain adaptation – you learn on one distribution, and test on a different distribution. The class labels and features however, remain the same.

This would more specifically be Transductive Transfer Learning – you have a training set, from distribution D1, and a test set, from distribution D2. You have this unlabelled test data available during training, and you somehow use this to tweak the model you’ll learn from the training data.

Many ways exist to do this. You can apply Expectation-Maximization to a Naive Bayes classifier trained on the training data, to maximize the likelihood of the test data while still doing well on the training data. Or you can train an SVM, assign pseudo-labels to the test data, and add those to the next iteration of training, repeating until you get reasonable confidence measures on the test data, while still doing well on the training data.
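As a toy illustration of that pseudo-labelling idea, here’s a minimal sketch with a tiny Naive Bayes classifier. The classifier and the June/July example data are my own construction for illustration, not any real system:

```python
import math
from collections import Counter, defaultdict

class TinyNB:
    """Bare-bones multinomial Naive Bayes with add-one smoothing,
    just enough to demonstrate the pseudo-labelling loop."""
    def fit(self, docs, labels):
        self.label_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)
        for doc, lab in zip(docs, labels):
            self.word_counts[lab].update(doc.split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def log_score(self, doc, lab):
        total = sum(self.word_counts[lab].values()) + len(self.vocab)
        s = math.log(self.label_counts[lab] / sum(self.label_counts.values()))
        for w in doc.split():
            s += math.log((self.word_counts[lab][w] + 1) / total)
        return s

    def predict(self, doc):
        return max(self.label_counts, key=lambda lab: self.log_score(doc, lab))

def self_train(train_docs, train_labels, test_docs, rounds=3):
    """Each round: fit, pseudo-label the unlabelled test docs, then fold
    the pseudo-labelled docs into the next round's training set."""
    docs, labels = list(train_docs), list(train_labels)
    for _ in range(rounds):
        model = TinyNB().fit(docs, labels)
        pseudo = [model.predict(d) for d in test_docs]
        docs = list(train_docs) + list(test_docs)
        labels = list(train_labels) + pseudo
    return pseudo

pseudo = self_train(
    ["hockey stick goal", "actor film award"],  # "June": labelled
    ["Sport", "Film"],
    ["cricket bat goal", "film actor story"],   # "July": unlabelled
)
print(pseudo)  # → ['Sport', 'Film']
```

The overlapping words (‘goal’, ‘film’, ‘actor’) let the model pseudo-label the shifted July data even though ‘cricket’ and ‘bat’ were never seen in training.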

All these approaches to Transductive Transfer Learning are fine. They assume that you have test data available during training time.

We have a slight twist on that. It might be too expensive to store the training data. Or you might have privacy concerns and hide your data, but just expose a classifier you’ve built on top of it.  So, essentially, all you have is a classifier, and you need to tweak that when training data is available.

Let’s complicate it further. You have a set of classifiers. You can pick and choose classifiers you want to combine based on some criteria on the test data, create a superclassifier, and then try tweaking that based on the test data.

For starters, check out this paper by Dai. There, you have access to the training data. What if you don’t? Can you then tweak the classifier without knowing the data underlying it?

Let’s assume it’s possible.

Then, based on some criteria you pick, you choose a few classifiers from the many you have. You merge them, and then tweak that superclassifier. For example, your test data contains data related to hockey and Indian films [labels are Sport and Film]. You have one classifier trained on cricket and Indian films, one on hockey and Persian films, and another on football and Spanish films. So C1 and C2 are the classifiers closest to your data. You combine C1 and C2 such that you get a classifier equivalent to one trained on hockey, Persian films, cricket and Indian films. An optimal classifier. And then you tweak it.
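Here’s a rough sketch of that selection-and-merging step. The closeness measure, the weighted vote, and the keyword ‘classifiers’ are placeholder assumptions of mine, just to show the plumbing:

```python
from collections import Counter

def closeness(train_counts, test_counts):
    """Crude proxy for distributional similarity: the fraction of test
    tokens whose word also appears in a classifier's training data."""
    total = sum(test_counts.values())
    seen = sum(n for w, n in test_counts.items() if w in train_counts)
    return seen / total

def pick_and_merge(classifiers, test_counts, k=2):
    """classifiers: list of (predict_fn, train_counts) pairs. Keep the k
    whose training data looks closest to the test distribution, and merge
    them into a superclassifier via a closeness-weighted vote."""
    chosen = sorted(classifiers,
                    key=lambda c: closeness(c[1], test_counts),
                    reverse=True)[:k]
    def superclassifier(doc):
        votes = Counter()
        for fn, tc in chosen:
            votes[fn(doc)] += closeness(tc, test_counts)
        return votes.most_common(1)[0][0]
    return superclassifier

# Toy stand-ins for C1 (cricket + Indian films), C2 (hockey + Persian
# films) and C3 (football + Spanish films).
c1 = (lambda d: "Sport" if "cricket" in d else "Film",
      Counter("cricket bat indian film".split()))
c2 = (lambda d: "Sport" if "hockey" in d else "Film",
      Counter("hockey goal persian film".split()))
c3 = (lambda d: "Sport" if "football" in d else "Film",
      Counter("football pitch spanish film".split()))

test_counts = Counter("hockey goal indian film".split())  # hockey + Indian films
merged = pick_and_merge([c1, c2, c3], test_counts)
print(merged("hockey goal match"))   # → Sport
print(merged("indian film story"))   # → Film
```

With this test distribution, C2 and C1 rank closest (0.75 and 0.5), matching the C1-plus-C2 choice in the example above; the ‘tweaking’ step would then adjust the merged model further.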

That’s the architecture.

The questions we seek to answer are: how to choose which classifiers to merge, and how to tweak the merged classifier given test data – and whether we’d need any extra data for that.

And… a more fundamental question: given that the test data will come from a distribution none of your ensemble has seen before, is merging classifiers worth the while? We could vary the KL-divergence between the training and test distributions and see how much having an ensemble helps.
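For that last experiment, here’s a quick sketch of measuring KL-divergence between two months’ word distributions. The add-one smoothing is my own assumption, to keep the second distribution nonzero everywhere:

```python
import math
from collections import Counter

def kl_divergence(p_counts, q_counts, smoothing=1.0):
    """KL(P || Q) between two word-count distributions, with add-one
    smoothing so that Q assigns nonzero mass to every word in P."""
    vocab = set(p_counts) | set(q_counts)
    p_total = sum(p_counts.values()) + smoothing * len(vocab)
    q_total = sum(q_counts.values()) + smoothing * len(vocab)
    kl = 0.0
    for w in vocab:
        p = (p_counts.get(w, 0) + smoothing) / p_total
        q = (q_counts.get(w, 0) + smoothing) / q_total
        kl += p * math.log(p / q)
    return kl

june = Counter("hockey match win hockey team".split())
july = Counter("cricket match win cricket innings".split())
print(kl_divergence(june, june))        # → 0.0 (identical distributions)
print(kl_divergence(june, july) > 0.0)  # → True (shifted vocabulary)
```

Sweeping this divergence (e.g. by mixing months in different proportions) against ensemble accuracy would give the curve the question above is asking for.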

Nascent ideas so far.

Comeback. Again.

So this post will slightly deviate from the general tone of this blog. It is a tad more personal.

I’ve just come out of a phase of unstructured time where I really really wanted to fix short-term goals for myself, and failed miserably. At the end of it all, I watched Julie And Julia, where the protagonist uses her blog to set short-term goals for herself and to check each one off as she achieves it. I want to emulate that.

I am now interning at a research lab in the industry. I am deciding on a problem statement that will mostly involve some form of transductive transfer learning. I have a great work environment, and an awesome mentor who helps with the short-term goal setting.

In such a setting, I feel I should probably maintain a daily log of how things are progressing, so that I can refer back to these notes later when I want to know how to set goals and progress with research. I have a controlled environment now, and it’d be interesting as well as helpful to document my time here such that I can replicate it elsewhere.

Most of my work will involve previous work and data that’s in the public domain, so I don’t think it’ll be a breach of any contract or NDA to talk about them on a public forum. Though, I might choose to make the text unsearchable and hence make the posts hidden, while keeping the password public. Not many know of the existence of this blog, and this I guess would be a sane decision. I’ll anyway have to check with my superiors #TODO.

Alrighty. Next post possibly coming in another hour.

Reading PDFs – Superpain.

I’ve been facing a huge problem of late.

I have a lot of documents to read, all in .pdf format. Fine, it’s a universal format, yada yada.

It however is not without its pitfalls.

Firstly, .pdf is more about visualization than content. By this I mean that once you write something to a .pdf file, it’s like you’ve written it to paper. It’s a format for printing things out more than anything else. So while there’s plenty of software to create .pdf files, there’s very little that can actually edit them.

So why do I want to edit .pdf files? Well, most academic papers are set in 11pt or 12pt font, which doesn’t make for easy reading. And I abhor the large spaces wasted in the margins. I tried Foxit PDF Editor, but it turns out it lets you correct typos, add and delete images, and add and delete pages, and that’s about it. It won’t let you modify a file in the true sense of the word.

So, convert it to Word format. I found an online PDF-to-DOC converter and did just that.

Job done, right?

No.

Change the font size, and it messes up the equations, tables and general layout. Decrease the margins, same issue.

Oh dear god, I really wish I had the original LaTeX source, so I could just modify small bits of it, or fit the same text and images into a layout better suited for reading. But trying to do that from a .pdf is like trying to get a live cow back from roast beef.

It’s about time proceedings of NIPS and CHI got published in Kindle format, don’t you think? Papers are written to be read, right?

PyBrain

Machine Learning Library for Python. Yay. Here.

Learning to Link With Wikipedia – II

I’m done with most of the pre-processing. Feel free to tell me how crappy my code is. Just be polite; otherwise I’ll probably cry. It takes ages to write to disk – that’s the bottleneck. It’s a bit of a hack job, though I must say I used to write worse code.

And you can use this code if you like.

import xml.dom.minidom
import re

class xmlMine:

    def __init__(self):
        self.stopWordDict = {'': 1}  # dictionary of stopwords
        self.titleArticleDict = {}   # maps each title to its list of link words

    def getStopwords(self, stopWordFile):
        # loads stopwords from file into memory
        stopWordObj = open(stopWordFile)
        for stopWord in stopWordObj.readlines():
            stopWord = stopWord.replace("\n", "")
            self.stopWordDict[stopWord] = 1
        stopWordObj.close()

    def cleanTitle(self, title):
        # removes non-ascii characters from the title
        return "".join([x for x in title if ord(x) < 128])

    def extractLinksFromText(self, textContent):
        textContent = "]] " + textContent
        textContent = textContent.replace("\n", " ")  # remove linebreaks
        textContent = textContent.replace("'", "")    # remove quotes; they mess up the regexes

        # remove regions of wiki pages where looking for links is meaningless
        for pattern in (r"==[\s]*References[\s]*==.+",
                        r"==[\s]*See Also[\s]*==.+",
                        r"==[\s]*External links[\s]*==.+",
                        r"==[\s]*Sources[\s]*==.+",
                        r"==[\s]*Notes[\s]*==.+",
                        r"==[\s]*Notes and references[\s]*==.+",
                        r"==[\s]*Gallery[\s]*==.+",
                        r"\{\|[\s]*class=\"wikitable\".+?\|\}"):
            textContent = re.sub(pattern, " ", textContent)

        textContent = textContent + "[["

        # remove everything that's not enclosed in [[ ]] ...
        textContent = re.sub(r"\]\].*?\[\[", "]] [[", textContent)
        # ... and keep only the list of link texts, sans brackets
        wordList = textContent.split("]] [[")

        newWordList = []
        for word in wordList:
            word = word.lower()                # convert to lowercase
            word = re.sub(r".*?\|", "", word)  # remove the alt-text part before |
            word = re.sub(r"\d", " ", word)    # replace numbers by spaces
            word = re.sub(r"\W", " ", word)    # replace punctuation by spaces

            # the link text may now be several words; process each separately
            for newWord in word.split(" "):
                # strip a trailing s after a consonant (crude de-pluralization)
                if re.match(r"^.*[bcdfghjklmnpqrtvwxyz]s$", newWord):
                    newWord = re.sub(r"s$", "", newWord)
                if newWord not in self.stopWordDict:  # remove stopwords
                    if len(newWord) > 2:              # no point keeping too-short words
                        newWordList.append(newWord)
        return newWordList

    def extractTextFromXml(self, xmlFileName):
        # extracts the <title> and <text> fields from the xml file and processes both
        xmlFile = xml.dom.minidom.parse(xmlFileName)
        for mediaWiki in xmlFile.getElementsByTagName("mediawiki"):
            for page in mediaWiki.getElementsByTagName("page"):
                titleWords = ""
                text = []
                for textNode in page.getElementsByTagName("text"):
                    if textNode.childNodes and textNode.childNodes[0].nodeType == textNode.TEXT_NODE:
                        text = self.extractLinksFromText(textNode.childNodes[0].data)
                for titleNode in page.getElementsByTagName("title"):
                    if titleNode.childNodes and titleNode.childNodes[0].nodeType == titleNode.TEXT_NODE:
                        titleWords = self.cleanTitle(titleNode.childNodes[0].data)
                self.titleArticleDict[titleWords] = text

def main():
    a = xmlMine()
    a.getStopwords("stopwords.txt")
    a.extractTextFromXml("Wikipedia-20090505185206.xml")
    opFile = open("links.txt", "w")
    for article in a.titleArticleDict.keys():
        # one line per article: title, a colon, then a comma-separated list of links
        line = str(article) + ":" + ",".join(a.titleArticleDict[article])
        opFile.write((line + "\n").encode('utf-8'))
    opFile.close()

if __name__ == "__main__":
    main()

Learning to Link with Wikipedia – I

I hope to maintain a log of the project I’m working on for my Data Mining course this quarter. I find blogging makes me feel more accountable on a day-to-day basis, and I could really use any help that comes my way on this.

So now to the problem:

Identifying which terms in a Wikipedia article need to be linked to other articles.

I have a dataset to work with. It has information about labels on the data and the words present in each document. I’m now trying to extract which words are linked.

So, yeah, still stuck in preprocessing.

I’ll post the python script after I’m done with it. Which should happen in the next few hours. Till then, I’m offline 🙂
