
Navigating the Machine Learning job market

Over the past couple of months, I have been trying to navigate the machine learning job market. It has been a bewildering, confusing, and yet immensely satisfying and informative time. Talking with friends in similar situations, I find a lot of common threads, and I find surprisingly little clarity online regarding this.

So I’ve just decided to put together the sum total of my experiences. Your mileage may vary. Once you’re past being a fresher, your situation and what you’re looking for get a little more unique, so take whatever I say with a pinch of salt.

I’ve been passionate about machine learning for six years or more now. Though I didn’t realize it at that time, a lot of project choices, career choices and course choices I made were with the thought of ‘does this help me get closer to a research-oriented job that involves text mining in some form?’.  I went to grad school at a university that was very research oriented and worked on a master’s thesis on an NLP problem, as well as a ton of projects in courses. My first job after that involved NLP in the finance industry. My second job also involved text processing. The jobs I got offers from after this period also involve NLP strongly. I’ve literally never worked on anything else. So you can understand where I’m coming from.

So. Machine learning jobs. Where are they, usually?

Literally everywhere, it turns out. Every company seems to have a research division that involves something to do with data, and data mining. The nature of these positions can vary.

There are positions where you need to have some knowledge of machine learning, and it kind of informs your job, which might or might not involve having to use ML-based solutions. Usually these positions are at large companies. As an example, you might be in a team whose output is, say, an email client. There’s some ML used in some features of the product, and it is important for you to be able to grasp and work around those algorithms, or be able to analyze data, but on a day to day basis you’re working on writing code that doesn’t involve any ML.

There are other similar positions where you deal with a higher volume of data, and the company has simple solutions in place to get meaning out of it. Maybe they use Vowpal Wabbit on a Hadoop cluster on occasion. Or Mahout. But they’ve got the ML bit nailed down, and more of the work involves just doing big-data kind of work. These positions are more common. If you have some ML on your resume, as well as Hadoop or HBase, these doors open up to you. Most of the places that require this kind of skillset are mid-sized companies just out of the startup phase.

Then you have the Data Scientist positions. This phrase is pretty catchall, and you find a wide variety of positions if you look for this title. Often at big firms, it means that you have knowledge of statistics, and can deal with tools like R, Excel, SQL databases, and maybe Python in order to find insights that help with business decisions. The volume of data you deal with isn’t usually large.

At startups though, this title means a lot more. You are usually interviewing to be the go-to person for all the ML needs in the company. The skills interviewed for include all the ones I mentioned above, along with a thorough knowledge of tools like scikit-learn and Weka, and hands-on experience with ML projects. Some big data experience is usually a plus. Often, you’re finding insights in the data and prototyping things that an engineering team will put in production. Or maybe you’re doing the productionizing yourself too, if ML is not central to the startup’s core business.

Most people are looking for the Research Engineer job. You aren’t usually coming up with new algorithms, but you’re implementing some. On the upper end of the scale, you’re going through research papers, implementing the algorithms in them, and making them work. You need a fair idea of how to put code into production, and of when to deviate from the research by adding layers to things so that your system works in a more deterministic, debuggable fashion. An example would be several jobs at LinkedIn, where a lot of the features on the site need you to use collaborative filtering or classification. Increasingly, these jobs work on large data, but often that is not the case, and people manage fine using parallel processing instead of graph databases and MapReduce.

In a mature team, this position might not require you to use your ML skills on a day to day basis. In a new team, this position would need you to work on end to end systems that happen to use ML that you will be implementing.

In larger firms, you probably just need to have worked on ML in grad school and in your past jobs; the nature of the data you’ve worked on doesn’t matter much. In startups though, they start looking for more specific skills. Like they’d want someone who’s specifically worked on topic modelling. Or machine translation. The complexity of their system doesn’t usually call for a PhD; they would grab an off-the-shelf solution if they could. But they ideally want someone who understands these things well enough to own the component, manage it completely, and hit the ground running, which is why they look for someone who’s worked on the same or similar things before.

Which brings me to another point. Not all ML jobs are interviewed for in the same way.

Several large as well as mid-sized tech firms hire you for the company, not for a specific team or role. Usually, the recruiter finds you based on buzzwords in your resume, and sets up interviews with you. The folks interviewing you probably work in teams that have nothing to do with your skills. It is possible to go through the interviews without answering even one ML question. Later when you get hired, they try to match you to a team, taking into account your ML background to place you in a relevant one. If you’re interviewing for a specific kind of job, this makes things harder, as you don’t know what kind of work you’ll be doing until you’re done with the whole process.

Like I said before, at startups you’ll probably know exactly what kinds of problems you’ll be working on. But more often, you’re hired into a group of sister teams that all require similar skills. Maybe they work on different components of the same product, all of which use ML in different ways. So you have a fair idea of what you’ll be working on, but not necessarily a clear picture. You might end up working at the heart of the ML algorithm, or maybe you’re preprocessing text. The interviews will go over your ML background and previous projects, as well as ML-related problem-solving.

Then there’s the Applied Researcher role. You usually require a demonstrated capability of working on reasonably complex ML problems. You are occasionally putting things in production and need good coding skills. Often, you’re prototyping things after researching different approaches. When you do put things in production, it is usually tooling for other teams that use ML in their solutions. Language is no bar, but usually there’s an agreed-upon suite of tools and languages that the team uses.

The Researcher role usually requires a PhD. Your team is probably the idea factory of the company, or that particular line of business of that company. Intellectual property generation is part of the job. I’m not highly insightful about this line of work, because I haven’t known very many people opting for these positions, and it feels increasingly like PhDs take up the Applied Researcher/Research Engineer role in a team, and do the prototyping and analyses while others help with that as well as put these prototypes into production.

There’s a lot of overlap in all these different types of positions I’ve mentioned, and it isn’t a watertight classification. It’s a rough guide to the different kinds of positions there are.

So where do you find these jobs?

LinkedIn is a great resource. You can use ‘machine learning’, ‘data mining’, ‘image processing’ or ‘data science’ or ‘text mining’ or ‘natural language processing’ as search keywords. I’ve also found Twitter to be a great place to search for jobs using these same keywords.

There are tons of job boards that also let you search using these keywords. Apart from them, there are a number of ML-specific job fora: KDNuggets Jobs, NLPPeople and LinguistList are browsable job boards, and there are also mailing lists like ML-News and SIG-IRList. I’ve also found /r/MachineLearning on Reddit to be a good resource for jobs on occasion.

Now that you’ve found a position and sent them off your resume and they got back to you, what do you expect in the interview? Wait for my next post to find out!


Looking for a dataset for Kannada Machine Translation

Someone who works for the Govt of India told me about the Gazette of India, which publishes a summary of all the activities of the government in English and Hindi. There are state gazettes as well, which I assumed did the same. I found that the central government puts out its gazette with the same content in both English and Hindi: as perfect a sentence alignment as you can expect.
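Parallel gazette text like this is exactly what sentence-aligned corpora are built from. Here’s a minimal sketch of the idea, assuming sentences mostly correspond one-to-one; the example sentences and the length-ratio threshold are invented for illustration, and a real aligner would use something like Gale–Church or hunalign instead:

```python
def align_one_to_one(src_sentences, tgt_sentences, max_len_ratio=2.0):
    """Pair sentences by position, dropping pairs whose character-length
    ratio suggests a misalignment (translated sentences tend to have
    roughly proportional lengths)."""
    pairs = []
    for s, t in zip(src_sentences, tgt_sentences):
        ratio = max(len(s), len(t)) / max(1, min(len(s), len(t)))
        if ratio <= max_len_ratio:
            pairs.append((s, t))
    return pairs

english = [
    "The notification was issued on 5 March.",
    "All departments must comply within thirty days.",
]
hindi = [
    "अधिसूचना 5 मार्च को जारी की गई थी।",
    "सभी विभागों को तीस दिनों के भीतर अनुपालन करना होगा।",
]
print(align_one_to_one(english, hindi))
```

Even something this naive gets you usable training pairs from cleanly parallel documents; the hard part is documents where sentences merge, split or go missing, which is where the dynamic-programming aligners earn their keep.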

Unfortunately, it doesn’t seem like the Karnataka government does that. They publish everything in only Kannada. The Kerala government publishes in only English. And the Tamil Nadu government publishes some bullet points in English and some in Tamil.

I’d not checked on this earlier, unfortunately. Now I’m back to square one, looking for a dataset for Kannada machine translation. Know of any?

Ideas in machine translation for Indian languages

For a while now, I’ve been pondering the problem of machine translation for Indian languages.

Given that India has twenty-two scheduled languages, many of which are pretty damn closely related, and given there are so many enthusiastic people in the NLP domain, we should be at the forefront of machine translation. Unfortunately, that is not the case. Yet.

It also bothered me that the current leader in a working machine translation system is Google. Google, while having some of the best scientists and engineers, is American in soul and legality. There are several reasons to have homegrown machine translation systems that are made in India, and which have a more Indian focus.

In any case, I haven’t worked personally on machine translation systems, though I have worked with colleagues who have, and it gives me a vague understanding of how it works. From what I’ve seen, Google is great in the generic case. But if you have a very specific focus, say, in the financial domain, or you want the translations to be conversational, or if you want to restrict yourself to the legal domain, you would need to improve and tweak what translations Google throws at you.

I’ve also seen that most of the machine translation work in Indian languages has been very academic. This is welcome, but in practice, these things don’t usually make it to the market. In my experience, approaching a problem like this from an academic perspective is very different from approaching it as an engineer. In academia, I have largely seen the approaches be technique based. The problem is just a setting to explore new techniques of solving it. This works brilliantly in uncovering new approaches to solving a problem. When I was at UCI, this defined my approaches. To find a newer, more improved technique to do something. As an engineer however, you want to find and implement a ‘good enough’ solution. You want whatever works. You don’t care if you need to have humans in the loop, or if your training data isn’t perfect. I haven’t seen (m)any Indian language translation systems with this approach of using whatever works, getting humans in the loop, and giving imperfect outputs.

I want to try this.

There are so many cool questions I want answered. Like, how easy will it be to translate between closely-related languages like Tamil-Malayalam, Hindi-Punjabi, Assamese-Oriya-Bengali, Kannada-Telugu, and so on? How well will Jason Baldridge’s POS tagger, which needs only two hours of tagging, work on Indian languages? What happens if I use Sanskrit as an interlingua?

I also found that the largest cross-language translation corpus for Indian languages is the Gazette of India. It is a Govt of India communication that is posted in English as well as other languages. I think Google uses this heavily for its statistical machine translation. Unsurprisingly, there is a very formal tilt to the translations. This is way more pronounced in Indian languages, where the formal style is very, very different from casual conversation. Detecting the formality level of an English sentence and translating it appropriately into an Indian language seems like an interesting problem.
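To make the formality-detection idea concrete, here’s a toy heuristic: score a sentence by counting formal markers against casual ones. The word lists below are invented for illustration; a real system would train a classifier on labelled data rather than hand-pick vocabulary:

```python
import re

# Illustrative marker lists, not a real lexicon.
FORMAL = {"hereby", "notification", "pursuant", "shall", "aforementioned"}
CASUAL = {"gonna", "wanna", "hey", "yeah", "cool"}

def formality_score(sentence):
    """Positive means formal-leaning, negative means casual-leaning."""
    tokens = re.findall(r"[a-z']+", sentence.lower())
    score = sum(t in FORMAL for t in tokens)
    score -= sum(t in CASUAL for t in tokens)
    score -= sum("'" in t for t in tokens)  # contractions read as casual
    return score

print(formality_score("The department shall hereby issue a notification."))
print(formality_score("Hey, I'm gonna grab lunch, wanna come?"))
```

The interesting downstream step would be using that score to pick between a formal and a conversational register on the Indian-language side, which is where the gazette-trained systems fall flat.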

Use cases for translation in India are also something I wonder very hard about.

The obvious use case is a generic translation app. This is not something I’m inclined to go head-to-head with Google on. Not right away at least 😉 But it ought to be something we keep coming back to.

The next obvious use case is an API stack of some sort that others can use for their apps and other regionalization needs. The Google Translate API seems to be a clear winner here as well. It will take a while to build something with that level of reliability and generality. But not too long, I’d wager.

A good start, however, would be a niche need. Like maybe translating legal documents from one language to another. Or to English (but then English is an Indian language too 😉 ).  I can use Google’s API to generate training data cheaply, and then tweak the resulting model for my specific use case.
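The generate-cheap-training-data idea might look something like the sketch below: translate monolingual English sentences through an external service, then keep only the pairs relevant to the niche domain. Everything here is illustrative; `external_translate` is a stub standing in for a real API call (which is not shown), and the legal-term list is invented:

```python
def external_translate(sentence):
    # Stub: a real implementation would call a remote translation API.
    lookup = {
        "The court dismissed the appeal.": "ನ್ಯಾಯಾಲಯವು ಮೇಲ್ಮನವಿಯನ್ನು ವಜಾಗೊಳಿಸಿತು.",
        "It rained yesterday.": "ನಿನ್ನೆ ಮಳೆ ಬಂತು.",
    }
    return lookup.get(sentence, "")

LEGAL_TERMS = {"court", "appeal", "act", "section", "notification"}

def build_niche_corpus(english_sentences):
    """Keep only pairs whose English side contains legal vocabulary."""
    corpus = []
    for src in english_sentences:
        words = {w.strip(".,").lower() for w in src.split()}
        if words & LEGAL_TERMS:
            tgt = external_translate(src)
            if tgt:
                corpus.append((src, tgt))
    return corpus

sentences = ["The court dismissed the appeal.", "It rained yesterday."]
print(build_niche_corpus(sentences))
```

The resulting corpus is noisy, of course, but that’s exactly the ‘good enough, humans in the loop, imperfect outputs’ engineering attitude I was arguing for above.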

Another niche need would be to translate from one Indian language to another in an app that tourists/visitors can use to navigate around town. The kicker here is, how much more useful would your app be as compared to a phrasebook? A more useful app in this context would be one that can read signboards and translate them for you.

Yet another is to help the diaspora and other Indians learn a language through simple translated sentences. Again, this runs into the same question: how much better would this be than a phrasebook, or an app version of the “Learn Kannada in 30 days” manuals?

Another idea is to make the Government of India your customer, and help them with their regionalization needs. But then, the government has more bilingual people in the IAS itself than they need, and simple translation is probably not at all an issue when you’re operating at the scale of the government of India.

The dark side of me is thinking up an exciting novel/movie, though. Two idealistic US-educated scientists get inspired by Make In India and go back to make a simple translation app. After a whole lot of failures in monetizing their work, they are suddenly approached by the Govt of India, by the same officials who laughed their idea off earlier. Picture a Paresh Rawal at his droll politician best telling these meek urban types how much their idea will never work in the ‘real India’, and right after the interval coming back with a more serious professional look and demeanor along with the head of R&AW. Now I want this guy to be played by someone who radiates quiet power. Maybe Atul Kulkarni, but he’s got to look a decade older, and a bit better built. And they find they can instantly become rich if they sell their code to NTRO, to use on the NETRA program (kind of like PRISM). They say thanks but no thanks, but heck, the head of NTRO tells them it’s an offer they can’t refuse. This guy’s got to be a persuasive shades-of-grey sort of wizened spy who used to work at AT&T and NASA before he got recruited and had to fake his death and everything and now works under a new name. The two protagonists have a ‘Gasp! It’s him!’ moment of recognition because they’ve actually used a lot of his research to make their software. This NTRO guy can easily be played by Madhavan. And the rest of the plot is about how they decide on what to do with their software, whether they join NTRO, and whether they can sleep at night knowing they are being used to spy on billions of little online conversations every day.

Hmm. I gotta write that.
