My favourite data science and machine learning books

At all times you should be reading a book which is too hard for you to read.

These books have either fulfilled, or still fulfil, that criterion for me.

They’re the foundation upon which I’ve built my knowledge of machine learning and data science. I’ll continue to read and reread these for years to come.

And if you’re learning machine learning or data science, they’re worth your time.

If I’m learning something new, I’ll usually find a good book on the topic and read it end-to-end. I’ll then follow up on the parts that stick. These books have plenty of parts which have stuck.

Books are listed in order of approachability (roughly). If you have zero experience in machine learning or data science, start from the top; if you've got Python and math down pat, go from the bottom.

Machine Learning for Humans by Vishal Maini and Samer Sabri

A primer on machine learning you can read in a day. Available free online.

This book started as a series on Medium. The authors wanted to explain all they knew about machine learning in a readable and approachable way. And they’ve done just that.

If you want a zero-to-one resource you can use to build an understanding of some of the most important machine learning concepts, but you haven’t encountered machine learning before, this book is for you. Even if you’re already a machine learning practitioner, this book is worth reading. It’ll give you inspiration for sharing your work in a way which is approachable for others.

Read for free online.

Python for Data Analysis by Wes McKinney

Data science begins with data analysis.

Start learning data science or machine learning and you’re going to be using Pandas (a Python library for data analysis). The best thing about this book is it’s written by the creator of Pandas, so you know you’re learning from the best.

As a machine learning engineer, I spend most of my time using Pandas to manipulate data to get it ready for machine learning models.

This book will show you how to use Pandas to analyse your data, clean it, change it and most of all, use it for data science and machine learning.

As a data scientist or machine learning practitioner, you can never have enough Pandas knowledge.

Buy on Amazon.

Hands-on Machine Learning with Scikit-Learn and TensorFlow by Aurelien Geron

A must-have on the desk of any machine learner.

If you’re getting into machine learning and you want a one-stop practical resource, this is it. It’ll take you through two powerful machine learning libraries, Scikit-Learn and TensorFlow, and teach you machine learning concepts through coded examples.

Each concept has code to go along with it. So you could read this book, get an understanding of what machine learning is capable of, then adjust the code examples to your own problems.

Buy on Amazon.

Grokking Deep Learning by Andrew Trask

Grok: Understanding (something) intuitively.

I started learning deep learning via Udacity’s Deep Learning Nanodegree. Andrew Trask was one of the teachers. He’s now a researcher at DeepMind.

Back then, only a few chapters had been released. I sat on my couch, flicking through page by page, learning how to build a neural network from scratch with NumPy (a Python numerical library).

I was hooked on the descriptive analogies he used to describe machine learning concepts.

“Deep learning hyperparameters can be tuned like the dials on your oven.”

I devoured each new chapter as it came out.

But now you don’t have to wait, the full book is ready.

This book is a chance to learn deep learning from the ground-up and with hands-on examples from one of the best practitioners in the field.

Buy on Amazon.

Buy/peek-inside on Manning.

The 100-Page Machine Learning Book by Andriy Burkov

This is the book I wish I had when I started learning machine learning.

The ‘start here’ and ‘continue here’ of machine learning. That’s what I called it in my book review. After reading Machine Learning for Humans, if you’re hungry to go deeper into what makes machine learning algorithms tick, this is the book for you.

My favourite part is it covers problems in machine learning and gives you solutions, as well as the rationale behind those solutions. All within 100 pages.

You could read this in a day if you want. But you don’t need to. Take your time. Learning anything new takes time. Especially machine learning.

If the 100 pages aren’t enough, there are QR codes scattered throughout with extra curriculum curated by the author.

Buy on Amazon.

Read for free on the book’s website.

The Deep Learning Book by Ian Goodfellow, Yoshua Bengio and Aaron Courville

The ground truth of deep learning.

This is the newest addition to my collection. I bought the hard copy. It’s the book which fulfils the criterion at the start of the article.

I’m most excited for the math sections at the start. I’ve been a code-first learner. Hence the order of these books. But deep learning and machine learning are based on applied math. The code and frameworks might change over time but the math doesn’t change. Linear algebra is always going to be linear algebra.

The Deep Learning Book is written by three titans of the deep learning world. Goodfellow is the inventor of GANs, Bengio is one of the pioneers of deep learning and Courville’s academic works have been cited nearly 50,000 times.

This book dives deep into all of the deep learning concepts you should know about (not a pun).

Buy on Amazon.

Read for free online.

Remember, machine learning is broad. Use these books as a foundation to base your knowledge on and improve it by getting your hands dirty.

Knowledge which isn’t applied is wasted. There’s no better way to learn than to make mistakes.

Keep learning.

How to explain machine learning to someone who hasn't heard of machine learning

When you learned to ride a bike, you were terrible at the start.

Then your Mum told you to balance a bit to the left. You got better, but you weren’t good.

A few more goes and you could ride without your training wheels.

After riding a bike 1,000 times it’s harder for you to fall over than not.

Machine learning is getting a computer to do things without explicitly telling it how. With ‘things’ being finding patterns in data.

With lots of data, machine learning algorithms are like you riding a bike for the 1,001st time. Really good.

Without it, they’re like you riding a bike with training wheels for the first time, not so good.

How do they find the patterns? What’s data?

Data can be any kind of information. Pictures, text, numbers. But computers work best with numbers.

Machine learning algorithms find patterns by turning everything into numbers.

A picture is a combination of different pixels, each with different colours. A pixel with the value 255, 255, 255 is white; 0, 0, 0 is black.

To tell if a dog is in an image, a machine learning algorithm processes the pixel numbers of images with dogs and images without dogs and remembers the differences. If the numbers of the next image it sees are closer to the ones with dogs, it will say there’s a dog in there.
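The idea above can be sketched in a few lines of Python. Everything here is made up for illustration: the "images" are tiny lists of three pixel values rather than real photos, and the "algorithm" is simply comparing a new image's numbers to the average of each group.

```python
import math

# Toy "images": each is a flat list of pixel values (real images have
# thousands of pixels; these numbers are invented for illustration).
dog_images = [[200, 180, 90], [210, 170, 95]]
not_dog_images = [[20, 30, 240], [25, 35, 230]]

def average(images):
    # Element-wise mean of a list of equal-length pixel lists.
    return [sum(pixels) / len(images) for pixels in zip(*images)]

def distance(a, b):
    # Euclidean distance between two pixel lists.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

dog_average = average(dog_images)
not_dog_average = average(not_dog_images)

def is_dog(image):
    # The new image is "a dog" if its numbers sit closer to the dog average.
    return distance(image, dog_average) < distance(image, not_dog_average)

print(is_dog([205, 175, 92]))  # numbers close to the dog images -> True
print(is_dog([22, 32, 235]))   # numbers close to the non-dog images -> False
```

Notice the sketch never "sees" a dog, it only compares numbers, which is exactly why the context point below matters.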

The important thing to remember is the lack of context: you can’t ask the algorithm where the dog is. It will only tell you there is a dog.

Text can be turned into numbers too. If the word dog is 1 and the word pet is 0, the word car might be 75. Why? Because dog and pet are used in more similar contexts than dog and car.

This is how your email client blocks spam automatically. If a new email, converted to numbers, looks like other spam emails converted to numbers, it will be classified as spam.
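Here's a toy sketch of that spam idea in Python. The emails are invented for illustration and real spam filters are far more sophisticated, but the principle of comparing emails-as-numbers is the same: count words, then see which group the new email's counts overlap with most.

```python
from collections import Counter

# Known spam and non-spam emails (made-up examples).
spam = ["win money now claim your prize money", "claim free prize now"]
not_spam = ["meeting moved to tuesday see agenda", "agenda for the tuesday meeting"]

def to_numbers(text):
    # Turn text into numbers: a count of how often each word appears.
    return Counter(text.split())

def similarity(counts_a, counts_b):
    # Word overlap between two emails-as-numbers (minimum shared counts).
    return sum((counts_a & counts_b).values())

def looks_like_spam(email):
    # Compare the new email's numbers against both groups.
    spam_score = sum(similarity(to_numbers(email), to_numbers(s)) for s in spam)
    ham_score = sum(similarity(to_numbers(email), to_numbers(h)) for h in not_spam)
    return spam_score > ham_score

print(looks_like_spam("claim your free prize money now"))  # True
print(looks_like_spam("see you at the tuesday meeting"))   # False
```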

There are many more machine learning techniques I’ve skipped over but this is a good start.

If someone has never heard of it before, remember most people are visual learners.

Paint a picture for them.


My first contribution to an open source deep learning library

GitHub still confuses me. But it's needed. You can create your own tools but the best come from collaboration.

The philosophy of open source is simple. Take the best information and knowledge from others, make it available to everyone in an accessible manner and let them create.

It says, here's the thing we've built, you can use it for free but if you find a way to improve it, let us know but we'd appreciate it if you made the change yourself.

Most open source libraries have far more users than contributors. And that's a good thing. It shows the scalability of software. It means many can benefit from the work of a few.

Since starting to learn machine learning, I've used plenty of open source software but I'd never contributed back. Until now.

We've been working on a text classification problem at Max Kelsen. The model we built was good, really good. But it wasn't perfect. No model is. So we wanted to know what it didn't know.

Our search led to Bayesian methods. I don't have the language to describe them properly but they offer a solution to the problem of figuring out what your model doesn't know.


In our case, we used Monte Carlo dropout to estimate model uncertainty. Monte Carlo dropout removes part of your model every time it makes a prediction. The Monte Carlo part means you end up with 100 (this number can change) different predictions on each sample, all made with slightly different versions of your original model. How your 100 predictions vary indicates how certain or uncertain your model is about a prediction. In the ideal scenario, all 100 would be the same, whereas 100 different predictions would be considered very uncertain.
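Here's a rough sketch of the Monte Carlo dropout mechanic with NumPy. This isn't the code from our pull request; it's a toy model (a single random linear layer and a made-up input) showing the idea: keep dropout switched on at prediction time, run many forward passes, and measure how much the predictions disagree.

```python
import numpy as np

rng = np.random.default_rng(42)

# A toy "model": one linear layer with fixed random weights (10 inputs, 3 classes).
weights = rng.normal(size=(10, 3))

def predict_with_dropout(x, drop_rate=0.5):
    # Monte Carlo dropout: randomly zero part of the model on EVERY
    # prediction, not just during training.
    mask = rng.random(weights.shape[0]) > drop_rate
    dropped = weights * mask[:, None] / (1 - drop_rate)  # rescale to keep magnitudes
    logits = x @ dropped
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()  # softmax over the 3 classes

x = rng.normal(size=10)  # a made-up input sample

# 100 stochastic forward passes -> 100 slightly different predictions.
predictions = np.stack([predict_with_dropout(x) for _ in range(100)])

mean_prediction = predictions.mean(axis=0)  # what the model predicts overall
uncertainty = predictions.std(axis=0)       # how much the 100 passes disagree

print(mean_prediction.argmax())  # predicted class
print(uncertainty)               # higher std = less certain
```

If the 100 passes barely disagree, the standard deviations are near zero; wildly different passes mean the model is uncertain about that sample.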

Our text classifier was based on the ULMFiT architecture using the deep learning library. This worked well, but the library didn't have Monte Carlo dropout built in. We built it for our problem and it worked well.* Maybe others could find value in it too, so we made a pull request to the GitHub repository.

With a few changes from the authors, the code was accepted. Now others can use the code we created.

A contribution to open source doesn't have to be adding new functionality. It could be fixing an error, adding some documentation about something or making existing code run better.

Still stuck?

Best to start by scratching your own itch. You might not have one to begin with, I didn't for 2 years. But now I've done it once, I know what's required for next time.

If you want to learn more, I made a video about the what, why and how of a pull request. And I used the one we made to the library as the example.

*After a few more experiments, we've started to question the usefulness of the Monte Carlo dropout method. In short, our thinking is if you simulate different versions of your model enough, eventually you end up with your same model. So the pull request may not be as useful as we originally thought. You have to be skeptical of your own work. Doing so is what makes it better. Stay tuned.

Gradatim Ferociter | Ask a Machine Learning Engineer Anything May 2019

Harsath messaged me a year ago. He was getting into machine learning and data science and had seen some of my videos.

Every time I’d post a new video he’d be one of the first to comment. Something insightful, something kind. I’ve always been grateful to see his name pop up. There’s also Shaik, Hammad, Gregory, Yash, Paul and many more.

This time Harsath told me he got a role in the field. He’d been working hard towards it and was offered a job.

It came after sustained effort over time. Step by step. It reminded me of the saying gradatim ferociter, which means ‘step by step, ferociously’.

Big things rarely happen in one go. It takes many small steps, one at a time. And each step has to be taken with passion and ferocity.

Congratulations Harsath. Keep up the effort and keep learning.

In the May Ask me Anything, I answered your questions around studying online versus at college, how to get a job in the field, having a PhD versus self-taught, using Bayesian methods in machine learning, my intermittent fasting schedule and more.

As always, if you have any more questions, feel free to reach out.

Advice which made me a better machine learning engineer

Athon was doing a talk. Something about Variational Autoencoders. He got deep. Much of it I didn’t understand. All I know is one half tries to condense a larger distribution into a smaller one, and the other half tries to turn the smaller one back into a new version of the larger one, as close as possible to the original.

We went for a break.

There were fish sticks, wings, more 10-minute foods. The kind which taste good for the 10 minutes you’re eating them but make you feel terrible after.

John was there. We’d met before. He was telling me about a hackathon his team won by using Julia (programming language) to denoise images and then used them for image recognition.

He told me how his company got acquired by a larger company. He’d been at the new company a few weeks but preferred smaller companies.

In between bites of fish sticks, I asked John questions.

John had been programming since he was young. I had 18-months under my belt. There were things he was saying I didn’t understand but I kept a constant stream of nods.

John asked me a question.

What do you think your strength is?

I spoke.

Well, I know I’ll never be the best engineer. But…

John interrupted me.

You won’t be with that attitude.

I was going to continue with my regular story. I know I’ll never be the best engineer but I can be the communication bridge between engineers and customers.

But I didn’t. I digested what John said along with chewed fish sticks. I spoke.

You’re right.

John kept talking.

You won’t improve if you think like that. Even if you know you won’t be the best, be careful what words you use, they’ll hold you back.

Every time since, when I’ve run into a problem and wanted to bang my head against a wall, wanted to give up, wanted to try something easier instead of doing the hard thing, I remember back to what John said.

I say to myself.

I’m the best engineer in the world.


How do you learn machine learning when you're rusty at math?

Mum, how can I get out of the exam?


I’m going to fail.

Tears started filling my eyes. I was sitting at my desk with the lamp on, 11 pm the night before the final exam.

Maths C. That’s what they called it. There was Maths A and B but C was the hardest and I was doing it.

There was something about matrices and imaginary numbers and proofs. I couldn’t do any of it. Only a few matrix multiplications, the easier ones.

I did the exam. Somehow I passed. I shouldn’t have. My teacher let me off the hook. That was 2010.

University came and I majored in biomedicine. I failed my first statistics course, twice. Then I changed out of biomedicine.

I graduated in 2015 with a dual major in food science and nutrition. Now food is one of my religions.

2017 happened and I decided to get into machine learning. I’d seen the headlines, read the articles.

Andrew Ng’s machine learning course kept getting recommended so I started there.

The content was tough. All the equations, all the Greek symbols. I hadn’t touched any of it since high school.

But the code was exciting, and the concepts made sense. Ng’s teaching skills meant they just made sense.

So I followed those. Kept going at it. This time I didn’t have an Xbox to distract me like high school.

My math is still rusty. I’ve done some Khan Academy courses on matrix manipulation, calculus and bookmarked some linear algebra courses to get into. There’s one from Rachel Thomas and another one from Stanford.

Math is a language, it takes time to learn, time to be able to speak it. Programming is the same. Machine learning combines them both and a bit more.

I started with programming first. Built some machine learning models, using Python code, TensorFlow, PyTorch and the others. Saw how the concepts linked with the code. It got me hooked.

You can start learning machine learning without an in-depth knowledge of the math behind it. If your math is rusty, you can learn machine learning with concepts and code first. Many of the tools available to you abstract away the math and let you build.

But when you gain a little momentum, learn a little more, hit a roadblock, you can dive into the math.


The future of education is online (+ 5 resources I've been loving)

Not everyone has access to the best colleges in the world. But the internet provides a way for everyone to access the best knowledge in the world.

There is no shortage of learning materials. Only a shortage of willingness to learn.

Even with such great learning resources available, it still takes a dedicated effort to work through them. To build upon and to create with them.

And one of the best ways for knowledge to spread and be useful is if it’s shared.

Here are 5 things which have caught my attention this week:

1. Open-source state-of-the-art conversational AI

Thomas Wolf wrote a great blog post summarising how the HuggingFace team built a competition winning conversational AI.

All done in 250 lines of refactored PyTorch code on GitHub! 🔥

2. Open-source Data Science Degree

The Open Source Society University repository contains pathways you can use to take advantage of the internet to educate yourself.

3. GitHub Learning Lab

I need to get better at GitHub.

It’s a required skill for all developers and coders.

So I've been using the GitHub learning lab, a free training resource from The GitHub Training Team.

4. 30+ deep learning best practices

This forum post from the forums collates some of the best tidbits for improving your models.

My favourite is the cyclic learning rate.

5. A neural network recipe from Tesla's AI Lead

Training neural networks can be hard. 

But there are a few things you can do to help.

And Andrej Karpathy has distilled them for you.

My favourite?

Become one with the data.

PS this post is an excerpt from the newsletter I sent out this morning. If you’d like to get more like these delivered to your inbox, sign up for more.

University vs. Studying Online and How to Get Around Smart People

Lukas emailed me asking a few questions. I replied back with some answers and then he dug deeper. He thought about what I said and then wanted to know more.

I replied back to him with some of my thoughts which I tidied up a bit and put below. The headings are the topics Lukas was curious about. This post doesn’t have all the context but I think you’ll find some value out of it.

Hey Lukas,

I’ll answer these how I did the last ones and break them apart a bit.

1. “University/school teaches some stuff that you don’t really need or want”

This is true. But also true of all learning. Whatever resource you choose, you’ll never use all of it. Some knowledge will come from elsewhere, some will vanish into nothing.

The reason learning online is valuable is it gives you the chance to narrow down on what it is you want immediately. University and school take a ‘boil the ocean’ solution because that’s the only valid one for what they offer. Individualised learning hasn’t made its way into traditional education services. I found I learn best when I follow what I’m interested in so I take the approach of learning the most important thing when it’s required. What's most important? It will depend on the project you’re working on.

Whilst this is an ideal approach for me, it’s important to always reflect on practicality. If I’m building a business and all I want to do is follow what I’m interested in, will that always line up with what customers/the market want? Maybe. Maybe not.

Lately, I’ve been taking the concept of time splitting and applying it to most of what I do. A 70/20/10 split I stole from Google.

In essence, 70% on core product/techniques (improving and innovating on existing knowledge), 20% on new ventures (still tied to core product) and 10% on moonshots (things that might not work).

In the case of my core product, it’s learning health and machine learning skills that can be applied immediately. I distil these in a work project/online creation I share with others.

For new ventures, it’s taking the core product skills and then expanding them on things I haven’t yet done, learning a new technique, working on a new project. But still tied to the core pillars of health and technology.

For moonshots, it’s asking, ‘where will the world be in 5-10 years and how can I start working on those things now?’ These don’t necessarily have to relate to the core product but mine kind of still do (since the crossover of health, technology and art interests me most). For this, I’ve been playing around with the idea of an augmented reality (AR) coach/doctor. If AR glasses are going to be a thing, how could I build a health coach service which lives in the AR realm and is summoned/ever-present to give insights into different aspects of your health? All of this would of course be personalised to the individual.

If you're still on the fence between university and learning on your own, one thing you may want to look into is the ‘2-year self-apprenticeship’. I wrote an article about this which will shed some more light. Especially at 20, this is something I’d highly recommend (I already have to my brothers, who are your age).

Remember, there's no rush. You've got plenty of time. Work hard and enjoy it.

2. “Why math at university versus on your own?”

I mentioned I was thinking of going to university to study mathematics rather than online. Here's why.

I learned Chinese and Japanese throughout 2016. The most helpful thing was being able to practice speaking with other people face to face.

I stopped after a year and have lost most of what I learned.


Because I don’t use it and don’t need to use it every day. English is 99.999% enough for conversations in Australia and the work I do.

Math is also a language. The language of nature. Being able to speak it and work on it with other people is a great way to accelerate your knowledge.

That isn’t to say you couldn’t do the same online. But put it this way, I would never try to learn another language without practising conversing from day 1.

If you want to learn French, move to France. If you want to learn math, take math classes with other people who speak math.

3. “How do you get physically around smart people?”

Aside from working with a great team or going to university and having a great cohort, meetups are the number 1 thing for this.

They are weird and awkward and beautiful.

I always feel like a fish out of water there because everyone seems like a genius.

Events related to your field are priceless. They don’t have to be too often either. I’m finding once a month or so as a sound check to be enough.

4. “Which platform was best for opportunities?”

For content partnerships and online business opportunities: YouTube & LinkedIn (I've been approached or partnered with Coursera, A Cloud Guru, DataCamp, and more).

For career progression: LinkedIn. If I was looking for a job or more business opportunities, I’d be posting and interacting here daily.

For reaching an audience: Medium. Words are powerful. Writing every day is the best habit I have (aside from daily movement and staying healthy).

A tip for creating.

People are interested in two things when they look at content. Being educated and/or being entertained. Bonus points if you can do both, but you don’t need to. One will suffice.

Especially if you’re doing a 2-year self apprenticeship or some kind of solo learning journey, share your work from day 1. Share what you’re learning and teach others if you can.

Do not expect it to go viral. Do not expect everyone to love it. These aren’t required.

What’s required is for you to continue improving your skills and to continue improving how to communicate said skills.

Over the long term, those two things are what matter.

Let me know if there’s any follow ups.

Great questions.


Daniel Bourke

Activity vs. Progress

“Are you making progress or completing activities?” he said, “That’s what I ask myself at the end of each day.”

“I’m writing that down.”

We kept talking. Not much more worth writing down though.

“Let me know what you get up to.”

“Okay, I will.”

“Talk soon.”

“Have a good day mate. Goodbye.”

Too many activities can feel like progress. That’s what he was talking about. You could be working yourself to the bone but the list never gets any smaller.

Maybe it’s time to get a new list.

One which leads to progress instead of a whole bunch of activities being checked off at the end of the day.

I catch myself when I’m writing a list each morning. On the days where there are only two or three things, write, workout, read, I go to add more out of habit. But would more activities lead to progress?

If your goal is to progress, you must decide which activities lead to it and which don’t. It’s hard, and you’ll never be able to do it for sure, but you can make a decision to. A decision to step back, a decision to think about what adds to progress, and to cut what doesn’t.

In my latest video, I share how I got Google Cloud Professional Data Engineer Certified. I passed the exam without meeting any of the prerequisites. How? A few activities which led to progress. But the certification isn’t the real progress. The real progress comes from doing something with the skills the certificate requires. More on that in the future.

What does a machine learning engineer’s day look like?

Someone asked me on LinkedIn what they should learn for the rest of the year in order to become a machine learning engineer.

The specific skills are hard to narrow down as every role will be different. I can only share what I’ve learned the past year being a machine learning engineer at Max Kelsen.

I’ve copied the message I replied with here.

Hey [name removed]!

I'm great thank you! I trust you're well too.

Well, machine learning engineers may have different roles at different companies but let me talk you through what my day usually looks like.

  • 9 am - reading articles/papers online about machine learning (arXiv and Medium are the two usual places).

  • 10 am - working on the current project and (sometimes) applying what I've just been reading online.

  • 4 pm - pushing my code to GitHub and writing down experiments for the next day.

  • 5 pm - sending a small report to the team about what I've been working on during the day.

(these are all ideal scenarios)

Now, what happens between 10 am and 4 pm (this is where most of the code gets done)? Usually, it will all be Python code within a Jupyter Notebook, playing around with different datasets.

At the moment I'm working on a text classification problem using the Flair library.

As for what skills I'd suggest are most valuable (in my current role).

1. Exploring datasets using exploratory data analysis; this notebook by Daniel Formosso is a great example.

I also wrote an article with a bit more of a gentle introduction to exploratory data analysis which may help.
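As a sketch of what that first pass of exploratory data analysis might look like in practice (the dataset here is made up; the questions you ask of it are the point):

```python
import pandas as pd

# A small invented dataset standing in for whatever you're exploring.
df = pd.DataFrame({
    "age": [25, 32, 47, None, 52],
    "daily_steps": [8000, 5000, 12000, 7000, None],
    "heart_disease": [0, 0, 1, 0, 1],
})

# 1. What does the data look like?
print(df.head())
print(df.dtypes)

# 2. How much is missing?
print(df.isna().sum())

# 3. What do the numbers look like overall?
print(df.describe())

# 4. How does the target relate to a feature?
print(df.groupby("heart_disease")["age"].mean())
```

Four questions, four lines of Pandas each. Most datasets will need far more digging than this, but almost every exploration starts with these same checks.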

2. Being able to research different data science and machine learning techniques and apply them to current problems.

This one is a little more tricky because it will be different from problem to problem.

How you could practise this would be to enter a Kaggle competition (previous or current) and start figuring out different practices for different kinds of data: tabular, text, images.

Why Kaggle?

Because it's free, there are others who show their work (so you know what a good job is) and the datasets are relatively close (all real datasets differ a little) to what you'd be working on as a machine learning engineer.

Once you've spent a couple of months doing 1. and 2. you may want to look into what it takes to deploy a machine learning model in production. However, don't rush towards this. This is still a bit of a dark art (it's doable but not well documented yet). I think over the next year, this step will become more and more accessible to everyone.

I hope this helps.

Let me know if you'd like me to tidy anything/clarify some things.

[If you’re reading this, you can reach out and ask questions too, I’ll do my best to answer.]

So many people are learning machine learning. What should you do to stand out?

There it was. Podcasts, YouTube, blog posts, machine learning here, there, changing this, changing that, changing it all.

I had to learn. I started. Andrew Ng’s Machine Learning course on Coursera. A bunch of blog posts. It was hard but I was hooked. I kept going. But I needed some structure. I put a few courses together in my own AI Masters Degree. I’m still working through it. It won’t finish. The learning never stops.


You know this. You’ve seen it happening. You’ve seen the blog posts, you’ve seen the Quora answers, you’ve seen the endless papers, the papers which are hard to read, the good ones which come explained well with code.

Everyone is learning machine learning.

Machine learning is learning everyone.

How do you stand out?

How how how.

A) Start with skills

The ones you know about: math, code, probability, statistics. All of these could take decades to learn well on their own. But decades is too long. Everyone is learning machine learning. You have to stand out from everyone.

There are courses for these things and courses are great. Courses are great to build a foundation.

Read the books, do the courses, structure what you’re learning.

This week I’m practising code for 30 minutes per day. 30 minutes. That’s what I have to do. When I don’t feel like practising, I’ll remind myself: these are the skills I have to learn. It’ll be yes or no. It’s my responsibility. I’ll do it. Yes.

Why skills?

Because skills are non-negotiable. Every field requires skills. Machine learning is no different.

If you’re coming from zero, spend a few months getting into the practical work of one thing, math, code, statistics, something. My favourite is code, because it’s what the rest come back to.

If you’re already in the field, a few months, a few years in, reassess your skills. What needs improving? What are you good at? How could you become the best in the world at it? If you can’t become the best in the world, combine it with something else you’re good at and become the best in the world at the crossover.

B) Got skills? Good. Show them.

Ignore this if you want.

Ignore it and only pay attention to the above. Only pay attention to getting really good at what you’re doing. If you’re the best in the world at what you do, it’s inevitable the world will find out.

What if you aren’t the best in the world yet?

Share your work.

You make a website.

I made this up. It might exist.

On your website you share what you’ve been up to. You write an article on an intuitive interpretation of gradient descent. There’s code there and there’s math there. You’ve been working on your skills so to give back you share what you’ve learned in a way others can understand.
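An article like that might centre on a sketch as small as this: gradient descent on a one-variable function, where the math and the code line up line for line. The function and learning rate here are arbitrary choices for illustration.

```python
# Gradient descent on a simple function: f(w) = (w - 3)^2.
# The gradient is 2 * (w - 3); stepping against it moves w towards 3.

def gradient(w):
    return 2 * (w - 3)

w = 0.0             # start somewhere
learning_rate = 0.1

for step in range(100):
    w -= learning_rate * gradient(w)  # take a small step downhill

print(round(w, 4))  # converges to 3, the minimum of f
```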

The code tab links to your GitHub. On your GitHub you’ve got examples of different algorithms with comments around them, and a tutorial on exploratory data analysis of a public health dataset, given your interest in health. You’ve ingested a few recent papers and tried to apply them to something.

LinkedIn is your resume. You’ve listed your education, your contributions to different projects, the projects you’ve built, the ones you’ve worked on. Every so often you share an update of your latest progress. This week I worked on adding some new functions to my health project.

You’re getting a bit of traction but it’s time to step it up. You’re after the machine learning role at Airbnb. Their website is so well designed, you’ve stayed at their listings, you’re a fan of the work they do, and you know you could bring them value with your machine learning skills.

You make another website.

I made this one up too. Kudos if you’re already on it.

You send it to a few people on the Airbnb recruitment team you found on LinkedIn with a message.

Hi, my name is Charlie, I hope this finds you well.

I’ve seen the Machine Learning Engineer role on your careers page and I’d like to apply.

I made this website which shows my solutions to some of your current challenges.

If you check it out, I’d love your advice on what best to do next.

5/6 of the people you message click on it. This is where they see what you’ve done. You built a recommendation engine. It runs live in the browser. It uses your machine learning skills. Airbnb need a machine learning engineer who has experience with recommendation engines. They recommend a few things.

3 reply with next steps of what to do. The other 2 refer to other people.

How many other people sent through a website showcasing their skills?


Maybe you don’t want a job. Maybe you want to research. Maybe you want to get into a university. The same principles apply.

Get good at what you do. Really good.

Share your work.

How much?

80% skills.

20% sharing.