A Gentle (and visual) Introduction to Exploratory Data Analysis

Pink singlet, dyed red hair, plaited grey beard, no shoes, John Lennon glasses. What a character. Imagine the stories he’d have. He parked his moped and walked into the cafe.

This cafe is a local favourite. But the chairs aren’t very comfortable. So I’ll keep this short (spoiler: by short, I mean short compared to the amount of time you’ll actually spend doing EDA).

When I first started as a Machine Learning Engineer at Max Kelsen, I’d never heard of EDA. There were a bunch of acronyms I’d never heard of.

I later learned EDA stands for exploratory data analysis.

It’s what you do when you first encounter a data set. But it’s not a one-off process. It’s a continual process.

The past few weeks I’ve been working on a machine learning project. Everything was going well. I had a model trained on a small amount of the data. The results were pretty good.

It was time to step it up and add more data. So I did. Then it broke.

I filled up the memory on the cloud computer I was working on. I tried again. Same issue.

There was a memory leak somewhere. I missed something. What changed?

More data.

Maybe the next sample of data I pulled in had something different to the first. It did. There was an outlier. One sample which had 68 times the amount of purchases as the mean (100).

Back to my code. It wasn’t robust to outliers. It took the outlier’s value and applied it to the rest of the samples, padding them with zeros.

Instead of having 10 million samples with a length of 100, they all had a length of 6800. And most of that data was zeroes.

I changed the code. Reran the model and training began. The memory leak was patched.

Pause.

The guy with the pink singlet came over. He tells me his name is Johnny.

He continues.

‘The girls got up me for not saying hello.’

‘You can’t win,’ I said.

‘Too right,’ he said.

We laughed. The girls here are really nice. The regulars get teased. Johnny is a regular. He told me he has his own farm at home. And his toenails were painted pink and yellow, alternating, pink, yellow, pink, yellow.

Johnny left.

Back to it.

What happened? Why the break in the EDA story?

Apart from introducing you to the legend of Johnny, I wanted to give an example of how you can think the road ahead is clear but really, there’s a detour.

EDA is one big detour. There’s no real structured way to do it. It’s an iterative process.


Why do EDA?

When I started learning machine learning and data science, much of it (all of it) was through online courses. I used them to create my own AI Masters Degree. All of them provided excellent curriculum along with excellent datasets.

The datasets were excellent because they were ready to be used with machine learning algorithms right out of the box.

You’d download the data, choose your algorithm, call the .fit() function, pass it the data and all of a sudden the loss value would start going down and you’d be left with an accuracy metric. Magic.

This was how the majority of my learning went. Then I got a job as a machine learning engineer. I thought, finally, I can apply what I’ve been learning to real-world problems.

Roadblock.

The client sent us the data. I looked at it. WTF was this?

Words, time stamps, more words, rows with missing data, columns, lots of columns. Where were the numbers?

‘How do I deal with this data?’ I asked Athon.

‘You’ll have to do some feature engineering and encode the categorical variables,’ he said, ‘I’ll Slack you a link.’

I went to my digital mentor. Google. ‘What is feature engineering?’

Google again. ‘What are categorical variables?’

Athon sent the link. I opened it.

There it was. The next bridge I had to cross. EDA.

You do exploratory data analysis to learn more about the data before you ever run a machine learning model.

You create your own mental model of the data so when you run a machine learning model to make predictions, you’ll be able to recognise whether they’re BS or not.

Rather than answer all your questions about EDA, I designed this post to spark your curiosity. To get you to think about questions you can ask of a dataset.


Where do you start?

How do you explore a mountain range?

Do you walk straight to the top?

How about walking along the base, trying to find the best path?

It depends on what you’re trying to achieve. If you want to get to the top, it’s probably good to start climbing sometime soon. But it’s also probably good to spend some time looking for the best route.

Exploring data is the same. What questions are you trying to solve? Or better, what assumptions are you trying to prove wrong?

You could spend all day debating these. But best to start with something simple, prove it wrong and add complexity as required.

Example time.


Making your first Kaggle submission

You’ve been learning data science and machine learning online. You’ve heard of Kaggle. You’ve read the articles saying how valuable it is to practice your skills on their problems.

Roadblock.

Despite all the good things you’ve heard about Kaggle, you haven’t made a submission yet.

That was me. Until I put my newly acquired EDA skills to work.

You decide it’s time to enter a competition of your own.

You’re on the Kaggle website. You go to the ‘Start Here’ section. There’s a dataset containing information about passengers on the Titanic. You download it and load up a Jupyter Notebook.

What do you do?

What question are you trying to solve?

‘Can I predict survival rates of passengers on the Titanic, based on data from other passengers?’

This seems like a good guiding light.


An EDA checklist

Every morning, I consult with my personal assistant on what I have to do for the day. My personal assistant doesn’t talk much. Because my personal assistant is a notepad. I write down a checklist.

If a checklist is good enough for pilots to use every flight, it’s good enough for data scientists to use with every dataset.

My morning lists are non-exhaustive, other things come up during the day which have to be done. But having it creates a little order in the chaos. It’s the same with the EDA checklist below.

An EDA checklist

1. What question(s) are you trying to solve (or prove wrong)?
2. What kind of data do you have and how do you treat different types?
3. What’s missing from the data and how do you deal with it?
4. Where are the outliers and why should you care about them?
5. How can you add, change or remove features to get more out of your data?

We’ll go through each of these.

What would you add to the list?


What question(s) are you trying to solve?

I put an (s) in the subtitle. Ignore it. Start with one. Don’t worry, more will come along as you go.

For our Titanic dataset example it’s:

Can we predict survivors on the Titanic based on data from other passengers?

Too many questions will clutter your thought space. Humans aren’t good at computing multiple things at once. We’ll leave that to the machines.

Sometimes a model isn’t required to make a prediction.

Before we go further, if you’re reading this on a computer, I encourage you to open this Jupyter Notebook and try to connect the dots with topics in this post. If you’re reading on a phone, don’t fear, the notebook isn’t going away. I’ve written this article in a way you shouldn’t need the notebook, but if you’re like me, you learn best seeing things in practice.



What kind of data do you have and how do you treat different types?

You’ve imported the Titanic training dataset.
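
If you’re following along at home, loading the data might look something like this (a minimal sketch, assuming you’ve downloaded train.csv from the Kaggle Titanic page into your working directory):

import pandas as pd

# Load the Kaggle Titanic training data
training = pd.read_csv("train.csv")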

Let’s check it out.

training.head()
.head() shows the top five rows of a dataframe. The rows you’re seeing are from the Kaggle Titanic Training Dataset.

Column by column, there’s: numbers, numbers, numbers, words, words, numbers, numbers, numbers, letters and numbers, numbers, letters and numbers and NaNs, letters. Similar to Johnny’s toenails.

Let’s separate the features out into three boxes, numerical, categorical and not sure.

Columns of different information are often referred to as features. When you hear a data scientist talk about different features, they’re probably talking about different columns in a dataframe.

In the numerical bucket we have, PassengerId, Survived, Pclass, Age, SibSp, Parch and Fare.

The categorical bucket contains Sex and Embarked.

And in not sure we have Name, Ticket and Cabin.
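
If you’d rather do this with code, a quick sanity check of the buckets might look like this (a sketch; the bucket names are my own):

# See what type pandas assigned to each column
training.dtypes

# Our three buckets, a starting point rather than a final answer
numerical = ["PassengerId", "Survived", "Pclass", "Age", "SibSp", "Parch", "Fare"]
categorical = ["Sex", "Embarked"]
not_sure = ["Name", "Ticket", "Cabin"]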

Now we’ve broken the columns down into separate buckets, let’s examine each one.

The Numerical Bucket

numerical_bucket.png

Remember our question?

‘Can we predict survivors on the Titanic based on data from other passengers?’

From this, can you figure out which column we’re trying to predict?


We’re trying to predict the green column using data from the other columns.

The Survived column. And because it’s the column we’re trying to predict, we’ll take it out of the numerical bucket and leave it for the time being.

What’s left?

PassengerId, Pclass, Age, SibSp, Parch and Fare.

Think for a second. If you were trying to predict whether someone survived on the Titanic, do you think their unique PassengerId would really help with your cause?

Probably not. So we’ll leave this column to the side for now too. EDA doesn’t always have to be done with code, you can use your model of the world to begin with and use code to see if it’s right later.

How about Pclass, SibSp and Parch?

These are numbers but there’s something different about them. Can you pick it up?

What does Pclass, SibSp and Parch even mean? Maybe we should’ve read the docs more before trying to build a model so quickly.

Google. ‘Kaggle Titanic Dataset’.

Found it.

Pclass is the ticket class: 1 = 1st class, 2 = 2nd class and 3 = 3rd class. SibSp is the number of siblings and spouses a passenger had aboard. And Parch is the number of parents and children a passenger had aboard.

This information was pretty easy to find. But what if you had a dataset you’d never seen before? What if a real estate agent wanted help predicting house prices in their city? You check out their data and find a bunch of columns which you don’t understand.

You email the client.

‘What does Tnum mean?’

They respond. ‘Tnum is the number of toilets in a property.’

Good to know.

When you’re dealing with a new dataset, you won’t always have information available about it like Kaggle provides. This is where you’ll want to seek the knowledge of an SME.

Another acronym. Great.

SME stands for subject matter expert. If you’re working on a project dealing with real estate data, part of your EDA might involve talking with and asking questions of a real estate agent. Not only could this save you time, but it could also influence future questions you ask of the data.

Since no one from the Titanic is alive anymore (RIP (rest in peace) Millvina Dean, the last survivor), we’ll have to become our own SMEs.

There’s something else unique about Pclass, SibSp and Parch. Even though they’re all numbers, they’re also categories.

How so?

Think about it like this. If you can group data together in your head fairly easily, there’s a chance it’s part of a category.

The Pclass column could be labelled, First, Second and Third and it would maintain the same meaning as 1, 2 and 3.

Remember how machine learning algorithms love numbers? Since Pclass, SibSp and Parch are already all in numerical form, we’ll leave them how they are. The same goes for Age.

Phew. That wasn’t too hard.


The Categorical Bucket

categorical_bucket.png

In our categorical bucket, we have Sex and Embarked.

These are categorical variables because you can separate passengers who were female from those who were male. Or those who embarked from C from those who embarked from S.

To train a machine learning model, we’ll need a way of converting these to numbers.

How would you do it?

Remember Pclass? 1st = 1, 2nd = 2, 3rd = 3.

How would you do this for Sex and Embarked?

Perhaps you could do something similar for Sex. Female = 1 and male = 2.

As for Embarked, S = 1 and C = 2.

We can change these using LabelEncoder from the sklearn library (converting the column to strings first so its couple of missing values don’t trip up the encoder).

from sklearn.preprocessing import LabelEncoder
training["Embarked"] = LabelEncoder().fit_transform(training["Embarked"].astype(str))

Wait? Why does C = 0 and S = 2 now? Where’s 1? Hint: There’s an extra category, Q, this takes the number 1. See the data description page on Kaggle for more.

We’ve made some good progress towards turning our categorical data into all numbers but what about the rest of the columns?

Challenge: Now you know Pclass could easily be a categorical variable, how would you turn Age into a categorical variable?
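
One possible approach (a sketch, not the only way, with the bin edges and the AgeGroup name being my own choices):

# Bin Age into a handful of groups, turning a continuous number into a category
training["AgeGroup"] = pd.cut(training["Age"],
                              bins=[0, 12, 18, 40, 60, 100],
                              labels=["child", "teen", "adult", "middle_age", "senior"])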


The Not Sure Bucket

not_sure.png

Name, Ticket and Cabin are left.

If you were on the Titanic, do you think your name would’ve influenced your chance of survival?

It’s unlikely. But what other information could you extract from someone's name?

What if you gave each person a number depending on whether their title was Mr., Mrs. or Miss.?

You could create another column called Title. In this column, those with Mr. = 1, Mrs. = 2 and Miss. = 3.

What you’ve done is created a new feature out of an existing feature. This is called feature engineering.

Converting titles to numbers is a relatively simple feature to create. And depending on the data you have, feature engineering can get as extravagant as you like.
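
A rough sketch of what creating a Title feature might look like (the regular expression and the mapping are illustrative choices, not the only way to do it):

# Pull the title (the word ending in '.') out of each name, e.g. 'Braund, Mr. Owen Harris' -> 'Mr'
training["Title"] = training["Name"].str.extract(r" ([A-Za-z]+)\.", expand=False)

# Map the common titles to numbers, anything rarer becomes 0
training["Title"] = training["Title"].map({"Mr": 1, "Mrs": 2, "Miss": 3}).fillna(0)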

How does this new feature affect the model down the line? This will be something you’ll have to investigate.

For now, we won’t worry about the Name column to make a prediction.

What about Ticket?

ticket_head.png

The first few examples don’t look very consistent at all. What else is there?

training.Ticket.head(15)

The first 15 entries of the Ticket column.

These aren’t very consistent either. But think again. Do you think the ticket number would provide much insight as to whether someone survived?

Maybe if the ticket number related to what class the person was riding in, it would have an effect but we already have that information in Pclass.

To save time, we’ll forget the Ticket column for now.

Your first pass of EDA on a dataset should have the goal of not only raising more questions about the data, but also building a model using the least amount of information possible so you’ve got a baseline to work from.

Now, what do we do with Cabin?

You know, since I’ve already seen the data, my spidey-senses are telling me it’s a perfect example for the next section.

Challenge: I’ve only listed a couple of examples of numerical and categorical data here. Are there any other types of data? How do they differ from these?


What’s missing from the data and how do you deal with it?

import missingno
missingno.matrix(training, figsize=(30, 10))
The missingno library is a great way to quickly and visually check for holes in your data. It detects where NaN values (or no values) appear and highlights them. White lines indicate missing values.

The Cabin column looks like Johnny’s shoes. Not there. There are a fair few missing values in Age too.

How do you predict something when there’s no data?

I don’t know either.

So what are our options when dealing with missing data?

The quickest and easiest way would be to remove every row with missing values. Or remove the Cabin and Age column entirely.

But there’s a problem here. Machine learning models like more data. Removing large amounts of data will likely decrease the ability of our model to predict whether a passenger survived or not.

What’s next?

Imputing values. In other words, filling up the missing data with values calculated from other data.

How would you do this for the Age column?

When we called .head() the Age column had no missing values. But when we look at the whole column, there are plenty of holes.

Could you fill missing values with average age?

There are drawbacks to this kind of value filling. Imagine you had 1000 total rows, 500 of which are missing values. You decide to fill the 500 missing rows with the average age of 36.

What happens?

Your data becomes heavily stacked with the age of 36. How would that influence predictions on people 36-years-old? Or any other age?

Maybe for every person with a missing age value, you could find other similar people in the dataset and use their age. But this is time-consuming and also has drawbacks.

There are far more advanced methods for filling missing data which are out of scope for this post. It should be noted, there is no perfect way to fill missing values.

If the missing values in the Age column are a leaky drain pipe, the Cabin column is a cracked dam. Beyond saving. For your first model, Cabin is a feature you’d leave out.
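
For a first pass, a simple (and imperfect) way of handling both columns might look like this:

# Fill missing ages with the median age (less sensitive to outliers than the mean)
training["Age"] = training["Age"].fillna(training["Age"].median())

# Cabin is missing far too many values to be useful in a first model, so drop it
training = training.drop("Cabin", axis=1)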

Challenge: The Embarked column has a couple of missing values. How would you deal with these? Is the amount low enough to remove them?


Where are the outliers and why should you pay attention to them?

‘Did you check the distribution?’ Athon asked.

‘I did with the first set of data but not the second set…’ It hit me.

There it was. The rest of the data was being shaped to match the outlier.

If you look at the number of occurrences of unique values within a dataset, one of the most common patterns you’ll find is Zipf’s law. It looks like this.

Zipf’s law: The highest occurring variable will have double the number of occurrences of the second highest occurring variable, triple the amount of the third and so on.

Remembering Zipf’s law can help to think about outliers (values towards the end of the tail don’t occur often and are potential outliers).

The definition of an outlier will be different for every dataset. As a general rule of thumb, you might consider anything more than three standard deviations away from the mean to be an outlier.

You could use a general rule to consider anything more than three standard deviations away from the mean as an outlier.

Or from another perspective.

Outliers from the perspective of an (x, y) plot.

How do you find outliers?

Distribution. Distribution. Distribution. Distribution. Four times is enough (I’m trying to remind myself here).

During your first pass of EDA, you should be checking what the distribution of each of your features is.

A distribution plot will help represent the spread of different values across your data. And more importantly, help to identify potential outliers.

training.Age.plot.hist()

Histogram plot of the Age column in the training dataset. Are there any outliers here? Would you remove any age values or keep them all?
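
If you wanted to put the three-standard-deviations rule of thumb into code for the Age column, a sketch might look like this:

# Flag ages more than three standard deviations from the mean as potential outliers
age_mean, age_std = training["Age"].mean(), training["Age"].std()
potential_outliers = training[(training["Age"] - age_mean).abs() > 3 * age_std]
potential_outliers[["Name", "Age"]]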

Why should you care about outliers?

Keeping outliers in your dataset may result in your model overfitting (being too accurate on the data it has seen). Removing all the outliers may result in your model being too generalised (it doesn’t do well on anything out of the ordinary). As always, it’s best to experiment iteratively to find the best way to deal with outliers.

Challenge: Other than figuring out outliers with the general rule of thumb above, are there any other ways you could identify outliers? If you’re confused about a certain data point, is there someone you could talk to? Hint: the acronym contains the letters M E S.


Getting more out of your data with feature engineering

The Titanic dataset only has 10 features. But what if your dataset has hundreds? Or thousands? Or more? This isn’t uncommon.

During your exploratory data analysis process, once you’ve started to form an understanding AND you’ve got an idea of the distributions AND you’ve found some outliers AND you’ve dealt with them, the next biggest chunk of your time will be spent on feature engineering.

Feature engineering can be broken down into three categories: adding, removing and changing.

The Titanic dataset started out in pretty good shape. So far, we’ve only had to change a few features to be numerical in nature.

However, data in the wild is different.

Say you’re working on a problem trying to predict the changes in banana stock requirements of a large supermarket chain across the year.

Your dataset contains a historical record of stock levels and previous purchase orders. You're able to model these well but you find there are a few times throughout the year where stock levels change irrationally. Through your research, you find during a yearly country-wide celebration, banana week, the stock levels of bananas plummet. This makes sense. To keep up with the festivities, people buy more bananas.

To compensate for banana week and help the model learn when it occurs, you might add a column to your data set with banana week or not banana week.

import numpy as np
# We know Week 2 is a banana week so we can set it using np.where()
df["Banana Week"] = np.where(df["Week Number"] == 2, 1, 0)
A simple example of adding a binary feature to dictate whether a week was banana week or not.

Adding a feature like this might not be so simple. You could find adding the feature does nothing at all since the information you’ve added is already hidden within the data. As in, the purchase orders for the past few years during banana week are already higher than other weeks.

What about removing features?

We’ve done this as well with the Titanic dataset. We dropped the Cabin column because it was missing so many values before we even ran a model.

But what about if you’ve already run a model using the features left over?

This is where feature contribution comes in. Feature contribution is a way of figuring out how much each feature influences the model.

An example of a feature contribution graph using Sex, Pclass, Parch, Fare, Embarked and SibSp features to predict who would survive on the Titanic. If you’ve seen the movie, why does this graph make sense? If you haven’t, think about it anyway. Hint: ‘Save the women and children!’
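
One way to get numbers like these (not necessarily how the graph above was made) is to fit a tree-based model and read off its feature importances:

from sklearn.ensemble import RandomForestClassifier

# Assumes these columns have already been converted to numbers (see the earlier encoding steps)
features = ["Sex", "Pclass", "Parch", "Fare", "Embarked", "SibSp"]
X = training[features]
y = training["Survived"]

model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)
for name, importance in sorted(zip(features, model.feature_importances_), key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")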

Why is this information helpful?

Knowing how much a feature contributes to a model can give you direction as to where to go next with your feature engineering.

In our Titanic example, we can see the contribution of Sex and Pclass were the highest. Why do you think this is?

What if you had more than 10 features? How about 100? You could do the same thing. Make a graph showing the feature contributions of 100 different features. ‘Oh, I’ve seen this before!’

Zipf’s law back at it again. The top features have far more to contribute than the bottom features.

Zipf’s law at play with different features and their contribution to a model.

Seeing this, you might decide to cut the lesser contributing features and improve the ones contributing more.

Why would you do this?

Removing features reduces the dimensionality of your data. It means your model has fewer connections to make to figure out the best way of fitting the data.

You might find removing features means your model can get the same (or better) results with less data and in less time.

Like Johnny is a regular at the cafe I’m at, feature engineering is a regular part of every data science project.

Challenge: What are other methods of feature engineering? Can you combine two features? What are the benefits of this?


Building your first model(s)

Finally. We’ve been through a bunch of steps to get our data ready to run some models.

If you’re like me, when you started learning data science, this is the part you learned first. All the stuff above had already been done by someone else. All you had to do was fit a model on it.

Our Titanic dataset is small. So we can afford to run a multitude of models on it to figure out which is the best to use.

Notice how I put an (s) in the subtitle. You can pay attention to this one.

Cross-validation accuracy scores from a number of different models I tried using to predict whether a passenger would survive or not.
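
A sketch of how you might compare a few models with cross-validation (the exact models and scores will differ from the graph above; X and y are the encoded features and target from the feature contribution example):

from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

models = {"Logistic Regression": LogisticRegression(max_iter=1000),
          "Random Forest": RandomForestClassifier(n_estimators=100, random_state=42),
          "K-Nearest Neighbours": KNeighborsClassifier()}

# 5-fold cross-validation accuracy for each model
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f}")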

But once you’ve had some practice with different datasets, you’ll start to figure out what kind of model usually works best. For example, most recent Kaggle competitions have been won with ensembles (combinations) of different gradient boosted tree algorithms.

Once you’ve built a few models and figured out which is best, you can start to optimise the best one through hyperparameter tuning. Think of hyperparameter tuning as adjusting the dials on your oven when cooking your favourite dish. Out of the box, the preset setting on the oven works pretty well but out of experience you’ve found lowering the temperature and increasing the fan speed brings tastier results.

It’s the same with machine learning algorithms. Many of them work great out of the box. But with a little tweaking of their parameters, they work even better.
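
Continuing the oven analogy, a minimal sketch of hyperparameter tuning with a grid search (the parameter grid is just an example, not a recommendation):

from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier

# Try every combination of these settings and keep the best one
param_grid = {"n_estimators": [100, 300], "max_depth": [3, 5, None]}
search = GridSearchCV(RandomForestClassifier(random_state=42), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)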

But no matter what, even the best machine learning algorithm won’t result in a great model without adequate data preparation.

Exploratory data analysis and model building is a repeating circle.

The EDA circle of life.

A final challenge (and some extra-curriculum)

I left the cafe. My ass was sore.

At the start of this article, I said I’d keep it short. You know how that turned out. It will be the same as your EDA iterations. When you think you’re done. There’s more.

We covered a non-exhaustive EDA checklist with the Titanic Kaggle dataset as an example.

1. What question are you trying to solve (or prove wrong)?

Start with the simplest hypothesis possible. Add complexity as needed.

2. What kind of data do you have?

Is your data numerical, categorical or something else? How do you deal with each kind?

3. What’s missing from the data and how do you deal with it?

Why is the data missing? Missing data can be a sign in itself. You’ll never be able to replace it with anything as good as the original but you can try.

4. Where are the outliers and why should you pay attention to them?

Distribution. Distribution. Distribution. Three times is enough for the summary. Where are the outliers in your data? Do you need them or are they damaging your model?

5. How can you add, change or remove features to get more out of your data?

The default rule of thumb is more data = good. And following this works well quite often. But is there anything you can remove and still get the same results? Less but better? Start simple.

Data science isn’t always about getting answers out of data. It’s about using data to figure out what assumptions of yours were wrong. The most valuable skill a data scientist can cultivate is a willingness to be wrong.

There are examples of everything we’ve discussed here (and more) in the notebook on GitHub and a video of me going through the notebook step by step on YouTube (the coding starts at 5:05).

FINAL BOSS CHALLENGE: If you’ve never entered a Kaggle competition before, and want to practice EDA, now’s your chance. Take the notebook I’ve created, rewrite it from top to bottom and improve on my result. If you do, let me know and I’ll share your work on my LinkedIn. Get after it.

Extra-curriculum bonus: Daniel Formoso's notebook is one of the best resources you’ll find for an extensive look at EDA on a Census Income Dataset. After you’ve completed the Titanic EDA, this is a great next step to check out.

If you’ve got something on your mind you think this article is missing, leave a response below or send me a note and I’ll be happy to get back to you.

Source: https://towardsdatascience.com/a-gentle-in...

Work in progress

I’m working on a longer form article. An introduction to exploratory data analysis to go along with the Code with Me video I did exploring the Kaggle Titanic dataset and the notebook code to go with it.

I’ve spent the past two days writing and refining it.

I wanted to get it published today but it’s getting late and you know my thoughts on sleep. I work better when I sleep well.

In the past I’d have trouble walking away from something unless it was done. But I’ve learned, especially with writing (and code), it pays to walk away, think about nothing for a while and then come back at it with a different pair of eyes.

The next time you look at it, you’ll see things you missed before. That’s what I’ll be doing tomorrow morning.

If you want to read it in the meantime, it’s in draft form on Medium. It needs some graphics and a little tidying but if you do read it, what would you change?

The Five C's of Online Learning

This post originally appeared on Quora as my answer to 'Udacity or Coursera for AI machine learning and data science courses?'

P1000829.jpg

Tea or coffee?

Burger or sandwich?

Rain or sunshine?

Pushups or pull-ups?

Can you see the pattern?

Similar but different. It’s the same with Udacity and Coursera.

I used both of them for my self-created AI Masters Degree. And they both offer incredibly high-quality content.

The short answer: both.

Keep scrolling for a longer version.

Let’s go through the five C’s of online learning.

If you’ve seen my work, you know I’m a big fan of digging your own path and online platforms like Udacity and Coursera are the perfect shovel. But doing this right requires thought around five pillars.


Curiosity

When you imagine the best version of yourself 3–5 years in the future, what are they doing?

Does it align with what’s being offered by Udacity or Coursera?

Is the future you a machine learning engineer at a technology company?

Or have you decided to take the leap on your latest idea and go full startup mode?

It doesn’t matter what the goal is. All of them are valid. Mine is different to yours and yours will be different to the other students in your cohort.

The important part is an insatiable curiosity. In Japanese, this curiosity is referred to as ikigai or your reason for getting up in the morning.

Day to day, you won’t be bounding out of bed running to the laptop to get into the latest class or complete the assignment you’re stuck on.

There will be days where everything else except studying seems like a better option.

Don’t beat yourself up over it. It happens. Take a break. Rest.

Even with all the drive in the world, you still need gas.


Contrast

Sam was telling me about a book he read over the holidays.

‘There were some things I agreed with but some things I didn’t.’

My insatiable curiosity kicked in.

‘What did you disagree with?’

I was more interested in that. He said it was a good book. What were the things he didn’t like?

Why didn’t he like those things?

The contrast is where you learn the most.

When someone agrees with you, you don’t have to back up your argument. You don’t have to explain why.

But have you ever heard two smart people argue?

I want to hear more of those conversations.

When two smart people argue, you’ve got an opportunity to learn the most.

If they're both smart, why do they disagree?

What are their reasons for disagreeing?

Take this philosophy and apply it to learning online through Udacity or Coursera.

If they’re like tea and coffee, where's the difference?

When I did the Deep Learning Nanodegree on Udacity, I felt like I had a wide (but shallow) introduction to deep learning.

Then when I did Andrew Ng’s deeplearning.ai after, I could feel the knowledge compounding.

Andrew Ng’s teachings didn’t disagree with Udacity’s, they offered a different point of view.

The value is in the contrast.


Content

Both partner with world-leading organisations.

Both have world class quality teachers.

Both have state of the art learning platforms.

When it comes to content, you won’t be disappointed by either.

I’ve done multiple courses on both platforms and I rate them among the best courses I’ve ever done. And I went to university for 5-years.

Udacity Nanodegrees tend to go for longer than Coursera Specializations.

For example, the Artificial Intelligence Nanodegree is two terms both about 3–4 months long.

Whereas with Coursera Specializations (although at times a similar length), you can dip in and out.

For example, complete part 1 of a Specialization, take a break and return to the next part when you’re ready. I’m doing this for the Applied Data Science with Python Specialization.

If content is at the top of your decision-making criteria, make a plan of what it is you hope to learn. Then experiment with each of the platforms to see which better suits your learning style.


Cost

Udacity has a pay upfront pricing model.

Coursera has a month-to-month pricing model.

There have been times I completed an entire Specialization on Coursera within the first month of signing up, hence only paying for one month.

Whereas, all the Udacity Nanodegrees I’ve done, I’ve paid the total up front and finished on (or after) the deadline.

This could be Parkinson’s Law at play: things take up as much time as you allow them.

Both platforms offer scholarships as well as financial support services, however, I haven’t had any experience with these.

I drove Uber on weekends for a year to pay for my studies.

I’m a big believer in paying for things.

Especially education.

When I pay for something, I take it more seriously.

Paying for something is a way of saying to yourself, I’m investing my money (and the time spent earning it), I’d better invest my time into it too.

All the courses I’ve completed on both platforms have been worth more than the money I spent on them.


Continuation

You’ve decided on a learning platform.

You’ve decided on a course.

You work through it.

You enjoy it.

Now what do you do?

Do you start the next course?

Do you start applying for jobs?

Does the platform offer any help with getting into the industry?

Udacity has a service which partners students who have completed a Nanodegree with a careers counsellor to help you get a role.

I’ve never got a chance to use this because I was hired through LinkedIn.

What can you do?

Don’t be focused on completing all the courses.

Completing courses is the same as completing tasks. Rewarding. But more tasks don’t necessarily move the needle.

Focus on learning skills.

Once you’ve learned some skills. Practice communicating those skills.

How?

Share your work.

Have a nice GitHub repository with things you’ve built. Stack out your LinkedIn profile. Build a website where people can find you. Talk to people in your industry and ask for their advice.

Why?

Because a few digital certificates aren’t a reason to hire someone.

Done all that?

Good. Now remember, the learning never stops. There is no finish line.

This isn’t scary. It’s exciting.

You stop learning when your heart stops beating.


Let’s wrap it up

Both platforms offer some of the highest quality education available.

And I plan on continuing to use them both to learn machine learning, data science and many other things.

But if you can only choose one, remember the five C’s.

  1. Curiosity — Stay curious. Remember it when learning gets tough.

  2. Contrast — Remix different learning resources. All the value in life comes from the combination of great things.

  3. Content — What content matches your curiosity? Follow that.

  4. Cost — Cost restrictions are real. But when used right, your education is worth it.

  5. Continuation — Learn skills, apply them, share them, repeat.

More

I’ve written and made videos about these topics in the past. You might find some of the resources below valuable.

Source: https://qr.ae/TUnFZB

Some thoughts on university versus learning online for data science

Zac emailed me asking a question.

Keep on working and keep looking for new opportunities in the field…
OR 
Go back to uni and finish the last 18 months of my degree.

He just finished an internship and has about 18-months left at university before he finishes his computer science degree.

It’s a tough choice.

I sat and thought about it for a while. Then replied to the email with some unedited thoughts.

And I’m sharing them here, also unedited. Bear in mind, I’ve never been to university to study computer science.

Zac,

Here’s how I see it, I’m gonna write a few thoughts out loud.

Where do you want to be/see yourself in 3-5 years? 

It sounds like you’re pretty switched on to where your skillset lies (aka, teaching yourself, working on things which interest you).

Might be worth having a think about which one better suits the ideal version of you in 3-5 years.

Does that ideal version of you require a university degree? Or could that version of you get by without one?

Which one is the most uncomfortable in the short term?

I’m very long term focused (I have to remind myself of this every day). So whenever I come up to a hard decision, I ask myself, ‘Which one is hardest in the short term?’

I treat short term as anything under 2-3 years (the starting era of the ideal version of yourself).

18-months isn’t really the longest time

How much of a rush are you in?

Could you stick out the 18-months, share your work online through an online portfolio, upskill yourself through various other courses (and jump ahead of others) and come out with a degree AND some extra skills?

Get after it

This is countering the above point.

If you think you have the balls to chase after it (sounds like you already do), why do you need university to be a gatekeeper?

Sure, not having an official degree may shut you off from some companies, but to me, a piece of paper never really meant much. Especially when the best quality materials in the world are available online.

I have a colleague doing a data science masters at UQ and he said he has learned way more since working with Max Kelsen than at university.

Put it this way, I was driving Uber this time last year. But I followed through with my curriculum, shared my work online and got found by an awesome company.

Share your work

Whichever path you choose, I can’t emphasise this enough. Make sure people can find you online.

If you’re not going to get a degree, be the person whose name comes up on others’ LinkedIn feeds for data science posts. Have some good Medium articles, share what you’ve been doing.

It’ll feel weird at the start. Trust me. But then you’ll realise the potential of it.

All of a sudden, you can become an expert in your field by being the one to communicate the skills you’re learning.

How did I do?

What would you do in Zac’s situation? Learn online and look for more work experience? Or stick out the 18-months of computer science?

How to send 500 lines of Python code up in flames

Jupyter Lab was open. The notebooks and data I was working on were sitting on the left.

It was close to home time.

After digging through docs all day to figure out how to get some old code running, my brain was looking like this: FQRUQ#$%(#$QJTQITHqRjlrjkaw

Every time I push to GitHub I have to look up a guide.

3-minutes later, I sent off a push command and noticed all the files were being pushed. Not what we wanted. Only the new stuff was to go up.

CTRL-C.

CTRL-C.

CTRL-C.

It stopped.

> git reset
> git add [notebooks]
> git commit -m "adding latest datascience notebooks"
> git push origin master

"You're already one commit ahead of master."

What?

Google time.

*clicks on first stackoverflow link*

No good.

*clicks back*

*next link*

Better.

"To rollback a commit you can use git revert..."

Back to the shell.

> git log

*copies previous commit hash key*

> git revert [insert above hash key]

All of a sudden two files disappeared from the Jupyter Lab directory. The exact two files I wanted to commit.

And the data folder was now empty. "Huh?"

30-minutes of trying to revert a revert later, 1 of the 2 notebooks was nowhere to be seen. We saved one because I still had it open. The other wasn’t as lucky. ~500 lines of Python code gone.

Moral of the story?

When it comes to Git, move slow and save things.

-

PS. I found a really cool (and colourful) guide on how to use git. I’ve bookmarked it for future reference. If you want to step up your git/GitHub game, you might be interested.

Source: https://www.linkedin.com/feed/update/urn:l...

How to explore your first Kaggle competition dataset and make a submission

The first time doing something is always the hardest.

People had asked me in the past, 'Have you entered Kaggle competitions?'

'Not yet.'

Until the other day. I made my first official submission.

I'd dabbled before. Looked around at the website. Read some posts. But never properly downloaded the data and went through it.

Why?

Fear. Fear of looking at the data and having no idea what to do. And then feeling bad for not knowing anything.

But after a while, I realised that's not a helpful way to think.

I downloaded the Titanic dataset. The one that says 'Start here!' when you visit the competitions page.

A few months into learning machine learning, I wouldn't have been able to explore the dataset.

I learned by starting at the top of the mountain instead of climbing up from the bottom. I started with deep learning instead of practising how to explore a dataset from scratch.

But that's okay. The same principle would apply if you start exploring a dataset from scratch. Once the datasets got bigger, and you wanted your models to be better, you'd have to learn deep learning eventually.

Working through the Titanic data took me a few hours. Then another few hours to tidy up the code. The first run through of any data exploration should always be a little messy. After all, you're trying to build an intuition of the data as quickly as possible.

Then came submission time. My best model got a score of just under 76%. Yours will too if you follow through the steps in the notebook on my GitHub.

I made the notebook accessible so you can follow it through and make your very own first Kaggle submission.

There are a few challenges and extensions too if you want to improve on my score. I encourage you to see how you go with these. They might improve the model, they might not.

If you do beat my score, let me know. I'd love to hear about what you did.

Want a coding buddy? When I finished my first submission, I livestreamed myself going step by step through the code. I did my best to explain each step without going into every little detail (otherwise the video would've been 6-hours long instead of 2).

I'll be writing a more in-depth post on the what and why behind the things I did in the notebook. Stay tuned for that.

In the meantime, go and beat my score!

You can find the full code and data on my GitHub.

What kind of data do you have?

So you’ve got some data and you’re wondering what can be learned from it. Is it numerical or categorical? Does it have high dimensionality or cardinality?

Dimension-what-ity?

It’s no secret that data is everywhere. But it’s important to recognise not all data is the same. You might have heard the term data cleaning before. And if you haven’t, it’s not too different to regular cleaning.

When you decide it’s time to tidy your house, you put the clothes on the floor away, and move the stuff from the table back to where it should go. You’re bringing order back to a chaotic environment.

The same thing happens with data. When a machine learning engineer starts looking at a dataset, they ask themselves, ‘where should this go?’, ‘what was this supposed to be?’ Just like putting clothes back in the closet, they start moving things around, changing the values of one column and normalising the values of another.

But wait. How do you know what to do to each piece of data?

Back to the house cleaning analogy. If you have a messy kitchen table, how do you know where each of the items goes?

The spices go in the pantry because they need to stay dry. The milk goes back in the fridge because it has to stay cold. And the pile of envelopes you haven’t opened yet can probably go into the study.

Now say you have a messy table of data. One column has numbers in it, the other column has words in it. What could you do with each of these?

A convenient way to break this down is into numerical and categorical data.

Before we go further, let’s meet some friends to help unpack these two types of values.

Harold the pig loves numbers. He counts his grains of food every day.

Klipklop the horse watches all the cars go past the field and knows every type there is.

And Sandy the fish loves both. She knows there’s safety in numbers and loves all the different types of marine life under the sea.

Harold the pig loves numerical data, Klipklop favours categorical data and Sandy the fish loves both.

Numerical data

Like Harold, computers love numbers.

With any dataset, the goal is often to transform it in a way so all the values are in some kind of numerical state. This way, computers can work out patterns in the numbers by performing large-scale calculations.

In Harold’s case, his data is already in a numerical state. He remembers how many grains of food he’s had every day for the past three years.

He knows on Saturdays he gets a little extra. So he saves some for Mondays when the supply is less.

You don’t necessarily need a computer to figure out this kind of pattern. But what if you were dealing with something more complex?

Like predicting what Company X’s stock price would be tomorrow, based on the value of other similar companies and recent news headlines about Company X?

Ok – so you know the stock prices of Company X and four other similar companies. These values are all numbers. Now you can use a computer to model these pretty easily.

But what if you wanted to incorporate the headline ‘Company X breaks new records, an all-time high!’ into the mix?

Harold is great at counting. But he doesn’t know anything about the different types of grains he has been eating. What if the type of grain influenced how many pieces of grain he received? Just like how a news headline may influence the price of a stock.

The kind of data that doesn’t come in a straightforward numerical form is called categorical data.


Categorical data

Categorical data is any kind of data which isn’t immediately available in numerical form. And it’s typically where you will hear the terms dimensionality and cardinality thrown around.

This is where Klipklop the horse comes in. He watches the cars go past every day and knows the make and model of each one.

But say you wanted to use this information to predict the price of a car.

You know the make and model contribute something to the value. But what exactly?

How do you get a computer to understand that a BMW is different from a Toyota?

With numbers.

This is where the concept of feature encoding comes in. Or in other words, turning a category into a number so that a computer learns how each of the numbers relates.

Let’s say it’s been a quiet day and Klipklop has only seen 3 cars.

A BMW X5, a Toyota Camry and a Toyota Corolla. How could you turn these cars into numbers a machine could understand whilst still keeping their inherent differences?

There are many techniques, but we’ll look at two of the most popular – one-hot-encoding and ordinal encoding.

Ordinal Encoding

This is where the car and its make are assigned a number in the order they appeared.

Say the BMW went by first, followed by the Camry, then the Corolla.

Table 1: Example of ordinal encoding different car makes.
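
In code, a sketch of the same idea might look like this (the column names are my own, for illustration):

import pandas as pd

# The three cars Klipklop saw, in the order they appeared
cars = pd.DataFrame({"make": ["BMW", "Toyota", "Toyota"],
                     "model": ["X5", "Camry", "Corolla"]})

# Ordinal encoding: number the cars in the order they went past
cars["car_encoded"] = [1, 2, 3]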

But does this make sense?

By this logic, a BMW X5 plus a Toyota Camry should equal a Toyota Corolla (1 + 2 = 3). Not really.

Ordinal encodings can be used for some situations like time intervals but it’s probably not the best choice for this case.

One-hot-encoding

One-hot encoding assigns a 1 to every value that applies to each individual car, and 0 to every value that does not apply.

Table 2: Example of one-hot encoding different car makes and types.
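
A quick sketch of the same idea with pandas (the column names here are my own and will differ from the table above):

import pandas as pd

cars = pd.DataFrame({"make": ["BMW", "Toyota", "Toyota"],
                     "model": ["X5", "Camry", "Corolla"]})

# One-hot encode: every make and every model gets its own 0/1 column (5 columns in total)
one_hot = pd.get_dummies(cars, columns=["make", "model"])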

Now our two Toyotas are similar to each other because they both have 1’s for Toyota but differ on their model.

One-hot-encoding works well to encode category values into numbers but has a downfall. Notice how the number of values used to describe a car increased from 2 to 5.

This is where the term high dimensionality gets used. There are now more parameters describing what each car is than there are cars.

For a computer to learn meaningful results, you want the ratio to be high in the opposite way.

In other words, you’d prefer to have 6,000 examples of cars and only 6 ways of describing them rather than the other way round.

But of course, it doesn’t always work out this way. You may end up with 6,000 cars and 1,000 different ways of describing them because Klipklop has seen 500 different types of makes and models.

This is the issue of high cardinality – when you have many different ways of describing something but not many examples of each.

For an ideal price prediction system, you’d want something like 1,000 Toyota Corollas, 1,000 BMW X5s and 1,000 Toyota Camrys.

Ok, enough about cars.

What about our stock price problem? How could you incorporate a news headline into a model?

Again, you could do this a number of ways. But we’ll start with a binary representation.

Binary Encoding

You were born before the year 2000, true or false?

Let’s say you answered true. You get a 1. Everyone born after the year 2000 gets a 0. This is binary encoding in a nutshell.

For our stock price prediction, let’s break our news headlines into two categories – good and bad. Good headlines get a 1 and bad headlines get a 0.

With this information, we could scan the web, collecting headlines as they come in and feeding these into our model. Eventually, with enough examples, it would start to get a feel of the stock price changes based on the value it received for the headline.

And with the model, you start to notice a trend – every time a bad headline comes out, the stock price goes down. No surprises.

We’ve used a simple example here and binary encodings don’t exactly capture the intensity of a good or bad headline. What about neutral, very good or very bad? This is where the previously discussed ordinal encoding could come in.

-2 for very bad headlines, -1 for bad, 0 for neutral, 1 for good and 2 for very good. Now it makes sense that very bad + very good = neutral.
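
A tiny sketch of that headline scale (the categories and scores are just for illustration):

import pandas as pd

# Map headline sentiment onto an ordinal scale from -2 (very bad) to 2 (very good)
sentiment_scale = {"very bad": -2, "bad": -1, "neutral": 0, "good": 1, "very good": 2}
headlines = pd.Series(["good", "very bad", "neutral"])
headlines.map(sentiment_scale)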

There are more complex ways to bring words into a machine learning model but we’ll leave those for a future article.

The important thing to note is that there are many different ways seemingly non-numerical information can be converted into something a computer can understand.


What can you do?

Machine learning engineers and data scientists spend much of their time trying to think like Sandy the fish.

Sandy knows she’ll be safe staying with the other school of fish but she also knows there’s plenty to learn from exploring the unknown.

It’s easy to lean on only numerical information to draw insights from. But there’s so much more information hidden in diverse ways.

By using a combination of numerical and categorical information, more realistic and helpful models of the world can be built.

It’s one thing to model the stock market using price information, but it’s a whole other game when you add news headlines to the mix.

If you’re looking to start harnessing the power of your data with techniques like machine learning and data science, there are a few things you can do to get the most out of it.

Normalising your data

If you’re collecting data, what format is it stored in?

The format itself isn’t necessarily as important as the uniformity. Collect it but make sure it’s all stored in the same way.

This applies for numerical and categorical data, but especially for categorical data.

More is better

The ideal dataset has a good balance between cardinality and dimensionality.

In other words, plenty of examples of each particular sample.

Machines aren’t quite as good as humans when it comes to learning (yet). We can see Harold the pig once and remember what a pig looks like, whereas a computer needs thousands of pictures of a pig to remember what a pig looks like.

A general rule of thumb for machine learning is that more (quality) data equals better models.

Document what each piece of information relates to

As more and more data is collected, it’s important to be able to understand what each piece of information relates to.

At Max Kelsen, before any kind of machine learning model is run, the engineers spend plenty of time liaising with subject matter experts who are familiar with the data set.

Why is this important?

Because a machine learning engineer may be able to build a model which is 99% accurate but it’s useless if it’s predicting the wrong thing. Or worse, 99% accurate on the wrong data.

Documenting your data well can help prevent these kinds of misfires.

It doesn’t matter whether you’ve got numerical data, categorical data or a combination of both – if you’re looking to get more out of it, Max Kelsen can help.

Source: https://maxkelsen.com/blog/what-kind-of-da...

Four hours per day

Is all you need.

If you want to learn something, the best way to do it is bit by bit.

Cramming for exams in university never worked for me. I remember walking into campus straight to the canteen on exam day.

‘Two Red Bulls please.’

Then my knee would spend the next two-hours in the exam room tapping away but my brain would fail to connect the dots.

The most valuable thing I took away from university was learning how to learn.

By my final year, my marks started to improve. Instead of cramming a couple of days before the exam, I spread my workload out over the semester. Nothing revolutionary by any means. But it was to me.

Now whenever I want to learn something, I do the same. I try to do a little per day.

For data science and programming, my brain maxes out at around four hours. After that, the work starts following the law of diminishing returns.

I use the Pomodoro technique.

On big days I’ll aim for 10.

Other days I’ll aim for 8.

It’s simple. You set a timer for 25-minutes and do nothing but the single task you set yourself at the beginning of the day for that 25-minutes. And you repeat the process for however many times you want.

Let’s say you did it 10-times, your day might look like:

8:00 am

Pomodoro 1

5-minute break

Pomodoro 2

5-minute break

Pomodoro 3

5-minute break

Pomodoro 4

30-minute break

10:25 am

Pomodoro 5

5-minute break

Pomodoro 6

5-minute break

Pomodoro 7

5-minute break

Pomodoro 8

60-minute break

1:20 pm

Pomodoro 9

5-minute break

Pomodoro 10

5-minute break

2:20 pm

Now it’s not even 2:30 pm and if you’ve done it right, you’ve got some incredible work done.

You can use the rest of the afternoon to catch up on those things you need to catch up on.

Don’t think 10 lots of 25-minutes (just over 4-hours) is enough time to do what you need?

Try it. You’ll be surprised what you can accomplish in 4-hours of focused work.

The schedule above is similar to how I spent my day the other day. Except I threw in a bit of a longer break during the middle of the day to go to training and have a nap.

I was working through the Applied Data Science Specialization with Python by the University of Michigan on Coursera. The lessons and projects have been incredibly close to what I’ve been doing day-to-day as a Machine Learning Engineer at Max Kelsen.

PS best to put your phone out of sight when you’ve got your timer going. I use a Mac App called Be Focused, it’s simple and does exactly the above.