This week, our Deep Learning Study Group focused on a popular neural network architecture for machine vision called Convolutional Neural Networks. This technique is used, for example, to recognise that a face is yours on Facebook or to enable Tesla's self-driving cars to identify traffic. Modelled loosely on the primate visual cortex, this approach is powerful and has become ubiquitous within contemporary Artificial Intelligence.
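The heart of a Convolutional Neural Network is the convolution operation itself: a small kernel of learned weights slid across the image. Here's a minimal sketch in NumPy (the edge-detecting kernel and two-tone image are illustrative, not from the session):

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation: slide the kernel over the image,
    taking an elementwise product-and-sum at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector applied to a simple two-tone image:
# the output is zero everywhere except at the dark-to-light boundary.
image = np.zeros((5, 5))
image[:, 3:] = 1.0
kernel = np.array([[1.0, -1.0]])
edges = conv2d(image, kernel)
```

In a real CNN the kernel values are learned rather than hand-chosen, and many kernels run in parallel, but each one works exactly like this sliding window.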
“Fundamentals of Deep Learning” talk to Data Science + FinTech Meetup →
On Wednesday, I had the joy of presenting to a highly engaged audience at the Data Science + FinTech meetup. Click through for a quick summary, my full slides, and video clips.
Deep Learning Study Group Session #4: Proofs of Key Neural Net Properties →
In our fourth session, we discussed key neural net properties, namely that they can compute any function and that deep ones tend to have unstable (typically vanishing) gradients.
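The vanishing-gradient tendency is easy to see in a few lines of code: the sigmoid's derivative never exceeds 0.25, so the chain-rule product that carries the gradient back through many sigmoid layers shrinks geometrically. A minimal sketch, assuming unit weights and ten layers for illustration:

```python
import math

def sigmoid_prime(z):
    """Derivative of the sigmoid; maximised at z = 0, where it equals 0.25."""
    s = 1.0 / (1.0 + math.exp(-z))
    return s * (1.0 - s)

# The gradient reaching an early layer is a product of one sigma'(z) term
# (times a weight, here taken as 1.0) per layer. Even at the sigmoid's
# sweet spot, ten layers shrink it by a factor of 0.25**10.
grad = 1.0
for _ in range(10):
    grad *= sigmoid_prime(0.0)
```

With less favourable activations or weights the shrinkage is even faster, which is why early layers of deep sigmoid networks learn so slowly.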
Three Themes of Seth Moulton’s Inaugural Term, Visualized with Data →
In 2014, I volunteered on Congressman Seth Moulton's first political campaign. Two years on, I've dusted off my media-analytics toolkit to summarize his inaugural term in office with data visualizations.
It's a joy to see Seth working with Democrats and Republicans alike to deliver for his country and his constituents in Massachusetts. With Seth running unopposed this November, I'm excited to see the contributions his across-the-aisle diplomacy will bring in his second term.
Deep Learning Study Group Session #3: Improving Neural Networks →
During the third installment of our Deep Learning Study Group, we examined rules of thumb for improving neural networks. This post summarises what we covered, including step-by-step details for tuning a network's hyperparameters.
Deep Learning Study Group Session #2: The Backpropagation Algorithm →
Our Deep Learning Study Group moved forward yesterday by focusing on the ubiquitous Backpropagation Algorithm.
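Backpropagation is just the chain rule applied systematically, from the loss backwards to each parameter. A minimal sketch for a single sigmoid neuron with squared-error loss (the parameter values are arbitrary illustrations), checked against a numerical gradient:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, b, x, y):
    """Squared error of a single sigmoid neuron's output."""
    return 0.5 * (sigmoid(w * x + b) - y) ** 2

# Forward pass, then backpropagate the error via the chain rule
w, b, x, y = 0.6, 0.9, 1.0, 0.0
a = sigmoid(w * x + b)            # forward pass: the neuron's activation
delta = (a - y) * a * (1.0 - a)   # dL/dz, the backpropagated "error"
dw, db = delta * x, delta         # dL/dw and dL/db

# Sanity check against a centred-difference numerical gradient
eps = 1e-6
dw_numeric = (loss(w + eps, b, x, y) - loss(w - eps, b, x, y)) / (2 * eps)
```

In a multi-layer network the same `delta` is passed backwards layer by layer, which is what makes backpropagation so much cheaper than perturbing each weight individually.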
Minimizing Unwanted Bias with Recruitment Algorithms →
I woke up this morning to a thoughtful Financial Times piece on the “risks of relying on robots for fairer staff recruitment”.
While the author advances well-founded concerns for our industry, the risks associated with integrating algorithms into the talent acquisition process are appreciably offset by the benefits: scalability, access to a broader candidate pool, and, vitally, openness.
Deep Learning Study Group Session #1: Perceptrons and Sigmoid Neurons →
Last week, it was my pleasure to host the heavily oversubscribed inaugural session of the Deep Learning Study Group at our untapt HQ in New York.
I learned a ton from the broad range of well-prepared, articulate, and intelligent engineers and scientists that attended.
Here's a recap of what we covered and an overview of where we're going next.
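The two neuron types from that session can be sketched in a few lines. The perceptron's output jumps abruptly, while the sigmoid neuron's varies smoothly, which is what makes gradient-based learning possible. The NAND weights below are the standard textbook illustration, not something specific to our session:

```python
import math

def perceptron(w, b, x):
    """Binary output: fires iff the weighted sum plus bias is positive."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def sigmoid_neuron(w, b, x):
    """Smooth output in (0, 1): small weight changes give small output changes."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# With weights (-2, -2) and bias 3, a single perceptron computes NAND,
# from which any logical function can be built
nand = [perceptron([-2, -2], 3, [a, b]) for a in (0, 1) for b in (0, 1)]
```

Feeding the same weights to the sigmoid neuron gives a softened version of the same function: near 1 for inputs (0, 0) and below 0.5 for inputs (1, 1).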
Deep Learning Study Group →
I am launching a study group for folks interested in applying techniques from the explosively influential field of Deep Learning.
Our first meeting will be held at untapt HQ in two weeks. Details, including recommended preparatory work, can be found by clicking through to the full post.
Joint Statistical Meetings 2016 →
I was delighted to represent untapt at the Joint Statistical Meetings in Chicago, the largest gathering of statisticians this year. We presented our novel approach to hierarchical Bayesian modelling, which enables us to efficiently automate decision-making with our data.
The End of Code
This is a brief lay piece about the increasingly widespread field of Deep Learning that conveys the statistical technique's beauty, its mystery, and the power it has over us all.
"As our technological and institutional creations have become more complex, our relationship to them has changed. Instead of being masters of our creations, we have learned to bargain with them, cajoling and guiding them in the general direction of our goals. We have built our own jungle, and it has a life of its own."
Optimal Resume Length →
Leveraging our rich internal data, we observed that the optimal resume length is 250 to 350 words.
Hiring Quickly Matters →
We used our internal data, comprising ten thousand applications to technology roles, to uncover that hiring managers respond significantly more quickly to applications that eventually lead to a hire than to those that don't.
A Data Science Approach to Maximizing Data Scientist Salary →
Does a Ph.D. improve your salary? In the field of Data Science, it does a tiny bit, but it is hardly worth the investment of years of your life (if pay is primarily why you're pursuing the degree). On the other hand, learning a single cutting-edge analytics technique is associated with a pay jump of $12,000. Here are my thoughts on using data to maximise your pay, based on a recent talk at Women in Machine Learning & Data Science.
How to Transition from Academia to Data Science →
I recently had the honor of sitting on a great panel to discuss how academic scientists can transition into industry as Data Scientists. This post summarises our key points on the topic.
Profile in Computational Imagination →
I'm grateful to have been interviewed by Computational Imagination. Check out the transcript for tips on transitioning from academia to the data science industry and a perspective on trends in the field.
"What is Code?" →
Computer code is ubiquitous, driving all of your digital interactions and most modern business processes. If you've never programmed but would like to have an understanding of the fundamentals of software, this post should be a great primer.
The First Self-Aware Machines →
A robot has demonstrated self-awareness. Is Homo sapiens now on the verge of extinction? Probably!
Genomics: Imminent, Positive, Disruptive Tech →
I wax lyrical about biology, particularly recent advances in DNA sequencing, and the impacts we can expect these technologies to have on medicine.
The Neurophysiology of Happiness →
Thanks to my friend Kayleigh Pleas for providing me with such a wealth of brilliant content for this post on the neurophysiology of emotion and how we can deliberately adapt our brain to improve our baseline level of happiness.