When I first learned to swim as a child, I was forced to face my fears and plunge into the water, back first. Though it was scary, it proved to be a very effective way to learn; the backstroke is now one of the things I enjoy most about swimming. An age-old Bateke proverb captures this way of learning:
You learn how to cut down the trees by cutting them down.
Today, I see organizations overwhelmed by machine learning (and other new-generation technologies), largely because of the ubiquitous content about prerequisites (skill sets, infrastructure, data, and so on), to the point that ML's value to the business looks uncertain. Their experience reminds me of my initial fear in that first swimming lesson, and one recommendation stands out above the rest:
Stop viewing ML from the edge. Jump in the pool and practice.
As an experienced practitioner of ML, I’d like to share my insights and a framework that could enable companies to start reducing ML to practice and derive valuable outcomes. But before I jump in, I’d like to set the context with an example of a valuable learning moment.
As autumn settles in on the city of Chicago, I have started taking regular evening runs. On one of these occasions, I snapped this picture of the Chicago skyline before taking a break to binge some online content.
I came across a Medium article detailing a package called ImageAI, developed by Moses Olafenwa. This Python library empowers developers to build systems with self-contained deep learning and computer vision capabilities in a few simple lines of code. My interest piqued, I hit “save” to revisit the article later and sprinted back home in the twilight.
I decided to try the method myself. I set up the dependencies (TensorFlow, OpenCV, and Keras) and started reducing the code to practice using the photo from my evening run. When I hit “run”, the output unfolded and I saw accurate bounding boxes detecting objects: a person, a bicycle, and a boat.
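For readers who want to follow along, here is a minimal sketch of what that experiment looks like, assuming ImageAI 2.x's pretrained RetinaNet detector. The file paths are placeholders, and the RetinaNet weights file must be downloaded separately (the ImageAI documentation links to it); the `confident_labels` helper is my own addition for filtering the results.

```python
def detect_objects(input_path, weights_path, output_path):
    """Detect objects in one image with ImageAI's pretrained RetinaNet.

    All three paths are placeholders. Requires imageai plus its
    TensorFlow/Keras/OpenCV dependencies, and the RetinaNet weights
    file (e.g. resnet50_coco_best_v2.0.1.h5) downloaded separately.
    """
    from imageai.Detection import ObjectDetection  # local import: heavy dependency

    detector = ObjectDetection()
    detector.setModelTypeAsRetinaNet()
    detector.setModelPath(weights_path)
    detector.loadModel()
    # Returns a list of dicts, each with "name", "percentage_probability",
    # and "box_points"; also writes an annotated copy to output_path.
    return detector.detectObjectsFromImage(
        input_image=input_path, output_image_path=output_path
    )


def confident_labels(detections, min_probability=50.0):
    """Keep only the labels detected above a confidence threshold (percent)."""
    return [d["name"] for d in detections
            if d["percentage_probability"] >= min_probability]
```

On my skyline photo, a call like `detect_objects("run.jpg", "resnet50_coco_best_v2.0.1.h5", "run_annotated.jpg")` is all it takes to get the bounding boxes; `confident_labels` then trims the low-confidence noise.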
This activity of reducing the ready-to-use package for practice triggered scenarios that could be useful for the industrial machinery and components manufacturers that I work with. This would allow them to optimize or extend their business processes to address new challenges and opportunities. Here are some of those cases:
- Random bin-picking tasks: Instead of manual programs, robots could learn to detect objects that need to be picked up through deep learning. Here is an example of how Fanuc, the world’s largest industrial robot maker, is using deep reinforcement learning for automating tasks such as bin-picking.
- Automated inventory management: Manual inventory counting is a tedious and cost-intensive process, replete with counting errors. Deep learning is changing the game here. I tested ImageAI on another image from my runs that featured a lot of people. Here is what I observed:
Replace people with objects and there you have an inventory counting application. Multiple papers, like this one, are being written about testing applications of computer vision in this area.
- Visual inspection for quality control: Deep learning-based visual inspection is turning out to be much more accurate than human inspection in cases where products are not suitable for inspection by the naked eye. For example, chip manufacturing and complex assemblies built through additive manufacturing (3D printing) are ripe for computer vision applications. Landing.ai, a company started by Andrew Ng, has developed a framework that requires a very small training dataset to get started.
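The inventory-counting idea above reduces to a few lines once a detector is in place. This is a sketch under one assumption: the detections arrive in ImageAI's output shape, a list of dicts carrying "name" and "percentage_probability". The function name and threshold are my own choices.

```python
from collections import Counter


def count_inventory(detections, min_probability=50.0):
    """Turn raw object detections into per-label counts.

    This is the core of an automated inventory-count application:
    swap "person" for your stock-keeping unit and the counts become
    shelf quantities. Low-confidence detections are discarded.
    """
    confident = (d["name"] for d in detections
                 if d["percentage_probability"] >= min_probability)
    return Counter(confident)
```

Running this over the detections from a warehouse photo yields a tally like `{"box": 24, "pallet": 3}` instead of a clipboard count.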
What is the framework to bring AI/ML to a value-oriented practice?
Machine learning/AI has reached a level where companies are genuinely exploring which processes could be augmented or transformed through ML. However, not every organization has the ML/AI skill set of the large technology providers, and many are therefore overwhelmed. What should these organizations do?
1. First and foremost, understand the value chain of machine learning and your role
Know your position in the value chain of ML/AI. There are chipmakers (Nvidia, Graphcore), algorithm makers (Baidu, Google), platform and infrastructure providers (AWS, GCP, SCP, Azure), enterprise solution providers (SAP, Microsoft), industry solution providers, and corporate consumers (GSK, GE, Walmart, etc.). As you can imagine, your role depends on your position in the value chain, so you need to approach ML/AI opportunities accordingly. For example, an employee at Google, which is both a producer and a consumer of AI, could be creating new algorithms or testing existing ones on a new application, whereas an employee at another corporation would focus more on value-driven consumption, utilizing what the upstream players deliver.
As an employee, determine your role in the value chain of ML and set your goals accordingly.
2. Discover, consume, ideate to value
Given the time pressure to bring new ideas to market, there is little sense in getting caught in endless loops of learning these complex technologies. Ayasdi, an AI platform company, points to the huge opportunity in consuming existing AI packages and approaches. The need is to shift to consuming existing packages on GitHub and the ML APIs offered by companies like Microsoft, Google, SAP, and AWS, and to test scenarios that could bring significant benefits.
The whole point of Amazon’s DeepLens and Google’s Cloud AutoML is to let developers find near-endless applications of deep learning-based video, image, and text analysis. The discover-consume-ideate chain we saw with the object detection scenario above could very well be applied to your business, in turn reducing ML to practice. It could even become the de facto job description of teams responsible for ML and AI in such companies.
3. Think in terms of scale and order of change
In his book “Enlightenment Now,” Steven Pinker refers to scale and order of change as important criteria for any impactful policy decision. The criteria for applications of ML and AI should be no different.
Instead of building from scratch with no scale and value in mind, you must be scratching to build for scale and value.
After the discover-consume-ideate phase, the shortlisted scenarios can be tested in terms of time and dollars spent per training or inference task versus the value generated for the end customer. In simple economic terms, both supply chain surplus and consumer surplus need to be maximized.
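This back-of-the-envelope test can be made concrete as a simple ranking of scenarios by net value per task. Everything here is hypothetical: the scenario names, the dollar figures, and the assumption that value and cost can each be reduced to a single per-task number.

```python
def rank_scenarios(scenarios):
    """Rank candidate ML scenarios by net value per task.

    `scenarios` maps a scenario name to a (value_per_task, cost_per_task)
    pair in dollars, where cost covers training and inference spend and
    value is the benefit delivered to the end customer. Both figures are
    estimates the team supplies; this just sorts by the difference.
    """
    return sorted(
        scenarios,
        key=lambda name: scenarios[name][0] - scenarios[name][1],
        reverse=True,
    )
```

A team might feed in rough numbers like `{"visual-inspection": (12.0, 3.0), "bin-picking": (5.0, 1.0)}` and pursue the scenarios at the top of the list first, revisiting the estimates as real cost and value data come in.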
I have come across companies that do just this: they have scenarios in mind, they scan packages, papers, APIs, and GitHub for applications, they apply them in their own context, and they look for ways to improve the performance, only to be further inspired with more scenarios.
In other words, they take the plunge.
Read more about the value chain of AI here.