AWS Startups Blog
How machine learning fits into new product development: A few pointers
Guest post by Daniel Everts, CTO at Nudge
AI picked up a lot of heat through 2017 and 2018. In what felt like an overnight rush, interest exploded, as fresh information, new techniques, new products, and new risks hit the scene. News reports came in massive waves, and businesses scrambled to figure out where AI fit into their product offerings.
The hype around AI was triggered by the rediscovery of a method first conceived in the early 1940s that became increasingly common in the early 2000s. This technique—deep learning—was pushed into the spotlight in 2012, after a new model proved itself dramatically better than its predecessors. Unlike many models in use at the time (and still today), it often does not need as much “tuning” or “know-how” as traditional methods to produce a great result.
Additionally, the decreasing cost of computing power, the availability of servers previously unreachable to all but the largest businesses (thanks, AWS!), and the use of GPUs, which allow large numbers of calculations to run simultaneously, all contributed to the hype. These have had a huge knock-on effect: as more people become interested in machine learning, more people produce information, code, and courses that help deep learning grow.
Where we are now
To date, scientists and engineers have made significant strides in the field of machine learning. These so-called “narrow” intelligences are becoming extremely good at singular tasks, like recognizing images and decoding voice and intent. We’re getting very, very good at this, to the point of being scary—check this out.
From a hands-on perspective, anyone can now supply a machine learning application with two sets of items. The first is the examples: samples of real-life data. These could be photographs, sentences from a book, or audio clips. The second, the target, is a label for each piece of example data supplied. This gives the application clear instructions: this exact data generated this output. This sentence is happy. This photo is a bird. This audio clip says “hello.”
For example, in the context of recognizing trees versus birds, you would supply 1,000 images of trees, with the “target” as “tree,” and another 1,000 images labeled as “bird.” Or, if you wanted a model to guess someone’s age based on their name, you would supply names as the examples and ages as the targets. From here, you would run the data through an equation and, across the thousands or millions of examples, make small changes: observing the result and adjusting the equation in response.
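That “observe the result, adjust the equation” loop can be sketched in a few lines. Below is a minimal, illustrative version using a toy model y = w * x + b and invented data; real systems use far larger models and datasets, but the training loop has the same shape.

```python
# Toy training loop: nudge an equation toward the targets, one small step at a time.
examples = [1.0, 2.0, 3.0, 4.0]   # the "examples" (input data)
targets = [3.0, 5.0, 7.0, 9.0]    # the "targets" (labels; here, y = 2x + 1)

w, b = 0.0, 0.0                   # start with an untrained equation
lr = 0.01                         # how big each small adjustment is

for _ in range(5000):             # many passes over the examples
    for x, y in zip(examples, targets):
        pred = w * x + b          # run the data through the equation
        error = pred - y          # observe the result
        w -= lr * error * x       # adjust the equation in response
        b -= lr * error

print(w, b)                       # approaches w = 2, b = 1
```

The same idea scales up: swap the one-line equation for a deep network with millions of adjustable numbers, and the loop above becomes gradient descent.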
Our view on things
At Nudge, we’ve always kept on the edge of new tech as it becomes available—the recent wave of upgrades to AI is no different. We very quickly launched Executive Notes, our automated tool to generate insights, and began work on CommonSense shortly after. We view AI like any other new tool—as something that needs to be utilized to add value, not implemented just for the sake of it. The mistake many make here is rushing so quickly to implement *something* that it ends up being disappointing—or even worse, a faulty, hard-to-validate feature. AI should be invisible, staying out of users’ direct sight while providing deep insight—not a flashy billboard dominating your experience.
What we’re working on
A rundown of Nudge’s current roadmap shows our focus—AI should give you an unfair advantage over your competition, give you highly personalized feedback, and abstract away the need for local expertise in highly specific domains.
- CommonSense
Nudge runs a lot of campaigns. CommonSense helps distill our learnings from years of seeing common patterns in campaigns—are your shares too low? We’ve seen this before. Impressions suddenly drop? This too. Using anomaly detection and a suite of clearly defined rules, we’re able to ensure your campaign is healthy, and stays healthy, throughout its lifecycle.
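To make the idea concrete, here is a hedged sketch of the kind of check an anomaly detector might run: flag a metric when it strays far from its historical baseline. The threshold, metric name, and numbers below are invented for illustration, not Nudge’s actual rules.

```python
def is_anomalous(history, latest, threshold=3.0):
    """Flag `latest` if it is more than `threshold` standard deviations
    from the mean of the historical values (a simple z-score rule)."""
    mean = sum(history) / len(history)
    variance = sum((x - mean) ** 2 for x in history) / len(history)
    std = variance ** 0.5
    if std == 0:
        return latest != mean
    return abs(latest - mean) / std > threshold

# Daily impressions for a (made-up) healthy campaign:
impressions = [10200, 9800, 10050, 9900, 10100]

normal = is_anomalous(impressions, 9950)   # within the usual range -> False
drop = is_anomalous(impressions, 2500)     # sudden drop -> True
print(normal, drop)
```

Production systems typically layer smarter detectors (seasonality-aware models, for instance) on top of explicit rules like this one, but the flag-when-it-deviates shape is the same.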
- Content prediction
Content prediction is one of Nudge’s broader experiments. What if you could know how your content would perform with your audience before you ran it? Content prediction delivers on this. We’ve invested heavily, right from the beginning, in collecting as much data as possible, and that investment makes this possible. We can say—based on more than 50 factors—roughly how long users will spend on your content, how much they’ll read, and how many will go on to share the content post-click. Then, as you tweak the content per the recommendations, we can show how your users’ behavior will begin to change.
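One simple way to sketch this kind of prediction—purely for illustration, since Nudge’s actual model and its 50+ factors aren’t public—is a nearest-neighbour lookup: score a new piece of content by the outcomes of the most similar past pieces. Every factor name and number below is invented.

```python
def predict_read_seconds(history, factors, k=2):
    """Predict an engagement metric for new content as the average
    outcome of the k most similar past pieces (Euclidean distance)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ranked = sorted(history, key=lambda item: distance(item[0], factors))
    return sum(seconds for _, seconds in ranked[:k]) / k

# Hypothetical factors: (word_count / 100, headline_length / 10, image_count)
# paired with the observed seconds users spent reading.
history = [
    ((8.0, 6.0, 2.0), 95.0),
    ((8.5, 5.5, 3.0), 105.0),
    ((2.0, 9.0, 0.0), 20.0),
    ((2.5, 8.0, 1.0), 30.0),
]

estimate = predict_read_seconds(history, (8.2, 6.0, 2.0))
print(estimate)  # lands near the similar long-read pieces
```

A real system would use a trained model rather than raw lookup, but the principle—mapping many content factors onto an observed behavior—is the same.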
- Content Planner
Who are your audiences? Nudge knows which publishers help drive the best attention for your particular target demographic. Why not use actual data, coupled with deep internal expertise, to help pick the right publisher, or combination of publishers, to achieve the best results? Why trust what a publisher tells you? Let us help you make the decision based on actual data, not a sales pitch.
What’s Next
AdTech and AI fit together. AdTech companies frequently have large datasets on hand, and a minor edge can have a huge multiplicative result. Huge, highly complex problems are now coming within reach: the ability to have thousands of dimensions act as factors—big and small—in purchasing or optimization decisions is an option that, until recently, would’ve been out of reach for all but the top layer of AdTech companies.
With these new classes of algorithms becoming available, we reach a point where humans can no longer always understand how a decision is made. These are called “black box” algorithms: you can feed one a piece of data, and when the result comes out, the operator will likely have no idea how it was generated. This could prove dangerous in the short term. If data is misused, say, by a DSP to provide low-quality recommendations, this could cause massive headaches at scale. The same applies, with even higher risk, in the medical industry, or with fake news. Examples like these, news about self-driving cars, and GDPR coming into force mean that there’s a ton more regulation on the way, and very likely a few more major incidents, while we figure out the best way to make use of this tech.
Looking more optimistically, in the very near future we’re going to see more and more small shops with access to the know-how, the hardware, and the same data many huge companies have. This will give rise to a generation of companies providing incredibly niche, incredibly valuable services (that perhaps don’t make sense to run at the scale of today’s top corporations) that previously could never have existed.
We’re hiring remote developers. If you’d like to find out more, visit giveitanudge.com/careers