Using Efficiency and Scalability to Create AI Solutions in Startups
Guest post by Dr. Janet Bastiman, Chief Science Officer at StoryStream
Necessity has always been a driver of innovation and efficiency, and necessity exists in near-endless quantities in startup companies harnessing Artificial Intelligence (AI) to solve unique problems. How do you deliver cutting-edge, unique solutions with small teams when large multinationals with near-unlimited budgets can target anything they want?
This isn’t a problem that can be solved by pressuring your teams to work excessive hours. It is much more a question of appropriate focus, maximising the use of every asset you have, automation, and thinking as far outside the box as possible.
All good commercial AI starts with a customer problem, and for smaller companies, finding a niche problem and specialising can give great results. One of the best uses of artificial intelligence to drive business benefit is automating complex tasks, something we focus on at StoryStream, where our smart content platform delivers content automation solutions to the automotive industry. In automotive marketing, content has traditionally been manually annotated, with marketers going through images and text by hand to find the most relevant material for their needs. With the volume explosion of customer-created content, this is not scalable for any company. Manual curation means missing out on a lot of content: how can you be sure that you have the best? Unlike professional images, customer photos vary widely in how the car is positioned, the lighting, and the composition, making this a technically difficult problem. Even humans can struggle with some of these tasks! When the annotations need to be detailed and accurate down to the make and model of the vehicle in the image, this isn’t something you can outsource easily; you need domain experts.
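One way to make automated annotation practical at this scale is to auto-accept only the model's confident predictions and route the rest to a small pool of human experts. The sketch below is illustrative only, with an assumed prediction format and threshold, not StoryStream's actual pipeline:

```python
# Hypothetical triage step for automated annotation: predictions the model
# is confident about are accepted automatically; uncertain ones go to a
# human review queue. Threshold and data shapes are illustrative assumptions.

AUTO_ACCEPT_THRESHOLD = 0.90  # assumed cut-off; would be tuned in practice

def triage(predictions, threshold=AUTO_ACCEPT_THRESHOLD):
    """Split (image_id, label, confidence) predictions into auto-accepted
    annotations and a queue for expert review."""
    accepted, review_queue = [], []
    for image_id, label, confidence in predictions:
        if confidence >= threshold:
            accepted.append((image_id, label))
        else:
            review_queue.append((image_id, label, confidence))
    return accepted, review_queue

# Only the high-confidence make/model prediction is annotated automatically.
preds = [
    ("img-001", "Audi A4 2019", 0.97),
    ("img-002", "BMW 3 Series 2018", 0.62),
]
accepted, queued = triage(preds)
```

The design point is that human time is spent only on the hard cases, which is what makes detailed make-and-model annotation scale beyond a manual workflow.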
While some of our competitors have simplified the problem by only supporting certain years, models, or orientations of vehicle, we challenged ourselves to go deeper and broader. With a small team, limited budget, and fixed timescales, this was not a trivial task! Solving each problem usually generates several experiments we want to run in parallel, and we also want our outputs to be ready for production. Spending an AI researcher's time on repetitive tasks or on converting work for production is time taken away from building the cutting-edge solutions that can make the difference for the company, so all of this needs to be automated.
Our developers use their GPU laptops to get the initial research done: solving the technical problems, choosing what sort of AI solution to implement, and watching the training phase for a short while to ensure that everything is working as expected. Laptops have their resource limitations, and once we’re happy that the system is ready, we pass it to AWS for full training. This makes sense for us: our data is already stored in AWS S3 buckets, we’re often running multiple different solutions in parallel at this stage, and having the means to flex our resources here is essential. Even when we have found a potential solution, we’ll run several variations to make sure we have the best possible outcome. We can’t do all these experiments manually, so we’ve automated as much as possible so that our AI experts don’t have to waste time coding anything other than the AI networks themselves. Everything else is set with containers and configuration.
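The "everything else is containers and configuration" step can be sketched as expanding a single base config into one config per variation, each of which could then launch a containerised training job. The schema, keys, and S3 path below are illustrative assumptions, not an actual StoryStream setup:

```python
# Hypothetical sketch: expand one base experiment config into a config per
# variation, so parallel training runs need no hand-written launch code.
import itertools

def expand_variations(base_config, sweep):
    """Return one config dict per combination of sweep values.
    Each result could drive a separate containerised AWS training job."""
    keys = sorted(sweep)  # fixed order so run naming is deterministic
    runs = []
    for values in itertools.product(*(sweep[k] for k in keys)):
        cfg = dict(base_config)   # shared settings (data location, epochs, ...)
        cfg.update(zip(keys, values))  # this variation's settings
        runs.append(cfg)
    return runs

# Illustrative values only.
base = {"dataset": "s3://example-bucket/training-data", "epochs": 30}
sweep = {
    "learning_rate": [1e-3, 1e-4],
    "backbone": ["resnet50", "efficientnet"],
}
configs = expand_variations(base, sweep)  # four variations, one per combination
```

Researchers then only edit the sweep values; launching, tracking, and tearing down the parallel runs stays automated.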
With this level of automation, the team can stay focussed on the problems themselves. They have the time and resources to try some of the latest techniques that give us the edge over our much larger competitors, and our pipeline means that we can take individual solutions from idea to live in very short timescales.