Helping businesses bring more firepower to the fight against AI-fuelled disruptors is the name of the game for Integrate.ai, a Canadian startup that’s announcing a $30M Series A today.
The round is led by Portag3 Ventures, with Georgian Partners, Real Ventures and other (unnamed) individual investors also participating. The funding will be used for a big push into the U.S. market.
Integrate.ai’s early focus has been on retail banking, retail and telcos, says founder Steve Irvine, along with some startups which have data but aren’t necessarily awash with AI expertise to throw at it. (Not least because tech giants continue to hoover up talent.)
Its SaaS platform targets consumer-centric businesses — offering to plug paying customers into a range of AI technologies and techniques to optimize their decision-making so they can respond more savvily to their customers. Aka turning “high volume consumer funnels” into “flywheels”, if that’s a mental image that works for you.
In short it’s selling AI pattern spotting insights as a service via a “cloud-based AI intelligence platform” — helping businesses move from “largely rules-based decisioning” to “more machine learning-based decisioning boosted by this trusted signals exchange of data”, as he puts it.
Irvine gives the example of a large insurance aggregator the startup is working with to optimize the distribution of gift cards and incentive discounts to potential customers — with the aim of maximizing conversions.
“Obviously they’ve got a finite amount of budget for those — they need to find a way to be able to best deploy those… And the challenge that they have is they don’t have a lot of information on people as they start through this funnel — and so they have what is a classic ‘cold start’ problem in machine learning. And they have a tough time allocating those resources most effectively.”
“One of the things that we’ve been able to help them with is to, essentially, find the likelihood of those people to be able to convert earlier by being able to bring in some interesting new signal for them,” he continues. “Which allows them to not focus a lot of their revenue or a lot of those incentives on people who either have a low likelihood of conversion or are most likely to convert. And they can direct all of those resources at the people in the middle of the distribution — where that type of a nudge, that discount, might be the difference between them converting or not.”
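To make the logic Irvine describes concrete, the allocation strategy could be sketched roughly as follows — a purely illustrative example, not Integrate.ai's actual code (the probability band thresholds and the budget are invented for the sketch):

```python
# Illustrative sketch: spend a fixed incentive budget on users in the
# middle of the predicted-conversion distribution, skipping those who
# are very likely or very unlikely to convert anyway.

def allocate_incentives(predictions, budget, low=0.2, high=0.8):
    """predictions: list of (user_id, predicted_conversion_probability).
    Returns up to `budget` user_ids chosen to receive an incentive.
    The 0.2/0.8 band thresholds are assumptions for illustration."""
    # Keep only the "persuadable" middle band of the distribution.
    middle = [(uid, p) for uid, p in predictions if low <= p <= high]
    # Spend the budget on users closest to the decision boundary
    # (p ~ 0.5), where a nudge is most likely to flip the outcome.
    middle.sort(key=lambda item: abs(item[1] - 0.5))
    return [uid for uid, _ in middle[:budget]]

users = [("a", 0.05), ("b", 0.45), ("c", 0.95), ("d", 0.55), ("e", 0.30)]
print(allocate_incentives(users, budget=2))  # ['b', 'd']
```

A real system would score users with a trained model rather than fixed thresholds, and would ideally rank by estimated uplift (the change in conversion probability caused by the discount) rather than by raw conversion probability.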
He says feedback from early customers suggests the approach has boosted profitability by around 30% on average for targeted business areas — so the pitch is that businesses are seeing the SaaS easily pay for itself. (In the cited case of the insurer, he says they saw a 23% boost in performance — against what he couches as already “a pretty optimized funnel”.)
“We find pretty consistent [results] across a lot of the companies that we’re working with,” he adds. “Most of these decisions today are made by a CRM system or some other more deterministic software system that tends to over attribute people that are already going to convert. So if you can do a better job of understanding people’s behaviour earlier you can do a better job at directing those resources in a way that’s going to drive up conversion.”
The former Facebook marketing exec, who between 2014 and 2017 ran a couple of global marketing partner programs at Facebook and Instagram, left the social network at the start of last year to found the business — raising $9.6M in seed funding in two tranches, according to Crunchbase.
The eighteen-month-old Toronto-based AI startup now touts itself as one of the fastest growing companies in Canadian history, with a headcount of around 40 at this point and a plan to grow staff 3x to 4x over the next 12 months. Irvine is also targeting 10x revenue growth with the new funding in place — gunning to carve out a leadership position in the North American market.
One key aspect of Integrate.ai’s platform approach is that its customers aren’t only being helped to extract more and better intel from their own data holdings, via processes such as structuring the data for AI processing (though Irvine says it’s doing that too).
The idea is they also benefit from the wider network, deriving relevant insights across Integrate.ai’s pooled base of customers — in a way that does not trample over privacy in the process. At least, that’s the claim.
(It’s worth noting Integrate.ai’s network is not a huge one yet, with customers numbering in the “tens” at this point — the platform only launched in alpha around 12 months ago and remains in beta now. Named customers include the likes of Telus, Scotiabank, and Corus.)
So the idea is to offer an alternative route to boost business intelligence vs the “traditional” route of data-sharing by simply expanding databases — because, as Irvine points out, literal data pooling is “coming under fire right now — because it is not in the best interests, necessarily, of consumers; there’s some big privacy concerns; there’s a lot of security risk which we’re seeing show up”.
What exactly is Integrate.ai doing with the data then? Irvine says its Trusted Signals Exchange platform uses some “pretty advanced techniques in deep learning and other areas of machine learning to be able to transfer signals or insights that we can gain from different companies such that all the companies on our platform can benefit by delivering more personalized, relevant experiences”.
“But we don’t need to ever, kind of, connect data in a more traditional way,” he also claims. “Or pull personally identifiable information to be able to enable it. So it becomes very privacy-safe and secure for consumers which we think is really important.”
He further couches the approach as “pretty unique”, adding it “wouldn’t even have been possible probably a couple of years ago”.
From Irvine’s description the approach sounds similar to the data linking (via mathematical modelling) route being pursued by another startup, UK-based InfoSum — which has built a platform that extracts insights from linked customer databases while holding the actual data in separate silos. (And InfoSum, which was founded in 2016, also has a founder with a behind-the-scenes view of the inner workings of the social web — in the form of DataSift’s Nick Halstead.)
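For a flavour of how customer records can be matched without either party exchanging raw data, here is a deliberately simplified sketch of hashed-identifier matching — an illustration of the general silo concept only, not how InfoSum or Integrate.ai actually work (production systems use far stronger protocols, such as salted hashing or private set intersection, since plain hashing of low-entropy identifiers like emails is vulnerable to dictionary attacks):

```python
# Simplified silo-style matching: two parties compare hashed
# identifiers so neither reveals its raw customer emails.

import hashlib

def hashed_ids(emails):
    """Normalize each email, then hash it so only the digest is shared."""
    return {hashlib.sha256(e.strip().lower().encode()).hexdigest()
            for e in emails}

company_a = ["alice@example.com", "bob@example.com"]
company_b = ["Bob@Example.com ", "carol@example.com"]

# Each side shares only digests; the intersection reveals overlap size
# without exposing the non-matching records.
overlap = hashed_ids(company_a) & hashed_ids(company_b)
print(len(overlap))  # 1 — only the shared customer matches
```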
Facebook’s own Custom Audiences product — which lets advertisers upload and link their customer databases with the social network’s data holdings for marketing purposes — is the likely inspiration behind all these ventures.
Irvine says he spotted the opportunity to build this line of business having been privy to a market overview in his role at Facebook, meeting with scores of companies in his marketing partner role and getting to hear high level concerns about competing with tech giants. He says the Facebook job also afforded him an overview on startup innovation — and there he spied a gap for Integrate.ai to plug in.
“My team was in 22 offices around the world, and all the major tech hubs, and so we got a chance to see any of the interesting startups that were getting traction pretty quickly,” he tells TechCrunch. “That allowed us to see the gaps that existed in the market. And the biggest gap that I saw… was these big consumer enterprises needed a way to use the power of AI and needed access to third party data signals or insights to enable them to transition to this more customer-centric operating model to have any hope of competing with the large digital disruptors like Amazon.
“That was kind of the push to get me out of Facebook, back from California to Toronto, Canada, to start this company.”
Again on the privacy front, Irvine is a bit coy about going into exact details about the approach. But he is unequivocal and emphatic about how ad tech players are stepping over the line — having seen into that Pandora’s box for years — so his rationale for wanting to do things differently at least looks clear.
“A lot of the techniques that we’re using are in the field of deep learning and transfer learning,” he says. “If you think about the ultimate consumer of this data-sharing, that is insight sharing, it is at the end these AI systems or models. Meaning that it doesn’t need to be legible to people as an output — all we’re really trying to do is increase the map; make a better probabilistic decision in these circumstances where we might have little data or not the right data that we need to be able to make the right decision. So we’re applying some of the newer techniques in those areas to be able to essentially kind of abstract away from some of the more sensitive areas, create representations of people and patterns that we see between businesses and individuals, and then use that as a way to deliver more personalized predictions — without ever having to know the individual’s personally identifiable information.”
“We do do some work with differential privacy,” he adds when pressed further on the specific techniques being used. “There’s some other areas that are just a little bit more sensitive in terms of the work that we’re doing — but a lot of work around representation learning and transfer learning.”
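Differential privacy, which Irvine name-checks, works by adding calibrated random noise to aggregate outputs so that no single individual’s record can be inferred from the result. A minimal sketch (illustrative only — the epsilon default is an assumed value, and this is not Integrate.ai’s implementation):

```python
# Minimal differential-privacy sketch: a noisy counting query.

import random

def dp_count(records, predicate, epsilon=1.0):
    """Return a count satisfying epsilon-differential privacy.
    A counting query has sensitivity 1 (adding or removing one
    person changes the count by at most 1), so Laplace noise of
    scale 1/epsilon suffices. epsilon=1.0 is an illustrative default."""
    true_count = sum(1 for r in records if predicate(r))
    # Laplace(0, 1/epsilon) noise, sampled as the difference of
    # two independent exponentials with mean 1/epsilon.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

ages = [23, 31, 45, 52, 37]
# The true count of people over 30 is 4; the output hovers around it.
print(dp_count(ages, lambda a: a > 30))
```

Smaller epsilon means more noise and stronger privacy; in practice the trade-off is tuned per query and budgeted across an entire analysis, not chosen once.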
Integrate.ai has published a whitepaper — for a framework to “operationalize ethics in machine learning systems” — and Irvine says it’s been called in to meet and “share perspectives” with regulators based on that.
“I think we’re very GDPR-friendly based on the way that we have thought through and constructed the platform,” he also says when asked whether the approach would be compliant with the European Union’s tough new privacy framework (which also places some restrictions on entirely automated decisions when they could have a significant impact on individuals).
“I think you’ll see GDPR and other regulations like that push more towards these type of privacy preserving platforms,” he adds. “And hopefully away from a lot of the really creepy, weird stuff that is happening out there with consumer data that I think we all hope gets eradicated.”
For the record, Irvine denies any suggestion that he was thinking of his old employer when he referred to “creepy, weird stuff” done with people’s data — saying: “No, no, no!”
“What I did observe when I was there in ad tech in general, I think if you look at that landscape, I think there are many, many… worse examples of what is happening out there with data than I think the ones that we’re seeing covered in the press. And I think as the light shines on more of that ecosystem of players, I think we will start to see that the ways they’ve thought about data, about collection, permissioning, usage, I think will change drastically,” he adds.
“And the technology is there to be able to do it in a much more effective way without having to compromise results in too big a way. And I really hope that that sea change has already started — and I hope that it continues at a much more rapid pace than we’ve seen.”
But while privacy concerns might be reduced by using an alternative to traditional data-pooling, depending on the exact techniques involved, additional ethical considerations come sharply into view when companies seek to supercharge their profits by automating decision-making in sensitive and impactful areas such as discounts (meaning some users stand to gain more than others).
The point is that an AI system expert at spotting the lowest-hanging fruit (in conversion terms) could start selectively distributing discounts to a narrow sub-section of users only — meaning other people might never even be offered discounts.
In short, it risks the platform creating unfair and/or biased outcomes.
Integrate.ai has recognized the ethical pitfalls, and appears to be trying to get ahead of them — hence its aforementioned ‘Responsible AI in Consumer Enterprise’ whitepaper.
Irvine also says that raising awareness around issues of bias and “ethical AI” — and promoting “more responsible use and implementation” of its platform is another priority over the next twelve months.
“The biggest concern is the unethical treatment of people in a lot of common, day-to-day decisions that companies are going to be making,” he says of problems attached to AI. “And they’re going to do it without understanding, and probably without bad intent, but the reality is the results will be the same — which is perpetuating a lot of biases and stereotypes of the past. Which would be really unfortunate.
“So hopefully we can continue to carve out a name, on that front, and shift the industry more to practices that we think are consistent with the world that we want to live in vs the one we might get stuck in.”
The whitepaper was produced by a dedicated internal team, which he says focuses on AI ethics and fairness issues, and is headed up by VP of product & strategy, Kathryn Hume.
“We’re doing a lot of research now with the Vector Institute for AI… on fairness in our AI models, because what we’ve seen so far is that — if left unattended, if all we did was run these models and not adjust for some of the ethical considerations — we would just perpetuate biases that we’ve seen in the historical data,” he adds.
“We would pick up patterns that are more commonly associated with maybe reinforcing particular stereotypes… so we’re putting a really dedicated effort — probably abnormally large, given our size and stage — towards leading in this space, and making sure that that’s not the outcome that gets delivered through effective use of a platform like ours. But actually, hopefully, the total opposite: You have a better understanding of where those biases might creep in and they could be adjusted for in the models.”
Combating unfairness in this type of AI tool would mean a company having to optimize conversion performance a bit less than it otherwise could.
Though Irvine suggests that’s likely just in the short term. Over the longer term he argues you’re laying the foundations for greater growth — because you’re building a more inclusive business — saying: “We have this conversation a lot. I think it’s good for business, it’s just the time horizon that you might think about.”
“We’ve got this window of time right now, that I think is a really precious window, where people are moving over from more deterministic software systems to these more probabilistic, AI-first platforms… They just operate much more effectively, and they learn much more effectively, so there will be a boost in performance no matter what. If we can get them moved over right off the bat onto a platform like ours that has more of an ethical safeguard, then they won’t notice a drop off in performance — because it’ll actually be better performance. Even if it’s not optimized fully for short term profitability,” he adds.
“And we think, over the long term it’s just better business if you’re a socially conscious, ethical company. We think, over time, especially this new generation of consumers, they start to look out for those things more… So we really hope that we’re on the right side of this.”
He also suggests that the wider visibility afforded by having AI doing the probabilistic pattern spotting (vs just using a set of rules) could even help companies identify unfairnesses they don’t even realize might be holding their businesses back.
“We talk a lot about this concept of mutual lifetime value — which is how do we start to pull in the signals that show that people are getting value in being treated well, and can we use those signals as part of the optimization. And maybe you don’t have all the signal you need on that front, and that’s where being able to access a broader pool can actually start to highlight those biases more.”
This article originally appeared on TechCrunch.