A few years ago, Karine Mellata and Michael Lin met while working on Apple's fraud engineering and algorithmic risk team. Both engineers, Mellata and Lin helped tackle online abuse problems including spam, botting, account security and developer fraud for Apple's growing customer base.
Despite their efforts to develop new models to keep up with evolving abuse patterns, Mellata and Lin felt they were falling behind — and stuck rebuilding key elements of their trust and safety infrastructure.
“As regulation puts more pressure on teams to assemble somewhat ad hoc trust and safety responses, we saw a real opportunity for us to help modernize this industry and contribute to a safer internet for everyone,” Mellata told TechCrunch in an email interview. “We dreamed of a system that could magically adapt as quickly as the abuse itself.”
So Mellata and Lin co-founded Intrinsic, a startup that aims to give safety teams the tools they need to prevent abusive behavior in their products. Intrinsic recently raised $3.1 million in a seed round with participation from Urban Innovation Fund, Y Combinator, 645 Ventures and Okta.
Intrinsic’s platform is designed to moderate both user-generated and AI-generated content, providing infrastructure that enables customers—primarily social media companies and e-commerce marketplaces—to detect and take action on content that violates their policies. Intrinsic focuses on integrating safety products, automatically orchestrating tasks such as banning users and flagging content for review.
“Intrinsic is a fully customizable AI content moderation platform,” said Mellata. “For example, Intrinsic can help a publishing company producing marketing materials avoid giving financial advice, which carries legal liability. Or we can help marketplaces identify listings like brass knuckles, which are illegal in California but not in Texas.”
Mellata argues that there are no off-the-shelf classifiers for these kinds of nuanced categories, and that even a well-resourced trust and safety team would need several weeks—or even months—of engineering time to add new automated detection categories in-house.
Asked about rival platforms like Spectrum Labs, Azure and Cinder (which is a near-direct competitor), Mellata says she sees Intrinsic standing out on (1) explainability and (2) a vastly expanded toolbox. Intrinsic, she explained, allows customers to “ask” it about mistakes it makes in content moderation decisions and offers explanations of its reasoning. The platform also hosts manual review and flagging tools that let customers fine-tune moderation models on their own data.
“Most conventional trust and safety solutions are not flexible and are not built to evolve with abuse,” said Mellata. “Resource-constrained trust and safety teams are looking to vendors for help now more than ever, trying to reduce moderation costs while maintaining high safety standards.”
In the absence of third-party auditing, it’s hard to tell how accurate a particular vendor’s moderation models are—and whether they’re susceptible to the kinds of biases that plague content moderation models elsewhere. But Intrinsic, in any case, appears to be gaining traction, with “large, established” corporate clients signing contracts in the “six-figure” range on average.
Intrinsic’s near-term plans involve growing its three-person team and extending its moderation technology to cover not only text and images but also video and audio.
“The broader slowdown in tech is driving greater interest in automation for trust and safety, which puts Intrinsic in a unique position,” said Mellata. “COOs are interested in reducing costs. Chief compliance officers are interested in reducing risk. Intrinsic helps with both. We’re cheaper, faster and catch far more abuse than existing vendors or equivalent in-house solutions.”