After a career as an incident investigator with risk management firms like Kroll and FTI Consulting, Aaron Narva was working at compliance software maker Exiger with a big international bank client. He was responsible for monitoring the bank’s legal compliance after it had made headlines a decade earlier for a money-laundering scandal.
“While I was at Exiger, we acquired some software businesses, including an AI software tool that helped pull risk out of unstructured public records. And we built a tool to help identify corruption and sanctions risk in business relationships for very large companies,” Narva told TechCrunch.
That work gave him the idea for Conflixis. Hospitals and other big medical practices face corruption risks similar to those banks face. Drug companies and device manufacturers are so famously chummy with doctors that doctors are required to disclose conflicts of interest: junkets, consulting fees, sponsorship of research grants and the like.
Much research shows that doctors who get too chummy are more likely to prescribe those companies’ drugs and devices, whether or not those products produce better outcomes for the patient. The risk is so great that the government runs a database called OpenPaymentsData.com, where anyone can see conflict-of-interest disclosures.
Yet disclosing such conflicts doesn’t stop the problem, which puts hospitals at legal risk. A host of laws prohibits such behavior by doctors, from the Stark Law to the Anti-Kickback Statute (AKS).
At the same time, commercial interests do need to work with physicians – the medical experts – to help them research new drugs and build devices. So not every interaction is forbidden.
Narva envisioned an AI-powered software-as-a-service that would identify for hospitals and big medical practices the actual situations that put the hospital – if not the patient – at risk.
“A big health system might have 200,000 relationships between its doctors and vendors and suppliers,” Narva said. “Which of those 200,000 relationships is impacting you from any one of like six risks?”
Risks range from running afoul of laws to unfavorable medical outcomes. The federal government also provides a database that publishes hospital quality-of-care information.
Narva called a friend he’d known since the eighth grade, Joseph Bergen, then director of engineering at BuzzFeed, to ask his opinion of the idea. Bergen liked it so much that he quit his job and became a co-founder.
Conflixis works by ingesting data from OpenPaymentsData.com, the hospital’s procurement data, claims data, patient outcomes records, conflict-of-interest forms, and other sources. It analyzes all the conflict points to identify the ones a hospital should investigate.
“OK, we looked at all 5,000 or 10,000 relationships. Here are the seven that you need to actually look at. Like, we boiled the ocean and here are the seven,” he said, offering an example.
Conflixis takes it a step further and can also predict a hospital’s spending and suggest ways to reduce it. For instance, is the hospital buying a more expensive piece of equipment, rather than a less expensive one, based on a recommendation from a doctor who has a relationship with that vendor?
“We can make it so that hospitals are reducing their regulatory risk significantly, increasing their trust and transparency with their patients, yes, but also making better operational decisions about what they’re buying,” he says.
Founded in 2023, the company already has a handful of clients with just under $5 million in revenue, Narva said. It just announced a $4.2 million seed round co-led by Lerer Hippeau (the fund founded by BuzzFeed’s former chairman Kenneth Lerer) and Origin Ventures, with mark vc, Springtime Ventures, and pre-seed investor Cretiv Capital participating.
Conflixis joins a crowded field of health industry compliance software companies like Compliatric and Symplr, although some are more focused on protecting patient data than on corruption and procurement.
Narva says what sets Conflixis apart is the way it has married its employees’ careers in investigative work with LLMs. It has modified off-the-shelf models to look for patterns in the data based on “our backgrounds in transaction monitoring and corruption in big data investigations,” he says.