A managed care organization spent over a year and millions of dollars on an AI tool that didn't work. Their team was frustrated, leadership questioned every investment, and despite sophisticated technology and abundant data, something crucial was missing: context.
We live in a time where AI promises to solve everything—from predicting customer behavior to diagnosing rare diseases. Yet for every success story in tech headlines, there's a quieter reality: organizations are investing heavily in AI solutions that simply don't deliver.
The Problem: AI Doesn't Know Your World
"There's usually an expectation that AI is going to fill in the gaps in terms of context," explains David Monroe, CCNY's Director of Business Development. "The idea that it would understand your clinical practice, your evidence-based practices, and how you've configured those practices to your population. However, those things come from the people who do the work."
This touches on a fundamental misunderstanding about how AI operates. While large language models are trained on vast amounts of information, they lack a nuanced understanding of your specific environment, constraints, and objectives.
Think of it like having access to the world's most extensive library, but not knowing which books are relevant to your particular challenge. AI might connect economics to your social media strategy, but what you really need is insight into community-level factors that affect youth outcomes.
Context isn't just about being more specific in your prompts—it's about providing the environmental framework that shapes how information should be interpreted and applied.
For this managed care organization, context meant the knowledge only its own people held: its clinical practice, its evidence-based practices, and how those practices had been configured to its specific population.
The Solution: Collaborative Intelligence
Working with CCNY, they took a different approach. We chose supervised learning to maintain visibility into how conclusions were reached. Over nine months, we developed predictive analytics spanning more than 350,000 covered lives, but more importantly, we involved the client throughout the iterative process.
The breakthrough came not from the final algorithm, but from the collaborative approach. Rather than delivering a black-box solution, we worked alongside their team, sharing analysis and incorporating their expertise at each step. When they received the final predictive model, they understood not just what it could do, but how it worked and what its limitations were.
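The difference between a black-box deliverable and a model a client can interrogate can be sketched in a few lines. The example below is purely illustrative (it is not CCNY's actual model, and the feature names and weights are hypothetical): a transparent linear scoring model where every feature's contribution to a prediction is visible, so the client team can audit exactly why a given score was produced.

```python
# Illustrative sketch only: a transparent scoring model whose reasoning is
# fully inspectable. Feature names and weights are hypothetical, not CCNY's.

WEIGHTS = {
    "prior_er_visits": 0.6,       # hypothetical feature weights a client
    "missed_appointments": 0.3,   # team could review and challenge
    "respite_gap": 0.5,
}

def risk_score(member: dict) -> tuple[float, dict]:
    """Return the total score plus a per-feature breakdown.

    The breakdown is the point: unlike a black box, every contribution
    to the final number is visible and auditable."""
    contributions = {f: WEIGHTS[f] * member.get(f, 0) for f in WEIGHTS}
    return sum(contributions.values()), contributions

# Example: score one member and see exactly where the score came from
score, breakdown = risk_score({"prior_er_visits": 2, "missed_appointments": 1})
```

A real predictive model would be far richer, but the design principle is the same: the client receives not just a prediction, but the reasoning behind it.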
The Results
The organization could now interpret the model's predictions in light of its own population and translate them into program decisions.
One particularly human insight emerged from the data: for families supporting children with developmental disabilities, respite care was crucial—but it didn't matter when or how it was delivered. Weekends, weeknights, summer programs—the specific timing mattered less than simply giving families a break from challenging circumstances.
This insight felt deeply human and actionable. It gave the organization hope and concrete direction for program development.
The Key Takeaway for Your Organization
Perhaps most importantly, effective AI implementation requires what researchers refer to as "the human in the loop." AI should function as collaborative intelligence, not autonomous decision-making.
Success comes from treating AI as a sophisticated intern rather than an expert consultant. You bring the context, domain knowledge, and environmental understanding; AI brings processing power and pattern recognition. Together, you generate insights that neither could achieve alone.
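The "human in the loop" pattern above can be made concrete with a small routing sketch (illustrative only; the function name and threshold are hypothetical, not a specific CCNY tool): confident model outputs are accepted automatically, while anything below a confidence threshold is sent to a domain expert rather than acted on by the machine alone.

```python
# Minimal human-in-the-loop sketch (hypothetical names and threshold):
# the model proposes, and low-confidence predictions are routed to a
# human reviewer instead of being acted on automatically.

def route_prediction(label: str, confidence: float,
                     threshold: float = 0.8) -> tuple[str, bool]:
    """Return (label, needs_review).

    Predictions at or above the threshold are accepted automatically;
    the rest are flagged for a domain expert to decide."""
    if confidence >= threshold:
        return label, False   # model is confident: accept automatically
    return label, True        # low confidence: a human makes the call

# Example: triaging a batch of model outputs into a review queue
predictions = [("high-risk", 0.92), ("high-risk", 0.55), ("low-risk", 0.88)]
review_queue = [p for p in predictions if route_prediction(*p)[1]]
```

The threshold itself is a judgment call that belongs to the people who know the domain, which is exactly the division of labor the intern analogy describes.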
For organizations considering AI investments, success requires the same ingredients: domain context supplied by your own people, collaborative rather than black-box development, and a team that understands both what the model can do and where its limits lie.
The question isn't whether the technology is sophisticated enough—it probably is. The question is whether you're prepared to invest in the human elements that make AI truly valuable.