Beyond the Basics: Lessons From Morningstar’s AI Evolution

How our inside-out approach to AI adoption is evolving from a tool-centric focus to a holistic strategy—with lessons for anyone on their AI journey.

Artificial intelligence isn’t new. But building an organization that uses it responsibly, productively, and at scale? That is an ongoing challenge.

At Morningstar, we still test, question, and recalibrate. What’s been valuable for us, and what may help others in the industry move beyond experimentation, is how we’ve embedded AI learning into our workflows.

Our learnings and takeaways aren’t a blueprint. Instead, they’re a set of real-world lessons—from cultural shifts and infrastructure bets to data readiness and regulatory governance. They may be especially relevant for firms facing the same questions we asked ourselves: Where can AI in financial services really help? What makes it work? And how do we ensure it’s compliant, credible, and truly useful?

Lesson 1: Data Quality Is Key

Before models, before prompts, before prototypes—there’s content.

The core of Morningstar’s business has always been data and research. It was critical, from day one, that our AI solutions were firmly grounded in the same data and independent research that our users trust and rely on every day. Ensuring that we could provide AI-driven systems and agents with seamless access to this content is a foundational element of our AI strategy.

Our terabytes of structured datasets cover more than a million investment vehicles across a wide range of asset classes, including managed investments, equities, indexes, and cryptocurrency. Another input is Morningstar’s decades of independent research reports, written by industry-leading global analysts.

Why it worked:

  • We started with a strong foundation of well-organized, high-quality, and easy-to-access content
  • Research, data, and AI engineering teams collaborated early and often
  • Building additional AI-accessible content services improves quality and makes it easier to create new AI workflows

No matter how sophisticated your AI ambitions are, they’ll always be limited by the quality of your data. Inconsistent, unstructured, or outdated pipelines don’t just slow progress; they actively derail it. Many so-called AI failures in finance, from model hallucinations to compliance breaches, are ultimately rooted in a failure to provide AI with appropriate context to ground its responses.

At Morningstar, our structured, enriched investment datasets and extensive libraries of research content don’t just power our own tools; they also enable clients to build their own. To make this easier, we’ve introduced AI-ready versions of our content that clients can use to ground their AI applications, no matter which platform they’re building on. For example, if a client is building an application on Azure AI Foundry, they’ll find we’ve already integrated the Direct AI Agent there. We’ve also just launched the Direct MCP Server, which is available off-the-shelf for users of Claude for Enterprise.

Our agents and MCP Server can be used on any platform. And because we own and operate the pipelines, with domain experts continuously evaluating and improving the quality of the content flowing through them, the experience is far more turnkey. In a market where many firms are still struggling with data readiness, offering this kind of flexibility is another way we help our clients move faster and build smarter.
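
To make the grounding workflow concrete, here is a minimal sketch of a client application calling tools on an MCP server via the open-source MCP Python SDK. The server launch command and the search_research tool name are hypothetical placeholders; the actual Direct MCP Server’s transport, configuration, and tool set may differ.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical launch command; the real Direct MCP Server's transport
# and configuration may differ.
SERVER = StdioServerParameters(command="morningstar-mcp-server", args=[])

async def fetch_grounding(query: str) -> str:
    """Retrieve vetted content from an MCP server to ground an LLM answer."""
    async with stdio_client(SERVER) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover the tools the server exposes before calling any.
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])
            # Call a (hypothetical) research-search tool.
            result = await session.call_tool(
                "search_research", arguments={"query": query, "limit": 3}
            )
            return "\n".join(
                item.text for item in result.content if hasattr(item, "text")
            )

if __name__ == "__main__":
    print(asyncio.run(fetch_grounding("large-cap growth fund outlook")))
```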

Organizations looking to scale AI meaningfully must first lay the groundwork: clean, reliable, well-governed AI data pipelines. We’ve learned that’s a key place to start.

Lesson 2: Trust Is Built on Transparency

When we launched Mo, our advisor-facing digital assistant, we weren’t simply aiming to shorten research time. We were trying to solve for something advisors consistently said they needed: faster access to insights, without sacrificing credibility.

Mo summarizes long-form analyst research with links, citations, and audit trails. It’s not a flashy chatbot; it’s a structured query layer grounded in source data.

We maintain a strict standard—every answer must be explainable and traceable. That’s why we’ve developed internal tooling that audits answers against source content and flags responses with low confidence. It’s the same rigor we apply to human-generated research.
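
As a rough illustration of that kind of audit, here is a minimal sketch (not our internal tooling) that flags answer sentences with weak lexical support in the cited sources. A production system would use stronger semantic matching, but the shape is the same: measure support per claim and flag anything below a threshold.

```python
import re

def _tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def audit_answer(answer: str, sources: list[str], threshold: float = 0.5):
    """Flag answer sentences whose tokens are poorly covered by every source."""
    source_tokens = [_tokens(s) for s in sources]
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        toks = _tokens(sentence)
        if not toks:
            continue
        # Support = best fraction of sentence tokens found in a single source.
        support = max(len(toks & st) / len(toks) for st in source_tokens)
        if support < threshold:
            flagged.append((sentence, round(support, 2)))
    return flagged  # an empty list means every sentence met the threshold

# The second sentence has no backing in the source, so it gets flagged.
sources = ["The fund's expense ratio fell to 0.45% in 2023."]
answer = "The fund's expense ratio fell to 0.45% in 2023. It beat its benchmark."
print(audit_answer(answer, sources))
```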

We’re also exploring new layers of AI-readiness: topic modeling, content tagging, and structured metadata that allow our content to be more dynamically retrieved, summarized, and embedded in user workflows.
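
A small sketch shows what structured metadata buys you: when each content chunk carries tags, retrieval can filter on them before any ranking or summarization happens. The field names and records below are illustrative, not our actual tagging schema.

```python
from dataclasses import dataclass, field

@dataclass
class ContentChunk:
    text: str
    # Illustrative metadata fields; actual tagging schemas vary.
    topics: set = field(default_factory=set)
    asset_class: str = ""
    as_of: str = ""  # date of the underlying data

def retrieve(chunks, topic=None, asset_class=None):
    """Metadata-filtered retrieval: narrow the candidate pool before ranking."""
    return [
        c for c in chunks
        if (topic is None or topic in c.topics)
        and (asset_class is None or c.asset_class == asset_class)
    ]

library = [
    ContentChunk("Fee pressure continues across index funds.",
                 topics={"fees"}, asset_class="funds", as_of="2024-06-30"),
    ContentChunk("Custody rules for digital assets are evolving.",
                 topics={"regulation"}, asset_class="crypto", as_of="2024-05-15"),
]
print([c.text for c in retrieve(library, topic="fees")])
```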

Why it worked:

  • Every answer can be traced back to a vetted source
  • We applied the same validation protocols to Mo as we do to human-generated research
  • We continually evaluate the quality of Mo’s responses based on ongoing usage and user-provided feedback

Working on Mo taught us that generative AI must prioritize explainability over novelty. Advisors didn’t need a chatbot with personality; they needed answers they could trust, backed by sources they could see. These systems didn’t emerge fully formed. Instead, they evolved as we listened to users and worked through challenges.

For others building AI, we’ve learned that it’s worth investing early in the underlying systems that keep answers grounded, auditable, and usable in the real world. In high-trust environments, AI has to repeatedly prove itself—and that’s okay.

Mo also highlights an important milestone for Morningstar: we’ve successfully externalized AI through client-facing tools. While many firms are still experimenting internally, our digital assistant represents a fully deployed, advisor-ready system built on our trusted research infrastructure.

Lesson 3: Governance Can Accelerate, Not Stifle

While many firms may see regulation as a reason to wait, we took it as an opportunity to begin building guardrails. We established our Responsible AI Council early, adopting responsible AI principles anchored in the practices of leading AI firms. The Council brings together legal, risk, security, technology, and product leaders who establish key ground rules and operating principles for teams working with AI at Morningstar.

We treat AI like we treat any sensitive capability: assess the risks, mitigate them thoughtfully, and communicate our approach clearly to clients and stakeholders.

The mindset that any amount of risk is unacceptable is incompatible with AI progress. Accordingly, we don’t aim for zero risk—we aim for managed, transparent risk, with robust processes for both risk assessment and monitoring.

Why it worked:

  • Risk became part of the design process, not a post-launch filter
  • Clients with AI concerns were met with transparency
  • Governance wasn't a roadblock; it was an accelerant for trust

We didn’t have all the answers when we started building governance into our AI processes, but we knew we couldn’t treat it as an afterthought. Establishing cross-functional oversight early helped us navigate uncertainty with more confidence and respond to client concerns with clarity. Putting structure around risk has allowed us to forge ahead more confidently as we seek to innovate on behalf of our customers.

For others on a similar path, learn from our experience—governance doesn’t have to slow things down. When embedded thoughtfully, it can create the trust and alignment needed to move forward faster and more confidently.

Lesson 4: Adoption Starts With Culture, Not Code

When we were starting out, many of our generative AI efforts were siloed or seen as speculative. This likely sounds familiar, as most firms begin with curiosity and caution—and we did, too. But curiosity only becomes adoption when users see personal value. We knew we needed to give our teams the ability to see AI in action in the areas where it touched their daily roles.

That’s why we launched chat.morningstar.com, an internal sandbox where anyone in our organization could build AI bots that addressed day-to-day workflow pains.

Over 150 custom bots have been created in the past six months by teams across Morningstar, ranging from investor communications to legal. These include tools to generate RFP responses, summarize analyst calls, and even auto-tag CRM entries.
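
For a sense of how lightweight these bots can be, here is a hypothetical sketch of one, reduced to a declarative configuration. It illustrates the general pattern rather than the actual bot format on chat.morningstar.com.

```python
# Every field name and value here is illustrative; the internal bot format
# isn't public. The pattern is what matters: a bot is little more than a
# system prompt plus grounding sources plus an output contract.
CRM_TAGGER_BOT = {
    "name": "crm-note-tagger",
    "system_prompt": (
        "You tag CRM notes. Return a JSON array of tags drawn only from "
        "the allowed set. Do not invent new tags."
    ),
    "grounding_sources": ["crm_taxonomy_v3"],  # a vetted tag taxonomy
    "allowed_tags": ["prospect", "rebalance", "fee-question", "complaint"],
    "output_schema": {"type": "array", "items": {"type": "string"}},
}
```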

The feedback loop is immediate. Those who use AI every day are also shaping how we embed it in our products.

Why it worked:

  • The tools were built by the users, not for them
  • Use cases emerged organically from daily friction, not innovation mandates
  • Feedback loops were tight: If it didn’t help, it didn’t last

Several common barriers can hinder AI transformation at financial firms and across many industries: a lack of hands-on experimentation environments, the perception that AI is a top-down initiative, and unclear policies around risk tolerance for experimentation. What we’ve found is that AI succeeds when it’s pulled into workflows by real needs, not pushed out by innovation teams alone.

Lesson 5: Treat AI Like R&D

AI doesn’t stand still, and neither should your approach to building with it. AI use cases in finance that didn’t pan out six months ago might work perfectly today. New models, better prompts, clearer data—all of these elements can radically change the outcome.

One way we’ve approached this is by building an automated AI evaluation framework that provides tools for frequent retesting as content and systems evolve. This framework not only supports quality assurance for our production systems; it also allows for rapid iteration as we conduct proof-of-concept and R&D work. If an idea doesn’t work initially, our platform makes it easy to revisit and retest it later as models improve and new techniques emerge.
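
In spirit, the framework resembles the sketch below: a golden set of cases, a pluggable answer function, and a score that can be rerun whenever models, prompts, or content change. The cases and rubric are hypothetical; this is an illustration of the pattern, not our actual implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    question: str
    must_contain: list[str]  # a simple keyword rubric; real rubrics are richer

# Hypothetical golden set; in practice these come from domain experts.
GOLDEN_SET = [
    EvalCase("What is the fund's expense ratio?", ["0.45%"]),
    EvalCase("Who manages the fund?", ["Jane Doe"]),
]

def run_suite(answer_fn: Callable[[str], str], cases=GOLDEN_SET) -> float:
    """Score an answer function against the golden set."""
    passed = sum(
        all(term in answer_fn(case.question) for term in case.must_contain)
        for case in cases
    )
    return passed / len(cases)

# Rerunning the same suite after a model, prompt, or content change yields
# directly comparable scores, e.g.: score = run_suite(new_pipeline_answer_fn)
```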

This philosophy reframes the question from “Does it work?” to “Is it ready?” It also creates a culture where early setbacks inform learning and iteration rather than being viewed as final verdicts. Experimentation becomes a renewable investment, something that accrues value over time—even when results aren’t immediate.

Why it worked:

  • Platform investments and automated evaluations can turn failures into future experiments, not dead ends
  • Teams had permission to revisit and refine, not just ship or scrap
  • The approach aligned with how fast models and tooling evolve

Teams navigating the fast pace of AI development benefit from building a culture focused on return-on-learning, not just return-on-investment: a culture where learning compounds. The teams seeing momentum with AI aren’t just chasing wins; they’re also reworking near-misses. Instead of writing off early failures, they’re circling back with better data, better prompts, or better models.

Innovation is iterative—not everything has to work the first time, and early challenges can be highly instructional.

Lesson 6: Transformation Hides in the Mundane

The AI that drives the biggest impact isn’t always the most innovative; it’s often the most practical. We’ve found that our most transformative use cases focus on reducing daily friction, not reinventing entire workflows.

We’ve used AI to accelerate portfolio uploads, auto-tag CRM notes, flag anomalies in data, and generate internal reports. These aren’t headline-grabbing features—but when repeated hundreds of times a day across teams, the cumulative impact is substantial.
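
To show how small these helpers can be, here is a hypothetical sketch of the anomaly-flagging idea using a robust modified z-score. Our production checks are more involved, but the repetitive, rule-based shape is the same.

```python
from statistics import median

def flag_anomalies(values: list[float], cutoff: float = 3.5) -> list[int]:
    """Return indices whose modified z-score exceeds the cutoff."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []  # no spread to measure against
    # 0.6745 scales the MAD so the score is comparable to a z-score.
    return [
        i for i, v in enumerate(values)
        if abs(0.6745 * (v - med) / mad) > cutoff
    ]

daily_changes = [0.1, -0.2, 0.05, 0.12, -8.4, 0.07, 0.03, -0.1, 0.09, 0.02]
print(flag_anomalies(daily_changes))  # -> [4]: the -8.4 reading stands out
```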

We’ve also brought these capabilities directly to clients. For example, our Direct Advisory Suite includes AI features that streamline advisor workflows, embedding automation and insights into external-facing environments where trust and usability are paramount.

Why it worked:

  • We focused on high-frequency tasks where even small gains add up
  • AI was positioned as a helper, not a disruptor
  • The benefits were immediate and easy to measure

Some of the most valuable applications of AI come from winning little battles, so don’t overlook the mundane tasks. If a task is repetitive, high-volume, and rule-based, it’s probably ripe for automation. And when you stack up enough of those micro-efficiencies, the bigger transformation starts to reveal itself. For teams just getting started, this is often the easiest, and most defensible, place to begin: not by reinventing workflows, but by smoothing out the ones that already exist.

By delivering measurable value—both internally and externally—we’ve proven that AI’s most transformative applications often begin with everyday pain points.

What We’re Really Building

AI at Morningstar isn’t merely a product or a strategy. It’s a perspective.

We’re learning how to embed AI into the real rhythms of work where the stakes are high, the data is messy, and the users are human.

Importantly, we haven’t stopped at internal tools. We’ve also developed client-facing AI solutions like Mo that extend the value of our AI investments to advisors and clients alike—an area still developing across the industry. We also offer plug-and-play agents, such as the Direct AI Agent, that let clients integrate the power of Morningstar research and data into their own AI workflows. We will continue to release more capabilities that bring AI-generated insights to our clients.

That attitude has shaped our belief that sustainable AI outcomes require:

  • Clean, well-structured data
  • Real, observed workflow needs, not abstract hypotheticals
  • Transparent governance and risk management
  • Ongoing retesting and iteration, not one-and-done rollouts

While we don’t have it all figured out, we’ve seen that asking the right questions—where’s the friction? what’s the right data to address it? how will we validate it?—goes further than chasing the newest tool.

The biggest thing we’ve learned is that AI in financial services doesn’t succeed because you launch a system. It succeeds when people can trust it, shape it, and see its value in their own work.

That’s what we’re building toward. If you’re on a similar journey, we hope our lessons can help you build momentum with purpose, clarity, and confidence.
