Microsoft Quietly Upgrades Copilot’s “Think Deeper” Feature with Newer OpenAI Model

Copilot’s Evolving AI Capabilities: An Overlooked Upgrade

While Microsoft Copilot may not generate the same buzz as ChatGPT, it has carved out its own space in the AI landscape and delivers solid performance, especially with its deeper reasoning tools. One of its standout features, “Think Deeper,” now appears to have quietly undergone an upgrade. Originally powered by OpenAI’s o3-mini-high model (with a knowledge cutoff of October 2023), the feature seems to be part of an A/B test of the newer o4-mini-high model, which has fresher training data through June 2024. This subtle shift hints at a meaningful enhancement in Copilot’s reasoning capabilities and overall knowledge scope.

Microsoft’s Copilot Shifts Gears Behind the Scenes

Microsoft’s Copilot offers different AI modes depending on the user’s subscription level. Free users get access to a standard “Quick Response” mode, while paid users who subscribe to Copilot Pro for $20/month unlock an extra mode: “Think Deeper.” This deeper reasoning engine was confirmed in March 2025 to be powered by OpenAI’s o3-mini-high—one of the more sophisticated yet lightweight premium models at the time. Notably, this model was no longer available in other OpenAI services like ChatGPT Plus or Enterprise as OpenAI moved to the newer o4 family of models.

In recent weeks, subtle signs began to emerge showing that Microsoft may be experimenting with the newer o4-mini-high model. During one session with “Think Deeper,” the AI reported a knowledge cutoff of October 2023—consistent with o3-mini-high. However, in a different session using a separate Microsoft account, the same feature cited a knowledge cutoff of June 2024. This anomaly suggested the possibility of backend A/B testing, in which Microsoft might be rotating users between o3 and o4 models without explicitly informing them.
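
Because Copilot does not disclose which backend model powers a given session, the simplest check is the one described above: ask the assistant for its knowledge cutoff and compare the answers across accounts or sessions. The sketch below illustrates the same probe against OpenAI’s own API, where the o3-mini and o4-mini families are directly addressable. The model identifiers, the prompt wording, and the reliance on self-reported cutoffs are assumptions for illustration; “Think Deeper” itself has no public API.

```python
# Minimal sketch: compare the knowledge cutoff each model reports about itself,
# mirroring the cross-session probe described above. Model names, the prompt,
# and trusting the self-reported cutoff are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROBE = "State your training data knowledge cutoff as a month and year only."

def reported_cutoff(model: str) -> str:
    """Ask a model to report its own knowledge cutoff."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROBE}],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    for model in ("o3-mini", "o4-mini"):  # assumed model identifiers
        print(f"{model}: {reported_cutoff(model)}")
```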

This shift aligns with a broader industry trend of quietly swapping newer, more efficient models in behind existing product features rather than announcing each change.

The decision not to fully replace o3-mini-high with a more expensive model like o3 or GPT-4.1 is practical. Costs and latency must be balanced with performance, and Microsoft seems to have opted for a model that offers newer data without excessive computational overhead.

What Undercode Says:

A Subtle Yet Strategic Upgrade

Microsoft’s quiet shift toward using o4-mini-high in Copilot’s “Think Deeper” mode is less a headline feature launch than a strategic refresh: users get newer knowledge and sharper reasoning while the product surface stays exactly the same.

Why o4-mini-high Makes Sense

The o4-mini-high model represents a smart middle ground. It’s more efficient and current than o3-mini-high, yet far cheaper and faster to run than the full GPT-4.1 model. For Microsoft, this means it can offer users a perceptible performance boost in “Think Deeper” without a spike in infrastructure costs or user delays. It ensures Copilot remains competitive while maintaining profitability for its $20/month subscription tier.
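
To make the trade-off concrete, a rough back-of-the-envelope calculation is enough. The per-token prices, query volumes, and token counts below are illustrative placeholders, not actual OpenAI or Microsoft figures; the point is only how quickly per-query cost differences compound against a fixed subscription price.

```python
# Back-of-the-envelope serving-cost comparison for a "Think Deeper"-style feature.
# All prices, volumes, and token counts are hypothetical placeholders.

PRICES_PER_1M_TOKENS = {            # (input, output) USD per million tokens - assumed
    "mid-sized-reasoning-model": (1.10, 4.40),
    "flagship-model": (2.00, 8.00),
}

QUERIES_PER_USER_PER_MONTH = 300    # assumed usage
TOKENS_PER_QUERY = (1_000, 800)     # assumed (input, output) tokens per query

def monthly_cost_per_user(model: str) -> float:
    """Estimated monthly serving cost per subscriber for one model."""
    in_price, out_price = PRICES_PER_1M_TOKENS[model]
    in_tok, out_tok = TOKENS_PER_QUERY
    per_query = (in_tok * in_price + out_tok * out_price) / 1_000_000
    return per_query * QUERIES_PER_USER_PER_MONTH

for name in PRICES_PER_1M_TOKENS:
    print(f"{name}: ${monthly_cost_per_user(name):.2f} per subscriber per month")
```

Against a $20/month subscription, even small per-query cost differences compound across heavy users, which is why a mid-sized reasoning model is the pragmatic pick over a flagship one.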

Implications for the User Experience

Users won’t see a flashy UI change or announcement. But those on the newer o4 model will likely notice faster, more up-to-date responses and more accurate reasoning. This brings Copilot closer to ChatGPT’s performance, closing the gap between the two services, especially for users unwilling to switch ecosystems. It also means that casual users may benefit from cutting-edge tech without even realizing it, depending on which test group they fall into.

Transparency and Trust Concerns

While this upgrade benefits users, the lack of official communication raises transparency concerns. If Microsoft is indeed switching models, even on a test basis, informing users would help manage expectations and build trust. Knowledge cutoff dates impact the reliability of answers, especially for fast-moving topics like tech, politics, or current events.

Differentiation Through Subtlety

Copilot doesn’t need to beat ChatGPT head-on. Instead, it can win by delivering consistent, enterprise-grade AI in a familiar Microsoft environment. Quietly improving its underlying models helps retain users and improve satisfaction without disrupting workflows. This strategy reflects Microsoft’s broader AI philosophy: improve steadily, integrate deeply, and reduce friction for users already within the Microsoft ecosystem.

The Future of Reasoning Tools

As models like o4-mini-high become more common, we’ll likely see reasoning tools evolve to offer better contextual understanding and broader memory. If Copilot continues on this path, “Think Deeper” might eventually support memory features, cross-session context, or even domain-specific customization, especially in enterprise use cases.

How This Affects Other OpenAI Integrations

Interestingly, OpenAI no longer offers o3-mini-high on its own platform, signaling a strategic shift away from older models. This makes Microsoft one of the few remaining platforms still offering access to o3-mini-high—although that too seems to be on its way out. The broader transition to o4 indicates growing confidence in these newer models’ ability to handle complex tasks at scale.

A Testing Ground for Future Enhancements

Copilot could be Microsoft’s experimental playground, where new models and features are stealth-tested before a wider rollout. The A/B testing strategy may be paving the way for further personalization, smarter logic chains, and reduced latency. If the o4-mini-high experiment proves successful, we can expect an official upgrade notice or even new Copilot tiers offering richer interactions.

🔍 Fact Checker Results:

✅ o3-mini-high has a knowledge cutoff of October 2023

✅ o4-mini-high uses data up to June 2024

❌ Microsoft has not publicly confirmed the upgrade to o4-mini-high in Copilot

📊 Prediction:

Copilot’s Think Deeper will fully transition to o4-mini-high by Q3 2025 🚀
Microsoft will quietly roll this out to all Pro subscribers with no UI change 🎯
A future version of Copilot may integrate o4.1-mini or memory tools by early 2026 🧠
