Mitigate Dangers of AI Agents

By Nathan Ferchtandiker


AI agents are fantastic tools: they save a lot of time in our often very busy schedules. We believe that well-set-up AI agents will soon save hours per day on knowledge-processing tasks such as research or summarizing complex documents.

That’s the upside. To ensure that AI agents don’t turn from helpful assistants into liabilities, let’s walk through the most common risks and how to keep them in check.

 

Data Quality: When Poor Data Leads to Bad AI

AI agents are only as good as the data they’re trained or fine-tuned on. Low-quality, outdated, or biased datasets can skew the output and embed systemic issues into decision-making processes. This is particularly concerning in real estate investment where poor advice can translate directly into lost capital or reputational damage.

How to mitigate: 

  • Use diverse, high-quality data during training. KR&A’s data portfolio is rich and is checked for bias during cleaning.
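A data-quality gate can be as simple as screening records before they reach training or retrieval. The sketch below is illustrative only — the field names ("price", "as_of") and thresholds are assumptions, not any specific KR&A schema:

```python
from datetime import date

def is_usable(record: dict, cutoff: date) -> bool:
    """Reject records that are incomplete, implausible, or stale."""
    required = {"address", "price", "as_of"}
    if not required.issubset(record):
        return False  # incomplete record
    if record["price"] <= 0:
        return False  # implausible value
    if record["as_of"] < cutoff:
        return False  # outdated data point
    return True

records = [
    {"address": "1 Main St", "price": 450_000, "as_of": date(2024, 6, 1)},
    {"address": "2 Oak Ave", "price": -1, "as_of": date(2024, 6, 1)},
    {"address": "3 Elm Rd", "price": 300_000, "as_of": date(2019, 1, 1)},
]
clean = [r for r in records if is_usable(r, cutoff=date(2022, 1, 1))]
```

Checks like these won’t catch subtle bias, but they cheaply remove the most obviously broken data points before they can skew the model.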

 

Security Vulnerabilities: Agents Can Be Manipulated

Malicious actors can embed manipulative commands within prompts to coerce the agent into revealing sensitive data or misbehaving in unintended ways. This risk grows exponentially when AI agents are integrated into sensitive enterprise environments or given access to proprietary information.

How to mitigate:

  • Limit the agent’s access to only the data it truly needs.
  • Implement strict user access controls — only trusted users can interact with sensitive agents.
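Both mitigations come down to least privilege. One minimal sketch, assuming a per-role allowlist of tools (the role and tool names here are hypothetical):

```python
# Map each user role to the tools the agent may call on their behalf.
ALLOWED_TOOLS = {
    "analyst": {"search_listings", "summarize_report"},
    "admin": {"search_listings", "summarize_report", "export_client_data"},
}

def call_tool(role: str, tool: str, handler, *args):
    """Run a tool only if the role's allowlist permits it."""
    if tool not in ALLOWED_TOOLS.get(role, set()):
        raise PermissionError(f"role {role!r} may not call {tool!r}")
    return handler(*args)

# An analyst can summarize, but cannot export sensitive client data.
summary = call_tool("analyst", "summarize_report", lambda t: t[:40], "Q3 vacancy rates fell.")
```

Even if a prompt injection convinces the agent to attempt a sensitive action, the call fails at the access-control layer rather than relying on the model to refuse.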

 

Lack of Explainability: How Did It Come Up With That?

AI agents can produce convincing results, but when asked how they arrived at a conclusion, many models struggle to provide transparent answers. This becomes a serious issue in a regulated industry such as asset management, or in scenarios where accountability and traceability are non-negotiable. Regulators, clients, and internal teams alike need clarity, and it is unclear whether AI agents are capable of delivering it.

How to mitigate:

  • Treat AI agents as advisors, not decision-makers — a human should always be responsible for making critical decisions.
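In code, “advisor, not decision-maker” means a human-in-the-loop gate: the agent may draft actions, but anything flagged critical waits for explicit approval. The action structure and the `approve` callback below are illustrative assumptions:

```python
def execute(action: dict, approve) -> str:
    """Run an agent-proposed action, deferring critical ones to a human."""
    if action.get("critical") and not approve(action):
        return "rejected"  # a human declined; nothing happens
    return f"executed: {action['name']}"

# The agent drafts a critical action; a human reviews and declines it.
draft = {"name": "rebalance_portfolio", "critical": True}
result = execute(draft, approve=lambda a: False)
```

The gate doesn’t make the agent more explainable, but it guarantees that a person who can be held accountable signs off before anything irreversible happens.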

 

Misinformation and Hallucinations: When AI Makes Stuff Up! 

One of the more well-documented flaws in current AI systems is their tendency to “hallucinate”—generate plausible-sounding but factually incorrect content. Since these outputs are often presented with high confidence, they can easily mislead unsuspecting users. This can undermine business reports, investor briefings, or customer communications.

How to mitigate:

  • Require the agent to show its reasoning step-by-step, like a student showing steps in their math homework.
  • Cross-check AI outputs against trusted data sources.
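Cross-checking can be partly automated: before a draft goes out, compare the numbers it contains against a trusted reference. A minimal sketch, assuming a hypothetical reference table and metric name:

```python
import re

# Trusted figures from a vetted data source (values here are made up).
TRUSTED = {"vacancy_rate_pct": 5.2}

def check_claim(draft: str, key: str) -> bool:
    """True if the first number in the draft matches the trusted value."""
    m = re.search(r"-?\d+(?:\.\d+)?", draft)
    return m is not None and float(m.group()) == TRUSTED[key]

ok = check_claim("Vacancy rate is 5.2%", "vacancy_rate_pct")
bad = check_claim("Vacancy rate is 12%", "vacancy_rate_pct")
```

A check this crude only catches outright numeric hallucinations, but flagging a mismatch for human review is far cheaper than letting a confident-sounding wrong figure reach an investor briefing.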

 

Final Takeaway: Trust but Always Verify 

AI agents are powerful. They can save you hours every week and give you insights faster than any human analyst. But they aren’t perfect — and treating them as infallible oracles is a recipe for disaster.

The safest way to work with AI agents is to treat them like smart interns:

  • Give them clear rules and high-quality data.
  • Double-check their work.
  • Never let them make critical decisions alone.
