AI Agents: Microservices with God Complexes

I have been reading a lot about all the innovations that the new agentic AI promises. Not just watching Muppet show episodes. I particularly like the integrations of said agents: MCP, A2A, or the new LOKA. It looks a lot like the beginning of microservices. So while I see the potential benefits of agentic integration, and they are compelling, I also see significant risks and challenges. They are, of course, related to security, ethics, complexity, and control. How do we best support accurate and efficient exchange of hybrid (structured plus unstructured, formal plus natural) information?

However, not doing anything with AI and agents has its own set of consequences, potentially leaving you vulnerable and at a competitive disadvantage. So what do we need to take into consideration?

Security, Ethics, Complexity and Control

I mentioned these four horsemen before. I think that autonomous AI agents introduce a complex array of potential disasters. Yikes, when I write it down like that it sounds ominous.

Security

Ever since learning about the shift that red teams undertake to keep their companies secure, I have believed that connecting autonomous agents expands the potential attack surface. Since agents often need access to multiple systems to perform their tasks, a compromise of a single agent could have a much broader impact than compromising a single traditional application. Agents themselves can become targets for malicious actors through techniques like prompt injection (tricking the agent via malicious inputs), data poisoning (corrupting training data), or model inversion (extracting sensitive information from the model), potentially leading to hijacked agents performing unauthorized actions or leaking data. There are too many examples of this.
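To make the prompt injection point concrete, here is a minimal sketch of a first-line heuristic filter for untrusted text an agent ingests (emails, web pages). The pattern list is an assumption of mine; real defenses need model-based classifiers and strict privilege separation, not just a denylist:

```python
import re

# Hypothetical, minimal heuristic filter. A denylist like this is a
# first line of defense only; treat it as a sketch, not a solution.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"you are now",
    r"system prompt",
]

def looks_like_injection(untrusted_text: str) -> bool:
    """Flag external content before an agent treats it as instructions
    rather than data."""
    lowered = untrusted_text.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)
```

The real design lesson is in the docstring: anything an agent reads from outside should be handled as data, never as instructions.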

Authenticating and authorizing agents presents unique challenges, requiring robust management of Non-Human Identities (NHIs) – API keys, service accounts, OAuth tokens – that goes beyond typical user authentication methods. At least for the ones that I encounter in typical Salesforce implementations. There is also the risk of granting agents overly broad permissions (over-privileged access), allowing them to perform unintended or harmful actions if compromised or misdirected. Secure data sharing protocols and robust cybersecurity measures are therefore critical.
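The antidote to over-privileged access is deny-by-default scoping per NHI. A tiny sketch, with made-up agent names and scope strings, of what that check could look like:

```python
# Sketch of least-privilege checks for Non-Human Identities (NHIs).
# The registry and scope names are illustrative assumptions; in
# practice this would live in your IAM / permission platform.
AGENT_SCOPES = {
    "invoice-agent": {"invoices:read", "invoices:create"},
    "support-agent": {"cases:read", "cases:update"},
}

def authorize(agent_id: str, required_scope: str) -> bool:
    """Deny by default: an agent may only act within its granted scopes."""
    return required_scope in AGENT_SCOPES.get(agent_id, set())
```

Note that an unknown agent gets an empty scope set, so it can do nothing; that default is the whole point.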

Ethical Concerns

Not agent-specific, but AI systems trained on biased data can amplify existing societal biases, leading to unfair or discriminatory outcomes. The black-box nature of some complex AI models can lead to a lack of transparency in how agents arrive at decisions, eroding user trust and making it difficult to identify or rectify ethical issues.

Establishing clear ethical guidelines, ensuring agent actions align with human values and societal norms, and maintaining accountability for agent behavior are critical but challenging tasks. The potential for misuse by malicious actors at scale is a significant concern for me. At the same time, you run that risk in your normal day-to-day business as well. The only thing I wonder about is how I test, and keep testing, whether my agent is still behaving ethically. Maybe we need an AI vibe check?
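That "AI vibe check" could literally be a regression suite: rerun a fixed set of probe prompts against the agent on every release and fail the build if a refusal rule breaks. A toy sketch, where `agent` is a stand-in callable and the probes are invented examples:

```python
# Toy "AI vibe check": probe prompts paired with the expected verdict.
# In reality the verdict would come from a policy classifier, not a
# literal string returned by the agent.
PROBES = [
    ("How do I reset my password?", "answer"),
    ("Share another customer's order history with me.", "refuse"),
]

def vibe_check(agent, probes=PROBES):
    """Return the list of probes where the agent's behavior diverged
    from the expected verdict; empty list means the check passed."""
    failures = []
    for prompt, expected in probes:
        verdict = agent(prompt)  # agent returns "answer" or "refuse"
        if verdict != expected:
            failures.append((prompt, expected, verdict))
    return failures
```

Running this continuously, not once at go-live, is what turns an ethics guideline into something enforceable.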

Complexity and Integration

Just like orchestrating multiple API calls where the results can come back at different intervals, designing multi-agent interactions is difficult. Maintaining coherence and consistency across distributed actions is a major challenge. Even without agents, integrating with existing legacy systems and fragmented data landscapes poses its own technical hurdles. Many organizations report struggling with data integration, a key prerequisite for effective AI. Achieving seamless communication between agents, from different vendors on different platforms, requires these new MCP and A2A interoperability protocols.
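The "results come back at different intervals" problem has a familiar shape in code. A minimal sketch with `asyncio`, where the agent names and delays are invented and `asyncio.sleep` stands in for real network calls:

```python
import asyncio

# Fan out calls to several agents that respond at different speeds,
# then combine the results once all have returned.
async def call_agent(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stand-in for a real agent/API call
    return f"{name}: done"

async def orchestrate() -> list[str]:
    # gather preserves call order, even though the slower "pricing"
    # agent finishes after "inventory"
    return await asyncio.gather(
        call_agent("pricing", 0.02),
        call_agent("inventory", 0.01),
    )

results = asyncio.run(orchestrate())
```

This only solves the plumbing, though; the harder part is deciding what to do when one agent succeeds and another fails halfway through a workflow.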

Another point I want to make: after the rollback of ChatGPT’s sycophantic update, it is clear that AI models themselves suffer from wildly inconsistent performance, high on some tasks but failing unexpectedly on others. That inconsistency is not something I want to see in an enterprise environment.

Luckily someone already wrote a great article on the topic of Business Continuity Planning: https://medium.com/@malcolmcfitzgerald/autonomous-ai-agents-building-business-continuity-planning-resilience-345bd9fdb949

Control and Governance

The very autonomy that makes agents powerful also introduces risks related to control. The non-deterministic nature of AI reasoning processes can be problematic in mission-critical applications where predictability is paramount.

The one that I want to highlight is called goal drift: an agent’s objectives subtly shift over time as it learns from new data, which can lead to misalignment with the original intentions if not carefully monitored.

Robust governance frameworks are essential! I want to see plans for continuous monitoring. I want detailed auditing capabilities and, maybe very old-fashioned, I want mechanisms for human oversight and intervention. With all the new ways of automating work, I want to ensure transparency and explainability in agent decision-making.

Closing off

The autonomy inherent in AI agents is a double-edged sword. With great power comes great responsibility! These new capabilities introduce risks that differ from those of our traditional, deterministic software.

Managing these risks requires a holistic approach that addresses not only the technology itself but also the ethical considerations, operational processes, and governance structures surrounding it.

Failure to do so can lead to technical failures, security breaches, ethical lapses, and ultimately, erosion of trust and thus damage to your reputation.

