Irving Wladawsky-Berger

A collection of observations, news and resources on the changing nature of innovation, technology, leadership, and other subjects.

“Agents will be embedded in business processes with direct impact on operations, data, and compliance,” wrote Eric Broda and John Y. Miller in KYA — Know Your Agent, a recent article in their Agentic Mesh Substack. Broda is the founder of Broda Group Software, co-founder of The Agentic Mesh Company, and co-author of Agentic Mesh, a book published in February 2026; Miller is an Agentic AI Systems Builder and Podcast Host.

What Is KYA and How Does It Work?

In the early days of the digital economy, organizations learned that trust could not be assumed — it had to be engineered. That realization led to the development of frameworks like Know Your Customer (KYC) and Know Your Business (KYB), thus enabling institutions to verify identities, manage risk, and operate at scale in increasingly complex environments. “KYC and KYB exist for a simple reason: when a bank or regulated firm lets someone open an account, move money, or access sensitive services, the firm needs confidence about who is on the other end and what risks they bring.”

A similar challenge is now emerging, but with a new kind of actor: AI agents. “Agents will be embedded in business processes with direct impact on operations, data, and compliance. This creates three engineering requirements: limit what the agent can do, record what it did, and explain why it did it.”

In the emerging age of AI, enterprises face a fundamental question: how do you establish trust, accountability, and controls when the “actor” is no longer a person or an organization, but an intangible asset, i.e., software acting with increasing autonomy?

Know Your Agent (KYA) is one answer to that question. It extends the principles of identity, risk management, and governance from people and institutions to AI agents operating at scale — ensuring that their behavior is transparent, auditable, and aligned with organizational objectives.

KYC and KYB provide a starting model for defining KYA.

“KYC focuses on a person; KYB focuses on a company. But both are built around the same steps: verify identity, understand risk, apply proportionate controls, and keep evidence that those steps were taken. The goal is to reduce predictable failure modes — fraud, misuse, and regulatory breaches — before they occur.” As the Wikipedia KYC entry explains, “guidelines and regulations require financial services professionals to verify the identity, suitability, and risks involved in maintaining a business relationship with a customer,” such as anti–money laundering (AML) and counterterrorism financing (CTF) regulations.

Broda and Miller further explain the key goals of KYC and KYB:

  • Verify identity. Establish who the customer is and assess their risk profile.
  • Manage risks. Reduce illicit finance and fraud risks from individuals and organizations.
  • Apply controls. Use risk-based due diligence, screening, and monitoring.
  • Provide evidence and auditability. Maintain records for audits and regulatory compliance.

They then follow a similar approach in explaining the key goals of KYA for AI agents:

  • Runtime identity. Establish which agent instance is operating, who owns it, and what version is running, so actions can be tied to an accountable source.
  • Risk management. Reduce the risk that an agent makes unauthorized or policy-violating decisions due to misconfiguration, excessive permissions, or incomplete information.
  • Apply controls. Ensure the agent can only use approved capabilities, within clear boundaries, in ways that can be reviewed after the fact to prevent surprises.
  • Evidence and auditability. Maintain records showing what the agent was allowed to do, what it actually did, and why, so audits and incident reviews are grounded in facts.
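The four goals above can be illustrated with a minimal sketch: an identity record tied to an accountable owner and version, a bounded set of approved capabilities, and an audit trail that records what the agent attempted and why. This is not the authors' implementation; all names (`AgentIdentity`, `invoice-bot-01`, the capability strings) are hypothetical, chosen only to make the pattern concrete.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AgentIdentity:
    # Runtime identity: which agent instance, who owns it, what version runs.
    agent_id: str
    owner: str
    version: str
    purpose: str  # declared, bounded purpose

@dataclass
class AgentRegistration:
    identity: AgentIdentity
    allowed_capabilities: frozenset[str]  # controls: approved capabilities only
    audit_log: list[dict] = field(default_factory=list)

    def invoke(self, capability: str, rationale: str) -> bool:
        """Check a requested action against the agent's bounds and record it."""
        permitted = capability in self.allowed_capabilities
        self.audit_log.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "agent": self.identity.agent_id,
            "version": self.identity.version,
            "capability": capability,
            "permitted": permitted,   # what it was allowed to do, and what it did
            "rationale": rationale,   # why it acted: evidence for later review
        })
        return permitted

reg = AgentRegistration(
    identity=AgentIdentity("invoice-bot-01", "finance-ops", "2.3.1",
                           "reconcile supplier invoices"),
    allowed_capabilities=frozenset({"read_invoices", "flag_discrepancy"}),
)
assert reg.invoke("read_invoices", "monthly reconciliation run")
assert not reg.invoke("issue_payment", "attempted refund")  # outside its bounds
assert len(reg.audit_log) == 2  # both attempts recorded, permitted or not
```

Note that the denied action is logged as well as blocked; the audit trail is what lets an incident review reconstruct behavior from evidence rather than recollection.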

KYA and the Evolution of Digital Identity

KYA brings to mind the major changes that have taken place over the past few decades as our economies have become increasingly digital. For centuries, financial and business processes were based on face-to-face interactions and physical documents. The transition to a digital economy required fundamentally different processes and identity management systems.

In a world increasingly governed by digital transactions and data, traditional approaches to security and identity proved inadequate, and data breaches, identity theft, and large-scale fraud became more common as a result.

What Is Digital Identity?

In 2016, the World Economic Forum (WEF) explained the evolution of digital identity in a very good report, “A Blueprint for Digital Identity.” The report laid out a framework for creating digital identity systems, discussed their benefits, and argued that financial institutions should play a leading role in their development. The report also included a clear primer on identity and its role in everyday life.

What is identity? Think about logging into a website, making an online purchase, or boarding a plane. While identity is constantly being verified around us, we rarely think about it unless something goes wrong.

Identity is a collection of attributes that describe an entity and determine the transactions in which that entity can legitimately participate. Identities can be assigned to three main types of entities:

  • Individuals, the entities we most commonly associate with identity;
  • Legal entities, such as corporations, partnerships, and trusts; and
  • Assets, whether tangible (e.g., smartphones, cars) or intangible (e.g., patents, software, datasets—and now AI agents).

Is Know Your Employee (KYE) a Better Model for KYA?

While KYC and KYB provide useful analogies, Broda and Miller argue that an even closer model is Know Your Employee (KYE). “Agents behave like workers in your enterprise. We see an employee as an internal actor you onboard, empower, supervise, and hold accountable over time. The same applies to agents, which are increasingly participating in real business processes.”

Organizations have long applied a set of well-established practices to managing employees:

  • Identity and onboarding. Confirm identity, role, and accountability.
  • Authorization and access. Grant access based on role and revoke it when roles change.
  • Policies and constraints. Define acceptable behavior and address violations.
  • Monitoring and governance. Evaluate performance and investigate incidents.

KYA applies a similar set of practices to managing AI agents:

  • Identity and onboarding. Verify the agent’s identity, owner, version, and declared purpose.
  • Authorization and access. Assign narrow, task-specific permissions that are temporary and revocable.
  • Policies and constraints. Encode rules that agents cannot bypass, since they will otherwise repeat mistakes.
  • Monitoring and governance. Maintain a reliable record of decisions, tools used, and data accessed to support audits, remediation, and recertification.

Can an AI Agent Be Trusted?

“Trust in KYA starts with a simple proposition: purpose plus proof. By default, an agent should never be ‘trusted.’ Instead, it should be trusted only when its purpose is clearly defined and bounded, and when there is concrete evidence that it operates within those bounds.”

“Trusting a single agent is not enough; you must trust the ecosystem in which it operates,” the article adds. In the near future, enterprises may have more AI agents than employees. Even well-designed agents can produce harmful outcomes if the surrounding environment is weak — if identities can be spoofed, permissions are too broad, tool access is uncontrolled, or actions are not recorded.

“As the number of agents grows, no single team can manually review every agent, every change, and every action. Trust must be built through repeatable standards that can be applied consistently, with shared evidence that travels across organizational boundaries.” This requires common identity and permission models, standard ways to declare purpose and constraints, and consistent logging practices.

“KYA matters because agents participate directly in business processes, and at scale even small mistakes can become large incidents. When thousands of agents interact with data, tools, and workflows, the risk is no longer a single bad output but a cascade of unauthorized actions. KYA is the discipline that keeps that authority bounded and accountable.”

Conclusion

The article concludes that organizations can draw on decades of experience managing employees. “We see KYE — Know Your Employee — practices as the ideal starting point for launching an enterprise’s KYA journey.”

The rise of agentic AI marks a shift from systems that support human decisions to systems that increasingly make and execute decisions themselves. In such an environment, traditional governance models — designed around human actors — are no longer sufficient.

“KYA is the practical work of making agents safe to operate inside real systems. It starts with knowing exactly which agent is acting, what it is allowed to do, and what rules it must follow, and it ends with being able to reconstruct outcomes from evidence rather than recollection.” Implementing such a KYA framework will not be simple. It will likely require new technical architectures, updated organizational practices, and evolving regulatory expectations.

As AI agents become increasingly embedded in core business processes, Know Your Agent may become as fundamental to enterprise operations as knowing your employees. In the end, KYA is not just about managing an important new technology. It is about building the trust required to make agentic systems a reliable part of the enterprise.

