Excessive agency is a vulnerability that occurs when a large language model (LLM) is granted unnecessary or overly permissive abilities to interact with other systems. When an LLM can call external tools, plugins, or functions (its "agency"), an attacker can use prompt injection or other manipulation techniques to trick the model into turning that agency to malicious ends, performing actions that are unintended, unauthorized, and potentially harmful. The core issue is not that the LLM can take actions at all, but that the scope of those actions is too broad and too loosely controlled.
Why Android developers should care
Granting an LLM excessive agency within your Android application can lead to severe security incidents:
- Unauthorized system access: An attacker could command the LLM to access, modify, or delete files on the local Android-powered device (for example, user documents, app data) or connected network resources if the model has the agency and corresponding Android permissions to do so.
- Data exfiltration: The LLM could be tricked into reading sensitive data from local app databases (like Room), SharedPreferences, or internal APIs, and then exfiltrating that information to an external destination (for example, sending it using an API call or email function). The model could also leak sensitive information it absorbed during its training phase, or sensitive information the user provides in their prompt.
- Compromise of other functions/systems: If the LLM has agency over other functions (for example, sending SMS, making calls, posting on social media using implicit intents, modifying system settings, making in-app purchases), an attacker could hijack these functions to send spam, spread disinformation, or perform unauthorized transactions, leading to direct financial loss or user harm.
- Denial of service: The LLM could be instructed to repeatedly perform resource-intensive actions within the app or against backend services, such as running complex database queries or calling an API in a loop, leading to app unresponsiveness, battery drain, excessive data usage, or even a denial of service for backend systems. A per-session cap on tool calls, sketched after this list, is one simple bound on this behavior.
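As an illustration of that bound, the sketch below caps how many tool invocations one conversation can trigger. `ToolCallBudget` and `maxCalls` are hypothetical names rather than part of any SDK; the same idea applies to whatever function-calling loop your app runs.

```kotlin
// A per-session cap on tool invocations, so a looping model (or a
// prompt-injected one) can't hammer the device or the backend.
// ToolCallBudget and maxCalls are illustrative names, not a real API.
class ToolCallBudget(private val maxCalls: Int = 10) {
    private var used = 0

    /** Returns true if one more tool call may run in this conversation. */
    fun tryConsume(): Boolean {
        if (used >= maxCalls) return false
        used++
        return true
    }
}
```

Check `tryConsume()` before dispatching each model-requested call, and reset the budget when a new conversation begins.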
Mitigations for Android app developers
Mitigating excessive agency in Android apps comes down to applying the principle of least privilege to every tool and function the LLM can access or trigger.
Limit the AI's toolbox (granular versus open-ended functions)
- Provide minimal tools: The LLM should only have access to the specific tools (functions, APIs, intents) that it absolutely needs to do its job within your app. If it doesn't need to be able to browse the web or send an email, don't expose those capabilities to it.
- Use simple, single-purpose tools: It's better to give the LLM a tool that can only do one specific thing (like "read a specific type of user setting") than a powerful, open-ended tool that could do anything (like "execute any shell command"). See the sketch after this list.
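To make this concrete, here is a minimal sketch of a narrow tool surface, assuming a hand-rolled dispatcher: `Tool`, `ReadNotificationPreference`, and `ToolRegistry` are hypothetical names, not from any particular SDK, so map the idea onto whatever function-calling framework you use.

```kotlin
import android.content.SharedPreferences

// Each tool does exactly one thing and declares nothing beyond it.
sealed interface Tool {
    val name: String
    fun execute(args: Map<String, String>): String
}

// Good: a single-purpose tool that can only read one specific setting.
class ReadNotificationPreference(
    private val prefs: SharedPreferences
) : Tool {
    override val name = "read_notification_preference"
    override fun execute(args: Map<String, String>): String =
        prefs.getBoolean("notifications_enabled", false).toString()
}

// Bad (never expose): an open-ended tool such as
// `class RunShellCommand : Tool` hands the model arbitrary power over
// the device, which is exactly what excessive agency warns against.

// Register only the tools this feature genuinely needs.
class ToolRegistry(tools: List<Tool>) {
    private val byName = tools.associateBy { it.name }

    fun dispatch(name: String, args: Map<String, String>): String =
        byName[name]?.execute(args)
            ?: error("Model requested an unregistered tool: $name")
}
```

Because the registry only dispatches tools you explicitly registered, a manipulated model that asks for anything else fails closed instead of gaining new capabilities.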
Restrict the AI's power
- Fine-grained Android permissions: When an LLM-triggered function interacts with Android system resources or other apps, verify that your app only requests and holds the absolute minimum Android permissions required, and check the relevant permission again at the point of use (see the sketch after this list).
- Per-user permissions: When the LLM performs an action on behalf of the user, it should do so with that user's specific permissions and context, not with a broader, potentially administrative app-level account. This ensures the LLM can't do anything the user wouldn't be allowed to do themselves.
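A sketch of that point-of-use check is below. `sendSmsTool` is an illustrative tool name, not a real API; only `ContextCompat.checkSelfPermission` and the `SEND_SMS` permission are standard Android. The tool fails closed and reports the refusal back to the model rather than escalating.

```kotlin
import android.Manifest
import android.content.Context
import android.content.pm.PackageManager
import androidx.core.content.ContextCompat

// Guard an LLM-triggered action behind the one permission it needs.
fun sendSmsTool(context: Context, recipient: String, body: String): String {
    val granted = ContextCompat.checkSelfPermission(
        context, Manifest.permission.SEND_SMS
    ) == PackageManager.PERMISSION_GRANTED

    if (!granted) {
        // Fail closed: report the refusal to the model instead of
        // requesting the permission on its behalf.
        return "DENIED: app does not hold SEND_SMS"
    }
    // ... send `body` via SmsManager within the user's own session ...
    return "OK: message queued for $recipient"
}
```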
Keep a human in charge (user consent for critical actions)
- Require user approval: For any important or risky actions that an LLM might suggest or attempt to perform (for example, deleting data, making in-app purchases, sending messages, changing critical settings), always require explicit human approval using a confirmation dialog in your UI, as sketched below. Think of it as needing a manager to sign off on a major decision.
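A minimal sketch of that gate, assuming a standard `AlertDialog`: `confirmThenRun` and its parameters are illustrative names. The property that matters is that the risky action runs only inside the positive-button callback, never straight from the model's output.

```kotlin
import android.content.Context
import androidx.appcompat.app.AlertDialog

// Ask the user before executing any high-impact LLM-suggested action.
fun confirmThenRun(
    context: Context,
    actionDescription: String,  // for example, "Delete 12 saved notes"
    onConfirmed: () -> Unit
) {
    AlertDialog.Builder(context)
        .setTitle("Confirm action")
        .setMessage("The assistant wants to: $actionDescription. Allow it?")
        .setPositiveButton("Allow") { _, _ -> onConfirmed() }
        .setNegativeButton("Cancel", null)  // deny by default: do nothing
        .show()
}
```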
Trust but verify (input/output validation and robust backends)
- Backend security: Don't just rely on the LLM to decide if an action is allowed. Any backend services or APIs that the LLM-triggered functions connect to should have their own robust authentication, authorization, and input validation to double-check every request and verify it's legitimate and within expected parameters.
- Clean up data: As with other vulnerabilities, it's critical to sanitize and validate both the input that goes into the LLM and the parameters the LLM generates for function calls, so that malicious instructions or unexpected outputs are caught before any action executes (see the validation sketch after this list).
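One workable pattern is a strict allowlist check on model-generated arguments before any dispatch, sketched below. The `setting_key` argument and `ALLOWED_SETTINGS` values are hypothetical; the point is to treat model output as untrusted input and reject anything unexpected rather than trying to repair it.

```kotlin
// Allowlisted values for a hypothetical "read setting" tool.
private val ALLOWED_SETTINGS = setOf("theme", "font_size", "notifications")

// Validate model-generated arguments before the tool ever runs.
fun validateReadSettingArgs(args: Map<String, String>): Result<String> {
    val key = args["setting_key"]
        ?: return Result.failure(IllegalArgumentException("missing setting_key"))

    // Reject anything outside the allowlist instead of sanitizing it:
    // model output is untrusted input, like any user input.
    if (key !in ALLOWED_SETTINGS) {
        return Result.failure(IllegalArgumentException("setting not allowed: $key"))
    }
    return Result.success(key)
}
```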
Summary
Excessive agency is a critical vulnerability where an LLM has overly broad permissions to interact with other systems or functions, allowing it to be tricked into performing harmful actions. In Android applications, this can lead to unauthorized data access, system compromise, financial loss, or user harm. Mitigation rests on the principle of least privilege: strictly limit the tools and Android permissions available to the LLM, keep each tool minimal and single-purpose, validate every model-generated function call, and require human approval for all high-impact operations.