Imagine calling a customer service line and reaching an AI system so sophisticated that you cannot tell, at least initially, that you are not speaking with a human. The AI answers your questions, processes your request, expresses sympathy for your frustration, apologizes for any inconvenience. It performs the functions of a customer service representative with remarkable competence. And yet something is missing. Something essential is absent from the encounter. What you have reached is not a customer service representative at all but rather the appearance of one, a position that looks occupied from the outside while being fundamentally vacant of what that position requires.
In previous posts in this series, we established that AI lacks moral agency entirely: it cannot perceive moral reality, cannot deliberate about values, cannot exercise judgment, cannot bear responsibility. We also established the critical distinction between AI as tool (where humans initiate and retain decision authority) and AI as role (where AI initiates action in positions within relationship structures). The Vacancy Problem emerges at the intersection of these insights. When organizations place AI into roles that carry relationship expectations and moral obligations, those positions become structurally vacant. The role appears filled because AI occupies it functionally. But the role is empty of what it requires morally.
What Roles Actually Require
Roles in organizational contexts are not merely functional positions. They exist within webs of relationship carrying moral weight. When a customer calls for service, they are not merely seeking information transfer or transaction completion. They are encountering an organization through a person who represents it. The expectations attached to the customer service role include functional competence, certainly, but they also include something more. Customers expect to encounter someone who can perceive their individual situation, who can judge when standard procedures serve them and when they do not, who can choose to help beyond what rules strictly require, who can apologize with genuine regret when the organization has failed them.
Consider what a human customer service representative can do that AI cannot. The human can recognize genuine distress and respond to it as a moral claim, not merely as a category of inquiry. The human can judge that a particular customer’s circumstances make standard policy inappropriate and exercise discretion to deviate from it. The human can perceive that the organization’s procedures have caused real harm and feel responsibility for addressing that harm. The human can apologize and mean it, not as script but as acknowledgment of failure and commitment to make things right. These are not additional features that advanced AI might eventually provide. They are exercises of moral agency that no AI, regardless of sophistication, can perform.
The same analysis applies to other roles organizations increasingly fill with AI. A hiring screener is not merely a filter but a gatekeeper with obligations to candidates: to evaluate them fairly, to recognize potential that credentials may not capture, to avoid allowing bias to determine who receives opportunity. A performance evaluator is not merely a measurement tool but a judge whose assessments affect careers and livelihoods, carrying obligations to see employees as individuals, to evaluate context as well as metrics, to exercise wisdom about what performance data does and does not reveal. A medical triage system is not merely a prioritization algorithm but a decision-maker determining who receives care first, carrying obligations to see patients as persons, to weigh factors algorithms cannot capture, to exercise judgment when circumstances exceed protocols.
The Structure of Vacancy
When AI occupies these roles, the positions continue to exist in organizational relationship structures. Customers still expect the customer service role to be occupied. Job candidates still expect fair evaluation. Employees still expect meaningful performance assessment. Patients still expect judgment about their care. The role remains present. What becomes absent is the moral agency that the role requires. This creates a structural vacancy: a position that appears occupied but is empty of what occupancy would actually mean.
AI can simulate the appearance of moral presence. It can process language patterns associated with empathy and generate empathetic-sounding responses. It can apply rules with exceptions built in by designers. It can produce outputs that look as though they emerged from deliberation and judgment. These simulations may be convincing. A customer interacting with AI may believe they are speaking with someone who cares about their situation. But the belief is mistaken. No one cares about that customer’s situation from inside the AI system. The system processes inputs and generates outputs according to patterns it was trained to produce. Care, judgment, genuine apology: these require moral presence that AI does not possess.
The vacancy problem is particularly acute because the appearance of occupancy obscures the absence. When a position is obviously vacant, humans know to work around it. When a position appears occupied, humans engage with it expecting what occupancy would provide. A customer who knows they are reaching AI adjusts expectations accordingly. A customer who believes they are reaching a person invests trust that the interaction cannot reciprocate. The position is not merely unfilled; it is falsely filled in ways that exploit expectations attached to genuine role occupancy.
Addressing the Vacancy Problem
The Vacancy Problem does not mean AI should never occupy roles. It means that AI roles must be designed within systems where moral agency remains present and accountable. When organizations place AI in customer service roles, they must ensure that human moral agents remain accessible to customers who need them, that humans maintain oversight of AI operations, that accountability structures connect AI outcomes to humans who bear genuine responsibility. The vacancy exists at the point of AI contact; governance must ensure that vacancy does not extend through the entire system.
This requires more than theoretical human availability. It requires designed human touchpoints where stakeholders can access moral agents when AI proves inadequate. It requires that humans staffing these touchpoints have genuine authority to exercise judgment and override AI outputs. It requires that reaching humans is not made deliberately difficult through design choices that prioritize deflection over access. It requires ongoing assessment of whether stakeholders actually reach humans when they need to, not merely whether the theoretical possibility exists.
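To make that last requirement concrete, here is a minimal sketch, in Python, of what instrumenting human reachability could look like. Everything in it is hypothetical illustration rather than part of any framework this series proposes: the names (Interaction, EscalationLog, route), the confidence and availability flags, and the single reachability metric are assumptions chosen only to show that "whether stakeholders actually reach humans when they need to" can be measured rather than presumed.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from enum import Enum, auto


class Outcome(Enum):
    RESOLVED_BY_AI = auto()
    ESCALATED_TO_HUMAN = auto()
    ESCALATION_FAILED = auto()   # a human was requested but never reached


@dataclass
class Interaction:
    customer_id: str
    requested_human: bool = False
    outcome: Outcome | None = None


@dataclass
class EscalationLog:
    """Records interactions so reachability can be assessed over time."""
    interactions: list[Interaction] = field(default_factory=list)

    def record(self, interaction: Interaction) -> None:
        self.interactions.append(interaction)

    def human_reachability_rate(self) -> float:
        """Share of human-requested interactions that reached a human agent."""
        requested = [i for i in self.interactions if i.requested_human]
        if not requested:
            return 1.0
        reached = [i for i in requested if i.outcome is Outcome.ESCALATED_TO_HUMAN]
        return len(reached) / len(requested)


def route(interaction: Interaction, ai_confident: bool,
          human_available: bool, log: EscalationLog) -> Outcome:
    """Route one interaction.

    Design intent: a customer's request for a human, or the AI path's own
    lack of confidence, always takes precedence over keeping the case
    inside the AI path.
    """
    if interaction.requested_human or not ai_confident:
        interaction.outcome = (Outcome.ESCALATED_TO_HUMAN if human_available
                               else Outcome.ESCALATION_FAILED)
    else:
        interaction.outcome = Outcome.RESOLVED_BY_AI
    log.record(interaction)
    return interaction.outcome


if __name__ == "__main__":
    log = EscalationLog()
    route(Interaction("c1", requested_human=True), ai_confident=True,
          human_available=True, log=log)
    route(Interaction("c2", requested_human=True), ai_confident=True,
          human_available=False, log=log)
    print(f"human reachability: {log.human_reachability_rate():.0%}")
```

In this toy run the reachability rate comes out at 50 percent, which is exactly the kind of signal that distinguishes a designed human touchpoint from a merely theoretical one: the possibility of reaching a human existed in both cases, but it was realized in only one.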
The Vacancy Problem also requires organizational acknowledgment of what deploying AI into roles actually means. Organizations do not merely adopt new technology when they place AI in a role capacity. They choose to remove moral presence from positions in relationship structures where humans expect it. This choice requires justification. It requires explicit consideration of what obligations the role carries and how those obligations will be fulfilled when AI cannot fulfill them. It requires honesty about what stakeholders lose when they encounter AI rather than humans in positions that matter to them.
Subsequent posts in this series will develop the Two Conditions required for ethical AI deployment: structural accountability ensuring human moral agency remains present, and directional alignment ensuring AI deployment serves relational flourishing. The Vacancy Problem establishes why these conditions matter. When AI occupies roles, something goes missing that only moral agents can provide. Governance must ensure that this vacancy does not leave stakeholders abandoned in positions where moral presence should exist. The question is not whether AI performs role functions adequately but whether the system as a whole maintains the moral presence that stakeholder relationships require.