SEMINAR ON

PRINCIPAL AGENCY THEORY

 

ABSTRACT

Principal-agent theory encapsulates a tradition of rational choice modeling, in which some actor(s) (the principal(s)) uses whatever actions are available to provide incentives for some other actor(s) (the agent(s)) to make decisions that the principal most prefers. Because principal-agent theory focuses on the responsiveness of the agent’s decisions to the principal’s goals, and on how this responsiveness is mediated by the actions available to each actor as well as the institutional settings in which they interact, it is a natural framework for studying accountability in political institutions. This essay gives a basic overview of principal-agent theory and briefly reviews its application in two domains of political science: bureaucratic accountability to higher-level political actors, and electoral accountability of representatives to constituents. The essay emphasizes that principal-agent theory is in fact a highly flexible family of models, rather than an overarching set of assumptions and results.

Public accountability is a function of the capabilities of principals to judge the performance of their agents (Achen and Bartels 2002; Healy and Malhotra 2010; Lenz 2012; Lupia and McCubbins 1998). But it is also in part a function of institutions themselves. To make sense of the relationship between accountability and institutional structure, it is useful to have a theoretical framework that can express widely varying institutional details and state their consequences for accountability in commensurate terms. Principal-agent theory has become a widely used paradigm for analyzing public accountability because it provides a flexible framework for modeling innumerable variations in institutional arrangements and comparing their potential for inducing desirable behavior by agents.

Applications of principal-agent theory in the study of public accountability have become sufficiently common and widespread that it is hopeless to attempt a comprehensive review of the literature in so short a space, and this chapter will not attempt such a review. Instead it attempts to give a sense of the conceptual flavor of principal-agent analysis, as well as insights gleaned from two of the original and still most common substantive areas of its application in political science – control of politicians through electoral institutions, and control of bureaucracies by legislative, executive, and/or judicial actors.

PRINCIPAL-AGENT THEORY: INTRODUCTION

Analysis and evaluation of public accountability requires a specification of who is (or is supposed to be) accountable to whom. This is a core ingredient of principal-agent theory. In principal-agent models, some actor (or group of actors) called an agent undertakes an action on behalf of another actor (or group of actors) called a principal. The principal, for its part, can make decisions that affect the incentives of the agent to take any of its various possible actions. This process of structuring incentives for the agent is the central focus of principal-agent theory. The decisions made by the principal that structure the agent’s incentives to take various actions constitute a contract, in the language of principal-agent theory, and principal-agent theory is often taken as a specific area of contract theory more generally (Bolton and Dewatripont 2004).

Perhaps the most elemental point about principal-agent theory is that it is not in fact a single overarching theory with a specific set of assumptions or conclusions. Principal-agent theory is more accurately described as a family of formal models addressing related concerns with similar styles of analysis. It is not much of a stretch to suppose that for any given actors labeled “principal” and “agent,” and any pattern of interaction between the two, a principal-agent model can be written down with that pattern as an equilibrium outcome – and modelers might consider it a parlor game of sorts to do so. Given that, one must be wary of claims to “test” principal-agent theory empirically in any broad sense. By the same token, it is always possible to defend any status quo interaction between a “principal” and an “agent” as reflecting the greatest degree of accountability to which the agent can be held by the principal, given various informational asymmetries and commitment problems.

Specifying a member of the principal-agent family of models requires specifying (1) what the agent(s) can do and how this affects the principal(s), (2) what the principal(s) can do and how this affects the agent(s), and (3) who the principal(s) and the agent(s) are. In other words, principal-agent models specify a set of actors, the possible actions they can take, and how they evaluate the consequences of those actions. In this respect a principal-agent model is necessarily a game in the formal sense, and correspondingly principal-agent models in the contemporary literature are almost exclusively analyzed with the tools of noncooperative game theory.

While this chapter stresses the flexibility of principal-agent theory, this background suggests some of its limitations. First, it inherits the limitations of game theory as a tool for explaining behavior. Principal-agent analysis is also inappropriate for analyzing accountability of one actor to another when the second is unaware of its dependence on the first and/or can do literally nothing to affect the behavior of the first. Finally, its very flexibility is also a sort of limitation. Within its domain of application there does not seem to be any pattern of behavior that a principal-agent model cannot explain. While any particular model in this family may have empirical content, it is not clear what content the family as a whole has.

A fundamental distinction in types of principal-agent models is between those dealing with moral hazard or hidden actions, and those dealing with adverse selection or hidden information. In moral hazard problems, the agent takes one of several possible actions that affect the principal’s utility, the principal and agent have different preferences over the possible actions the agent can take, and the principal cannot directly control the agent’s action. However, to make the problem interesting, the principal observes some information affected by or correlated with the agent’s action, and administers a reward or punishment (e.g., a bonus payment, reelection) based on that information.

In adverse selection problems, the agent is privy to some information that the principal needs to make a decision in her own interest, but the agent prefers that the information be used differently. In pure adverse selection models, the principal may be able to specify costs the agent must incur to take various decisions, or even outright prohibit certain decisions by the agent or require a particular one. All of these types of direct control over the agent’s action are unavailable in pure moral hazard models. In adverse selection, the problem for the principal is that it does not know how to use this control; it does not know which action to direct the agent to take.

Key to the analysis of both moral hazard and adverse selection problems is incentive compatibility. If the principal is unrestricted in the types of contracts it can offer the agent – e.g., in moral hazard, bonus pay for results that are correlated with the principal’s preferred action – it is often possible for the principal to induce the agent to take the principal’s own most preferred action. Incentive compatibility simply means that the principal must make it worth the agent’s while to behave in this way.

In general, incentive compatibility imposes constraints on the principal: the principal must trade off the benefits of an improved decision from its own point of view against the costs of inducing the agent to take that particular decision. If the agent also has an option to exit the relationship with the principal (an individual rationality or participation constraint), or if the agent’s liability for poor performance is limited in some way, then incentive compatibility constraints imply that the principal will generally not induce the agent to pursue the principal’s own most preferred course of action, even if the principal is able to do so in theory. The difference between the action the principal prefers to induce, given incentive compatibility (and participation) constraints, and the action the principal would take itself if it could (and had the information and capabilities of the agent) gives rise to agency loss. Principals must trade off agency loss against the cost of satisfying incentive compatibility. When these two costs to the principal are in direct conflict, as they typically are in principal-agent models, the principal generally does not wish to eliminate one cost at the expense of the other. Agency loss is what occurs when the agent gets a bit of extra slack from the principal to pursue its own interests rather than the principal’s.
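To fix ideas, the canonical moral hazard program can be sketched as follows (the notation here is generic and chosen for illustration, not drawn from any particular model discussed in this essay). The principal chooses a reward schedule w(x) conditioned on the observable outcome x; the agent chooses an unobserved action a at cost c(a); and ū denotes the agent’s outside option:

\[
\max_{w(\cdot)} \ \mathbb{E}\big[\, V(x) - w(x) \mid a \,\big]
\]
subject to
\[
a \in \arg\max_{a'} \ \mathbb{E}\big[\, u(w(x)) \mid a' \,\big] - c(a') \qquad \text{(incentive compatibility)}
\]
\[
\mathbb{E}\big[\, u(w(x)) \mid a \,\big] - c(a) \ \ge \ \bar{u} \qquad \text{(individual rationality / participation)}
\]

Agency loss is the gap between the principal’s value at the solution of this constrained program and what the principal could obtain if the action a were directly contractible.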

Overall, then, principal-agent models suggest that agents need not in fact be perfect agents of their principals. Agency loss is not necessarily inevitable, in the sense that principals may (in some models) be able to eliminate it if they for some reason prefer to do so. But even in these models, principals typically would find it prohibitively costly to eliminate agency loss completely, so they choose to tolerate some. Therefore, the mere existence of agency loss does not imply that accountability of the agent is suboptimal or defective from the principal’s standpoint.

ELECTORAL AGENCY

The plenary power of the state raises serious threats to the welfare of citizens. What is to prevent the political officials holding the reins of state power from behaving opportunistically in policy making, elevating their own interests over those of voters? The promise of electoral democracy is that voters can hold political officials accountable for their policy choices, and thereby ensure a close connection between public will and public policy. Setting aside imperfections in voters themselves (their attentiveness, their rationality, etc.), are electoral institutions themselves up to this task? That is, even in a world of perfect voter attention and rationality, do electoral institutions limit the accountability of policy to voters’ interests?

This issue is exactly the one explored in principal-agent analyses of elections. Voters are the principal(s) and politicians are the agents. One of the classics of the genre is due to Ferejohn (1986), who developed a pure moral hazard model of electoral accountability that taps into key concerns about elections as instruments of control. The concern with political opportunism, and with mitigating agency loss through elections, is inherently dynamic, so Ferejohn’s model involves multiple periods (an infinite horizon, to be exact). In Ferejohn’s model an incumbent politician can exert effort on behalf of (homogeneous, in the baseline case) citizens, who prefer more effort to less. But the value of effort changes randomly from one period to the next. This value is observed by the incumbent politician, but not by the voters, who only observe their overall utility in each period. For politicians, effort is costly, but holding office has some value irrespective of how much effort the politician supplies. In equilibrium voters use a simple retrospective voting rule: they reelect the incumbent for another period if their period-t utility exceeds a specific threshold. But voters incur significant agency loss to politicians. If the value of effort to voters is too small, the incumbent shirks completely, knowing even very high effort will not be enough to put voters’ utility over their retrospective threshold. If the value of effort to voters is large enough, politicians exert some effort but only the minimal amount necessary to be reelected. This effort gets smaller when the value of effort gets larger, because the incumbent can work less hard and still satisfy the voters’ retrospective threshold.
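A minimal sketch of the incumbent’s problem in a Ferejohn-style model makes these comparative statics concrete (the notation is chosen here for illustration and is not Ferejohn’s own). Suppose the voter’s period-t utility is u_t = θ_t e_t, where e_t is the incumbent’s effort and θ_t its randomly drawn value, observed only by the incumbent; the voter reelects whenever u_t ≥ κ. The incumbent’s best response is then

\[
e_t(\theta_t) =
\begin{cases}
\kappa / \theta_t, & \text{if } c(\kappa/\theta_t) \le \delta V, \\
0, & \text{otherwise},
\end{cases}
\]

where c(·) is the cost of effort and δV the discounted value of retaining office. Both features emphasized above are visible directly: when θ_t is too small the incumbent shirks entirely, and when θ_t is large the effort κ/θ_t needed to clear the voter’s threshold falls.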

Of course, the magnitude of agency loss in some environment can only be judged relative to an alternative institutional arrangement. In an important paper, Persson, Roland, and Tabellini (1997) show that separation-of-powers institutions combined with checks and balances can mitigate the agency loss identified by Ferejohn. In essence, these authors argue, separation of powers when political actors have inherent preference conflicts (as between a legislature and an executive) can lead to better information for voters about the possible value of policy making, and therefore allows voters to better tailor their voting threshold to the situation at hand. Persson, Roland, and Tabellini also show that these beneficial effects of separation of powers depend on checks and balances – in their case, political actors must come together for the purpose of determining policy.

BUREAUCRATIC AGENCY

One of the earliest, and still most robust, principal-agent literatures in political science takes bureaucrats as agents of some constellation of political principals – most often Congress, the president or executive actors, and/or courts. The 1970s saw profound scholarly disillusionment in political science, law, and economics with the bureaucratic policymaking apparatus created mostly since the Great Depression. In political science, scholars such as Theodore Lowi and Hugh Heclo argued that the bureaucratic institutions created by ostensibly well-meaning Congresses had become ungovernable, and that Congress had reached a point of abdicating its responsibilities to govern by creating one new bureaucracy after another while constraining them with little more than a directive to “Go forth and do good.” Given this abdication, bureaucrats may or may not “do good,” but they do so at best according to their own ideals and conception of what that means – not according to any vision that is guided by, or accountable to, Congress. Whatever doubts one may express about the democratic pedigree or legitimacy of the U.S. Congress, it certainly occupies a seat closer to the governed than most bureaucratic policy makers do, and so this interpretation of bureaucratic architecture throws into doubt the public accountability of much of modern social and economic policy.

William Niskanen (1971) articulated similar concerns. Niskanen advanced several interrelated postulates in an economic model of bureaucratic service delivery: that bureaucrats seek to maximize their budgets; that bureaucrats know the value that legislators attach to the services they provide; that bureaucrats know more than those legislators about the cost of services they provide; and that bureaucrats essentially make take-it-or-leave-it budget offers to the legislature. The implication of this set of assumptions, Niskanen showed, is that bureaucrats extract rents from the legislature. Bureaucrats are willing to do so, owing to the assumption about their objective; they are able to do so, owing to the asymmetric information and proposal rights they enjoy (or are assumed to enjoy) over the legislature. Niskanen’s model gave rise to a school of thought explaining the growth in government spending as well as the ostensible inefficiency of bureaucratic service provision. The implications for public accountability are obvious and serious: service delivery by specific bureaucracies, and government budgets as a whole, are not meaningfully held in check by Congress, and the actual government at the ground level is not accountable in any serious way to anyone other than those making the decisions and spending the money.
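A stylized version of Niskanen’s argument runs as follows (the functional forms are chosen here purely for illustration). Suppose the legislature’s benefit from bureau output Q is B(Q) = aQ - Q²/2 and the bureau’s true cost is C(Q) = cQ. The efficient output equates marginal benefit and marginal cost, Q* = a - c. A budget-maximizing bureau that knows both B and C, and can make a take-it-or-leave-it budget proposal, instead offers the largest output the legislature will still accept:

\[
B(Q^N) = C(Q^N) \ \Longrightarrow \ Q^N = 2(a - c) = 2Q^*,
\]

so the bureau supplies twice the efficient level and the budget exhausts the entire surplus the legislature would otherwise have enjoyed.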

The first wave of self-aware applications of principal-agent theory in the analysis of bureaucratic policy making took issue with the interrelated concerns raised by Lowi, Heclo, Niskanen, and others. The common theme these scholars articulated is that the U.S. Congress is not nearly so helpless before the bureaucracy as the abdication thesis implied. The simple observation of Morris Fiorina (1979) was that it is difficult to square the assumption that Congress is good at minding its own interests (Mayhew 1974) with the argument that bureaucrats systematically undermine those interests. The reason is that, whatever the institutional arrangements by which bureaucrats interact with Congress, Congress designed them. It would be bizarre for Congress to actively design institutions that lead to the undermining of Congress’s own interests, and yet that is what the disillusioned scholarship on bureaucracy through the 1970s seemed to suggest.

PRINCIPAL-AGENCY THEORY AS RECEIVED FROM ECONOMICS: THE PRINCIPAL’S PROBLEM

Several articles written in the 1970s did much to define the theory of principal-agency in the field of economics. In the first of these (Spence & Zeckhauser 1971), the theory was framed as a question about insurance, but it turned out to be central to understanding the effects of information asymmetries in other settings.

THE TRADE-OFF IN RISK AND IN INCENTIVES

A risk-averse automobile driver would be willing to pay a premium for insurance from a risk-neutral insurance company. The risk of an accident is determined jointly by the driver’s behavior and by actions beyond her control. Because the driver is more risk-averse than the insurance company, the only efficient outcome would be full insurance for the driver at an appropriate premium. This would be possible, Spence & Zeckhauser (1971) point out, if the insurance company could monitor the driver’s behavior directly (for instance, by a device that records the automobile’s speed at all times). With no “hidden action,” the company could write a contract that fully insures the driver as long as she obeys the rules set out in the contract.

However, the information asymmetry between insurer and driver does not allow such a solution. Because there is no feasible way for the insurance company to monitor the driver’s behavior, full insurance will invite her to take risks that she would not take if she were uninsured. The unmonitored driver may well break the speed limit, talk on the cell phone, and run red lights (all at the same time!). This tendency to take actions that increase the insurance company’s risk is “moral hazard,” a recurring concern in principal-agency theory (PAT).

THE CANONICAL PRINCIPAL-AGENT MODEL

By 1980, economists working from the insurance model had defined the issues, concerns, and canonical results of PAT (Holmstrom 1979, Shavell 1979). To be defined as a principal-agency model, a model must have the following features, which I subsequently refer to as its core assumptions:

  1. Agent impact. The agent takes an action that determines (along with a risky variable) a payoff to the principal. For instance, an auto salesman’s monthly sales are determined in part by factors beyond his control, but in part by the energy, personality, and effort of the agent himself.
  2. Information asymmetry. The principal can readily observe the outcome but not the action of the agent. Monitoring of agent actions may be theoretically possible, but gathering complete information is regarded as prohibitively expensive. The owner of the car lot may not be able to tell whether the salesman’s bad sales in a given month are due to the economy, bad luck, or the agent’s own inadequate effort. The employer’s problem is the same as the insurance company’s in the previous example. Given the information asymmetry, can incentives substitute for monitoring?
  3. Asymmetry in preferences. The agent’s preferences are assumed to differ from the principal’s. For example, the actions that benefit the principal’s well-being may be costly to the agent, resulting in a preference for shirking. In addition, the agent is assumed to be more risk-averse than the principal.
  4. Initiative that lies with a unified principal. The principal acts rationally based on a coherent set of preferences, and is able to move first by offering a contract.
  5. Backward induction based on common knowledge. Principal and agent share knowledge about the structure of the game, effort costs, probability distribution of outcomes, and other parameters. Just as important, they share common knowledge of the agent’s rationality; both know that the agent will prefer any incentive package with an expected utility slightly more than the agent’s opportunity cost. This leads to backward induction by the principal. The principal can infer the agent’s best response function from known parameters and use backward induction to identify the best possible outcome, subject to that function.
  6. Ultimatum bargaining. The principal is presumed to be able to impose the best possible solution from the agent’s correctly inferred best response function. Or as Sappington (1991) says, “The principal is endowed with all of the bargaining power in this simple setting, and thus can make a ‘take-it-or-leave-it’ offer to the agent” (p. 47).

These six core assumptions lead to two primary results:

  1. Outcome-based incentives. The principal chooses to use outcome-based incentives to partly overcome the problem of moral hazard, despite operating at an informational disadvantage relative to the agent. This necessarily transfers risk to the risk-averse agent.
  2. Efficiency tradeoffs. Moral hazard limits both the benefits to the principal and the efficiency of the transaction as a whole. The risk-averse salesman will demand a higher average compensation package to compensate for his extra risk, and the owner’s profits will therefore suffer. Just as in the insurance problem, efficiency in incentives must be traded off against efficiency in risk-bearing, and the second-best solution (i.e., the best trade-off) must involve paying the risk-averse agent a risky, outcome-based bonus (Shavell 1979).
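The logic of these two results can be seen in a small numerical sketch: a textbook binary-effort moral hazard problem rather than any specific model from the articles cited, with all parameter values invented for illustration.

    # A minimal second-best contracting sketch: a risk-neutral principal, a
    # risk-averse agent with utility sqrt(wage) - effort cost, and binary effort.
    # All parameter values are illustrative assumptions.

    p_high, p_low = 0.8, 0.4    # prob. of the good outcome under high / low effort
    x_high, x_low = 10.0, 0.0   # value of each outcome to the principal
    cost_high = 0.5             # agent's disutility of high effort (low effort costs 0)
    u_bar = 1.0                 # agent's reservation utility (outside option)

    # To induce high effort, the principal pays a state-contingent wage
    # (w_good, w_bad). With the incentive-compatibility (IC) and participation
    # (IR) constraints binding, solve in utility units u = sqrt(w):
    #   IC: (p_high - p_low) * (u_good - u_bad) = cost_high
    #   IR: p_high * u_good + (1 - p_high) * u_bad = u_bar + cost_high
    spread = cost_high / (p_high - p_low)
    u_good = u_bar + cost_high + (1 - p_high) * spread
    u_bad = u_bar + cost_high - p_high * spread
    w_good, w_bad = u_good ** 2, u_bad ** 2

    expected_wage = p_high * w_good + (1 - p_high) * w_bad
    first_best_wage = (u_bar + cost_high) ** 2   # flat wage if effort were observable
    risk_premium = expected_wage - first_best_wage

    profit_high = p_high * x_high + (1 - p_high) * x_low - expected_wage
    profit_low = p_low * x_high + (1 - p_low) * x_low - u_bar ** 2  # flat wage, low effort

    print(f"bonus contract: w_good = {w_good:.3f}, w_bad = {w_bad:.3f}")
    print(f"risk premium paid to the agent: {risk_premium:.3f}")
    print(f"principal's profit inducing high effort: {profit_high:.3f}")
    print(f"principal's profit tolerating low effort: {profit_low:.3f}")

The outcome-based bonus is what makes high effort incentive compatible (result 1), and the risk premium is the efficiency cost of loading that risk onto the risk-averse agent (result 2).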

CONGRESSIONAL CONTROL THROUGH INCENTIVES

Prior to Weingast’s work, the most common perspective on congressional oversight was that it was ineffectual. The evidence for this was the small fraction of resources going into congressional oversight, the haphazard nature of the oversight activities that did take place, the lack of expertise by members of Congress and their staffs, and the disregard of bureaucrats for members of Congress.

Weingast used PAT to provide a different interpretation of these same empirical facts. The evidence documented lack of congressional monitoring—not an absence of control. If Congress does not spend much time monitoring and reprimanding bureaucrats, then that may mean the bureaucrats are adequately motivated by incentives to act in Congress’s best interests—just as a hefty commission can keep a sales agent performing well with virtually no intervention by the car lot owner.

In the first article, “Bureaucratic Discretion or Congressional Control,” Weingast & Moran (1983) claim that congressional committees “possess sufficient rewards and sanctions to create an incentive system for agencies” (p. 768). Congressional committees may be totally ignorant of bureaucratic behavior because bureaucratic behavior is irrelevant to Congress; Congress doesn’t care how the bureaucrats sell cars, it just wants the cars to be sold. In effect, members of Congress shape bureaucratic behavior without monitoring it, by offering implicit contracts based on the observable effects of that behavior.

The threat of ex post sanctions creates ex ante incentives for the bureau to serve a congressional clientele. “This has a striking implication: the more effective the incentive system, the less often we should observe sanctions in the form of congressional attention through hearings and investigations. Put another way, direct and continuous monitoring of inputs rather than of results is an inefficient mechanism by which a principal constrains the actions of his agent” (Weingast & Moran 1983, p. 769).

In the 1984 paper, Weingast is precise about the nature of the incentives available to congressional committees: bureaucratic competition for (ultimately) limited budgetary appropriations, congressional influence over the appointment of top bureaucratic officials, and the threat of ex post sanctions in the form of congressional hearings and investigations (Weingast 1984).

If Congress does not hold hearings, how does it measure outcomes? McCubbins & Schwartz (1984) solve this part of the puzzle. Members of Congress are interested only in the bottom line—whether their constituents are getting what they want from bureaucrats. Constituents are normally uninformed about the bureaucrats’ behaviors, but they do know whether they are getting the services they want. When bureaucrats fail to supply those services, constituents “pull the fire alarm”—complain loudly to their member of Congress. Legislators are able to economize on oversight resources by ignoring all those bureaucratic services about which they hear no fire alarms (McCubbins & Schwartz 1984).

For principal-agency theorists, bureaucratic independence and congressional “dominance” are observationally equivalent as far as monitoring and sanctions are concerned. We should see little of either if bureaucrats are independent; but we should also see little if bureaucratic behavior is shaped by congressionally imposed incentives. Therefore, it is necessary to look beyond monitoring and sanctions to bureaucratic outputs, to determine if they can be shown to vary with congressional preferences. In the case of the Securities and Exchange Commission, Weingast argues that its imposition of deregulation was in response to congressional representation of the interests of large institutional investors. With respect to the Federal Trade Commission (FTC), Weingast & Moran (1983) show more convincingly that the ideological preferences of the Senate and the subcommittee chairman (as measured by Americans for Democratic Action scores) were significantly associated with the FTC’s emphasis over time on consumer-oriented credit regulation (p. 789). In other words, a more conservative Senate led to a less consumer-oriented FTC.

Although neither of these empirical forays could be regarded as the final word on the subject, Weingast’s articles constitute an enormous contribution to the study of congressional oversight and public bureaucracy by exemplifying quantitative research directed at precise questions (e.g., what are the political and other determinants of bureaucratic outputs?) derived from rigorous theory. Almost single-handedly, these articles raised the bar for academic research in the area of bureaucracy. Weingast (1984) offers the “congressional dominance” hypothesis: “The mechanisms evolved by Congress over the past one hundred years comprise an ingenious system for control of agencies that involves little direct congressional monitoring of decisions but which nonetheless results in policies desired by Congress” (p. 148).

Weingast specifically acknowledges that congressional oversight is not perfect. He cites Holmstrom (1979) on the inevitability of agency costs, which in this context he interprets as equivalent to “bureaucratic discretion.” However, he notes that the existence of bureaucratic discretion “does not imply that the system fails to serve congressional interests. If creating too many agencies implied that congressmen lost control over the policy decisions valuable for reelection, then they would not do so” (p. 154). The ultimate indicator of whether members of Congress are getting what they need out of bureaucracies is the incumbent reelection rate. The high rate of incumbent reelection (over 90% in the House) is the result of many factors, but it is consistent with the conclusion that legislators are able to shape bureaucratic outputs to their purposes—hence they must control bureaucrats.

MULTIPLE PRINCIPALS

But does the assumption of a unified principal acting on coherent preferences (Assumption 4) really apply to oversight? What difference would it make if it didn’t? Moe (1984) expresses several doubts, chiefly the problem of multiple principals.

As Moe (1984, 1987a) observes, separation of powers and highly competitive partisan politics guarantee that bureaucratic agencies will be in a contentious environment of warring principals. Interest groups are pitted against each other, both parties strive for majority status, oversight committees compete with appropriations committees, and Congress jealously vies with the President. “In fact, politicians impose constraints on one another in a competitive effort to see to it that their own interests are protected from the intrusions of politician-opponents” (Moe 1984, p. 769). And in the context of warring principals, the ability of bureaucratic agents to use information asymmetries to their own advantage is enhanced.

In an obvious example, congressional committees must compete with the White House for control of bureaucracies. Moe (1985) reports a significant influence of the presidency on the decisions of the National Labor Relations Board (NLRB). Moe (1987b) gives a detailed account of the fine-tuned interaction of interest groups, president, and Congress as regards the NLRB. Over time, the institutional compromise has been for interest groups, and Congress, to limit themselves to a competition over appointments of bipartisan commissioners to the NLRB. Because neither labor nor business could dominate, given the partisan competitiveness and separation of powers, a procedural deal was struck. Business and labor each use their influence to veto commissioners who strongly favor the other side. The result of this stalemate is that “the two sides have a common interest in ensuring that labor law is in the hands of experienced, knowledgeable people who understand the issues” (p. 261). The competition between “principals” inevitably results in the appointment of relatively centrist, pragmatic commissioners dedicated to professionalism of the Board and its staff.

 

PRINCIPAL-AGENCY THEORY REEXAMINED

In the last half of this paper, I examine explicit attempts to test and/or modify basic assumptions of PAT, especially the assumptions of common knowledge and ultimatum bargaining. First, I argue that PAT has been structured as a form of ultimatum game, and that this provides a clue to the empirical limitations of principal-agency models. Second, I argue that an awareness of credible commitment problems and moral hazard calls for paradoxical reformulations of PAT.

FROM ULTIMATUM BARGAINING TO BILATERAL BARGAINING

Principal-agency models are based on a set of assumptions that make the principal-agency relationship a special case of an ultimatum game. The ultimatum game is a simple one in which Player 1 decides how to divide $10 between herself and Player 2, subject only to Player 2’s approval. Player 2 can make no counterproposals, and so once Player 1 has made a decision, Player 2 can choose only between accepting Player 1’s proposal and vetoing—which results in zero for both players.

The assumption of common knowledge of rationality allows Player 1 to “know” how Player 2 will respond: Player 2 should always accept any proposal that gives him more than zero. Knowing this, Player 1 can choose an allocation that gives herself virtually all of the $10. This outcome is the only subgame-perfect equilibrium of the ultimatum game.

The subgame-perfect equilibrium prediction of the ultimatum game is possibly the most famous falsified prediction in game theory. It is virtually never observed (Camerer 2003, pp. 48-59). In dozens of experiments in many cultural and economic settings, Player 1 almost never offers anything close to the minimum share of the prize, and when she does, Player 2 is likely to reject the offer, resulting in a zero payoff for both. The typical Player 1 will choose an allocation that gives Player 2 a significant share of the prize, often approaching or equaling an equal split.

The reason for this lies squarely with violations of common knowledge of rationality. Contrary to that assumption, Player 1 is not certain that Player 2 will choose a dime over nothing. Each Player 1 believes that Player 2 could be vindictive—and the impressive number of rejections of unequal splits validates that belief. Without that common knowledge, various radical reformulations of noncooperative game theory are required (see Camerer 2003, pp. 110-12).
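A small numerical sketch makes the point; the rejection behavior assumed below is a stylized stand-in for vindictive play, not an estimate from the experimental literature.

    # Ultimatum game sketch: Player 1 proposes a split of $10 in 10-cent steps.
    # Case 1: a fully "rational" Player 2 accepts any positive offer.
    # Case 2: Player 2 rejects low offers with a probability that rises as the
    #         offer shrinks (an illustrative assumption).

    PIE = 10.0
    offers = [round(0.1 * k, 1) for k in range(1, 101)]  # offers to Player 2

    def accept_prob_rational(offer):
        return 1.0  # anything positive is accepted

    def accept_prob_vindictive(offer):
        # Below an equal split, acceptance probability falls linearly toward zero.
        return 1.0 if offer >= 5.0 else offer / 5.0

    def best_offer(accept_prob):
        # Player 1 maximizes expected payoff: (pie - offer) times the acceptance chance.
        return max(offers, key=lambda o: (PIE - o) * accept_prob(o))

    print("best offer against a rational responder:  ", best_offer(accept_prob_rational))
    print("best offer against a vindictive responder:", best_offer(accept_prob_vindictive))

Against the textbook responder the proposer keeps virtually everything; once rejections are possible, the payoff-maximizing offer in this sketch is an even split, much closer to what experimental subjects actually do.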

The similarities between the ultimatum game and the principal-agent problem are striking. The principal, like Player 1 in the ultimatum game, moves first, with knowledge of the agent’s effort costs. As Sappington (1991) says, “The principal is endowed with all of the bargaining power in this simple setting, and thus can make a ‘take-it-or-leave-it’ offer to the agent” (p. 47). The common-knowledge assumption once again does a lot of the work. It implies that the principal can offer an incentive package that gives the agent nothing more than his opportunity costs, with perfect confidence that it will be accepted. This ensures that the principal can secure for herself the lion’s share of the resources, subject only to the agent’s individual rationality (participation) constraint.

PRINCIPAL-AGENCY THEORY AND COOPERATION

Experimental evidence of a more meaningful negotiation process than that presumed by the ultimatum game is not necessarily surprising. After all, the subgame-perfect equilibrium of the principal-agent game is inevitably Pareto-suboptimal—meaning that it leaves money on the table. In fact, the more profound the information asymmetries, and the more risk-averse the agent, the more money is left on the table. Reaching a more efficient outcome is sufficiently motivating to generate a meaningful discussion between principal and agent.

Furthermore, the conditions for effective negotiation are often met in the case of political actors. For instance, members of congressional committees and bureaucrats are generally in their roles for long periods of time. It is reasonable to assume that they would be at least as open to mutually beneficial negotiation as would two undergraduate experimental subjects who communicate anonymously by computer.

Cooperation is often modeled by means of a repeated prisoners’ dilemma game. Radner (1985) noticed that the principal-agency game (like a prisoners’ dilemma) has a unique, suboptimal outcome as a one-shot game, and asked what would happen if the game were repeated. The conclusion, consistent with the Folk Theorem, is that an infinite number of outcomes, including efficient outcomes, are sustainable if the shadow of the future is sufficiently large.

Cooperation is easy to visualize with a trigger strategy. The principal adopts the strategy, “Pay the agent the flat wage that he prefers as long as there is no evidence of shirking; if there is such evidence, always insist on a risky output-based compensation scheme.” The agent responds with, “Always supply a generous high effort with a flat wage; but give the self-interested minimum to any risky compensation plan.”
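The standard condition behind this logic can be written in generic repeated-game notation (not Radner’s own): the trigger strategies sustain cooperation whenever

\[
\frac{c}{1-\delta} \ \ge \ d + \frac{\delta\, p}{1-\delta}
\quad \Longleftrightarrow \quad
\delta \ \ge \ \frac{d - c}{d - p},
\]

where c is a player’s per-period payoff under the cooperative arrangement, d the one-period payoff from deviating, p the per-period payoff once punishment begins, and δ the discount factor that captures the shadow of the future.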

Scholz (1991) is one of the few political scientists to take seriously the possibility of cooperation in a principal-agent relationship. In the relationship that Scholz studies, the principal is the occupational safety regulatory agency, and the agent is the regulated firm. Consistent with PAT, Scholz argues that the one-shot game between the two results in the “deterrence equilibrium,” in which the firm offers minimal compliance and the agency provides maximal, inflexible enforcement. This is a Nash equilibrium because each side’s choice is the best response to the other’s choice: The agency doesn’t want to provide any flexibility to a firm that is exhibiting minimal compliance, and the firm doesn’t want to be more cooperative (responding to hazards not clearly defined in the laws, making hidden information available, etc.) with an agency that is showing every disposition to use all such concessions punitively.

However, as Scholz argues, both could be better off if both were more cooperative. The firm could make an honest effort to comply in exchange for an agency that was willing to overlook minor technical violations. The firm would find itself in less legal trouble, and the agency could report more real improvement in safety conditions.
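Scholz’s one-shot logic has the structure of a prisoners’ dilemma. The payoffs below are invented purely for illustration, with each cell listing (firm, agency) payoffs:

\[
\begin{array}{c|cc}
 & \text{flexible enforcement} & \text{strict enforcement} \\
\hline
\text{good-faith compliance} & (3,\,3) & (1,\,4) \\
\text{minimal compliance} & (4,\,1) & (2,\,2)
\end{array}
\]

Here (minimal compliance, strict enforcement) is the unique Nash equilibrium of the one-shot game, even though both sides prefer (good-faith compliance, flexible enforcement); repetition is what can make the better outcome sustainable.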

PRINCIPAL-AGENCY THEORY REFORMULATED: CREDIBLE COMMITMENT PROBLEMS

One universally applied aspect of PAT is that the principal’s problem consists of inducing the agent to act in the principal’s interests. Clearly, the problem of inducing the agent not to act in the principal’s interests is not “the principal’s problem” as conventionally conceived. Yet, in credible commitment models, the principal’s self-interest is the problem, and the solution is to ensure that the agent is unresponsive to those interests.

CREDIBLE COMMITMENT: THE PRINCIPAL’S OTHER PROBLEM

Writing two years before the Cuban missile crisis, Schelling (1960) was concerned with making nuclear deterrence believable. In a simple game-tree version of the problem, the Soviet Union can either invade Europe or not. In response to an invasion, the U.S. can either use its nuclear weapons or not. The problem is that the former course of action would result in the worst case for the United States. For that reason, the Soviet Union can use backward induction to conclude that self-interest would prevent the United States from using its nuclear deterrent. The subgame-perfect equilibrium of the game is for the Soviet Union to invade and for the U.S. to acquiesce. Nuclear deterrence, in the hands of any reasonable individual, is a hollow threat.
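The game can be summarized with ordinal payoffs invented for illustration (higher is better for each side):

\[
\begin{array}{lcc}
\text{path of play} & u_{\text{US}} & u_{\text{SU}} \\
\hline
\text{no invasion} & 3 & 2 \\
\text{invasion, US acquiesces} & 2 & 3 \\
\text{invasion, US retaliates} & 1 & 1
\end{array}
\]

Working backward, the United States acquiesces after an invasion (2 > 1), so the Soviet Union invades (3 > 2). The deterrent threat is hollow precisely because the principal’s own preferences make carrying it out irrational.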

The solution that Schelling proposed was to hire an agent—but not an agent who would be easily recognized as such by advocates of the economic theory of agency. Schelling’s agent must have preferences entirely different from those of the principal—in particular, a willingness to use nuclear weapons and thereby engender nuclear holocaust. But finding such a person is only half the solution. The agent must be insulated from pressure from the principal, so that the principal’s last-minute (postinvasion) attempts to prevent the agent from wreaking havoc are powerless. The useful agent must be out of the principal’s control, and prefer mutual annihilation to acquiescence.

A person with the right preferences was in fact the Air Force Chief of Staff during the crisis: Curtis “Boom-boom” LeMay. During the Cuban missile crisis, his was the voice that was most insistent on an air attack, presumably ending in a nuclear exchange.

After the crisis, when Premier Khrushchev had backed down and the missiles were returning to the Soviet Union, LeMay argued for an air strike against Cuba anyhow. While President Kennedy was experiencing enormous relief at the termination of the immediate threat of nuclear annihilation, LeMay was demanding exactly the nuclear exchange that Kennedy had been working so hard to avoid.

 

CONCLUSION

Principal-agent theory has proven to be a flexible and useful approach for interpreting the effects of institutional arrangements on accountability of public decision makers and public policy. Continuing development of this approach is the subject of a rich and ongoing literature. Particularly active areas at present include intrinsic motivation of agents (Prendergast 2007; Besley 2007; Gailmard and Patty 2007); the power of legal precedent in multi-tiered court systems (Carrubba and Clark n.d.); and delegation in parliamentary systems (Huber and Shipan 2002; Dewan and Myatt 2010).

The “theory” is better understood as a family of models with a related perspective than as a single encompassing theory with a specific set of assumptions and conclusions. It is hard to imagine how the theory as a whole, rather than a specific model, could be falsified. Principal-agent theorists in political science have been reasonably adept at postulating configurations of information asymmetries, incentive problems, and contracting limitations to rationalize, as the best that a principal can expect to attain under the circumstances, a wide variety of seemingly puzzling and pathological institutions and behaviors. By the same token, when used in this way, principal-agent theory does not help us to be more demanding of our public officials. Instead, this approach asks how it could be that these pathologies might nevertheless result from principals doing the best job they can of holding their agents accountable.

 

REFERENCES

Achen, Christopher and Larry Bartels (2002). Blind Retrospection: Electoral Responses to Droughts, Flu, and Shark Attacks. Presented at the Annual Meeting of the American Political Science Association, Boston, MA.

Ashworth, Scott and Ethan Bueno de Mesquita (2009). Electoral Selection, Strategic Challenger Entry, and the Incumbency Advantage. Journal of Politics 70: 1006-1025.

Austen-Smith, David and Jeffrey Banks (1998). Electoral accountability and incumbency. In Models of Strategic Choice in Politics (P. Ordeshook, Ed.). Ann Arbor, MI: University of Michigan Press.

Banks, Jeffrey and Rangarajan Sundaram (1993). Moral hazard and adverse selection in a model of repeated elections. In Political Economy: Institutions, Information, Competition, and Representation (W. Barnett, et al., Eds.). New York: Cambridge University Press.

Banks, Jeffrey and Rangarajan Sundaram (1998). Optimal Retention in Agency Problems. Journal of Economic Theory 82: 293-323.

Banks, Jeffrey and Barry Weingast (1992). The Political Control of Bureaucracies under Asymmetric Information. American Journal of Political Science 36: 509-524.

Barro, Robert (1973). The Control of Politicians: An Economic Model. Public Choice 14: 19-42.

Bendor, Jonathan and Adam Meirowitz (2004). Spatial Models of Delegation. American Political Science Review 98: 293-310.

Bendor, Jonathan, Serge Taylor, and Roland Van Gaalen (1987). Politicians, Bureaucrats, and Asymmetric Information. American Journal of Political Science 31: 796-828.

Bernheim, Douglas and Michael Whinston (1986). Common Agency. Econometrica 54: 923-942.

Besley, Timothy and Anne Case (2003). Political Institutions and Policy Choices: Evidence from the United States. Journal of Economic Literature 41: 7-73.

Besley, Timothy (2007). Principled Agents? The Political Economy of Good Government. New York: Oxford University Press.

Bolton, Patrick and Mathias Dewatripont (2004). Contract Theory. Cambridge, MA: MIT Press.

Bueno de Mesquita, Ethan and Matthew Stephenson (2007). Regulatory Quality with Imperfect Oversight. American Political Science Review 101: 605-620.

Calvert, Randall, Mathew McCubbins, and Barry Weingast (1989). A Theory of Political Control and Agency Discretion. American Journal of Political Science 33: 588-611.

Canes-Wrone, Brandice, Michael Herron, and Kenneth Shotts (2001). Leadership and Pandering: A Theory of Executive Policymaking. American Journal of Political Science 45: 532-550.

Carpenter, Daniel (2001). The Forging of Bureaucratic Autonomy. Princeton, NJ: Princeton University Press.

Carrubba, Clifford and Tom Clark (n.d.). Rule Creation in a Political Hierarchy. American Political Science Review, forthcoming.

DeFigueiredo, Rui J.P., Pablo Spiller, and Santiago Urbiztondo (1999). An Informational Perspective on Administrative Procedures. Journal of Law, Economics, and Organization 15: 283-305.

Dessein, Wouter (2002). Authority and Communication in Organizations. Review of Economic Studies 69: 811-838.

Dewan, Torun and David Myatt (2010). The Declining Talent Pool of Government. American Journal of Political Science 54: 267-286.

Dixit, Avinash, Gene Grossman, and Elhanan Helpman (1997). Common Agency and Coordination: General Theory and Application to Government Policy Making. Journal of Political Economy 105: 752-769.

Epstein, David and Sharyn O’Halloran (1994). Administrative Procedures, Information, and Agency Discretion: Slack vs. Flexibility. American Journal of Political Science 38: 697-722.

Epstein, David and Sharyn O’Halloran (1999). Delegating Powers. New York: Cambridge University Press.

Fearon, James (1999). Electoral Accountability and the Control of Politicians. In Adam Przeworski, Bernard Manin, and Susan Stokes (eds.), Democracy, Accountability, and Representation. New York: Cambridge University Press.

Ferejohn, John (1986). Incumbent Performance and Electoral Control. Public Choice 50: 5-25.

Fiorina, Morris (1979). Control of the Bureaucracy: A Mismatch of Incentives and Capabilities. In The Presidency and the Congress: A Shifting Balance of Powers? William Livingston, Lawrence Dodd and Richard Schott, eds. Austin, TX: Lyndon B. Johnson School of Public Affairs, Lyndon Baines Johnson Library.

Fox, Justin and Kenneth Shotts (2009). Delegates or Trustees? A Theory of Political Accountability. Journal of Politics 71: 1225-1237.

Gailmard, Sean (2002). Expertise, Subversion, and Bureaucratic Discretion. Journal of Law, Economics, and Organization 18: 536-555.

Gailmard, Sean (2009). Multiple Principals and Oversight of Bureaucratic Policy Making. Journal of Theoretical Politics 21: 161-186.

Gailmard, Sean and John Patty (2007). Slackers and Zealots: Civil Service, Bureaucratic Discretion, and Policy Expertise. American Journal of Political Science 51: 873-889.

Gailmard, Sean and John Patty (2012). Formal Models of Bureaucracy. Annual Review of Political Science 15: 353-377.

Gailmard, Sean and John Patty (2013). Learning While Governing: Expertise and Accountability in the Executive Branch. Chicago: University of Chicago Press.

Healy, Andrew and Neil Malhotra (2010). Random Events, Economic Losses, and Retrospective Voting: Implications for Democratic Competence. Quarterly Journal of Political Science 5: 193-208.

Heclo, Hugh (1977). A Government of Strangers: Executive Politics in Washington. Washington, DC: Brookings Institution Press.

Holmstrom, Bengt (1984). On the Theory of Delegation. In Marcel Boyer and Richard Kihlstrom, editors, Bayesian Models in Economic Theory. New York: North Holland.

Holmstrom, Bengt and Paul Milgrom (1991). Multitask Principal-Agent Analyses. Journal of Law, Economics, and Organization 7: 24-52.

Horn, Murray (1995). The Political Economy of Public Administration. New York: Cambridge University Press.

Horn, Murray and Kenneth Shepsle (1989). Administrative Process and Organizational Form as Legislative Responses to Agency Costs. Virginia Law Review 75: 499-508.

Huber, John and Charles Shipan (2002). Deliberate Discretion? The Institutional Foundations of Bureaucratic Autonomy. New York: Cambridge University Press.

Kiewiet, D. Roderick and Mathew McCubbins (1991). The Logic of Delegation. Chicago: University of Chicago Press.

Laffont, Jean-Jacques and Jean Tirole (1993). A Theory of Incentives in Procurement and Regulation. Cambridge, MA: MIT Press.

Laffont, Jean-Jacques and David Martimort (2001). The Theory of Incentives: The Principal-Agent Model. Princeton, NJ: Princeton University Press.

Lenz, Gabriel (2012). Follow the Leader: How Voters Respond to Politicians’ Performance and Policies. Chicago: University of Chicago Press.

Lowi, Theodore (1979). The End of Liberalism (2nd ed.). New York: Norton.

 
