Narratives and Expert Information in Agenda-Setting: Experimental Evidence on State Legislator Engagement with Artificial Intelligence Policy

by Daniel S. Schiff & Kaylyn Jackson Schiff

Previous scholarship has investigated how policy entrepreneurs use narratives and expert information to influence policy agendas. In particular, narratives can be powerful tools for communicating policy problems and solutions, while expert information can help clarify complicated subject matters and increase confidence in policy proposals. This raises a question: can policy entrepreneurs effectively use narratives to influence policymakers even in complex, technical policy domains where we might expect technical details to matter most?

We explore this question in the context of artificial intelligence (AI) policy – an emerging policy domain that is highly technical and multi-faceted, with social, ethical, and economic implications. Because the agenda for AI policy is still in development, it presents a ripe case for understanding agenda setting and policy influence efforts. In partnership with a leading AI think tank, The Future Society (TFS), we conducted a field experiment on state legislators across the United States. Emails about AI policy were sent to 7,355 legislative offices. Legislators were randomly assigned to receive an email containing either a narrative strategy, an expertise strategy, or generic, neutral information. We also considered two issue frames: ethical and economic/competition (see Figure 1).

Legislators were presented with either a fact sheet or a story, and invited to register for and attend a webinar about AI for state legislators, which we hosted in December 2021. For example, legislators (or their staffers) might read an email message about an individual falsely arrested due to facial recognition, or about a geopolitical contest between the US and China.

We measured link clicks and webinar registration and attendance as proxies for policymaker engagement. Using these data on engagement with the emails, we tested the following hypotheses:

  • Policy Entrepreneur Effectiveness Hypothesis: The provision of narratives or expertise by policy entrepreneurs will increase policymaker attention to and engagement with the policy issue at hand.
  • Dominance of Narratives Hypothesis: The provision of narratives will induce greater policymaker engagement than the provision of expertise.
  • Dominance of Expertise Hypothesis: The provision of expertise will induce greater policymaker engagement than the provision of narratives.
  • Strategies by Issue Framing Hypothesis: Policymakers will respond with greater engagement to narratives when they are provided issue frames emphasizing the ethical and social dimensions of AI as compared to issue frames emphasizing the economic and technological competitiveness dimensions of AI.
  • Prior Experience Hypothesis: Compared to legislators in states with greater prior experience in AI policymaking, legislators in states with less experience with AI will respond with greater engagement to the expertise treatment.

Consistent with the Policy Entrepreneur Effectiveness Hypothesis, we found that narrative strategies and expert information increased engagement with the emails (see Figure 3). Interestingly, comparing the narrative and expertise treatments, we found no statistically significant differences in their effects on engagement, suggesting that narratives are as effective as expert information even for this complex policy domain. 

Figure 3. Both expert information and narratives engaged state legislators as compared to a more generic ‘control’ message, with increased engagement of 30 or more percentage points.

Contrary to our expectations, framing the issue to emphasize ethical or economic dimensions of AI also did not affect engagement, suggesting that the use of strategies like narratives can be effective even when AI policy is framed in very different ways. We had hypothesized that narratives might be especially effective when an ethics-focused policy frame of AI is promoted, but it appears narratives are just as effective when geopolitical and strategic dimensions of AI policy are emphasized. 

Finally, legislators with no prior experience with AI policy were more likely to engage with the emails than legislators who had considered or passed AI policy in the past, and state legislatures with higher capacity (e.g., more staff, longer sessions) were far more likely to engage with the email messages, an important note for those seeking to reach out to policymakers.

Our findings show that narratives can influence policymakers as much as expertise, even in complicated policy domains like AI. It is worth noting that our data were collected in 2021, before the introduction of large language models (LLMs) like OpenAI’s ChatGPT, which gained unprecedented public attention. This development has surely increased the salience of AI policy, and we suggest that future research take it into account. Nevertheless, our work makes important contributions by extending the Narrative Policy Framework (NPF) to new contexts and investigating narratives using field experiments, a novel research approach in the field.

You can read the original article in Policy Studies Journal at

Schiff, Daniel S. and Kaylyn Jackson Schiff. 2023. “Narratives and Expert Information in Agenda-Setting: Experimental Evidence on State Legislator Engagement With Artificial Intelligence Policy.” Policy Studies Journal 51(4): 817–842. https://doi.org/10.1111/psj.12511.

About the Authors

Dr. Daniel Schiff is an Assistant Professor of Technology Policy at Purdue University’s Department of Political Science and the Co-Director of GRAIL, the Governance and Responsible AI Lab. He studies the formal and informal governance of AI through policy and industry, as well as AI’s social and ethical implications in domains like education, manufacturing, finance, and criminal justice.

Follow him on X/Twitter: @Dan_Schiff (@purduepolsci and @Purdue)

Kaylyn Jackson Schiff is an Assistant Professor in the Department of Political Science at Purdue University and Co-Director of the Governance and Responsible AI Lab (GRAIL). Her research addresses the impacts of emerging technologies on government and society. She studies how technological developments are changing citizen-government contact, and she explores public opinion on artificial intelligence in government.

Follow her on X/Twitter: @kaylynjackson
