This research note uses data from a simulation experiment to illustrate the very real effects of monolithic views of technological potential on decision-making within the Homeland Security and Emergency Management (HSEM) field. Specifically, a population of national security decision-makers from across the United States participated in an experimental study that examined their responses to encountering different kinds of AI agency in a crisis situation. The results show a wariness of overstepping and an unwillingness to act assertively when AI tools are observed shaping key situational developments, effects not apparent when AI is either absent or used as a limited aid to human analysis. These effects are mediated by levels of respondent training. Of great concern, however, these restraining effects disappear, and the impact of education in driving professionals toward prudent outcomes is minimized, among individuals who profess to see AI as a fully viable replacement for their professional practice. These findings constitute proof of a “Great Machine” problem within professional HSEM practice: a willingness to admit grand, singular assumptions about emerging technologies into operational decision-making encourages ignorance of technological nuance. The result is a serious challenge for HSEM practice, one that requires more sophisticated solutions than simply raising awareness of AI.
Keywords: artificial intelligence; cybersecurity; experiments; decision-making.