Special session on AI-driven Decision Making (AIDM)

Scope

Recent years have seen a growing number of multifactorial phenomena increasingly affecting our personal and professional lives. The resulting uncertainty and unintended consequences (Aria & Cuccurullo, 2022; Ravindran & Shah, 2023), together with the challenges and implications raised by phenomena such as global health (Bowen et al., 2012; Frenk & Moon, 2013), economic crises (Laskovaia et al., 2019; Meder et al., 2013), and digital transformation, to name a few, must be addressed, managed, and governed through dynamic capabilities (Karimi & Walter, 2015), holistic understanding (vom Brocke & Mendling, 2018), and effective change management and implementation (Hanelt et al., 2021; Wessel et al., 2021). To explore, investigate, and manage such phenomena in a more holistic way, the decision management discipline underlines the need to move across the ephemeral boundaries between the artificial, social, and natural sciences (Yassine & Chelst, 2018).

Moreover, decision-making (DM) processes are evolving within the private, public, and non-profit sectors, as well as in relation to their legacy environments and digital ecosystems (Cipriano & Za, 2023; Davidson & Chismar, 2007; Papagiannidis et al., 2023), in pursuit of effectiveness, efficiency, ecology, innovation, sustainability, and promptness. At the same time, tackling and problematising the complex interactions and co-occurrence of different factors within the cognitive, social, and evolutionary domains is critical. From this perspective, it is not surprising that artificial intelligence (AI) systems are increasingly integrated into decision-making tasks (Gomez et al., 2023). AI technologies are transforming work and workplaces (Zimmer et al., 2023), smart supply chains (Aliahmadi & Nozari, 2023), and the way quality management practices are adopted in the digital era (Saihi et al., 2023). AI-based technologies are increasingly being implemented in public and private organisations (Smacchia & Za, 2022; Upadhyay et al., 2022). As a result, they significantly affect decisions, how tacit and explicit knowledge is accumulated (Harfouche et al., 2023), how knowledge is augmented or complemented, and how governance practices are produced (Perry & Uuk, 2019). In parallel, as AI becomes widely accepted and implemented across sectors, it has the power to affect an increasingly large number of people (Zuiderwijk et al., 2021). The rapid and pervasive adoption of AI amplifies the risk of unintended consequences in its decision outcomes, and such unforeseen repercussions could lead to severe adverse outcomes (Mikalef et al., 2022). These concerns have been further reinforced since the general introduction of generative AI models such as ChatGPT. The discourse surrounding the impact of AI has intensified, generating a spectrum of viewpoints ranging from extremely positive to profoundly negative (Sabherwal & Grover, 2024). As a consequence, there are many implications concerning knowledge imbalance and information asymmetry between different actors and domains (Boulanin & Lewis, 2023), as well as AI governance practices (Papagiannidis et al., 2023), the mitigation of AI risks (Adam et al., 2022), the evaluation of security metrics (Aliahmadi & Nozari, 2023; Alufaisan et al., 2021), and ethical concerns (Cheruvalath, 2023; Dignum, 2023), biases, and responsibility issues (Boulanin & Lewis, 2023).

One focus of this special track is the design and evaluation of AI-driven decision-making. At the same time, the track raises the question of whether and how more rational and mindful decision-making processes could depend on human-driven and AI-driven policy influence.

We welcome submissions addressing our interest in AI-driven decision-making, particularly studies on how humans and AI can collaborate on decision tasks. We invite studies that (i) address any level of analysis, from the individual to the societal, and (ii) report current and relevant research results on the use, design, and development of AI tools, as well as on the interactions between users and AI tools in reaching decisions. We also welcome submissions providing in-depth cases of the implementation and use of AI-driven decision-making processes in specific organisations (for-profit, public, and non-profit) and identifying their consequences (especially unintended ones). Finally, we encourage research that outlines how AI-driven decision-making processes can co-create value and lead to innovation and higher levels of economic, ecological, or social sustainability.

References

  • Adam, H., Balagopalan, A., Alsentzer, E., Christia, F., & Ghassemi, M. (2022). Mitigating the impact of biased artificial intelligence in emergency decision-making. Communications Medicine, 2(1), 149.
  • Aliahmadi, A., & Nozari, H. (2023). Evaluation of security metrics in AIoT and blockchain-based supply chain by Neutrosophic decision-making method. Supply Chain Forum: An International Journal, 24(1), 31–42.
  • Alufaisan, Y., Marusich, L. R., Bakdash, J. Z., Zhou, Y., & Kantarcioglu, M. (2021). Does Explainable Artificial Intelligence Improve Human Decision-Making?
  • Aria, M., & Cuccurullo, C. (2022). Comprehensive Science Mapping Analysis. In Package ‘Bibliometrix’ (3.2.1).
  • Boulanin, V., & Lewis, D. A. (2023). Responsible reliance concerning development and use of AI in the military domain. Ethics and Information Technology, 25(1), 8.
  • Bowen, K. J., Friel, S., Ebi, K., Butler, C. D., Miller, F., & McMichael, A. J. (2012). Governing for a healthy population: Towards an understanding of how decision-making will determine our global health in a changing climate. International Journal of Environmental Research and Public Health, 9(1), 55–72.
  • Cheruvalath, R. (2023). Artificial Intelligent Systems and Ethical Agency. Journal of Human Values, 29(1), 33–47.
  • Cipriano, M., & Za, S. (2023). Non-profit organisations in the digital age: A research agenda for supporting the development of a digital transformation strategy. Journal of Information Technology, advance online publication.
  • Davidson, E. J., & Chismar, W. G. (2007). The Interaction of Institutionally Triggered and Technology-Triggered Social Structure Change: An Investigation of Computerized Physician Order Entry. MIS Quarterly, 31(4), 739.
  • Dignum, V. (2023). Responsible Artificial Intelligence: From Principles to Practice. A keynote at TheWebConf 2022. ACM SIGIR Forum, 1–6.
  • Frenk, J., & Moon, S. (2013). Governance Challenges in Global Health. New England Journal of Medicine, 368(10), 936–942.
  • Gomez, C., Unberath, M., & Huang, C.-M. (2023). Mitigating knowledge imbalance in AI-advised decision-making through collaborative user involvement. International Journal of Human-Computer Studies, 172, 102977.
  • Hanelt, A., Bohnsack, R., Marz, D., & Antunes Marante, C. (2021). A Systematic Review of the Literature on Digital Transformation: Insights and Implications for Strategy and Organizational Change. Journal of Management Studies, 58(5), 1159–1197.
  • Harfouche, A., Quinio, B., Saba, M., & Saba, P. B. (2023). The Recursive Theory of Knowledge Augmentation: Integrating human intuition and knowledge in Artificial Intelligence to augment organizational knowledge. Information Systems Frontiers, 25(1), 55–70.
  • Karimi, J., & Walter, Z. (2015). The role of dynamic capabilities in responding to digital disruption: A factor-based study of the newspaper industry. Journal of Management Information Systems, 32(1), 39–81.
  • Laskovaia, A., Marino, L., Shirokova, G., & Wales, W. (2019). Expect the unexpected: examining the shaping role of entrepreneurial orientation on causal and effectual decision-making logic during economic crisis. Entrepreneurship & Regional Development, 31(5–6), 456–475.
  • Meder, B., Le Lec, F., & Osman, M. (2013). Decision making in uncertain times: what can cognitive and decision sciences say about or learn from economic crises? Trends in Cognitive Sciences, 17(6), 257–260.
  • Mikalef, P., Conboy, K., Lundström, J. E., & Popovič, A. (2022). Thinking responsibly about responsible AI and ‘the dark side’ of AI. European Journal of Information Systems, 31(3), 257–268.
  • Papagiannidis, E., Enholm, I. M., Dremel, C., Mikalef, P., & Krogstie, J. (2023). Toward AI Governance: Identifying Best Practices and Potential Barriers and Outcomes. Information Systems Frontiers, 25(1), 123–141.
  • Perry, B., & Uuk, R. (2019). AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk. Big Data and Cognitive Computing, 3(2), 26.
  • Ravindran, S., & Shah, M. (2023). Unintended consequences of lockdowns, COVID-19 and the Shadow Pandemic in India. Nature Human Behaviour.
  • Sabherwal, R., & Grover, V. (2024). The Societal Impacts of Generative Artificial Intelligence: A Balanced Perspective. Journal of the Association for Information Systems, 25(1), 13–22.
  • Saihi, A., Awad, M., & Ben-Daya, M. (2023). Quality 4.0: leveraging Industry 4.0 technologies to improve quality management practices – a systematic review. International Journal of Quality & Reliability Management, 40(2), 628–650.
  • Smacchia, M., & Za, S. (2022). Artificial Intelligence in Organisation and Managerial Studies: A Computational Literature Review. ICIS 2022 Proceedings.
  • Upadhyay, N., Upadhyay, S., & Dwivedi, Y. K. (2022). Theorizing artificial intelligence acceptance and digital entrepreneurship model. International Journal of Entrepreneurial Behaviour and Research, 28(5), 1138–1166.
  • vom Brocke, J., & Mendling, J. (2018). Frameworks for Business Process Management: A Taxonomy for Business Process Management Cases. In Business Process Management Cases (pp. 1–17). Springer.
  • Wessel, L., Baiyere, A., Ologeanu-Taddei, R., Cha, J., & Blegind Jensen, T. (2021). Unpacking the Difference Between Digital Transformation and IT-Enabled Organizational Transformation. Journal of the Association for Information Systems, 22(1), 102–129.
  • Yassine, A., & Chelst, K. (2018). Opportunities for Decision Analysis in Engineering Management. IEEE Engineering Management Review, 46(2), 151–161.
  • Zimmer, M. P., Baiyere, A., & Salmela, H. (2023). Digital workplace transformation: Subtraction logic as deinstitutionalising the taken-for-granted. The Journal of Strategic Information Systems, 32(1), 101757.
  • Zuiderwijk, A., Chen, Y. C., & Salem, F. (2021). Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda. Government Information Quarterly, 38(3).

Topics

  • AI-driven decision-making in risk management.
  • AI-driven decision-making: adoption and diffusion.
  • AI-driven decision-making: successes and failures, challenges and opportunities.
  • AI-driven decision-making and related bias, ethical implications, unintended consequences, and the dark side.
  • The role of explainable and responsible AI in the decision-making processes.
  • Human-AI augmentation in decision-making processes.
  • Societal impacts of AI-driven decision-making (e.g., for addressing healthcare, humanitarian crises, sustainability, and resilience).
  • AI-driven decision-making for crowdsourcing, co-production, and co-creation of value.
  • AI-driven decision-making for Information Systems Development.
  • Analysing (and forecasting) the relationship between AI solutions, decision-making processes, and organisational, environmental, or ecosystem change (legacy and digital).
  • Analysis of the interaction of actors (individuals, groups, organisations and networks) and AI tools during the change of decision-making processes.
  • AI, decision-making, and organisational drivers of resilient change.
  • The role of AI in enhancing creativity and innovation in decision-making.
  • The future of governance within AI decision-making (e.g., policy formulation, governance processes and democratic participation).
  • The impact of generative AI on strategic decision-making.

AIDM 2025 welcomes empirical, experimental, theoretical, and methodological papers that address these or related questions from various disciplines and perspectives. Specifically, we invite researchers and scholars to submit papers that contribute to developing research methods and theoretical frameworks and that provide exploratory analyses of the technologies and applications at the core of recent advances in AI-driven decision-making. Applied papers should discuss new initiatives, best practices, and lessons learned in the broader social science area so that practitioners, academics, and information professionals can benefit from the emerging knowledge gained. Strictly empirical, computational, and lab or field experimental studies are also welcome, and we encourage papers on quantitative and qualitative methods that adopt interdisciplinary research approaches.

Analytical and methodological papers should contribute to formulating quantitative hypotheses, meticulously gathering and thoroughly interpreting data, and planning or conducting experiments or surveys, providing clear guidance for researchers who want to use a particular research method. Theoretical papers should develop or refine theoretical frameworks in the field of study: we welcome papers that propose new theoretical models or frameworks, as well as papers that critique and refine existing ones. Such papers should rest on a sound theoretical foundation and give researchers clear guidance on how to apply or replicate them. Papers will be judged on novelty, significance, correctness, and clarity.

Organising Committee

  • Stefano Za (University of Chieti-Pescara)
  • Michele Cipriano (Catholic University of the Sacred Heart, Piacenza)
  • Marco Smacchia (University of Chieti-Pescara)

Programme Committee

  • Massimiliano Agovino (University of Naples “Parthenope”)
  • Alessio Maria Braccini (University of Tuscia)
  • Lea Iaia (University of Turin)
  • Alessandra Lazazzara (University of Milan)
  • Agnese Rapposelli (University of Chieti-Pescara)
  • Stefan Schmager (University of Agder, Norway)

Contact

For details on any aspect of the AIDM session, please contact stefano.za@unich.it. The scientific and social programme, links to online sessions, and time conversions will be available on the DECON website. Further announcements will be personally communicated to the corresponding authors via email.