Advances in artificial intelligence (AI) have prompted leaders in education, science, politics, and publishing to revisit their policies and practices. Those advances have been made possible only by access to large amounts of data. As we turn our attention to evaluating AI, the interconnection between AI and the way information is produced and disseminated is becoming increasingly clear. As AI tools become integrated into the knowledge production process, for example through claim checking and reviewer identification, how does that integration shape the knowledge being produced?
The quality of many AI models is determined to a large degree by the data on which the model is trained. Given that, how can we construct information ecosystems that ensure that computational models use the best available knowledge? Efforts in open science/open scholarship will likely play a role, but can we reimagine scholarly communication more broadly so that subscriptions, paywalls, and licensing evolve to accommodate the scholarly use of AI? What can we learn from existing public/private partnerships and/or consortia to inform the policies, laws, and financial arrangements of future ecosystems and bridge current gaps?
The scholarly communication life cycle comprises 1) research, data collection, and analysis; 2) authoring; 3) peer review; 4) publication; 5) discovery, reproducibility, and dissemination; and 6) impact measurement. AI has been explored (and deployed) in several of these steps, for example to identify reviewers, detect plagiarism, check claims, and generate bibliographies. How does the use of AI tools in scholarly communication shape the knowledge being produced? Although efforts have been made to make digital assets Findable, Accessible, Interoperable, and Reusable (FAIR), what additional steps or research policies are needed to ensure that scientific evidence is accessible to automated systems?
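As one concrete illustration of what machine accessibility can already look like (a minimal sketch, not a requirement for submissions), the Python snippet below retrieves machine-readable metadata for a published work from the public Crossref REST API using the requests library; the DOI shown is a placeholder, and the selected fields are just one possible slice of FAIR-relevant metadata.

# Minimal sketch: fetch machine-readable metadata for a DOI from the
# Crossref REST API (https://api.crossref.org). The DOI below is a placeholder.
import requests

def fetch_work_metadata(doi: str) -> dict:
    """Return a few FAIR-relevant fields (title, authors, license) for a DOI."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    response.raise_for_status()
    message = response.json()["message"]
    return {
        "title": message.get("title", []),
        "authors": [f"{a.get('given', '')} {a.get('family', '')}".strip()
                    for a in message.get("author", [])],
        "license": [entry.get("URL") for entry in message.get("license", [])],
    }

if __name__ == "__main__":
    print(fetch_work_metadata("10.1000/xyz123"))  # placeholder DOI

Analogous programmatic access exists for other registries (e.g., DataCite), but the broader question posed above is which scientific evidence remains outside such machine-actionable channels, and what policies would bring it in.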
This issue explores the intersection of scholarly communication, research policy, and artificial intelligence. We welcome a broad array of submissions to this Research Topic, reflecting work at all stages of development, including thought pieces, literature reviews, discussions of theory and practice, and case studies, which could explore topics including (but not limited to):
• Evaluation strategies and frameworks that speak to the co-dependency between AI models and the quality (as opposed to the quantity) of data available.
• Successful public/private partnerships that provide AI ecosystems.
• Ecosystems that include a variety of research objects to enable broader participation in AI. Are Large Language Models (e.g., ChatGPT) learning from a global knowledge base or disproportionately from some geographic locations, languages, or cultures? What are the implications for scholars across the globe? Are there implications for scholars at under-resourced institutions?
• How do we (librarians, scientists, and scholars) best facilitate the scholarly use of AI as a tool for knowledge production, and how do we adequately assess its outputs before deploying such tools?
• How can barriers to reproducibility be eased for AI models trained on data with access restrictions?
If you have any queries, or would like to discuss submitting to the collection, please feel free to get in touch with the Topic Editor team or the journal at researchmetrics@frontiersin.org.
Keywords:
AI, Artificial Intelligence, Scholarly Communication, Research Policy
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.