What makes a single scientist, a research lab, or an entire university excellent? Is it better to publish a dozen papers of moderate quality or a single seminal paper in a top-tier journal? Although the debate over the trade-off between quantity and quality is long-standing, recent evidence suggests that both factors matter for scientific excellence. However, while quantity can be measured objectively by indicators such as the number of publications, research projects funded, or patents filed, assessing quality is controversial and often creates more problems than it solves.
There is no doubt that an objective, reliable, and valid assessment of the quality of scientific publications is notoriously difficult, tedious, and time-consuming. Ideally, submitted manuscripts would be evaluated by domain experts using numerical scales and standardized protocols with high inter-rater reliability. Instead, a general trend in scientometrics, bibliometrics, and the science of science is to treat "impact" as synonymous with scientific quality. The use of the impact factor to assess the quality of publications has been criticized by several authors, and recent studies have even empirically rejected the common assumption that researchers systematically cite the papers that influenced their work. Nevertheless, the number of citations a publication has received is still widely accepted as a proxy for its quality. The intellectual landscape of science is far too rich and complex to be reduced to counts of publications or citations, and there is an urgent need to address this gap both theoretically and through empirical applications.
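To make the inter-rater-reliability requirement concrete, the sketch below computes Cohen's kappa, a standard chance-corrected agreement statistic for two raters. The reviewer ratings are invented for illustration, and kappa is only one of several statistics that could serve here (weighted kappa or Krippendorff's alpha would suit ordinal scales better).

```python
# A minimal sketch of checking inter-rater agreement on manuscript ratings.
# The ratings below are hypothetical; Cohen's kappa corrects raw agreement
# for the agreement expected by chance.
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on a nominal scale."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if each rater labelled at random while keeping
    # their own marginal rating frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical quality ratings (1 = reject ... 4 = accept) from two reviewers.
reviewer_1 = [4, 3, 3, 2, 4, 1, 2, 3, 4, 2]
reviewer_2 = [4, 3, 2, 2, 4, 1, 3, 3, 4, 1]
print(f"Cohen's kappa: {cohen_kappa(reviewer_1, reviewer_2):.2f}")
```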
An unprecedented amount of freely available bibliographic data, full scientific texts, and patent applications offers a great, previously untapped opportunity to build a large-scale ecosystem for the study of science. Computational analysis of these data holds enormous potential, both for answering long-standing scientific questions and for driving the development of new iMetrics methods.
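As one illustration of what such computational analysis can look like, the sketch below pulls open bibliographic records from the OpenAlex API (https://api.openalex.org) and counts how often pairs of indexed concepts co-occur in a small sample. The search query and the rare-concept-pairing heuristic are illustrative assumptions, not an established novelty measure.

```python
# A minimal sketch, assuming the free OpenAlex API, of retrieving open
# bibliographic records and tallying concept co-occurrences. Rarely paired
# concepts are one crude signal of atypical combinations in a paper.
from collections import Counter
from itertools import combinations

import requests

resp = requests.get(
    "https://api.openalex.org/works",
    params={"search": "scientific novelty", "per-page": 25},
    timeout=30,
)
resp.raise_for_status()
works = resp.json()["results"]

# Count how often pairs of indexed concepts co-occur across the sample.
pair_counts = Counter()
for work in works:
    concepts = sorted(c["display_name"] for c in work.get("concepts", []))
    pair_counts.update(combinations(concepts, 2))

for pair, count in pair_counts.most_common(5):
    print(count, pair)
```

A real study would, of course, work from the full corpus rather than a 25-record sample and would normalize co-occurrence counts against a baseline before interpreting rarity as novelty.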
In this special issue, entitled Measuring Novelty in Science, we aim to provide a framework for the theoretical exploration and data-driven modeling of scientific novelty, originality, and quality.
Keywords:
scientific creativity, scientific innovation, computational creativity, quantity–quality dilemma, scientific performance
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.