📣 From Data Creator to Data Reuser: Distance Matters arxiv.org/abs/2402.07926

Sharing research data is necessary, but not sufficient, for data reuse. Open science policies focus more heavily on data sharing than on reuse, yet both are complex, labor-intensive, expensive, and require infrastructure investments by multiple stakeholders. The value of data reuse lies in relationships between creators and reusers. By addressing knowledge exchange, rather than mere transactions between stakeholders, investments in data management and knowledge infrastructures can be made more wisely. Drawing upon empirical studies of data sharing and reuse, we develop the metaphor of distance between data creator and data reuser, identifying six dimensions of distance that influence the ability to transfer knowledge effectively: domain, methods, collaboration, curation, purposes, and time and temporality. We explore how social and socio-technical aspects of these dimensions may decrease -- or increase -- distances to be traversed between creators and reusers. Our theoretical framing of the distance between data creators and prospective reusers leads to recommendations to four categories of stakeholders on how to make data sharing and reuse more effective: data creators, data reusers, data archivists, and funding agencies. 'It takes a village' to share research data -- and a village to reuse data. Our aim is to provoke new research questions, new research, and new investments in effective and efficient circulation of research data; and to identify criteria for investments at each stage of data and research life cycles.

📣 Interdisciplinary Papers Supported by Disciplinary Grants Garner Deep and Broad Scientific Impact arxiv.org/abs/2303.14732

Interdisciplinary research has emerged as a hotbed for innovation and a key approach to addressing complex societal challenges. The increasing dominance of grant-supported research in shaping scientific advances, coupled with growing interest in funding interdisciplinary work, raises fundamental questions about the effectiveness of interdisciplinary grants in fostering high-impact interdisciplinary research. Here, we quantify the interdisciplinarity of both research grants and publications, capturing 350,000 grants from 164 funding agencies across 26 countries and 1.3 million papers that acknowledged their support from 1985 to 2009. Our analysis uncovers two seemingly contradictory patterns: interdisciplinary grants tend to produce interdisciplinary papers, which are generally associated with high impact. However, compared to disciplinary grants, interdisciplinary grants on average yield fewer papers, and the interdisciplinary papers they support tend to have substantially reduced impact. We demonstrate that the key to explaining this paradox lies in the power of disciplinary grants to propel high-impact interdisciplinary research. Specifically, our results show that highly interdisciplinary papers supported by deeply disciplinary grants garner disproportionately more citations, both within their core disciplines and from broader fields. Moreover, disciplinary grants, particularly when combined with other similar grants, are more effective in producing high-impact interdisciplinary research. Amidst the rapid rise of support for interdisciplinary work across the sciences, these results highlight the hitherto unknown role of disciplinary grants in driving crucial interdisciplinary advances, suggesting that interdisciplinary research requires deep disciplinary expertise and investment.
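
The abstract does not say how interdisciplinarity is quantified; a common choice in this literature (an assumption here, not necessarily the authors' measure) is the Rao-Stirling diversity index over the disciplines of a paper's or grant's references. A minimal sketch:

```python
import numpy as np

def rao_stirling(proportions: np.ndarray, distance: np.ndarray) -> float:
    """Rao-Stirling diversity: sum over discipline pairs of p_i * d_ij * p_j.

    proportions: share of references in each discipline (sums to 1).
    distance: symmetric matrix of pairwise distances between disciplines,
              e.g. 1 - cosine similarity of discipline citation profiles.
    """
    p = proportions
    return float(p @ distance @ p)  # diagonal terms vanish since d_ii = 0

# Hypothetical example: a paper whose references span three disciplines.
p = np.array([0.5, 0.3, 0.2])
d = np.array([[0.0, 0.8, 0.9],
              [0.8, 0.0, 0.4],
              [0.9, 0.4, 0.0]])
print(rao_stirling(p, d))  # higher values = more interdisciplinary
```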

📣 A Maturity Model for Urban Dataset Meta-data arxiv.org/abs/2402.05211

In the current environment of data generation and publication, an ever-growing number of datasets is available for download. This growth intensifies an existing challenge: sourcing and integrating relevant datasets for analysis is becoming more complex. Despite efforts by open data platforms, obstacles remain, predominantly rooted in inadequate metadata, unsuitable data presentation, difficulty in pinpointing desired data, and problems of data integration. This paper delves into the intricacies of dataset retrieval, emphasizing the pivotal role of metadata in matching datasets to user queries. Through an exploration of the existing literature, it underscores prevailing issues such as identifying valuable metadata and developing tools to maintain and annotate it effectively. The central contribution of this research is a dataset metadata maturity model. Drawing inspiration from software engineering maturity models, this framework delineates a progression from rudimentary metadata documentation to advanced levels, aiding dataset creators in their documentation efforts. The model encompasses seven pivotal dimensions, spanning content to quality information, each stratified across six maturity levels, to guide the optimal documentation of datasets and ensure ease of discovery, relevance assessment, and comprehensive dataset understanding. The paper also incorporates the maturity model into the data cataloguing tool CKAN through a custom plugin, CKANext-udc. The plugin introduces custom fields based on the different maturity levels, allows for user interface customisation, and integrates with a graph database, converting catalogue data into a knowledge graph based on the Maturity Model ontology.
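
To make the dimensions-by-levels idea concrete, here is a minimal sketch of a maturity check. The dimension names and required fields are hypothetical stand-ins, not the paper's actual seven dimensions or CKANext-udc's field schema:

```python
# Hypothetical dimensions; each maps to the fields required at levels 1..6
# (requirements are cumulative: level k presumes levels 1..k-1).
REQUIRED_FIELDS = {
    "content": [["title"], ["description"], ["keywords"],
                ["language"], ["provenance"], ["update_schedule"]],
    "quality": [["format"], ["completeness_note"], ["accuracy_note"],
                ["validation_method"], ["quality_metrics"], ["audit_trail"]],
}

def maturity_level(dimension: str, metadata: dict) -> int:
    """Highest level whose fields (and all lower levels' fields) are present."""
    level = 0
    for fields in REQUIRED_FIELDS[dimension]:
        if all(metadata.get(f) for f in fields):
            level += 1
        else:
            break
    return level

record = {"title": "Toronto bike share trips",
          "description": "Trip-level records, 2019-2023.",
          "keywords": "cycling, mobility"}
print(maturity_level("content", record))  # -> 3: stops at missing "language"
```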

📣 Can ChatGPT evaluate research quality? arxiv.org/abs/2402.05519

Purpose: Assess whether ChatGPT 4.0 is accurate enough to perform research evaluations on journal articles to automate this time-consuming task.
Design/methodology/approach: Test the extent to which ChatGPT-4 can assess the quality of journal articles using a case study of the published scoring guidelines of the UK Research Excellence Framework (REF) 2021 to create a research evaluation ChatGPT. This was applied to 51 of my own articles and compared against my own quality judgements.
Findings: ChatGPT-4 can produce plausible document summaries and quality evaluation rationales that match the REF criteria. Its overall scores have weak correlations with my self-evaluation scores of the same documents (averaging r=0.281 over 15 iterations, with 8 being statistically significantly different from 0). In contrast, the average scores from the 15 iterations produced a statistically significant positive correlation of 0.509. Thus, averaging scores from multiple ChatGPT-4 rounds seems more effective than individual scores. The positive correlation may be due to ChatGPT being able to extract the author's significance, rigour, and originality claims from inside each paper. If my weakest articles are removed, then the correlation with average scores (r=0.200) falls below statistical significance, suggesting that ChatGPT struggles to make fine-grained evaluations.
Research limitations: The data is self-evaluations of a convenience sample of articles from one academic in one field.
Practical implications: Overall, ChatGPT does not yet seem to be accurate enough to be trusted for any formal or informal research quality evaluation tasks. Research evaluators, including journal editors, should therefore take steps to control its use.
Originality/value: This is the first published attempt at post-publication expert review accuracy testing for ChatGPT.
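
The averaging effect reported in the findings can be illustrated with a short sketch on synthetic scores (not the paper's data): individual noisy rating rounds correlate weakly with ground truth, while the per-article mean across rounds correlates better because averaging cancels rating noise:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_articles, n_rounds = 51, 15

true_quality = rng.normal(size=n_articles)               # stand-in self-evaluations
noise = rng.normal(scale=2.0, size=(n_rounds, n_articles))
ratings = true_quality + noise                           # one noisy score per round

per_round_r = [pearsonr(r, true_quality)[0] for r in ratings]
mean_rating_r = pearsonr(ratings.mean(axis=0), true_quality)[0]

print(f"mean per-round r     = {np.mean(per_round_r):.3f}")
print(f"r of averaged scores = {mean_rating_r:.3f}")     # noticeably higher
```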

📣 Quantifying Similarity: Text-Mining Approaches to Evaluate ChatGPT and Google Bard Content in Relation to BioMedical Literature arxiv.org/abs/2402.05116

Background: The emergence of generative AI tools powered by Large Language Models (LLMs) has shown powerful capabilities in generating content. Assessing the usefulness of such content, generated through what is known as prompt engineering, has become an interesting research question.
Objectives: By means of prompt engineering, we assess how similar and close such content is to real literature produced by scientists.
Methods: In this exploratory analysis, (1) we prompt-engineer ChatGPT and Google Bard to generate clinical content to be compared with literature counterparts, and (2) we assess the similarity of the generated content by comparing it with counterparts from the biomedical literature. Our approach is to use text-mining methods to compare documents and their associated bigrams, and to use network analysis to assess term centrality.
Results: The experiments demonstrated that ChatGPT outperformed Google Bard in cosine document similarity (38% vs. 34%), Jaccard document similarity (23% vs. 19%), TF-IDF bigram similarity (47% vs. 41%), and term network centrality (degree and closeness). We also found new links in the ChatGPT bigram networks that did not exist in the literature bigram networks.
Conclusions: The similarity results show that ChatGPT outperformed Google Bard in document similarity, bigram similarity, and degree and closeness centrality. We also observed that ChatGPT offers links between terms that are connected in the literature. Such connections could inspire interesting questions and generate new hypotheses.
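
A minimal sketch of the comparison pipeline named in the methods, using generic implementations of the stated metrics (not the authors' code; the two example sentences are hypothetical):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

generated = "aspirin reduces cardiovascular risk in diabetic patients"
reference = "aspirin therapy lowers cardiovascular risk among patients with diabetes"

# Cosine similarity over unigram count vectors.
counts = CountVectorizer().fit_transform([generated, reference])
cos = cosine_similarity(counts)[0, 1]

# Jaccard similarity over word sets.
a, b = set(generated.split()), set(reference.split())
jac = len(a & b) / len(a | b)

# Cosine similarity over TF-IDF-weighted bigrams.
tfidf = TfidfVectorizer(ngram_range=(2, 2)).fit_transform([generated, reference])
bigram_sim = cosine_similarity(tfidf)[0, 1]

print(f"cosine={cos:.2f}  jaccard={jac:.2f}  tfidf-bigram={bigram_sim:.2f}")
```

For the centrality analysis, each document's bigrams would additionally be loaded into a term co-occurrence graph (e.g., with networkx) and degree and closeness centrality compared across the generated and literature graphs.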

📣 The Howard-Harvard effect: Institutional reproduction of intersectional inequalities arxiv.org/abs/2402.04391

The US higher education system concentrates the production of science and scientists within a few institutions. This has implications for minoritized scholars and the topics with which they are disproportionately associated. This paper examines topical alignment between institutions and authors of varying intersectional identities, and its relationship with prestige and scientific impact. We observe a Howard-Harvard effect, in which the topical profiles of minoritized scholars are amplified at mission-driven institutions and diminished at prestigious institutions. Results demonstrate a consistent pattern of inequality in topics and research impact. Specifically, we observe statistically significant differences between minoritized scholars and White men in citations and journal impact. The aggregate research profile of prestigious US universities is highly correlated with the research profile of White men, and highly negatively correlated with the research profile of minoritized women. Furthermore, authors affiliated with more prestigious institutions are associated with greater inequalities in both citations and journal impact. Academic institutions and funders are called upon to create policies that mitigate the systemic barriers preventing the United States from achieving a fully robust scientific ecosystem.
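
The profile comparisons described above can be sketched as correlations between topic-share vectors. Everything below is illustrative: the topic names, group labels, and numbers are hypothetical, and the paper's actual alignment measure may differ:

```python
import numpy as np
from scipy.stats import spearmanr

topics = ["oncology", "health disparities", "genomics", "community health"]

# Hypothetical topic-share profiles (each sums to 1).
institution_profile = np.array([0.40, 0.05, 0.45, 0.10])
group_a_profile = np.array([0.45, 0.05, 0.40, 0.10])   # closely aligned group
group_b_profile = np.array([0.10, 0.45, 0.05, 0.40])   # divergent group

for name, profile in [("group A", group_a_profile), ("group B", group_b_profile)]:
    rho, _ = spearmanr(institution_profile, profile)
    print(f"alignment with {name}: rho = {rho:.2f}")   # positive vs. negative
```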

📣 Does the Use of Unusual Combinations of Datasets Contribute to Greater Scientific Impact? arxiv.org/abs/2402.05024

Scientific datasets play a crucial role in contemporary data-driven research, as they allow for the progress of science by facilitating the discovery of new patterns and phenomena. This mounting demand for empirical research raises important questions about how strategic data utilization in research projects can stimulate scientific advancement. In this study, we examine a hypothesis inspired by recombination theory, which suggests that innovative combinations of existing knowledge, including the use of unusual combinations of datasets, can lead to high-impact discoveries. We investigate the scientific outcomes of such atypical data combinations in more than 30,000 publications that leverage over 6,000 datasets curated within one of the largest social science databases, ICPSR. This study offers four important insights. First, combining datasets, particularly those infrequently paired, significantly contributes to both scientific and broader impacts (e.g., dissemination to the general public). Second, combining datasets whose topics are atypically paired has the opposite effect -- the use of such data is associated with fewer citations. Third, younger and less experienced research teams tend to use atypical combinations of datasets more frequently than their older and more experienced counterparts. Lastly, despite the benefits of data combination, papers that amalgamate data remain infrequent. This finding suggests that the unconventional combination of datasets is an under-utilized but powerful strategy correlated with the scientific and broader impact of scientific discoveries.
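
One common way to operationalise "unusual combinations" (an assumption here; the paper's exact measure may differ) is to score each dataset pair by how rarely it is co-used relative to what the datasets' individual popularity would predict. A sketch over a hypothetical corpus of dataset-usage records:

```python
from collections import Counter
from itertools import combinations

# Hypothetical corpus: the set of datasets each publication uses.
papers = [
    {"GSS", "ANES"}, {"GSS", "ANES"}, {"GSS", "ANES"},
    {"GSS", "CPS"}, {"ANES", "CPS"},
    {"PSID", "CPS"}, {"PSID", "CPS"}, {"GSS", "PSID"},
]

use = Counter(d for p in papers for d in p)
pair_use = Counter(frozenset(c) for p in papers for c in combinations(sorted(p), 2))
n = len(papers)

def typicality(d1: str, d2: str) -> float:
    """Observed co-use over co-use expected under independence; < 1 = atypical pair."""
    expected = use[d1] * use[d2] / n
    return pair_use[frozenset((d1, d2))] / expected

print(typicality("GSS", "ANES"))   # frequently paired -> ratio above 1
print(typicality("GSS", "PSID"))   # rarely paired -> ratio well below 1
```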

📣 Relational hyperevent models for the coevolution of coauthoring and citation networks arxiv.org/abs/2308.01722

Interest in network analysis of bibliographic data has grown substantially in recent years, yet comprehensive statistical models for examining the complete dynamics of scientific networks based on bibliographic data are generally lacking. Current empirical studies often restrict analysis either to paper citation networks (paper-by-paper) or author networks (author-by-author). However, bibliographic networks encompass not only direct connections between papers, but also indirect relationships between the references of papers connected by a citation link. In this paper, we extend recently developed relational hyperevent models (RHEM) for analyzing scientific networks, introducing new covariates that represent theoretically meaningful and empirically interesting sub-network configurations. The model accommodates testing hypotheses that consider (i) the polyadic nature of scientific publication events and (ii) the interdependencies between the authors and references of current and prior papers. We implement the model in purpose-built, publicly available open-source software and demonstrate its empirical value in an analysis of a large publicly available scientific network dataset. Assessing the relative strength of the various effects reveals that both the hyperedge structure of publication events and the interconnection between authors and references significantly improve our understanding and interpretation of collaborative scientific production.
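
To make the hyperevent idea concrete, here is a small sketch (not the authors' RHEM software; author names are hypothetical) representing each publication as a hyperedge over its author set and computing one typical RHEM-style covariate, prior pairwise co-authorship within the event's team:

```python
from collections import Counter
from itertools import combinations

# Each publication event is a hyperedge: the full set of co-authors.
history = [
    {"Ada", "Ben"},
    {"Ada", "Ben", "Cas"},
    {"Ben", "Cas"},
]

def repetition(event: set, past: list) -> float:
    """Average number of past joint papers across all author pairs in the event.
    High values mean the team's members have often collaborated, as pairs,
    before this hyperevent occurred."""
    pair_counts = Counter(frozenset(c) for h in past for c in combinations(sorted(h), 2))
    pairs = [frozenset(c) for c in combinations(sorted(event), 2)]
    return sum(pair_counts[p] for p in pairs) / len(pairs)

print(repetition({"Ada", "Ben", "Cas"}, history))  # -> 1.67
```

In an actual RHEM, covariates like this enter a rate model that compares the observed publication event against sampled counterfactual author sets.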
