SIPS 2025 Online
One of the core motivations behind Open Science is the reuse and verification of others’ research outputs. What does this look like in practice? There might be challenges: Do I have the right tools to open a file or replicate an experiment? How can I make use of metadata? Especially when dealing with specialized software for the needs of psychologists, there may not always be a pre-determined workflow.
This session invites researchers to hack their own solutions to replicating research outputs. Every level of experience is welcome; you can bring data you are interested in or follow a suggestion provided by the facilitator. We want to document our difficulties and solutions to foster the discourse around reusing research. This can inform not only our curiosity about our colleagues’ work but also help us reflect on how to make our own research more accessible in the future.
There is broad consensus that research assessment needs to be reformed. Initiatives like the Coalition for Advancing Research Assessment (CoARA) advocate for a qualitative evaluation, complemented by transparent indicators, and they promote better research practices such as methodological rigor and reproducibility. The Research Quality Evaluation (RESQUE) framework, supported by the German Psychological Society (DGPs), offers a structured approach to assessing research based on qualitative criteria and responsible metric use. Developed within an open community, it defines quality standards across psychological research domains. Following an introduction to the framework and its principles, a hands-on session will allow attendees to assess their own work using multiple quality criteria. The tool then generates a personalized research profile, useful for job applications, selection processes, or academic websites. The workshop aims to promote hands-on experience and discussion on responsible research assessment, and we hope to gain valuable feedback from a user’s perspective.
Researchers are commonly advised to expect a project to take longer than originally planned. By the same token, it is frequently impossible to reproduce a study without communicating with the original authors. We argue that both problems stem from a lack of understanding of the labor involved in completing a project, by which we mean everything that needs to be done to address a research question. So far, however, this is a theoretical postulation. In this hackathon we will attempt to map out all of the labor involved in completing a research project. We will use the Heliocentric Model of Open Science Documentation to identify the components of the project and to structure the labor required to complete them. We will also use narrative structure and problem-solving schemas to describe the labor and outputs so that third parties can comprehend them. Future directions will be determined by the group at the hackathon.
Background: Implementation science (IS) applies psychological theory and behaviour-change strategies to implement evidence-based interventions in real-world settings. Scoping reviews are used in IS to map literature and identify gaps. Discrepancies between systematic reviews and their preregistered protocols are common, but their extent in scoping reviews is unclear. The more flexible methodology of scoping reviews may increase deviations from plans, potentially compromising the trustworthiness of findings.
Aim: This study will examine the prevalence, extent, nature and justifications of discrepancies between scoping reviews and their protocols, using IS as an exemplar.
Methods: A meta-scientific study of reviews from five journals is underway. Reviews with available protocols are assessed. Methodological details will be extracted using a tool informed by scoping review guidelines. Data will be coded for the number, extent and type of discrepancies, and justifications reported.
Discussion: The findings can inform review guidance, particularly on tracking and reporting protocol-review discrepancies.
While teaching research methods and statistics has traditionally relied on in-person, computer lab-based workshops, innovative strategies are needed to support online learning. Given students have requested flexible, interactive, and accessible learning tools over textbooks, we created an open-access research-focused resource library. This qualitative study evaluated student and staff experiences and opinions of the library. A thematic analysis of semi-structured walk-through interviews (12 students, 2 staff) is underway. Preliminary analysis indicates participants value the library’s open-access nature, comprehensive content, and strong alignment with university subjects. Participants also highlighted future improvements such as including in-section definitions, additional content including specific statistical analyses and qualitative methods, and more visuals to enhance understanding. These findings contribute to our understanding of what makes online statistical resources effective and engaging for psychology students and staff, with implications as to how educators can address students’ struggles with statistics.
While research literacy is essential to students’ academic and professional careers, psychology undergraduates find research subjects uninteresting, irrelevant, and anxiety-provoking. Students’ negative attitudes toward research subjects impact subject engagement and, subsequently, discipline performance and attrition. However, in line with utility-value interventions, highlighting the relevance of research subjects to careers might increase students’ interest in research subjects as well as its perceived value to their careers. This presentation will overview a project that aims to bridge subject content and real-world career success using “alumni spotlights”, created as podcasts showcasing how psychological literacy (specifically, research knowledge) is fundamental to diverse psychology careers. Findings from a mixed-methods case study involving a pre-/post-term survey and individual interviews will be shared. Implications will be discussed concerning the impact of showcasing the utility of psychological literacy via alumni career journeys on students’ interest in, and perceived value of, research subjects.
Metavaluation is a novel mechanism designed from first principles to overcome the collective action problem in academia by offering direct rewards for diverse contributions, including peer reviews themselves. In this talk, I’ll give a brief overview of the framework, highlighting the key innovation: the use of pairwise comparisons as both an inclusive review protocol and a standard unit of value, serving to scale the relative value of all other contributions in a decentralised and autonomous manner. I’ll share data from several communities, within academia and beyond, as a testament to the model’s capacity to adapt to diverse contexts and foster coordination through interoperable metrics. I’ll close with a vision for how the model could connect diverse communities in the research space and beyond, fostering a virtuous cycle of value creation, before inviting participants to join us in a hackathon applying the framework to SIPS itself.
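To make the idea of pairwise comparisons as a unit of value concrete, the sketch below shows one generic way to turn pairwise votes into relative scores: a simple Bradley-Terry fit, with every contribution's value then expressed relative to a reference contribution type. This is our own illustration in Python; the items, votes, and the Bradley-Terry choice are assumptions for the example and not necessarily how Metavaluation itself computes values.

```python
# Illustration only: turning pairwise comparisons into relative values with a
# simple Bradley-Terry fit. This is a generic technique, not necessarily the
# scoring used by Metavaluation; the items and votes below are invented.
from collections import defaultdict
from itertools import chain

# Each tuple means (winner, loser) in one pairwise comparison.
votes = [
    ("talk", "poster"), ("review", "poster"), ("hackathon", "talk"),
    ("talk", "review"), ("poster", "hackathon"), ("review", "hackathon"),
]

items = sorted(set(chain.from_iterable(votes)))
strength = {i: 1.0 for i in items}

wins = defaultdict(int)    # total wins per item
pairs = defaultdict(int)   # number of comparisons per unordered pair
for w, l in votes:
    wins[w] += 1
    pairs[frozenset((w, l))] += 1

# Minorization-maximization updates for the Bradley-Terry model:
# p_i <- wins_i / sum_j [ n_ij / (p_i + p_j) ]
for _ in range(200):
    new = {}
    for i in items:
        denom = sum(n / (strength[i] + strength[j])
                    for pair, n in pairs.items() if i in pair
                    for j in pair if j != i)
        new[i] = wins[i] / denom if denom > 0 else strength[i]
    total = sum(new.values())
    strength = {i: v / total for i, v in new.items()}

# Express every item's value relative to one reference contribution type,
# here (arbitrarily) treating a "review" as the standard unit.
unit = strength["review"]
for i in items:
    print(f"{i}: {strength[i] / unit:.2f} review-equivalents")
```

The only design point this sketch illustrates is the anchoring step: expressing all values in units of one contribution type so that scores remain comparable across contexts.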
Open Research practices are essential for improving transparency, reproducibility, and integrity in psychological science. However, a persistent challenge remains: researchers often learn about Open Research but struggle to adopt these practices in their own work. The TROPIC project addresses this gap by developing a sustainable Open Research training programme, supporting researchers at all career stages. A key strength of the project is its focus on portfolio building, ensuring that participants move beyond theoretical understanding to actively integrate Open Research practices into their workflow. Our three-day workshop guides researchers from all methodological backgrounds (qualitative, quantitative, mixed methods) in creating a tangible Open Research portfolio, including an ORCID ID, OSF account, data management plan, and study preregistration. Additionally, the programme covers research integrity, open vs. questionable research practices, and academic fraud detection. This talk will highlight the impact of portfolio-based training and strategies for fostering a lasting Open Research community.
Current open, reliable, and transparent standards promote practices such as pre-registration and Registered Reports, or open data, code, and protocols.
These practices are crucial for ultimately achieving scientific transparency, but are they enough?
In this talk, we argue that:
- Questionable research practices will prevail no matter what, if only due to human error.
- Pre-registration and Registered Reports do not adapt to every possible research design.
- Studies will likely deviate from their intended plan (and we may never know when or why).
- How a research project evolves is also relevant scientific knowledge.
This talk advocates for making public the whole process of developing all research outcomes, in a "collaborative open-source-like" fashion.
This information may help oneself and others understand and diagnose the validity of scientific conclusions, prevent fraud and unintended bias, and improve iteratively.
We call this way of doing science "Radical Transparency".
This session will explore the potential of open science practices within the psychological sciences, focusing on India as an example and extending our discussion to encompass all Low- and Middle-Income Countries (LMICs). By examining barriers, attitudes, and implementation strategies of open science, we aim to illuminate the unique challenges and opportunities present in the Global South. This discussion also draws on an ongoing survey to better understand attitudes toward open science in India. It will not only highlight the necessity of open science for enhancing scientific rigor and transparency but will also delve into the socio-cultural nuances that influence these practices. Participants can also engage in a rich dialogue about making research more inclusive, accessible, collaborative, and interdisciplinary. We will discuss how open science and accessible research can drive empowerment and systemic change and enhance the integration of diverse academic disciplines in the research community.
Thriving research cultures are integral to developing researchers’ methodological knowledge and research profile. However, the current climate of financial insecurity across the higher education sector has compounded time constraints, creating challenges to devoting precious time to learning and developing new research, methods, and collaborations. This is particularly challenging for early career researchers who are trying to upskill in research methods, learn to teach, and build an emerging research profile. In this unconference, we will discuss approaches we have taken at the University of Dundee, Scotland, noting successes and the challenges encountered. Examples include department-wide weekly research seminars, brown bag sessions, writing groups, and building a research community into teaching via poster mini conferences. We also invite other researchers to contribute to a conversation about their challenges and what has worked (or not) to help build a broad and thriving research community that fosters researcher development and interdisciplinary collaboration.
Join SIPS Executive Board members in discussing the accomplishments and challenges of SIPS and psychological science, with the aim of establishing goals for SIPS's future.
Pair programming is a collaboration technique widely used in the software industry – it involves two people working together on one programming task. One person is the driver, suggesting solutions and typing the code; the other person is the navigator, helping with problem-solving and spotting mistakes. After a short time, they swap roles.
Pair programming can improve the reproducibility of your own data analysis. It is also useful in teaching: it makes data analysis and statistics courses more interactive, and more scalable (students help each other first, before coming to the instructor for help).
In this interactive workshop, we will give you a taste of pair programming, with tasks in R, Python and Excel. You will be paired with another person and given a set of small challenges to solve together. You will practice both pair programming roles and you will get a chance to reflect on your experience.
Join us for this social event in which you will get to do "speed networking" with other SIPSers! We promise it'll be fun!
Integrating artificial intelligence (AI) in qualitative research offers academic scholars a variety of opportunities and challenges. AI can streamline qualitative analysis by using natural language processing and machine learning. However, ethical considerations such as AI bias, privacy, (lack of) theoretical underpinnings, and preservation of human-centered analysis demand critical examination.
This hackathon invites researchers from all stages to collaboratively explore the role of AI in advancing qualitative methodologies. We aim to produce a paper that identifies AI’s capabilities and limitations within qualitative research, emphasizing the impact on methodological rigor and data integrity. Through this collaboration, we will identify the implications of and propose strategies for ethical AI use in qualitative research. We will discuss usage of open science practices to investigate AI in qualitative research. By fostering interdisciplinary dialogue and critical collaboration, this hackathon seeks to shape the future of ethical and rigorous AI integration in qualitative research methodologies.
The FORRT Replication Hub (FReD) is the largest and most comprehensive open-access database of replication studies, supporting meta-research, scientific transparency, and Open Science education. With over 3,000 replications, it provides an invaluable resource for researchers, educators, and policymakers. Attendees will have opportunities for continued collaboration, and contributions will be recognized in future publications and project acknowledgments. This hackathon aims to scale and enhance FReD by engaging attendees in three collaborative tasks:
1. Expanding the Database
* Participants will code and add new replication studies from their respective fields, enriching FReD’s interdisciplinary scope.
* Attendees will provide feedback on coding instructions to improve accessibility for new contributors.
2. Developing Mini-Summaries for Teaching & Research
* To bridge replication research and education, we will draft concise, standardized summaries of key effects and replications, integrating critiques and implications.
3. Crowdsourcing Strategies for Outreach & Impact
We will use this session to discuss how we teach programming and related skills to our students who likely did not choose to study Psychology in order to learn how to code or do statistics. We believe that even though those students did not arrive at university with the expectation of becoming programmers, we have the opportunity to instil enthusiasm by teaching with joy and kindness. Since students are the researchers of tomorrow, equipping them with these skills will improve the quality and reproducibility of Psychological science in the long term. We will be looking for your case studies, stories, tips and tricks, experiences, metaphors, and materials.
The organizers are writing an edited book on "Teaching Programming Across Disciplines" [https://pairprogramming.ed.ac.uk/book/]. If participants would like to keep working on their contributions after SIPS, their output could become a chapter for this book.
Measurement is the foundation of any field of science. Many social scientists take for granted that survey instruments measure the constructs of interest, such as depression, intelligence, and happiness. Social science research on COVID-19 has made fast progress in studying psychological attitudes and behaviors, but that speed has come at the expense of disregarding measurement recommendations that researchers have proposed to improve the field. Rating scales may fail to capture respondents’ underlying attitudes because of differences in statement wording, the response options available, and whether items form a composite. Discussion regarding the origins of SARS-CoV-2 has evolved over time, and differences in measurement can obscure the magnitude of the supposed change in public attitudes and beliefs on the topic. This talk proposes an open-source database that compiles survey item statements relevant to measuring beliefs and attitudes about the origins of SARS-CoV-2.
Selecting appropriate stimuli is a crucial step in experimental research, yet it is often underestimated. Piloting ensures that the stimuli effectively elicit the intended responses while also identifying potential issues before the main study. In our research conducted in Czechia, we created aggressive and neutral social media posts in Czech and used the Prolific online platform to assess their perceived aggressiveness. This pilot study allowed us to refine and shortlist the most suitable stimuli for our future experimental study, ensuring they aligned with our research objectives. By testing engagement and perception early, we minimized biases, improved validity, and enhanced the overall quality of our study. This talk highlights the importance of piloting stimuli in experimental research and how strategic stimulus selection strengthens the reliability of findings.
The PsycPEERs (Psychology students Promoting Equity, Empowerment, and Representation in Science) Fellowship supports students of color newly navigating the psychology major and minor. The program employs a nested mentorship model, fostering peer connections and networking, resources for academic success, and professional development strategies. Fellows gain access to monthly guest speakers—psychologists of color from various disciplines—who provide insights into graduate school, minority stressors, and academia’s hidden curriculum. PsycPEERs aims to empower underrepresented ethnic minority students by offering resources, networking opportunities, and discussions on equity in psychology. An evaluation of the program will assess past and present student experiences, informing future development and enhancement.
Most researchers likely agree that research findings should be shared with the general public, but how does this translate to sharing findings of meta research? In times of conspiracy beliefs and misinformation, we want to maintain or increase public trust in science. How can we align these two interests?
We have conducted a study to investigate how public trust in science is affected by the way in which we communicate about scientific integrity. In our study, we find that communication about the replication crisis, questionable research practices, and open science reduces public trust in science. What implications does this have for communicating about the findings of meta research? Should we discuss the replication crisis and questionable research practices openly, even if it might reduce trust in science?
In my talk, I will present the results of a study that aims to answer these questions.
The current SIPS president will open the conference with the story of how they became involved in SIPS.
Panel 1: The Future of Open Science – Challenges and Opportunities. This panel will discuss the next steps for open science, including cultural shifts, incentives, and technological advancements.
Speakers: Anne Scheel, Nicholas Coles, Barnabas Szaszi, Lukas Röseler, Stephanie Lee
Moderator: Harry Clelland
Some key questions for discussion include:
How can we create sustainable incentives for open science practices?
What are the biggest cultural barriers to adopting open science, and how can they be addressed?
How do emerging technologies, such as AI and blockchain, impact the future of open science?
How can early-career researchers be better supported in adopting open science practices?
Metavaluation is a novel mechanism designed from first principles to address the collective action problem in academia by offering direct rewards for diverse contributions—including peer reviews. The Metavaluation prototype, developed by the Open Heart Mind (OHM) community, is a free, open-source tool created to demonstrate the model and make it accessible to a wide range of communities.
In this hackathon, participants will use the Metavaluation prototype to nominate SIPS contributions and determine their relative value through an inclusive peer-review process. Every participant will have the opportunity to vote for their preferred contributions and be rewarded for their input. By the end of the session, we will generate a set of multidimensional scores that reflect the unique qualities of each contribution—including the evaluations themselves. This dataset may serve as a replicable template for future investigations, potentially fostering a virtuous cycle of value creation within the SIPS community.
Registered reports are a powerful new tool for increasing the replicability and transparency of psychological science (Chambers & Tzavella, 2022; Soderberg et al., 2021). However, learning new methods and skills takes time – and often involves making mistakes along the way. Therefore, this unconference aims to crowdsource and share lessons learned (both mistakes and successes) from conducting registered reports, as well as discuss any questions that remain. To begin the session, the organizers will draw upon our lab’s ongoing and forthcoming registered reports, and then welcome attendees to share their own experiences. We hope to have an open discussion on the challenges and opportunities of registered reports, so that we can learn from one another (and our mistakes) to do better science.
In its effort to counter the infamous replication crisis, psychological research has advanced open science practices, including rigorous reporting standards. Journal guidelines have increasingly specified the methodological details that researchers are required to report in the interest of promoting transparency and facilitating replication, such as sample size justification and the distinction between confirmatory and exploratory analyses. However, it seems that the disclosure of incentivization practices for participant recruitment has evaded the same treatment in terms of transparency. In this study, we analyzed 4,406 individual studies published across 1,905 articles in four prominent psychology journals over five years (2017–2021). Our findings reveal that 36.9% of studies failed to report the incentives they employed, reflecting a significant lack of transparency. To strengthen replicability and trust in psychological research, we call for consistent reporting regarding the incentives used to recruit research participants.
R has become a popular statistical software in the social sciences over the past decade, offering powerful capabilities for data analysis, visualization, and reproducible research. However, learning R can be an intimidating prospect for many social scientists who lack prior programming experience (which is many of us). The abundance of available learning resources, while helpful, can also be overwhelming for beginners trying to determine where to start, and many of these resources are not geared toward social science. This session aims to provide an accessible introduction to R specifically tailored for social scientists, with the goal of building a solid foundation and boosting confidence in working with the software. Key focus areas include project setup and workflow, data visualization, R basics for social scientists, and generating reproducible workflows and reports. The goal is to empower researchers to continue exploring R's potential for social science applications with greater confidence.
Despite the need to routinely conduct and publish replication research, journals and funders tend to prioritize meta-research on replicability rather than primary research on what is replicable. When replication studies are published, they are often reported in batches of dozens or even hundreds, which leads to researchers neglecting quality assurance for individual studies and sets unattainable standards for singular replication studies that cannot be run online. We propose the creation of an interdisciplinary publication platform for replication research to facilitate the publication and discussion of replications and reproductions.
The main objective of the Estimating Replicability of Polish Psychology project is to conduct preregistered analyses of the impact that the research evaluation process may have on "publish or perish" culture. This concept is operationalized as a tendency to publish more, but less reliable, results due to selective reporting, HARKing, and other strategies that produce publication bias.
To fill a persistent gap in empirical evidence on the causal relation between incentives and the reliability of scientific results, we use a quasi-experimental approach to evaluate a change in the Polish government's science policy.
To measure publication bias, we use the Z-curve method and a new method developed during the project, the Likelihood Ratio Test for Publication Bias. As a result, our research provides empirical insights into how publication patterns changed with the changing institutional environment, as well as new meta-analytic tools for future research.
You may have heard of FAIR data: findable, accessible, interoperable, and reusable. But what does this mean in practice? Consistent choices can save you hundreds of hours of work by making your data easier (1) for a person (including yourself!) to understand, and (2) for software tools to process, helping you avoid errors and duplicated effort.
Psych-DS is like spellcheck for data: a system of rules for organizing a collection of data, with automatic tools for checking those datasets. If you have a folder with one or more files of row-and-column data in it, you can use Psych-DS!
This workshop will teach you to create a Psych-DS dataset directory, using the web-based validator (https://psych-ds.github.io/validator/). We'll discuss using Psych-DS during data collection, for sharing, and when dealing with sensitive data.
No experience is necessary – join us!
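To make the "spellcheck for data" idea concrete, below is a toy Python sketch of the kind of check a dataset validator performs, assuming a layout with a machine-readable dataset_description.json metadata file plus row-and-column data files. This is only our illustration; the actual rules are defined by the Psych-DS specification and enforced by the web-based validator linked above.

```python
# Illustrative only: a toy "spellcheck for data" check in the spirit of
# Psych-DS. The real rules live in the Psych-DS specification and the
# web-based validator (https://psych-ds.github.io/validator/); the file and
# folder names below are assumptions for the example.
import json
from pathlib import Path

def toy_check(dataset_dir: str) -> list[str]:
    """Return a list of human-readable problems found in a dataset folder."""
    root = Path(dataset_dir)
    problems = []

    # Expect machine-readable metadata describing the dataset as a whole.
    meta = root / "dataset_description.json"
    if not meta.exists():
        problems.append("missing dataset_description.json")
    else:
        try:
            json.loads(meta.read_text(encoding="utf-8"))
        except json.JSONDecodeError:
            problems.append("dataset_description.json is not valid JSON")

    # Expect at least one row-and-column data file somewhere in the folder.
    tabular = list(root.rglob("*.csv")) + list(root.rglob("*.tsv"))
    if not tabular:
        problems.append("no .csv or .tsv data files found")

    return problems

if __name__ == "__main__":
    for issue in toy_check("my_study"):  # "my_study" is a placeholder path
        print("PROBLEM:", issue)
```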
Feedback is central to enhancing research quality but often fails to meet the needs of diverse researchers. Feedback is often subject to gatekeeping through privilege, and delayed until research completion, emphasising outcomes over process. The open science movement has introduced a number of novel feedback practices, creating an opportunity to review timely and less privilege-dependent feedback mechanisms. Following a Leverhulme Trust-funded global, transdisciplinary mixed-methods survey mapping feedback strategies across the research lifecycle, we’re partnering with the Framework for Open and Reproducible Research Training (FORRT) to create a community-maintained ‘living’ e-book. Developing this resource, the focus of our Hackathon, will provide guidance, participants’ ratings, and qualitative advice on each feedback strategy across the research cycle. This work aims to widely disseminate accessible opportunities to improve the quality, relevance, and frequency of feedback in research, strengthening the credibility and validity of scientific findings while promoting inclusivity.
We at the Leibniz Institute for Psychology (ZPID), an open science institute for psychology in Germany, are revising our preregistration platform “PreReg”. To tailor it even more closely to the needs of the psychological research community, we want to involve the community in all steps of the development process. We have already conducted a survey to find out which features are considered important, and we are currently working on a prototype that we want to scrutinize together with the hackathon participants. Specifically, we want to conduct a joint test of the platform to collect issues and ideas for improvements. Additionally, we want to consider with the participants which metadata preregistrations should contain to ensure they fulfill the FAIR principles.
With our hackathon, we want to allow participants to help shape our preregistration platform directly. To recognize the participants’ contributions, we will thank everyone on the “PreReg” website.
Most studies on the prevalence of questionable research practices (QRPs) have focused on the same limited set of 12 to 15 behaviors, leaving their broader landscape unexplored. The recently resubmitted manuscript “Bestiary of Questionable Research Practices in Psychology” (https://osf.io/preprints/psyarxiv/fhk98_v1) identifies and categorizes 40 QRPs, offering a far more comprehensive framework for understanding research integrity threats. This hackathon brings together researchers interested in launching a multinational study to measure the prevalence of these QRPs across different academic cultures. In this session, participants will refine methodological approaches, discuss survey design, and form an international research team to implement a large-scale data collection. By expanding the focus beyond well-known QRPs like HARKing and optional stopping, we aim to produce the most systematic assessment of QRP prevalence to date. Join us to help shape this global initiative for more transparent psychological science!
Attention deficit hyperactivity disorder (ADHD) is increasingly recognized in adults, raising questions about whether adult-diagnosed ADHD represents a delayed childhood diagnosis or a distinct condition. Undiagnosed childhood ADHD may lead to greater risks for adverse mental and physical health outcomes due to prolonged untreated symptoms. This study aims to disentangle the causal effects of childhood- versus adulthood-diagnosed ADHD on health outcomes using multivariable Mendelian randomization (MVMR). Leveraging genetic instruments associated with ADHD, we will assess the independent effects of childhood and adulthood ADHD on mental health (e.g., depression, anxiety) and physical health (e.g., cardiovascular disease, obesity). MVMR addresses potential confounding and reverse causation, enabling robust causal inference. By differentiating the impacts of early versus late diagnosis, this study will provide insights into ADHD’s developmental and health trajectories, informing early intervention and treatment strategies.
There is a reliable tendency for scientists to adopt new discoveries. As we know ten years after the founding of SIPS, this doesn't apply to methods. New methods may mean publishing less, or erasing one's legacy. Worse, once new methods are avoided, the barrier to adoption grows.
I will argue that science largely did not change, and that this was a matter of self-preservation; it will therefore not change in the future. P-hacking, base rates, and publication bias were not even new.
We don't know the solution, but reform needs to be scientific and update two outdated assumptions: that researchers will change once they know, and that they will act in good faith. Further, we should test an untested assumption: that an expert with bias is better than a non-expert without it. The results may suggest a "Red team," or at least a "jury system," of science.
Science communication (SciComm) is a vital activity for making psychological science accessible to the public. Social video formats, such as YouTube videos and live streams, have been highlighted for their potential to engage a wide and broad audience who may not engage with traditional public outreach and engagement activities. In this talk, we introduce the use of a virtual avatar (aka “VTubing”) as a medium for digital science communication. As active science communication content creators, we will present our collective experiences in using virtual avatars to communicate science in a diversity of interactive and passive ways. We will also discuss the benefits and challenges associated with using a virtual avatar for SciComm. Throughout, we will provide practical examples and tips for getting started on your own SciComm VTuber journey.
Media multitasking—simultaneously engaging with multiple digital streams—is a common behavior with significant cognitive implications. However, studying it experimentally poses a key challenge: how can we balance experimental control with ecological validity? Additionally, what range of media multitasking activities should be considered? This session invites an open discussion on innovative methodologies to address these issues. We will explore approaches such as dynamic task designs, immersive real-world simulations, and adaptive experimental paradigms that better capture natural media consumption patterns. By refining our methods, we can generate findings that are both scientifically rigorous and applicable to real-world contexts. Let’s collaborate to advance the experimental study of media multitasking!
While BIPOC students experience poorer academic outcomes than their white peers (Banks et al., 2019; Hurtado & Alvarado, 2015; Ong et al., 2013), diverse peer interactions, mentorship, and belonging have been shown to support students of color (Hussain & Jones, 2021). The PsycPEERS fellowship was developed to support BIPOC students by providing new psychology students of color mentorship and academic support from older students, community with other psychology students of color, and professional development opportunities. This fellowship design emphasized students’ existing strengths, skills, knowledge and capacities. Originally, the fellowship was designed to include 12 new student fellows, 2 peer mentors, 1 tutor, and 3 faculty who coordinated the program. While the program has provided meaningful support for some BIPOC students, it has experienced low enrollment (2-4 fellows) and retention of fellows. The goal of this hackathon is to generate 3 new ideas for enrollment and retention among first-year BIPOC students.
Peer review shapes which research gets funded, published, and shared, influencing the scientific community and society. Despite its importance, reviewers often receive little training, with little focus on addressing biases embedded in the process. PREreview aims to change this. This workshop is a condensed version of the three-part Open Reviewers Workshop, using materials from The Open Reviewers Toolkit. With a focus on equity, diversity, and inclusion, participants will explore the basics of open, preprint peer review and an introduction to recognizing biases in the scholarly publishing process.
Target Audience: Researchers of all career levels, particularly Early Career Researchers entering the peer review process.
Learning Objectives:
- An introduction to how systems of oppression manifest in the manuscript review process
- An introduction to strategies to self-assess and mitigate bias in the context of manuscript review
- A general overview of community-driven open review processes via PREreview and other services
To determine participants' racial identities, researchers often ask them to choose from a set of predefined race categories (e.g., “Black”, “White”). Characterizing a participant group using these categories (e.g., “X% were Black”) may suggest that race is objectively determined, or that participants would spontaneously describe their identities as such (e.g., “X% identified as Black”). Four studies, involving 572 participants, indicated that when asked to choose from a set of traditional race categories, many felt that their identities were not represented by the categories they had to settle for and reported negative feelings about being referred to by those categories. Open questions in which participants created their own labels avoided these issues. However, free-format answers are problematic to code, compile, and analyze. In this session, attendees will brainstorm ideas for developing practices that respect participants’ identities, ensure methodological feasibility, and promote a non-essentialist view of race.
This workshop is designed for researchers working with longitudinal data to develop predictive models using machine learning. Longitudinal data involve repeated measurements from the same individuals, such as daily surveys or passively collected data via smartphones and wearable devices.
Participants will explore key challenges in evaluating prediction models, including selecting appropriate training and test sets. The workshop will review common use case scenarios and validation methods, demonstrating their alignment with different research goals. Focus will be placed on potential pitfalls when evaluating models and recommendations to avoid them.
Attendees will also be introduced to a tool (https://github.com/AnnaLangener/Justintime) designed to help researchers assess misalignments between validation strategies and research objectives.
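As a concrete illustration of the training/test-split issue, the sketch below compares record-wise and subject-wise cross-validation on simulated daily data. It is our own Python example using scikit-learn, not the workshop materials or the Justintime tool; the simulated data and model choice are assumptions made for the example.

```python
# A minimal sketch (our illustration, not the workshop's materials or the
# Justintime tool): record-wise vs. subject-wise cross-validation for
# longitudinal data, using scikit-learn.
import numpy as np
from sklearn.model_selection import KFold, GroupKFold, cross_val_score
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_subjects, n_days = 30, 20

# Simulated daily data: each subject has a stable trait plus daily noise.
subject_ids = np.repeat(np.arange(n_subjects), n_days)
trait = rng.normal(size=n_subjects)[subject_ids]
X = np.column_stack([trait + rng.normal(scale=0.5, size=trait.size),
                     rng.normal(size=trait.size)])
y = trait + rng.normal(scale=0.5, size=trait.size)

model = Ridge()

# Record-wise split: observations from the same person can appear in both
# training and test folds, which can inflate performance estimates when the
# goal is to predict for *new* individuals.
recordwise = cross_val_score(model, X, y,
                             cv=KFold(n_splits=5, shuffle=True, random_state=0))

# Subject-wise split: all observations from one person stay in the same fold,
# matching the "new person" prediction scenario.
subjectwise = cross_val_score(model, X, y,
                              cv=GroupKFold(n_splits=5), groups=subject_ids)

print(f"Record-wise  R^2: {recordwise.mean():.2f}")
print(f"Subject-wise R^2: {subjectwise.mean():.2f}")
```

Which split is appropriate depends on the use case: predicting future observations for people already in the dataset is a different validation problem from predicting for people the model has never seen.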
Panel 2: The Evolution and Current Landscape of Open Science. This panel will explore the historical development of open science, key milestones, and ongoing challenges.
Speakers: Katie Corker, Julia Strand, Michèle B. Nuijten, Aurélien Allard, tba
Moderator: Kailey Lawson
Some key questions we aim to discuss include:
How has the open science movement evolved over time, and what were its major turning points?
What have been the most significant successes and setbacks in the adoption of open science practices?
What challenges still remain, and how can they be addressed at different levels (individual, institutional, and systemic)?
How do philosophical perspectives shape the principles and implementation of open science?
What lessons can we learn from past efforts to increase transparency and reproducibility in research?
Social gathering: Online Open Science Escape Room
Join other SIPS attendees for a puzzle-solving hour as we play through this game in small groups: https://norment.github.io/ecrm20_escaperoom/about/ We will assign groups during the event so you can come alone or with others!
(If you've already played through this, you can still join and hang out with the moderators in the main Zoom room!)