Training Computing Educators to Become Computing Education Researchers
Jeffrey C. Carver; Sarah Heckman; Mark Sherriff
2021-10-11

The computing education community endeavors to consistently move forward, improving the educational experience of our students. As new innovations in computing education practice are developed and shared, however, the resulting papers may not exhibit the desired qualities that move simple experience reports to true Scholarship of Teaching and Learning (SoTL). We report on our six years of experience running professional development for computing educators in empirical research methods for social and behavioral studies in the classroom. Our goal is to have a direct impact on instructors who are in the beginning stages of transitioning their educational innovations from anecdotal to empirical results that can be replicated by instructors at other institutions. To achieve this, we created a year-long mentoring experience, beginning with a multi-day workshop on empirical research methods during the summer, followed by regular mentoring sessions with participants, and culminating in a follow-up session at the following year's SIGCSE Technical Symposium. Based on survey results, and as evidenced by the eventual research results and publications from participants, we believe that our method of structuring empirical research professional development was successful and could be a model for similar programs in other areas.

Members of the computing education community invest a large amount of effort developing new methods to improve the educational experience of their students. These educators hope that, by reporting these experiences in conference and journal venues, other members of the computing education community will adopt, or attempt to adopt, their methods in their own environments. This approach has worked quite well for some educational innovations. Because of the potential for widespread adoption and the impact that such adoption could have on future generations of computer science students, it is important for these research reports to include adequate evidence that the educational innovation is effective at achieving its stated goal.

Excellent educators already reflect on their teaching experience and actively seek to improve from semester to semester. Sometimes they share these reflections with the larger computing education community in the form of the experiences described above. By increasing the rigor of the analysis of their classroom interventions, an educator can move from reflective teaching to the Scholarship of Teaching and Learning (SoTL) [7]. SoTL includes asking appropriate research questions, designing and conducting studies to answer those questions, applying appropriate analyses to the results, providing details about where the work fits into the existing body of knowledge, and describing how to interpret the results in light of the threats to validity [7, 14]. Many of the reports on excellent teaching practices in the computing education community may meet the standards of scholarly work [7], which include items like novelty, enough detail for replication, and peer review. However, much of the work lacks rigor, missing key items like appropriate study methods, related work, and threats to validity [2, 7, 32, 39].
Recent reviews of the computing education literature show that while there is empiricism in computing education, the literature lacks the rigor appropriate for SoTL [2, 7, 17, 32, 39]. These shortcomings slow the progress of SoTL within the computing education research (CER) community. To help remedy this situation, we developed a program called Designing Empirical Education Research Studies (DEERS) to train Computer Science educators on the concepts of human-based empirical research. DEERS includes an intense summer cohort workshop followed by a year of one-on-one mentoring. Our experience has shown that many in the computing education community conducted their dissertation research on topics that did not involve human subjects (e.g., algorithms or networks). Therefore, educators often need to learn the concepts involved in conducting computing education research, which is heavily human-based. The goals of DEERS are:

• To introduce computing educators to the concepts of human-based empirical research
• To help computing educators define their research questions and study design in a way that will be most helpful to the larger community
• To mentor computing educators through execution of their study to publication of the research report
• To increase the level of sophistication about human-based empirical research throughout the computing education community

The design of DEERS was heavily influenced by literature related to SoTL and to computing education research. We utilized many of the resources below, along with our backgrounds, in the development and evolution of DEERS.

SoTL is the application of empiricism to computing education research. We define empiricism as "validation based on observation of an intervention". An empirical validation draws conclusions based upon observed evidence rather than argumentation, proof, or some other means [36]. Empirical validation does not solely involve experimentation or the scientific method, but incorporates the "method of science" [14]. The method of science considers both inductive and deductive paradigms for gathering evidence to answer a research question [14]. An increase in empiricism may help move CER to a recognized sub-area of computer science [10].

A large portion of the computing education literature includes scholarly work on excellent teaching practice [7]. These experience reports are novel, include details for adoption, and are peer reviewed (like this paper). While experience reports are valuable and describe an "educational approach or tool, the context of use, and provide a rich reflection on what did or didn't work, and why" (from the SIGCSE Technical Symposium 2022 call for papers), more rigorous research that directly assesses a research question can move the field of computing education from reflective teaching to SoTL, providing the foundation for computing-specific educational theory [14]. However, much published CER work lacks reporting of elements appropriate for research, suggesting the need for additional work to support computing education practitioners and researchers in developing research studies and reporting the results of those studies. Reviews of the computing education literature have found gaps in the reporting of research questions [17, 22, 32] and related work, a lack of detail in the methods [19, 23], and missing information about study context and participants [28, 32], threats to validity [2, 19], research ethics [19], and the connection of results to theories (when appropriate) [24, 37].
However, these studies do not necessarily make a distinction between research and experience reports in their findings [18]. An additional challenge is that replication is lacking in CER, partially due to challenges in reporting and to a bias towards original work [1, 26]. A literature review on educational data mining by Ihantola et al. [19] found that only 5 studies (7% of their reviewed papers) were replications. A broader literature review, specifically on replication in CER, by Hao et al. [16] found that only 2.38% of 2,269 articles were considered replications, and only 63% of those were successful. Further, many research studies do not contain a point of comparison [2, 17]. Additionally, studies frequently create new measurement instruments rather than utilize common and validated instruments [25]. Educational research organizations provide guidelines to support the development of empirical studies and reporting on educational research [3, 4, 11, 35]. Additional recommendations on high-quality reporting, and therefore high-quality research design, come from the CER community [12, 17, 19, 27, 28] and related computing fields like software engineering [8, 20, 21, 33].

We have conducted DEERS annually from 2016 through 2021. We plan to continue offering DEERS in its current form at least through the summer of 2022 and potentially longer, contingent on funding. In our time running DEERS, the format of the workshop has evolved as we have learned more about how best to structure a professional development experience for faculty interested in pursuing their own empirical CER projects. We have offered DEERS six times, with four instances run in person and two instances run fully online due to COVID. In addition, we organized two mini-versions as workshops at the SIGCSE Technical Symposium. DEERS is structured as a year-long mentoring experience, beginning with a multi-day summer workshop on empirical research methods, followed by regular one-on-one mentoring sessions with participants, and culminating in a follow-up session at the following year's SIGCSE Technical Symposium. Our discussion here focuses on the in-person workshop. We discuss the online version in Section 4.

Each cohort of participants begins the program with a two-and-a-half-day, in-person summer workshop. We modeled the structure and schedule of the summer workshop after a typical short conference, including group meals and social events, such as tours of the local area. With funding support from the NSF, participants receive a stipend to cover travel to the host institution. We arrange for housing at a local hotel within walking distance of the workshop venue. The schedule for the in-person workshop alternates between presentations from the organizers on aspects of empirical research, independent work where participants develop their own projects, and small group discussions where participants share and receive feedback. Based on our observations and participant feedback, the small group discussions are one of the most valuable aspects of DEERS. Each group has three or four participants, one organizer, and one or more alumni of DEERS. Giving participants the opportunity to evaluate and provide feedback on other participants' projects can be just as valuable as receiving feedback on their own work, as they are learning to better identify the needs of empirical research projects.
We repeat this cycle of short lesson, independent work, and group discussion and feedback for each of the key steps in designing an empirical research project. The goal is for participants to leave the workshop with a refined research question, a research plan, and feedback on any potential IRB protocol needs. After returning to their own institutions, participants begin work on their research projects. Throughout the academic year, mentors meet regularly with participants, offering guidance and feedback as they execute the research plan. The cohort comes back together at the next SIGCSE Technical Symposium for a follow-on short workshop on data analysis. At that point, participants begin planning a paper submission for either the coming SIGCSE Technical Symposium or another conference.

One of our objectives is to reach faculty who might not normally have the time or resources to pursue educational research or who do not have much experience with social and behavioral empirical research practices. To accomplish this goal, we begin by broadly inviting the SIGCSE community to apply to DEERS via the general members mailing list. We also recruit participants through word of mouth from previous participants (after cohort 1) and by presenting our work at the NSF Showcase at the SIGCSE Technical Symposium. The applicants propose a potential research question or topic they are interested in exploring, along with their goals and motivations for applying to DEERS. We use this information to select a broad range of faculty and projects for each cohort and to ensure that we have the proper expertise to aid them with their projects. Our overall acceptance rate to DEERS has been 45%; this figure includes ineligible applicants, namely those from outside the United States (due to NSF funding restrictions) and graduate students. Table 1 provides the demographics of the accepted participants. Overall, 60% were faculty at smaller, primarily teaching institutions, 17% were teaching-track faculty at R1 universities, and 22% were tenure-track faculty at R1 universities. As we discuss in our future work, we hope to provide a separate experience for graduate students interested in CER.

We structured DEERS to systematically walk the participants through the creation, refinement, execution, and analysis of an empirical research project. In the early iterations of DEERS, we attempted to go through every aspect of a research project during the in-person workshop. Based on participant feedback, and after realizing that we needed to cover the planning aspects of empirical research in more depth, we moved our content regarding data analysis to the follow-on workshop, which is when most participants were finishing collecting data. The content and schedule below reflect what we use for the in-person workshops.

3.3.1 Module 1: Scholarship of Teaching and Learning.
• Length: 1 hour
• Learning Outcome: Understand the role of SoTL and how it differs from simply teaching
• Activities: Short lecture

The SoTL module helps participants understand how to move from scholarly teaching to the scholarship of teaching and learning. The module also provides an overview of DEERS, including the other modules and the mentoring over the next year.

3.3.2 Module 2: Research Questions.
• Length: 3 hours
• Learning Outcomes: Learn the desirable aspects of a good research question and refine their own research question
• Activities: Short lecture, guided worksheet, group discussion and feedback

A successful empirical research project must start with a specific and well-scoped research question.
During this session, we discuss the importance of establishing the research question before beginning the study and then give a short lesson on key aspects of a good research question: 1) interesting to the community; 2) answerable; 3) repeatable; 4) measurable; and 5) appropriately scoped. We ask all participants to come to the first day of DEERS with a prospective research question. After the presentation, the participants have independent time to refine their question before moving into small groups to share their questions and receive feedback. While some participants merely tweak their research questions, sometimes they identify entirely new research directions during these small group conversations, as alums and fellow participants help them identify a more interesting or effective question in the same topic area. The participants end the day with their refined research question and homework to do some background reading.

3.3.3 Module 3: Study Design.
• Length: 4 hours
• Learning Outcomes: Learn the details of empirical research, including variables, data types, quantitative vs. qualitative research, basic study designs, grouping participants into treatments, threats to validity, and pilot studies
• Activities: Short lecture, guided worksheets, group discussion and feedback

There are many interrelated aspects important to developing a well-designed empirical study. During this session, we begin with an overview of empirical research focused on human subjects. We then discuss the different types of independent and dependent variables and how researchers should consider them in their study design. After a discussion of qualitative and quantitative data types, we describe a number of basic study designs, highlighting the strengths and weaknesses of each one. We conclude the session with a discussion about common threats to validity and how researchers can reduce those threats during study design, where possible. Then, starting from the research question defined in the first session, the participants have a homework assignment to develop an initial study design that identifies the key variables, lists potential data sources, notes potential designs, and identifies threats to validity.

3.3.4 Module 4: Ethics and the Institutional Review Board.
• Length: 1 hour
• Learning Outcomes: Understand the ethics associated with designing human-subject studies, the role of the Institutional Review Board (IRB), and how to interact with the IRB at a given institution
• Activities: Short lecture, individual investigation of local IRB policies and procedures

For many participants in DEERS, educational research is their first experience working with human subjects. During this session, we discuss some of the key concepts related to designing and conducting ethical human subjects research, including those issues that arise specifically when the human subjects are students in their own classrooms. We cover topics including risk, beneficence, compensation, deception, and debriefing. We then spend time discussing informed consent, including what it is and how to ethically obtain it from participants. Finally, we cover some of our lessons learned in working with the IRBs at various institutions, including the types of documentation required and some of the common issues that arise with IRB protocols. Based on this information, the participants revisit the study design created previously to make any adjustments necessary based on considerations of ethics.
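To make the study-design and sample-size considerations from the preceding modules concrete for readers who think in code, the minimal sketch below is our own illustration and is not part of the DEERS workshop materials. It assumes Python 3 with the statsmodels package available; the class roster, effect size, and other numbers are hypothetical placeholders.

```python
# Illustrative sketch only (not DEERS material): balanced random assignment of a
# hypothetical class roster to two conditions, plus a rough sample-size estimate.
import random

from statsmodels.stats.power import TTestIndPower  # assumes statsmodels is installed


def assign_conditions(roster, seed=2016):
    """Randomly split a roster into balanced treatment and control groups."""
    rng = random.Random(seed)  # fixed seed makes the assignment reproducible and auditable
    shuffled = list(roster)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}


if __name__ == "__main__":
    roster = [f"student_{i:02d}" for i in range(1, 41)]  # hypothetical 40-student course
    groups = assign_conditions(roster)
    print(len(groups["treatment"]), "treatment /", len(groups["control"]), "control")

    # How many students per group would be needed to detect a "medium" effect
    # (Cohen's d = 0.5) with 80% power at alpha = 0.05?
    needed = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(f"Approximately {needed:.0f} students per group")
```

A quick check like this often shows that a single small section cannot reliably detect modest effects, which is one reason classroom studies may need to span multiple sections or terms.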
3.3.5 Module 5: Data Collection.
• Length: 3 hours
• Learning Outcomes: Identify data collection instruments and techniques that apply to a participant's given research question and learn how to properly gather, clean, and secure potentially identifiable information
• Activities: Short lecture, guided worksheet

After designing the study, the next step is to identify and operationalize variables to answer the research question. This module covers considerations about whether data items are needed, reasonable to gather, and ethical to gather. We discuss variables, including their reliability and validity. We also suggest data collection instruments, like validated attitudinal surveys, and validation techniques for other data collection instruments. We then discuss key considerations of data management, particularly related to privacy and confidentiality, and the automation of data collection. Finally, there is a module on qualitative data collection that includes materials on participant observations, think-aloud studies, interviews, and surveys. After the presentation, the participants use a guided worksheet to create a data collection plan by 1) operationalizing the variables; 2) determining the instrumentation necessary to gather the data; and 3) developing a plan to collect the data in their study context.

3.3.6 Module 6: Data Analysis.

Analyzing data from human-subjects research can present challenges for researchers who are not familiar with this process. This session, which we conduct as a follow-up private session at the SIGCSE Technical Symposium, focuses on helping the participants with questions related to the analysis of their data, whether it is statistical or qualitative. We chose to move this session out of the summer workshop and into a follow-up SIGCSE Technical Symposium meeting because we learned that, during the summer workshop, the participants were already receiving so much information regarding research questions and study design that it was not reasonable to add one more topic. In addition, many of the questions related to data analysis do not appear until researchers have actual data they are trying to analyze. Therefore, in this session, we provide a brief overview of some of the most common statistical techniques used in human-subjects research. We then facilitate a discussion and Q&A among the participants to help them get their questions answered so they can proceed with the data analysis process.
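As one concrete taste of the kind of common statistical technique this session surveys, the short sketch below is our own example rather than part of the DEERS materials. It assumes Python 3 with scipy installed, and the exam scores are fabricated solely for demonstration; a nonparametric test such as the Mann-Whitney U is often a reasonable choice for the small, non-normal samples typical of a single course section.

```python
# Illustrative sketch only (not DEERS material): comparing exam scores from two
# hypothetical sections with a Mann-Whitney U test and a simple effect size.
from scipy.stats import mannwhitneyu  # assumes scipy is installed

treatment_scores = [78, 85, 92, 70, 88, 95, 81, 74]  # hypothetical intervention section
control_scores = [65, 72, 80, 68, 77, 83, 70, 66]    # hypothetical comparison section

# scipy returns the U statistic for the first sample when `alternative` is given
u_stat, p_value = mannwhitneyu(treatment_scores, control_scores, alternative="two-sided")

# Rank-biserial correlation as an effect size: positive when the treatment group
# tends to score higher, negative when it tends to score lower.
n1, n2 = len(treatment_scores), len(control_scores)
rank_biserial = 2 * u_stat / (n1 * n2) - 1

print(f"U = {u_stat:.1f}, p = {p_value:.3f}, rank-biserial r = {rank_biserial:.2f}")
```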
For the 2020 and 2021 iterations of DEERS, we converted the summer in-person workshop and the follow-up session at the SIGCSE Technical Symposium to an online format due to travel restrictions from COVID. During the academic year, we already conducted our one-on-one mentoring virtually, so its format did not change. However, many of the projects that were underway for the 2020-2021 academic year had to be adjusted due to challenges at each participant's institution.

After converting our own courses to an online format, we had a reasonable understanding of what would and would not work when we moved DEERS to a virtual format. First, we knew we did not want full-day virtual sessions that tried to mirror the in-person experience. However, we wanted to make sure the virtual sessions captured the most important aspect of the in-person version of DEERS: the small group discussions. Our experience showed that small-group breakout rooms in a virtual meeting tool can be effective if all participants are engaged. Because DEERS consisted of a small group of faculty who had applied to join, we assumed we would have an engaged group.

When we were co-located, a typical session consisted of a short presentation, individual work, and group discussion for feedback. For the online version, we recorded the presentations as 20- to 30-minute videos and hosted them on our website, along with our lesson notes. Further, we provided all of the individual worksheets to participants at the beginning of the summer workshop. We then changed our schedule to hold only one question-and-answer session and one small group discussion session each day and extended the workshop to a fourth day. The optional one-hour question-and-answer sessions in the morning allowed participants to join if they wanted to ask questions. The afternoon small group discussion sessions lasted two hours. We asked the participants to review the lesson videos and worksheets before coming to the group discussion session so they could fully participate. We began each small group discussion session with a quick review of the material in the video before going into small group breakout rooms to discuss the progress each person had made on their research project design.

Overall, we found the online model to be successful. Creating the online course material, including videos and lecture notes, was extremely valuable, culminating in a public, self-paced course and reference for anyone interested in human-based empirical research. The small group interactions were still generally successful, but carried with them the same drawbacks as any online group video chat interaction, including "Zoom fatigue," distractions, and technical issues. Conducting the workshop virtually did make participation possible for those who normally could not travel during the summer due to child care needs. However, we did see an effect from the loss of group-building that occurs when all participants are co-located. The online cohorts do not seem to be as cohesive a group as the in-person cohorts. Prior workshops led to several collaborations, but we have not seen the same number of post-workshop collaborations among online participants.

One of the most important practices that we implemented as part of DEERS is our ongoing mentorship of the participants, which helps provide accountability. During each summer workshop, we match each participant with one of the three mentors, primarily based upon mutual interest. These mentoring relationships then continue formally for the next year (some have continued even beyond that). Each mentor/mentee pair develops a plan for regular meetings over the course of the year. These meetings serve an important accountability role for the participants. While the participants may have the best intentions of conducting their studies in the next academic year, we have seen instances where the busyness of course preparation and other tasks can push the educational research study down the priority list. The regular meetings help the participants prioritize their study and make slow, steady progress. In addition to providing accountability, the meetings also provide the participants with a regular time to ask questions and receive feedback on their research questions, study design, and study progress.

A key aspect separating scholarly teaching from the scholarship of teaching and learning is planning. DEERS's focus on planning a research study is critical to developing successful research studies. By taking the time to determine a solid research plan with consideration of institutional context, research questions, and researcher time, we can support successful follow-through.
One of the most significant challenges in CER work is the time involved in completing a study. Because most studies incorporate some type of classroom intervention, the participant needs at least a semester or quarter to run the study. Studies that require baselines or multiple comparisons may take one or two academic years to gather all of the data. Analysis and manuscript preparation take additional time. One of the benefits of DEERS is that we can set expectations about study timelines up front, and the accountability from the one-on-one mentoring supports the long timescales. Therefore, publications from, and the impact of, DEERS lag each cohort's participation by at least a year, more likely two. Those seeking support to offer similar experiences should set appropriate expectations with funding agencies about the timelines involved and the length of time before impactful output may be observed.

CER is challenging due to study timescales, institutional contexts, lack of control, and other competing foci, especially for educators with significant teaching responsibilities. Not every DEERS participant completed their study. Some were unable to run their study because the class was canceled or their responsibilities changed. However, 37% of alumni from the first five cohorts served as reviewers for the SIGCSE Technical Symposium 2021. Others have reviewed for other SIGCSE Technical Symposia and CER venues. Their training in CER has contributed to the community through service. Many DEERS participants have gone on to publish the research studies they developed for the workshop [5, 6, 9, 13, 15, 29-31, 38]. In several cases, their participation in DEERS led to additional CER work beyond their original study, including successful grant proposals.

We surveyed the participants three times about their experience: (1) before the first session of the summer workshop, (2) on the last day of the summer workshop, and (3) via a reflection survey sent to all previous cohorts in the summer of 2020. The pre-workshop survey focused on establishing the needs of the current cohort of participants, asking questions regarding their proposed research question(s), the classes where the study would occur, and previous experience with CER. Key results from this survey include:

• Fifty percent of participants indicated they had some degree of previous experience with conducting CER work.
• Similarly, roughly half of the participants had previously published in a CER venue.
• One quarter of participants reported receiving funding to do CER work in the past.
• When asked what part of the research process they were most interested in learning more about, the answers included all of the topics we offered, with many indicating something akin to "everything."
• The most common response, however, was that participants wanted to learn more about data collection and data analysis techniques.

For the post-workshop survey, we asked questions related to how the participant's proposed project evolved during the workshop, what resources they needed from us throughout the year to be successful, and what did and did not work well during the workshop. Nearly all participants reported that the workshop helped them better clarify and scope their project into something they felt was accomplishable.
Similarly, nearly all participants indicated that the things they most needed from us during the academic year were accountability for working on their project and assistance with understanding the proper statistical techniques for evaluating their results.

Our reflection survey, which went to all participants from 2016-2020, contained three questions:

• If the DEERS project (including the summer workshop, the follow-on SIGCSE meeting, and the individual mentoring) has helped your computing education research, please provide a brief description of how it has been beneficial to you. Please be as specific as possible.
• Please list any publications (either published, submitted, or in progress) that have occurred as a result of your participation in DEERS. Next to each one, please indicate its status (in progress, submitted, published).
• Do you have any suggestions on how we could improve DEERS for future years?

Roughly 27% of participants responded (15/54), spread across all previous years. The alums reported that DEERS had a significant impact on their projects, their motivation, and their overall excitement about CER work:

• "I would say that overall the three major things the program provided me were 1) accountability, 2) feedback, and 3) confidence. The accountability came from creating a timeline, wanting to contribute to an ROI to the DEERS program, and from the post-summer check-ins with my mentor. The feedback happened at every stage of the research process and was most helpful during conception/design of the study and during framing of the paper once data had been collected. I wouldn't have had nearly as much iteration in research question and study design without DEERS. The confidence came from DEERS being an entry point into actually doing CSEd research - without the experience I would have felt like more of an outsider."
• "I am extremely grateful for the summer workshop and for all the things that I learned there. Working with my advisor has been a blessing for a new faculty member like myself. I consider that when we accept a job in academia, we are blind to how to fulfill our newly adopted responsibilities. The DEERS workshop was instrumental in providing me with guidance on how to start and how to go from there."
• "The DEERS project has helped advance my CSEd research in several ways. First, it gave me the opportunity to get feedback on a proposed research effort aimed at using software development assignments to improve computing students' ability to construct correct mathematical proofs. This may result in a dissertation topic for a PhD student arriving this fall. Second, the DEERS workshops have given me a much deeper and more organized understanding of the various experimental methods used in social science and education research. This has made me a better researcher, but has also enabled me to provide better service in the peer review process. Finally, the workshops have given me a deeper understanding of the roles of research questions and hypotheses in guiding research, and in particular of their relationships to each other. This has made me a better researcher both within CSEd and in my other areas of interest. Perhaps most importantly, it has made me a more effective research advisor by giving me a framework within which to help my students see their research more clearly within the overall body of knowledge."

There were a few suggestions for improvement, which included thoughts on the limitations of the online format in 2020, a proposal for a research reading group, and ideas that we are exploring in our future work.

After having run DEERS for six years, we see multiple opportunities to move the project forward. As we went through applications to join DEERS each year, we observed a number of graduate students who were interested in making CER a core part of their dissertation work. Our stated purpose when we created DEERS was to help faculty who normally would not have the time, resources, or mentoring necessary to start a CER project be able to do so. As such, we tailored our material, delivery, and in-person experience for faculty and did not think it was appropriate to bring in graduate students, because they would likely need more mentoring and/or there could be a conflict with their graduate advisor. To address this observation, we plan to design and offer a modified version of DEERS for graduate students and their advisors. By including both the student and the advisor, we hope to build on the existing mentoring relationship. We will primarily target students and advisors who do not have extensive experience with human-based empirical research.

Another of our goals with DEERS was to help researchers execute human-based empirical research projects and produce reports that follow standard reporting norms to allow for replication of the studies at other institutions. To continue supporting replication in CER, we are exploring the creation of workshops specifically focused on replicating existing published CER projects. We will choose one or two recent interesting projects from the literature and invite the investigators to come to the workshop to present their work. Workshop participants will then identify how they can adapt the project to their own environment and begin planning their replication study, with a focus on maximizing the replication space to maximize the scientific gain from aggregating results [34]. One goal is to remove some of the perceived stigma of replication studies, as it is important for research to be generalized to multiple environments to establish its actual efficacy [7, 14, 16, 34].

REFERENCES
Replication in Computing Education Research: Researcher Attitudes and Experiences
A (Updated) Review of Empiricism at the SIGCSE Technical Symposium
Standard for Reporting on Empirical Social Science Research in AERA Publications
Online Vs Face-to-Face Web-Development Course: Course Strategies, Learning, and Engagement
Two-Stage Programming Projects: Individual Work Followed by Peer Collaboration
Engaging in the Scholarship of Teaching and Learning
Towards Reporting Guidelines for Experimental Replications: A Proposal
A Nifty Inter-Class Peer Learning Model for Enhancing Student-Centered Computing Education, and for Generating Student Interests in Co-Curricular Professional Development
Valuing Computer Science Education Research
What Works Clearinghouse. 2020. What Works Clearinghouse Standards Handbook
Models and Methods for Computing Education Research
Separation of syntax and problem solving in Introductory Computer Programming
Supporting Guided Inquiry with Cooperative Learning in Computer Organization
A Systematic Investigation of Replications in Computing Education Research
A Systematic Literature Review of Empiricism and Norms of Reporting in Computing Education Research Literature
What is a SIGCSE Symposium Paper? SIGCSE Bull.
Educational Data Mining and Learning Analytics in Programming: Literature Review and Case Studies
Reporting Guidelines for Controlled Experiments in Software Engineering
Evaluating Guidelines for Reporting Empirical Software Engineering Studies
Methodological Rigor and Theoretical Foundations of CS Education Research
Introductory Programming: A Systematic Literature Review
Characterizing Research in Computing Education: A Preliminary Analysis of the Literature
Review of Measurements Used in Computing Education Research and Suggestions for Increasing Standardization
Discovering Empirically-Based Best Practices in Computing Education Through Replication, Reproducibility, and Meta-Analysis Studies
Defining Requirements for a Repository to Meet the Needs of K-12 Computer Science Educators, Researchers, and Evaluators
Improving Research and Experience Reports of Pre-College Computing Activities: A Gap Analysis
Undergraduate Teaching Assistants in Computer Science: A Systematic Literature Review
Capturing Student Feedback and Emotions in Large Computing Courses: A Sentiment Analysis Approach
First Things First: Providing Metacognitive Scaffolding for Interpreting Problem Prompts
Guidelines for Conducting and Reporting Case Study Research in Software Engineering
Shall we Really do it Again? The Powerful Concept of Replication is Neglected in the Social Sciences
CONSORT 2010 Statement: Updated Guidelines for Reporting Parallel Group Randomised Trials
Writing good software engineering research papers
Analysis of Research into the Teaching and Learning of Programming
A Study of the Relationship Between a CS1 Student's Gender and Performance Versus Gauging Understanding and Study Tactics
CS Educational Research: A Meta-Analysis of SIGCSE Technical Symposium Proceedings

ACKNOWLEDGMENTS
This material is based upon work supported by the National Science Foundation under Grant Nos. #1525373, #1525173, #1525028. This work was approved by the IRB at all author institutions. We would like to thank our workshop participants for their engagement and work in computing education research.