The course program consists of two 2-day interactive meetings in which you will present your research, collaborate in subgroups on group assignments, follow interactive guest lectures and contribute to (sub)group discussions.
The five guest lectures are centered on different themes. You will get familiar with the main theoretical concepts of these themes and how they relate to the overarching theoretical framework of formative and summative assessment. You will apply the presented theory to a practical case based on, for instance, a video fragment, a newspaper article, an online blog, or a description. You will also brainstorm on new research questions within each of these themes that are both relevant and challenging for research and practice. The themes of the guest lectures are described below.
Day 1 – April 15, 2019
9.30 – 12.00 Pitch your own research
In this session we will provide a general overview of the field of assessment research so that you can establish your own position within this field. You will also learn how the work of your fellow participants relates to your own research. In small subgroups of researchers with related research aims, you will create a poster for a poster presentation during the group excursion on Day 2.
12.00 – 13.00 Break (and finalizing the poster if necessary)
13.00 – 15.30 Guest lecture 1 by Marije Lesterhuis (University of Antwerp) & Renske Bouwer (Vrije Universiteit Amsterdam)
Summative and formative uses of comparative judgment
Although competences like communication skills, creativity and problem solving are becoming increasingly important in today’s education, their assessment remains a struggle for both practitioners and researchers. In this session we will present a relatively new and promising method for assessing competences: comparative judgment.
You will become familiar with the theoretical principles of comparative judgment in order to understand its merits and limitations. You will also get the opportunity to experience how this method works by performing a competence-based assessment in D-PAC, a digital platform for the assessment of competences that is based on the principles of comparative judgment. D-PAC has been developed and tested in a large research project at the University of Antwerp. We will provide an overview of the research within this project, focusing on the reliability, validity and efficiency of comparative judgments, and on how this method can be used for formative and summative purposes.
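Comparative judgment typically derives a shared quality scale from many pairwise comparisons, often via a Bradley–Terry model. As a rough, hypothetical sketch of that estimation step (invented item names and data; not D-PAC’s actual implementation), in Python:

```python
# Illustrative only: a minimal Bradley-Terry estimate of item "quality"
# from pairwise comparison outcomes. Item names and data are invented,
# and D-PAC's actual estimation procedure may differ.

def bradley_terry(comparisons, iterations=200):
    """Estimate a strength score per item from (winner, loser) pairs,
    using the classic minorization-maximization update."""
    items = {x for pair in comparisons for x in pair}
    strength = {i: 1.0 for i in items}
    for _ in range(iterations):
        new = {}
        for i in items:
            wins = sum(1 for winner, _ in comparisons if winner == i)
            # Each comparison involving i contributes 1/(s_i + s_opponent).
            denom = 0.0
            for winner, loser in comparisons:
                if winner == i:
                    denom += 1.0 / (strength[i] + strength[loser])
                elif loser == i:
                    denom += 1.0 / (strength[i] + strength[winner])
            new[i] = wins / denom if denom else strength[i]
        # Scores are only defined up to a multiplicative constant, so fix the scale.
        total = sum(new.values())
        strength = {i: v * len(items) / total for i, v in new.items()}
    return strength
```

For example, if essay A repeatedly beats B and B repeatedly beats C, the estimated scores order the essays accordingly; the stability of such scores across judges and comparisons is one way reliability is examined in comparative judgment research.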
15.30 – 16.00 Recap and introduction of the final assignment
Day 2 – April 16, 2019
9.30 – 12.00 Guest lecture 2 by Dominique Sluijsmans, Zuyd Hogeschool
From pixels to portrait: Towards valid judgments about professional competence in higher education
In higher professional education (HPE), students are trained to become professionals, such as midwives, teachers, managers, and application designers. To determine whether students are ready to enter professional life, an approach to assessment is needed that not only allows for appropriate decision-making about students’ professional performance (the summative purpose of assessment), but also fosters students’ professional learning (the formative purpose of assessment). Comparable to a curricular program, assessments should be organized programmatically: the focus should not be on optimizing every single assessment, but on optimizing the coherence between the methods in a program of assessments.
In this contribution, my main aim is to increase your understanding of the design principles of an assessment program in higher professional education that leads to valid inferences about professional competence and in which the formative and summative functions of assessment are well balanced. My contribution is successful when participants:
- understand the difference between formative and summative assessment;
- can translate these functions to a practical case;
- understand the main design principles of an assessment program;
- can recognize these principles in a practical case;
- understand the implications for teachers’ and teacher leaders’ professional development;
- can reflect on the implications of the above for professional practice and academic research on assessment;
- experience that this part of the course contributes to their own research ideas and/or understanding of assessment in general.
Literature that will be addressed (selection; mandatory literature will be announced several weeks before the course):
- Andriessen, D., Sluijsmans, D., Snel, M., & Jacobs, A. (2017). Protocol Verbeteren en Verantwoorden van Afstuderen in het hbo 2.0. Den Haag: Vereniging Hogescholen. [only in Dutch]
- Andriessen, D., Sluijsmans, D. M. A., Snel, M., & Jacobs, A. (2017). Onderwijs ontwerpen is mensenwerk. Een reisverslag over het implementeren van Beoordelen is Mensenwerk in het hoger beroepsonderwijs. Den Haag: Vereniging Hogescholen. [only in Dutch]
- Sluijsmans, D. M. A., & Struyven, K. (2014). Quality assurance in assessment: An introduction to this special issue. Studies in Educational Evaluation, 43, 1-4.
- Sluijsmans, D. M. A., & Segers, M. (2016). Toetsrevolutie: Naar een feedbackcultuur in het hoger onderwijs. Culemborg: Phronese.
- Stiggins, R. J. (2009). Essential formative assessment competencies for teachers and school leaders. In H. L. Andrade & G. J. Cizek (Eds.), Handbook of formative assessment. New York: Routledge.
- Van der Vleuten, C. P. M., & Schuwirth, L. W. T. (2005). Assessment of professional competence: From methods to programmes. Medical Education, 39, 309-317.
- Van der Vleuten, C. P. M., Schuwirth, L. W. T., Driessen, E. W., Dijkstra, J., Tigelaar, D., Baartman, L. K. J., & Van Tartwijk, J. (2012). A model for programmatic assessment fit for purpose. Medical Teacher, 34, 205-214.
- Van der Vleuten, C. P. M., Schuwirth, L. W. T., Scheele, F., Driessen, E. W., & Hodges, B. (2010). The assessment of professional competence: Building blocks for theory development. Best Practice & Research Clinical Obstetrics and Gynaecology, 24, 703-719.
- Van Merriënboer, J. J. G., & Sluijsmans, D. M. A. (2009). Toward a synthesis between cognitive load theory and self-directed learning. Educational Psychology Review, 21(1), 55-66.
- Van Zundert, M., Sluijsmans, D. M. A., Könings, K., & van Merriënboer, J. J. G. (2012). The differential effects of task complexity on domain-specific and peer assessment skills. Educational Psychology, 32(1), 127-145.
12.30 – 16.00 Group excursion to University of Applied Sciences Utrecht (HU)
In this group excursion you will get the opportunity to discuss with educational practitioners of the University of Applied Sciences Utrecht how they use assessment for summative and formative purposes in their classroom practice. How do they make decisions about assessments and assessment programs? What are the constraints or struggles in designing and/or implementing assessments in practice? Do they use research findings to optimize their assessment program? Answers to these and other related questions will provide unique insight into what kind of research impacts educational practice and, vice versa, what practice-based research questions are currently open for further investigation. To foster discussion, you will present your research in subgroups, based on the posters designed on Day 1, and the HU practitioners will present some practical cases.
Day 3 – May 13, 2019
9.30 – 12.00 Guest lecture 3 by Jerich Faddar, University of Antwerp
Challenges in international large-scale assessments: The case of TIMSS (Trends in International Mathematics and Science Study)
For many years, different actors in the field of education have been interested in comparisons across educational systems, with the aim of exchanging ideas and informing policy. Such comparisons can focus on different aspects of education, but they often concern the assessment of students’ performance. A well-known example of such an international large-scale assessment is TIMSS (Trends in International Mathematics and Science Study), alongside PISA and PIRLS. Implementing TIMSS, however, also poses many challenges.
In this session we will focus on the implementation of the TIMSS study. You will deepen your understanding of what TIMSS is about, how it is implemented (with its quality standards), and what conclusions can be drawn from a study such as TIMSS.
13.00 – 15.30 Guest lecture 4 by Roos Van Gasse, University of Antwerp
How collaborative use of student data can promote professional development of teachers
The research field on data use has emerged over the past decades. The underlying idea is that decisions based on (student) data are more valid and fairer towards students than intuition-based decisions. Moreover, purposeful use of data can give teachers concrete starting points for improving instruction and, in turn, pupil learning.
My aim is to familiarize you with the idea of data use in general and its opportunities for teaching and learning in particular.
Therefore, the learning goals are the following:
- Being familiar with the main principles of data use and the data use circle of inquiry
- Reflecting on the need for data use in educational practices and the role of collaboration in data use
- Reflecting on purposeful use of assessment data within education at pupil and teacher level
- Formulating innovative questions that move research and practice on data use forward
15.30 – 16.00 Recap
16.00 Informal afterwork drinks
Day 4 – May 14, 2019
9.30 – 12.00 Guest lecture 5 by Martijn Leenknecht, HZ University of Applied Sciences
Promoting students’ learning with (formative) assessment
Assessment can have a negative effect on students’ learning, for instance when students learn to the test or neglect feedback. Why not harness these mechanisms and turn such unintended negative effects into intended positive ones?
In this session we will explore together how assessment affects students and their learning processes. We will take a closer look at assessment and student characteristics in order to better understand why some assessment practices are beneficial for students’ learning and others are not. Where should educational research go next? How can research support practice in developing challenging assessment practices?
13.00 – 16.00 Peer group review of (pre)final assignments