
Conference Programme

Programme Download


Pre-conference Workshops

Workshop registration

Workshop I – Wise assessment: Towards a community of practice

Speakers: Grahame Bilbow & Dai Hounsell
Time: 14:00 – 15:00
Venue: CPD-LG.18, Lower Ground (LG) Floor, Centennial Campus, The University of Hong Kong

This workshop focuses on the ongoing work of the Wise Assessment Project which is being undertaken in the Centre for the Enhancement of Teaching and Learning (CETL) at the University of Hong Kong (HKU). The chief goal of the project is to encourage, support and enhance communities of practice across the University in support of assessment for learning in the new curriculum.

While building on existing CETL/HKU networks, resources and expertise, the project is exploring new ways of facilitating dialogue across the university about ‘wise practice’ – colleagues’ insights, understandings and observations about how to assess in ways that optimise student learning, are well-attuned to subject and professional requirements, and make good use of available resources. The project also serves as a proving-ground for strengthening communities of practice on other important educational themes, and thus as a longer-term means of enhancing teaching and learning at HKU and more widely.

To date, priority has been given to surfacing and sharing effective assessment practices around four themes of strategic importance at HKU: learning from innovation in assessment in the Common Core curriculum; assessing experiential learning; high-impact feedback to students on their progress and performance; and the interrelated issue of students’ understanding of standards. Resource materials on these themes are being compiled, commissioned and edited in the form of briefing papers and linked case examples, and ‘talking-head’ and ‘vox pop’ videos.

The workshop will be an opportunity to discuss the thinking underlying the approach to the project as well as to engage with a selection of the emerging resource materials.

Workshop II – Evidence of student learning outcomes – Why and how?

Speakers: Cecilia Chan & Michael Prosser
Time: 15:00 – 16:00
Venue: CPD-LG.18, Lower Ground (LG) Floor, Centennial Campus, The University of Hong Kong

Providing evidence of student learning is important for quality assurance, ensuring that institutions, teachers, and students are achieving the learning outcomes that they intend and claim to achieve. This kind of quality-assurance standard helps institutions continuously improve their academic programmes by ensuring that students are provided with the appropriate opportunities and support, and that the quality of provision is comparable to international best practice.

Quality auditing agencies and accreditation bodies have been focusing on assuring the quality of teaching and learning processes, adopting and implementing more reliable processes such as an outcomes-based approach to student learning. However, the focus has now shifted towards the assurance of student learning outcomes at the institutional and programme levels, leading to increasing attention to the assessment of student learning outcomes in terms of both direct and indirect evidence. Discussions on the purposes of learning outcomes assessment and, more importantly, on how learning outcomes are assessed in higher education institutions are thus hot topics in higher education.

When we talk about learning outcomes, teachers and students often focus on academic discipline knowledge, with the concern mainly placed on the content knowledge students may learn and achieve. But what about the generic skills competencies that most universities have embedded into their mission statements, educational aims and institutional learning outcomes, and that are now a graduation requirement? How are these learning outcomes being assessed and reported?

In this workshop, we will discuss the ways, processes and issues involved in collecting, analysing, reflecting on and acting upon evidence of student learning in both academic discipline knowledge and generic skills competency.

Workshop III – Criteria, standards and judgment practices in assessing performance-based tasks in higher education: Opportunities from professional programmes

Speakers: Susan Bridges (HKU), Michael Botelho (HKU) & Claire Wyatt-Smith (ACU)
Time: 16:25 – 17:25
Venue: CPD-LG.18, Lower Ground (LG) Floor, Centennial Campus, The University of Hong Kong

Outcomes-based models in higher education recognize the centrality of standards-based assessment in fulfilling the goal of curriculum alignment. This workshop aims to take this mission forward by examining one assessment type: performance-based tasks. By definition, we consider these tasks to be in-the-moment performances by students that may be assessed in real time or video-recorded for post-performance assessment. Examples include professional practicum performances, clinical performances in simulated treatments or real patient care, demonstrations of skills, teaching practicums, and oral presentations such as moot courts, vivas, dramas and debates.

We will first examine the tensions between validity and reliability with performance-based tasks when considering their placement within an overarching, course or programme-level assessment strategy. Second, in considering in situ assessment of performance-based tasks, the notion of examiner judgment is central. Key to validity and reliability is making such judgments defensible, visible and accessible to students and examiners alike. Articulation of latent expertise and ‘connoisseur’ use of task performance criteria are key to this notion of accessibility. One widely adopted approach is the adoption of ‘rubric’ formats for the denotation of standards and explication of task-specific criteria. However, the standard table-format matrix used to as a template for assessment of tasks holds potential limitations for application and interpretation. ‘Boxing’ in multiple descriptors for single criterion may constrain views of student performance. They have potential to limit what an assessor ‘sees’ in the act of assessing performances, specifically, what the performance calls the assessor ‘to see’ that may not have been previously identified in the published criteria. The use of assessment grading intervals whether pass/fail or a A-E affects interpretation, reliability and the nature of feedback to students. Likewise, the ability to make ‘on-balance’ judgments may be limited by wholly pre-specified features of quality. The writing of clear yet nuanced descriptors or specifications, therefore, proves to be a continuing challenge in higher education, especially in performance-based tasks. Various models and approaches will be shared and developed in this workshop. We will also problematize the use of scalar attributes such as ‘excellent’, ‘good’, ‘unsatisfactory’ in denoting criteria and explore methods to best capture salient features considered by assessors to be central to task performance across levels.