A Generic Reference System Allowing Data-Fusion Within Continuous Improvement Processes of Engineering Education Programs

G. Cloutier, D. Spooner (2014). A Generic Reference System Allowing Data-Fusion Within Continuous Improvement Processes of Engineering Education Programs. 13 pp.

National accreditation standards and international agreements redirect the thrust of engineering programs towards a “competencies”-based education. The standard of the Canadian Engineering Accreditation Board calls for a continuous improvement process (CIP), which “demonstrates that program outcomes are being assessed in the context of the graduate attributes, and that the results are applied to the further development of the program.” CIPs are closed-loop systems and require data-fusion from many sources of evaluation. This stretches many current evaluation practices beyond their capabilities. How can a professor construct a rubric specific enough to provide feedback to the student about the subject matter, yet general enough for its results to be merged with those of other courses evaluating the same competencies within the curriculum? How can such a CIP measure improvements without a reference system that displays stability over time and good correlation with other national standards? This paper addresses both questions and provides a proof of concept for the implementation of a generic system.

A reference system is proposed. It merges the criteria of the CDIO levels of proficiency with those of the European Qualifications Framework (EQF), relying on the conclusions of the DOCET report. Echelons are defined within a five-dimensional reference system: knowledge, cognitive process, complexity, autonomy, and commitment. Whenever possible, the components borrow extensively from established taxonomies (Bloom-Anderson, Krathwohl) as well as from international committees and agreements (International Engineering Alliance, Washington Accord). A seven-echelon scale is defined to promote and track student progress from beginner to professional.
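
As a minimal sketch of how such a five-dimensional system might be represented, the Python fragment below models one point in the reference system as an echelon plus a rank on each of the five components. The component names follow the paper; the rank labels and the sentence template are illustrative placeholders, not the authors' actual keyword tables.

```python
from dataclasses import dataclass

# Component names follow the paper; everything else here is a placeholder.
COMPONENTS = ("knowledge", "cognitive_process", "complexity", "autonomy", "commitment")
ECHELONS = ("E1", "E2", "E3", "E4", "E5", "E6", "E7")  # beginner ... experienced engineer

@dataclass(frozen=True)
class Expectation:
    """One point in the reference system: an echelon plus a rank on each component."""
    echelon: str
    knowledge: str          # e.g., a Bloom-Anderson knowledge-dimension keyword
    cognitive_process: str  # e.g., a Bloom-Anderson process keyword ("recall", "analyze")
    complexity: str         # generic key phrase describing problem complexity
    autonomy: str           # generic key phrase describing the level of supervision
    commitment: str         # generic key phrase in the spirit of Krathwohl's affective domain

    def statement(self) -> str:
        # Combine the ranked keywords/key phrases into a descriptive expectation text.
        return (f"[{self.echelon}] The student can {self.cognitive_process} "
                f"{self.knowledge} in {self.complexity} situations, "
                f"{self.autonomy}, {self.commitment}.")

# Illustrative use with placeholder ranks:
e = Expectation("E1", "factual knowledge", "recall", "well-defined",
                "under close supervision", "showing willingness to receive feedback")
print(e.statement())
```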

To better structure the gathered data, all five components use keywords or key phrases grouped into ranks, and the seven-echelon scale spans 1350 rank combinations. Each rank is expressed through the associated keywords of existing taxonomies, or through generic key phrases for the remaining components. These keywords and key phrases combine into descriptive texts that express expectations. More than 3 million combinations are possible over the seven proficiency echelons: from over 1 million entry-level statements at echelon E1 down to over 160 thousand statements at graduate-level echelon E5 (echelons E6 and E7 pertain to the new and experienced engineer). The expectations of courses covering different subject matters can then be compared, since they are built from ranked arguments for each component. Twenty rules highlight polarities in the expectations that could affect student outcomes.
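
The combinatorial accounting behind these totals can be illustrated with a short sketch: the number of possible statements at an echelon is the Cartesian product of the keywords admissible for each component. The per-component counts below are hypothetical placeholders chosen only to land in the reported orders of magnitude; the abstract does not reproduce the actual keyword tables.

```python
import math

# keywords_admissible[echelon] = number of admissible keywords/key phrases for
# each component: knowledge, cognitive process, complexity, autonomy, commitment.
# These counts are placeholders, not the paper's real tables.
keywords_admissible = {
    "E1": (20, 19, 17, 13, 13),  # product ~1.09 million (paper reports "over 1 million")
    "E5": (13, 12, 11, 10, 10),  # product ~172 thousand (paper reports "over 160 thousand")
}

for echelon, counts in keywords_admissible.items():
    total = math.prod(counts)  # statements = Cartesian product over the five components
    print(f"{echelon}: {total:,} possible expectation statements")
```

Because every statement is assembled from the same five ranked components, the same accounting is what makes expectations comparable across courses with different subject matters.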

Professors retain their former evaluation habits: they are asked to construct the generic statements they deem appropriate, given their current practice. A natural linkage ensues, ideally with increased collaboration to implement the change.

A proof-of-concept expectations generator was created in Excel and is being tested in 10 undergraduate courses with the help of professors and teaching assistants. Successes, difficulties, and suggestions concerning the management of change are addressed, along with future developments to combine and manage the generated data. Some professors spontaneously proposed to present the generic statements to their students as evaluation parameters, arguing that they conveyed their true expectations beyond the subject matter. Others felt uneasy about the change and wanted to develop their own “private-generic” statements.

Proceedings of the 10th International CDIO Conference, Barcelona, Spain, June 15-19, 2014

Authors: 
Guy Cloutier
Daniel Spooner
Pages: 
13
Affiliations: 
École Polytechnique de Montréal, Canada
Keywords: 
program outcomes
continuous improvement processes
course-oriented evaluation
CDIO
Year: 
2014
Reference: 
Accreditation Board of Engineering and Technology. (2011). Criteria for Accrediting Engineering Programs 2012-2013. Retrieved from http://www.abet.org/DisplayTemplates/DocsHandbook.aspx?id=3143
Anderson, L.W. (Ed.), Krathwohl, D.R. (Ed.), Airasian, P.W., Cruikshank, K.A., Mayer, R.E., Pintrich, P.R., Raths, J., & Wittrock, M.C. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s Taxonomy of Educational Objectives. New York: Longman.
Bisagni, et al. (2010). DOCET – EQF-CDIO: A reference model for engineering education. Erasmus Mundus Programme – Action 4. Education and Culture DG, with the support of the European Commission. Retrieved from http://www.eqfcdio.org/results
Canadian Engineering Accreditation Board. (2013). Accreditation Criteria and Procedures. Retrieved from http://www.engineerscanada.ca/accreditation-resources
Cloutier, G., Hugo, R., & Sellens, R. (2010a). Mapping the Relationship Between the CDIO Syllabus and the 2008 CEAB Graduate Attributes. 6th International CDIO Conference, June 15-18, 2010, École Polytechnique. Retrieved from http://www.cdio.org/
Cloutier, G.M., Sellens, R.W., Hugo, R.J., Camarero, R., & Fortin, C. (2010b). Outcomes Assessment and Curriculum Improvement Through the Cyclical Review of Goals and Results – A Model to Satisfy CEAB2009 Accreditation Requirements. Proceedings of the Canadian Engineering Education Association. Retrieved from http://library.queensu.ca/ojs/index.php/PCEEA/article/view/3086
Crawley, E., Malmqvist, J., Östlund, S., & Brodeur, D. (2007). Rethinking Engineering Education: The CDIO Approach. New York: Springer.
Education and Culture DG. (2008). Descriptors defining levels in the European Qualifications Framework (EQF). The European Qualifications Framework for Lifelong Learning. NC-30-08-272-EN-D. Retrieved from http://ec.europa.eu/eqf/documentation_en.htm
ENAEE. (2008). EUR-ACE Framework Standards for the Accreditation of Engineering Programmes. Retrieved from http://www.enaee.eu/eur-ace-system/eur-ace-framework-standards
Ford, J., Knight, J., & McDonald-Littleton, E. (2001). Lesson 8 – How we learn. In The University of Tennessee, Knoxville, Center for Literacy Studies, Learning Skills – A Comprehensive Orientation and Study Skills Course (p. 92). Retrieved from http://resources.clee.utk.edu/print/learning-skills.pdf
International Engineering Alliance. (2013). Graduate Attributes and Professional Competencies (ver. 3). Retrieved from http://www.washingtonaccord.org/IEA-Grad-Attr-Prof-Competencies.pdf
Knight, P. (2004). Assessment of complex learning: The Engineering Professors' Council's new thinking about first-cycle engineering degrees. European Journal of Engineering Education, 29(2), 183-191.
Krathwohl, D.R., Bloom, B.S., & Masia, B.B. (1964). Taxonomy of Educational Objectives. The Classification of Educational Goals, Handbook II: Affective Domain. New York: David McKay Co.
Miller, G.E. (1990). The assessment of clinical skills/competence/performance. Academic Medicine, 65, S63-S67.
Moore, I., & Williamson, S. (2008). Assessment of Learning Outcomes. The Higher Education Academy – Engineering Subject Centre.
Plants, H.L., Dean, R.K., Sears, J.T., & Venable, W.S. (1980). A taxonomy of problem-solving activities and its implications for teaching. In Lubkin, J.L. (Ed.), The Teaching of Elementary Problem Solving in Engineering and Related Fields (pp. 21-34). Washington, DC: ASEE.
Walther, J., & Radcliffe, D.F. (2007). The competence dilemma in engineering education: Moving beyond simple graduate attribute mapping. Australasian Journal of Engineering Education, 13(1). Retrieved from http://www.engineersmedia.com.au/journals/aaee/pdf/ajee_13_1_walther.pdf