Construct validity of an instrument to assess assertive feedback in initial teacher training

Validez de constructo de un instrumento para evaluar la retroalimentación asertiva en la formación inicial del profesorado

María de la Luz Berlanga Ramírez and Luis Gibran Juárez-Hernández

DOI: https://doi.org/10.22550/REP80-3-2022-08

Feedback in the evaluation process has become more important in teaching practice since the start of the Covid-19 pandemic. The aim of the present study is to analyse the construct validity and reliability of the Socioformative Analytical Rubric for the Assessment of Assertive Feedback (RASERA). The instrument was administered to a sample of 525 students from normal schools in Mexico. Construct validity was examined through exploratory and confirmatory factor analyses, and reliability through Cronbach’s alpha. The exploratory analysis revealed two factors, which we named execution of assertive feedback and representativeness of assertive feedback. Together these two factors explained more than 65% of the variance, and every item loaded significantly on one of them (FL > 0.50). The confirmatory factor analysis showed a good fit for this model (χ²/df ratio: 2.284; GFI: 0.909; RMSEA: 0.068; RMR: 0.035; CFI: 0.966; TLI: 0.955). For each factor, the average variance extracted and the composite reliability were adequate (AVE > 0.50 and CR > 0.70), and each item showed an adequate standardised factor loading (SFL > 0.50). The reliability analysis yielded optimal values for both factors (Cronbach’s alpha and McDonald’s omega > 0.85). We conclude that the RASERA instrument has adequate psychometric properties.
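The reliability and convergent-validity statistics reported above (Cronbach’s alpha, AVE, and composite reliability) follow standard psychometric formulas. As a minimal illustrative sketch, not the authors’ analysis pipeline, the following Python functions compute them; the loading values shown are hypothetical, not the RASERA results:

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for an items matrix X (respondents x items)."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)          # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)      # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def ave_and_cr(loadings):
    """Average variance extracted (AVE) and composite reliability (CR)
    from the standardised factor loadings of a single factor."""
    lam = np.asarray(loadings, dtype=float)
    ave = np.mean(lam ** 2)
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + np.sum(1 - lam ** 2))
    return ave, cr

# Illustrative loadings only (not the RASERA values)
ave, cr = ave_and_cr([0.72, 0.68, 0.81, 0.75])
```

Against the study’s criteria, a factor would be judged adequate when `ave > 0.50` and `cr > 0.70`.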

Anijovich, R., & Cappelletti, G. (2017). Más allá de las pruebas: la retroalimentación en la evaluación como oportunidad [Beyond testing: Feedback in evaluation as an opportunity]. Paidós.

Ato, M., López, J., & Benavente, A. (2013). Un sistema de clasificación de los diseños de investigación en psicología [A classification system for research designs in psychology]. Anales de Psicología, 29 (3), 1038-1059.

Berlanga Ramírez, M. L., & Juárez Hernández, L. (2020a). Diseño y validación de un instrumento para evaluar la retroalimentación asertiva en educación normal [Design and validation of an instrument to evaluate assertive feedback in Normal Education]. IE Revista de Investigación Educativa de la REDIECH, 11, e-791. https://doi.org/10.33010/ie_rie_rediech.v11i0.791

Berlanga Ramírez, M. L., & Juárez Hernández, L. (2020b). Paradigmas de evaluación: del tradicional al socioformativo [Evaluation paradigms: From the traditional to the socio-formative]. Diálogos sobre educación. Temas actuales en investigación educativa, 21, 1-14. https://doi.org/10.32870/dse.v0i21.646

Blunch, N. (2013). Introduction to structural equation modeling using IBM SPSS statistics and AMOS. Sage.

Bollen, K. A., & Long, J. S. (1993). Testing structural equation models. Sage.

Bordas, M. I., & Cabrera, F. Á. (2001). Estrategias de evaluación de los aprendizajes centrados en el proceso [Learning assessment strategies focused on the process]. revista española de pedagogía, 59 (218), 25-48.

Brown, T. A. (2015). Confirmatory factor analysis for applied research. Guilford Publications.

Canabal, C., & Margalef, L. (2017). La retroalimentación: la clave para una evaluación orientada al aprendizaje [The feedback: A key to learning-oriented assessment]. Profesorado. Revista de Currículum y Formación de Profesorado, 21 (2), 149-170.

Carvajal, A., Centeno, C., Watson, R., Martínez, M., & Sanz Rubiales, A. (2011). ¿Cómo validar un instrumento de medida de la salud? [How to validate a health measurement instrument?]. Anales del Sistema Sanitario de Navarra, 34 (1), 63-72.

Castro, S., Paz, L., & Cela, M. (2020). Aprendiendo a enseñar en tiempos de pandemia COVID-19: nuestra experiencia en una universidad pública [Learning to teach in times of the COVID-19 pandemic: Our experience at a public university]. Revista Digital de Investigación en Docencia Universitaria, 14 (2), e1271.

Charter, R. A. (2003). A breakdown of reliability coefficients by test type and reliability method and the clinical implications of low reliability. The Journal of General Psychology, 130 (3), 290-304. https://doi.org/10.1080/00221300309601160

Cheung, G. W., & Wang, C. (2017). Current approaches for assessing convergent and discriminant validity with SEM: Issues and solutions. Academy of Management Proceedings, 2017 (1), 12706. https://doi.org/10.5465/AMBPP.2017.12706abstract

Cho, E., & Kim, S. (2015). Cronbach’s coefficient alpha: well known but poorly understood. Organizational Research Methods, 18 (2), 207-230. https://doi.org/10.1177/1094428114555994

CIFE (2018). Instrumento «Cuestionario de satisfacción con el instrumento» [Instrument "Questionnaire of satisfaction with the instrument"]. Centro Universitario CIFE. https://docs.google.com/forms/d/e/1FAIpQLSc8-jOiWYwG64QbnhRyGAg1ElTggq2aP1XiSg45pyN9XLbXNQ/viewform

Connell, J., Carlton, J., Grundy, A., Taylor Buck, E., Keetharuth, A. D., Ricketts, T., Barkham, M., Robotham, D., Rose, D., & Brazier, J. (2018). The importance of content and face validity in instrument development: Lessons learnt from service users when developing the Recovering Quality of Life measure (ReQoL). Quality of Life Research, 27 (7), 1893-1902. https://doi.org/10.1007/s11136-018-1847-y

Contreras, G., & Zúñiga, C. G. (2019). Prácticas y concepciones de retroalimentación en formación inicial docente [Practices and conceptions of feedback in initial teacher training]. Educação e Pesquisa, 45, 1-22. https://doi.org/10.1590/s1678-4634201945192953

Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16 (3), 297-334. https://doi.org/10.1007/BF02310555

Díaz, L. (2001). La metaevaluación y su método [Meta-evaluation and its method]. Revista de Ciencias Sociales (Cr), II-III (93), 171-192. https://www.redalyc.org/articulo.oa?id=15309314

Evans, C. (2013). Making sense of assessment feedback in higher education. Review of Educational Research, 83 (1), 70-120. https://doi.org/10.3102/0034654312474350

Farahman, F., & Masoud, Z. (2011). A comparative study of EFL teachers’ and intermediate High School students’ perceptions of written corrective feedback on grammatical errors. English Language Teaching, 4 (4), 36-48. http://dx.doi.org/10.5539/elt.v4n4p36

Fornell, C., & Larcker, D. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18 (1), 39-50. https://doi.org/10.2307/3151312

Furr, R. M. (2020). Psychometrics in clinical psychological research. In The Cambridge handbook of research methods in clinical psychology (pp. 54-65). Cambridge University Press. https://doi.org/10.1017/9781316995808.008

García-Jiménez, E. (2015). La evaluación del aprendizaje: de la retroalimentación a la autorregulación. El papel de las tecnologías [Assessment of learning: From feedback to self-regulation. The role of technologies]. Relieve: Revista Electrónica de Investigación y Evaluación Educativa, 21 (2), 1-24. http://dx.doi.org/10.7203/relieve.21.2.7546

General Law on the Protection of Personal Data Held by Obligated Parties. Official Journal of the Federation, 26 January 2017. Chamber of Deputies of the H. Congress of the Union. United Mexican States. https://www.diputados.gob.mx/LeyesBiblio/pdf/LGPDPPSO.pdf

Gliner, J. A., Morgan, G. A., & Harmon, R. J. (2001). Measurement reliability. Journal of the American Academy of Child and Adolescent Psychiatry, 40, 486-488. https://doi.org/10.1097/00004583-200104000-00019

Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2014). Multivariate data analysis. Pearson.

Halek, M., Holle, D., & Bartholomeyczik, S. (2017). Development and evaluation of the content validity, practicability, and feasibility of the Innovative dementia-oriented Assessment system for challenging behaviour in residents with dementia. BMC Health Services Research, 17 (1), 1-26.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77 (1), 81-112. https://doi.org/10.3102/003465430298487

Henson, R. K., & Roberts, J. K. (2006). Use of exploratory factor analysis in published research: Common errors and some comment on improved practice. Educational and Psychological Measurement, 66 (3), 393-416. https://doi.org/10.1177/0013164405282485

Herrero, J. (2010). El análisis factorial confirmatorio en el estudio de la estructura y estabilidad de los instrumentos de evaluación: un ejemplo con el cuestionario de autoestima CA-14 [Confirmatory factor analysis in the study of the structure and stability of assessment instruments: An example with the self-esteem questionnaire (CA-14)]. Psychosocial Intervention, 19 (3), 289-300. https://doi.org/10.5093/in2010v19n3a9

Howard, M. C. (2016). A review of exploratory factor analysis decisions and overview of current practices: What we are doing and how can we improve? International Journal of Human–Computer Interaction, 32 (1), 51-62. https://doi.org/10.1080/10447318.2015.1087664

Jabrayilov, R., Emons, W. H. M., & Sijtsma, K. (2016). Comparison of classical test theory and item response theory in individual change assessment. Applied Psychological Measurement, 40 (8), 559-572. https://doi.org/10.1177/0146621616664046

JASP Team (2019). JASP (Version 0.11.1) [Computer software].

Jonsson, A. (2013). Facilitating productive use of feedback in higher education. Active Learning in Higher Education, 14 (1), 63-76. https://doi.org/10.1177/1469787412467125

Jónsson, I. R., Smith, K., & Geirsdóttir, G. (2018). Shared language of feedback and assessment. Perception of teachers and students in three Icelandic secondary schools. Studies in Educational Evaluation, 56, 52-58. https://doi.org/10.1016/j.stueduc.2017.11.003

Kline, P. (2015). A handbook of test construction (psychology revivals). Introduction to psychometric design. Routledge.

Koller, I., Levenson, M. R., & Glück, J. (2017). What do you think you are measuring? A mixed-methods procedure for assessing the content validity of test items and theory-based scaling. Frontiers in Psychology, 8, 126. https://doi.org/10.3389/fpsyg.2017.00126

Koning, A. J., & Franses, P. H. (2003). Confidence intervals for Cronbach’s Coefficient Alpha values. ERIM Report Series Reference No. ERS-2003-041-MKT. http://hdl.handle.net/1765/431

Lagunes-Córdoba, R. (2017). Recomendaciones sobre los procedimientos de construcción y validación de instrumentos y escalas de medición en la psicología de la salud [Recommendations about procedures for construction and validation of scales in health psychology]. Revista Psicología y Salud, 27 (1), 5-18.

Leyva, E. (2011). Una reseña sobre la validez de constructo de pruebas referidas a criterio [An overview of the construct validity of criterion-referenced tests]. Perfiles Educativos, 33 (131), 131-154. https://doi.org/10.22201/iisue.24486167e.2011.131.24238

Lloret-Segura, S., Ferreres-Traver, A., Hernández-Baeza, A., & Tomás-Marco, I. (2014). El análisis factorial exploratorio de los ítems: una guía práctica, revisada y actualizada [The exploratory factor analysis of the items: A practical guide, revised and updated]. Anales de Psicología, 30 (3), 1151-1169. http://dx.doi.org/10.6018/analesps.30.3.199361

López, A., & Osorio, K. (2016). Percepciones de estudiantes sobre la retroalimentación formativa en el proceso de evaluación [Student perceptions about formative feedback in the evaluation process]. Actualidades Pedagógicas, 68, 43-64. http://dx.doi.org/10.19052/ap.2829

Martínez, M. (1995). Psicometría: teoría de los tests psicológicos y educativos [Psychometry: Theory of psychological and educational tests]. Síntesis.

Martínez-Rizo, F. (2013). Dificultades para implementar la evaluación formativa: revisión de literatura [Difficulties in implementing formative assessment: Literature review]. Perfiles Educativos, 35 (139), 128-150. http://www.scielo.org.mx/pdf/peredu/v35n139/v35n139a9.pdf

McDonald, R. P. (1999). Test theory: A unified treatment. Lawrence Erlbaum Associates, Inc.

Mejía, M., & Pasek de Pinto, E. (2017). Proceso general para la evaluación formativa del aprendizaje [General process for formative assessment of learning]. Revista Iberoamericana de Evaluación Educativa, 10 (1), 177-193. https://doi.org/10.15366/riee2017.10.1.009

Mendoza-Mendoza, J., & Garza, J. B. (2009). La medición en el proceso de investigación científica: Evaluación de validez de contenido y confiabilidad [Measurement in the scientific research process: Content validity and reliability evaluation]. Innovaciones de Negocios, 6 (11), 17-32.

Messick, S. (1980). Test validity and ethics of assessment. American Psychologist, 35 (11), 1012-1027. https://doi.org/10.1037/0003-066X.35.11.1012

Miguel, J. A. (2020). La educación superior en tiempos de pandemia: una visión desde dentro del proceso formativo [Higher education in times of pandemic: A view from within the training process]. Revista Latinoamericana de Estudios Educativos, 50 (ESPECIAL), 13-40. https://doi.org/10.48102/rlee.2020.50.ESPECIAL.95

Monje, V., Camacho, M., Rodríguez, E., & Carvajal, L. (2009). Influencia de los estilos de comunicación asertiva de los docentes en el aprendizaje escolar [Influence of teachers’ assertive communication styles on learning in schools]. Psicogente, 12 (21), 78-95.

Padilla, M. T., & Gil, J. (2008). La evaluación orientada al aprendizaje en la educación superior: condiciones y estrategias para su aplicación a la docencia universitaria [Learning-oriented assessment in higher education: conditions and strategies for its application in university education]. revista española de pedagogía, 66 (241), 467-486.

Pérez-Gil, J. A., Chacón-Moscoso, S., & Moreno-Rodríguez, R. (2000). Validez de constructo: el uso del análisis factorial exploratorio-confirmatorio para obtener evidencias de validez [Construct validity: The use of exploratory-confirmatory factor analysis to obtain validity evidence]. Psicothema, 12 (2), 442-446. https://www.psicothema.com/pdf/601.pdf

Quezada, S., & Salinas, C. (2021). Modelo de retroalimentación para el aprendizaje: Una propuesta basada en la revisión de literatura [Feedback model for learning: A proposal based on literature review]. Revista Mexicana de Investigación Educativa, 26 (88), 225-251.

Randall, M., & Thornton, B. (2005). Advising and supporting teachers. Cambridge University Press.

Sadler, D. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119-144. http://dx.doi.org/10.1007/BF00117714

Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78 (1), 153-189. https://doi.org/10.3102/0034654307313795

Tabachnick, B. G., & Fidell, L. S. (2001). Using multivariate statistics. Allyn & Bacon.

Taber, K. S. (2018). The use of Cronbach’s alpha when developing and reporting research instruments in science education. Research in Science Education, 48 (6), 1273-1296. https://doi.org/10.1007/s11165-016-9602-2

Temesio, S., García, S., & Pérez, A. (2021). Rendimiento estudiantil en tiempo de pandemia: percepciones sobre aspectos con mayor impacto [Student performance in times of pandemic: Perceptions of aspects with the greatest impact]. Revista Iberoamericana de Tecnología en Educación y Educación en Tecnología, 28, e45. https://doi.org/10.24215/18509959.28.e45

Tobón, S. (2013). Evaluación de las competencias en la educación básica [Evaluation of competencies in basic education]. Santillana.

Tobón, S. (2017). Evaluación socioformativa. Estrategias e instrumentos [Socioformative evaluation. Strategies and instruments]. Kresearch. https://cife.edu.mx/recursos/wpcontent/uploads/2018/08/LIBRO-Evaluaci%C3%B3n-Socioformativa-1.0-1.pdf

Torrance, H., & Pryor, J. (1998). Investigating formative assessment. Teaching, learning and assessment in the classroom. Open University Press.

Triana, A., & Velásquez, A. (2014). Comunicación asertiva de los docentes y clima emocional del aula en preescolar [Assertive teacher communication and emotional classroom climate in preschools]. Voces y Silencios: Revista Latinoamericana de Educación, 5 (1), 23-41.

Tunstall, P., & Gipps, C. (1996). Teacher feedback to young children in formative assessment: A typology. British Educational Research Journal, 22 (4), 389-404. https://doi.org/10.1080/0141192960220402

Viladrich, C., Angulo-Brunet, A., & Doval, E. (2017). A journey around alpha and omega to estimate internal consistency reliability. Annals of Psychology, 33 (3), 755-782. https://doi.org/10.6018/analesps.33.3.268401

Wiggins, G. (2011). Giving students a voice: The power of feedback to improve teaching. Educational Horizons, 89 (3), 23-26. https://doi.org/10.1177/0013175X1108900406

Wiliam, D. (2011). Embedded formative assessment. Solution Tree Press.

Yong, A. G., & Pearce, S. (2013). A beginner’s guide to factor analysis: Focusing on exploratory factor analysis. Tutorials in Quantitative Methods for Psychology, 9 (2), 79-94. https://doi.org/10.20982/tqmp.09.2.p079

Yuan, K. H. (2005). Fit indices versus test statistics. Multivariate Behavioral Research, 40 (1), 115-148. https://doi.org/10.1207/s15327906mbr4001_5


María de la Luz Berlanga Ramírez. Doctoral candidate at the Centro Universitario CIFE. Professor-Researcher at the Escuela Normal Superior del Estado de Coahuila. Her current research interests are in the field of evaluation in teacher training.

 https://orcid.org/0000-0001-9088-3991

Luis Gibran Juárez-Hernández. Doctor of Biological and Health Sciences from the Universidad Autónoma Metropolitana. Professor-Researcher at the Centro Universitario CIFE. His current research interests are the fields of evaluation instruments, sustainable development, and ecology.

 https://orcid.org/0000-0003-0658-6818
