Developing a Multiple-choice Discourse Completion Test for Iranian EFL Learners: The Case of the Four Speech Acts of Apology, Request, Refusal and Thanks

Document Type: Scientific Research Article

Authors
1 Department of English Language and Literature, Faculty of Humanities and Social Sciences, Golestan University, Gorgan, Iran
2 PhD Candidate in Applied Linguistics, School of Education, The University of Adelaide, Adelaide, Australia
Abstract
Based on the premise that test development must take into account not only linguistic competence but also pragmatic competence, the present study was conducted to develop a scale for assessing learners’ pragmatic proficiency. To this end, a multiple-choice discourse completion task (MDCT) test of pragmatics was developed for Iranian EFL learners, covering the four speech acts of request, thanks, apology, and refusal. In the first phase, 155 Iranian university students were asked to write situations at university, at home, or in society in which they apologize, thank, request something, or refuse an offer or invitation. In the second phase, to ensure the naturalness of the situations, another group of students rated the selected situations according to how frequently they occur in their everyday life. In the third phase, the tests were reviewed by several native speakers and TEFL colleagues. The MDCT items were then developed by the researchers, and in the final step two pilot tests were conducted. The findings indicated that the scale was reliable, valid, and appropriate for measuring Iranian EFL learners’ pragmatic knowledge.

1. Introduction

Before the importance of measuring Interlanguage Pragmatics (ILP) competence came to light, the main focus of language assessment was on aspects of linguistic competence such as syntax, vocabulary, and cohesion (Liu, 2004). With the advent of pragmatics teaching and ILP, which center on nonnative speakers’ development of linguistic action competence, the need to make traditional assessment compatible with this new way of teaching has become more pressing. Nevertheless, a wide gap remains between teaching and the testing measures available for evaluating students’ pragmatic knowledge. Over the last decades, therefore, evaluating learners’ pragmatic competence through reliable and valid measures has drawn the attention of many pragmatics researchers (Hudson, Detmer & Brown, 1995).

In recent years, numerous studies have attempted to develop pragmatic tests for EFL learners from different perspectives in order to evaluate ILP competence. One example is the study by Jianda (2007), who developed a pragmatics test for Chinese EFL students to evaluate their pragmatic knowledge of the speech act of apology.

Birjandi and Rezaei (2010) designed a Multiple-choice Discourse Completion Test (MDCT) to measure Iranian EFL learners’ pragmatic competence for the two speech acts of apology and request in an educational setting, following the same phases as Jianda (2007). Salehi and Isavi (2013) also developed a pragmatics test for Iranian EFL learners for request and apology in academic contexts in Iran, the United States, England, and Saudi Arabia. Despite the large number of speech acts, however, these studies focused solely on two of them, namely apology and request. Moreover, they suffer from several major drawbacks in test development, such as failing to establish the reliability and validity of the resulting pragmatic test, which may vary across cultures.

Since there appears to be only one study on developing pragmatic measures for Iranian EFL learners and teachers, it is apparent that more attention should be paid to this area and more comprehensive studies conducted (Alemi & Khanlarzadeh, 2016). The present study therefore seeks to remedy the deficiencies of previous studies by using statistical procedures and by considering four major speech acts: thanks, request, refusal, and apology. More specifically, it develops a well-established form of pragmatic assessment, the MDCT, for Iranian EFL learners. Because learners are asked to choose the best option for each situation, the MDCT gauges their awareness of the speech acts.



2. Methodology

Participants

A total of 158 participants took part in the present study: 11 English native speakers, two university lecturers, and 145 EFL university students with an intermediate level of English, aged 19-22. They participated in the six steps of the test development process (Table 1). All participants were selected through convenience sampling.



Table 1.

Number of Participants in Each Step of the Test Development Process

Step                                          Non-native            Non-native             Native
                                              university students   university lecturers   speakers
1. Exemplar generation                        40                    --                     --
2. Likelihood investigation                   30                    --                     --
3. Metapragmatic assessment                   --                    2                      2
4. Scenario generation and MDCT development   30                    --                     9
5. Pilot study I                              25                    --                     --
6. Pilot study II                             20                    --                     --
Total                                         145                   2                      11



The Development of Testing Instruments

The testing instrument was a 32-item Multiple-choice Discourse Completion Task designed to measure awareness of the four speech acts. Following Birjandi and Rezaei (2010) and Jianda (2007), the researchers developed the test items through a process of exemplar generation, likelihood investigation, metapragmatic assessment, and piloting. The following is an overview of the test development process:

Exemplar generation

To identify the most frequent situations in which the participants were likely to use the four speech acts selected for this study, 40 TEFL students at Islamic Azad University were asked to write, in Persian or English, at least five commonly occurring situations at university, at home, or in society in which they apologize, thank, request something, or refuse an offer. The purpose of this stage was to find situational topics approximating the authentic situations students encounter in real life. A qualitative analysis showed that many of the situations overlapped, and 60 situations (15 for each speech act) were selected. The following are examples of apology situations produced by the students:

I always apologize for getting home late.

I always apologize for interrupting my new friend.

Likelihood investigation

To test the naturalness of the situations, 30 more students from the same pool were asked to rate the 60 selected situations from 1 to 5 according to the frequency and naturalness of their occurrence in everyday life. Based on these ratings, the 52 scenarios with the highest scores were selected. The minimum total rating an item needed in order to be selected was 120 out of a maximum of 150 (80%).
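To make the selection rule concrete, the following sketch (in Python, with invented names; it illustrates the arithmetic described above rather than the authors’ actual procedure) applies the cutoff: 30 raters each award 1 to 5 points, so the maximum total per situation is 150, and a situation is retained when its total reaches 120, i.e. 80%.

    # Hypothetical sketch of the likelihood-based selection rule.
    MAX_RATING, N_RATERS, CUTOFF_RATIO = 5, 30, 0.80
    CUTOFF = MAX_RATING * N_RATERS * CUTOFF_RATIO  # 5 * 30 * 0.80 = 120

    def select_situations(ratings_by_situation):
        """ratings_by_situation maps each situation label to its list of
        30 ratings (1-5); returns the labels whose total meets the cutoff."""
        return [label for label, ratings in ratings_by_situation.items()
                if sum(ratings) >= CUTOFF]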

Metapragmatic assessment

Following Takimoto (2007) and Jianda (2007), the 52 selected situations were reviewed and analyzed according to situational features in order to balance the situations and promote content validity. This involved assessing imposition (the burden placed on the hearer), the power relationship (equal vs. unequal), and social distance (+/- social distance), so that situations with different metapragmatic features would be represented. Thirty-two situations (eight for each speech act) were selected for use in the MDCT. After the metapragmatic assessment, the researchers developed the MDCT situations by providing the situational background in detail.
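As an illustration of how each situation can be coded for these metapragmatic features, the sketch below tags a situation with its imposition, power relationship, and social distance; the class, field names, and example situation are all invented for illustration, not taken from the test.

    # Assumed representation of a situation coded for metapragmatic features.
    from dataclasses import dataclass

    @dataclass
    class Situation:
        speech_act: str   # "apology", "request", "refusal", or "thanks"
        description: str
        imposition: str   # "low" or "high": the burden placed on the hearer
        power: str        # "equal" or "unequal" speaker-hearer relationship
        distance: str     # "+SD" (socially distant) or "-SD" (familiar)

    # A hypothetical request situation: high imposition, unequal power, +SD.
    example = Situation("request", "Asking a professor to extend a deadline",
                        imposition="high", power="unequal", distance="+SD")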

Scenario generation and MDCT development

The 32 selected situations were given to 30 Iranian intermediate EFL learners and six native speakers of English. The linguistically inaccurate and socially inappropriate responses given by the non-native participants were used as distractors, and the native speakers’ responses were used as the correct options for the MDCT items. Following Hudson et al. (1992; 1995) and Jianda (2007), the number of choices for each item was set at three. Then, to ensure that native speakers of English would choose the assigned keys, the test was administered to three native speakers of American English; in most cases, all of them chose the assigned keys.
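The sketch below shows the item format implied by these design decisions: a situation prompt followed by three options, one native-speaker key and two learner-derived distractors. The item wording is invented for illustration; only the three-option structure and the single key come from the study.

    # Hypothetical MDCT item; the format follows the study, the text does not.
    mdct_item = {
        "situation": ("You arrive twenty minutes late to a meeting with "
                      "your professor. What would you say?"),
        "options": {
            "A": "I'm so sorry for keeping you waiting.",  # native-speaker key
            "B": "Excuse me, the traffic was guilty.",     # learner distractor
            "C": "Never mind, I am here now.",             # learner distractor
        },
        "key": "A",
    }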

Pilot study I

The first pilot study involved 25 freshman male and female university students, aged 18-20, studying TEFL at the Islamic Azad University of Gorgan, a city in northern Iran. They were required to answer the 32-item MDCT in 45 minutes.

Rating MDCT

In rating the MDCT, each correct response was given a score of 1 and each incorrect response a score of 0. As mentioned above, to verify the correct responses assigned by the researchers, ten native speakers of English were asked to answer the MDCT items. Cronbach’s alpha reliability estimates turned out to be .60 for the pretest and posttest.
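For reference, the reliability index reported here is Cronbach’s alpha, whose standard formula for a k-item test (here k = 32, with items scored 0 or 1) is

    \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^{2}}{\sigma_X^{2}}\right)

where the sum runs over the variances of the individual items and the denominator is the variance of the examinees’ total scores. An alpha of .60 thus indicates only modest internal consistency, which motivated the item revision described next.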

The process of revising items

The MDCT's reliability index of .60 suggested possible problems with the testing method, the students, the testing environment, or other factors. An attempt was therefore made to identify and eliminate the causes of the problem. To improve the reliability of the MDCT, the situations were reviewed and elaborated in greater detail, and three statistics were gathered for each item: (1) item difficulty, (2) distractor analysis, and (3) corrected item-total correlation (see the sketch below). On the basis of the findings from the first pilot study, revisions were made for the second pilot study and the main study.
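The following sketch (illustrative Python/NumPy, not the authors’ SPSS procedure; the function name is invented) computes the first and third of these statistics from a 0/1 response matrix with one row per examinee and one column per item.

    # Assumed sketch of item analysis for a dichotomously scored MDCT.
    import numpy as np

    def item_statistics(responses):
        """responses: array of shape (n_examinees, n_items) with 0/1 entries.
        Returns each item's difficulty (proportion correct) and its
        corrected item-total correlation."""
        responses = np.asarray(responses, dtype=float)
        difficulty = responses.mean(axis=0)      # p-value of each item
        total = responses.sum(axis=1)            # each examinee's total score
        citc = np.empty(responses.shape[1])
        for i in range(responses.shape[1]):
            rest = total - responses[:, i]       # total excluding item i
            citc[i] = np.corrcoef(responses[:, i], rest)[0, 1]
        return difficulty, citc

Distractor analysis, the second statistic, would instead tabulate how often each of an item’s three options was chosen, ideally broken down by high- and low-scoring examinees.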

Pilot Study II

Based on the results of the first pilot study, some changes were made to the MDCT items. Having applied the changes, the researchers conducted a second pilot study to check the reliability index of the revised MDCT. The participants were 20 freshman TEFL students at Islamic Azad University of Gorgan; based on the Oxford Quick Placement Test, their proficiency level was either lower-intermediate or upper-intermediate. They were required to answer the questions in 60 minutes. The reliability of the revised MDCT was calculated at .81 (standardized item alpha).

Discussion and Conclusion

The present study was an attempt to develop an MDCT to measure the pragmatic knowledge of Iranian EFL learners. It goes some way towards enhancing our understanding of the role of the functional aspect of language in language learning, which in Iran unfortunately still focuses mainly on grammar. However, although the test proved appropriate in the Iranian context, it is not obvious whether it works equally well in other contexts. Consequently, future studies developing the same measures in different contexts are clearly needed. A further study could also assess whether such tests are applicable to other speech acts and other cultures.

References


Ahn, R. C. (2005). Five measures of interlanguage pragmatics in KFL (Korean as a foreign language) learners (Unpublished doctoral dissertation). University of Hawaii, Manoa.
Alavi, S. M., & Dini, S. (2008). Assessment of pragmatic awareness in an EFL classroom context: The case of implicit and explicit instruction. Pazhuheshe Zabanhaye Khareji, 45, 99-113.
Alemi, M., & Khanlarzadeh, N. (2016). Pragmatic assessment of request speech act of Iranian EFL learners by non-native English speaking teachers. Iranian Journal of Language Teaching Research, 4(2), 19-34.
Alemi, M., & Tajeddin, Z. (2013). Pragmatic rating of L2 refusal: Criteria of native and non-native English teachers. TESL Canada Journal, 30, 63-63.
Beebe, L. M., & Cummings, M. C. (1996). Natural speech act data versus written questionnaire data: How data collection method affects speech act performance. In S. M. Gass & J. Neu (Eds.), Speech acts across cultures (pp. 65-86). Mouton de Gruyter.
Billmyer, K., & Varghese, M. (2000). Investigating instrument-based pragmatic variability: Effects of enhancing discourse completion tests. Applied Linguistics, 21(4), 517-552.
Birjandi, P., & Rezaei, S. (2010). Developing a multiple-choice discourse completion test of interlanguage pragmatics for Iranian EFL learners. Proceedings of the First Conference on ELT in the Islamic World, ILI Language Teaching Journal, 6(1-2), 43-58.
Blum-Kulka, S. (1982). Learning to say what you mean in a second language: A study of the speech act performance of learners of Hebrew and English. In N. Wolfson & E. Judd (Eds.), Sociolinguistics and language acquisition (pp. 36-55). Newbury House.
Blum-Kulka, S., & Olshtain, E. (1984). Requests and apologies: A cross-cultural study of speech act realization patterns (CCSARP). Applied Linguistics, 5(1), 196-213.
Blum-Kulka, S., House, J., & Kasper, G. (1989). Investigating cross-cultural pragmatics: An introductory overview. In S. Blum-Kulka, J. House & G. Kasper (Eds.), Cross-cultural pragmatics: Requests and apologies (pp. 1-36). Ablex.
Brown, J. D. (2001). Pragmatics tests: Different purposes, different tests. In K. Rose & G. Kasper (Eds.), Pragmatics in language teaching (pp. 301-325). Cambridge University Press.
Cohen, A. D., & Olshtain, E. (1981). Developing a measure of sociocultural competence: The case of apology. Language Learning, 31(1), 113-134.
Enochs, K., & Yoshitake-Strain, S. (1996). Self-assessment and role plays for evaluating appropriateness in speech act realizations. ICU Language Research Bulletin, 11, 57-76.
Enochs, K., & Yoshitake-Strain, S. (1999). Evaluating six measures of EFL learners’ pragmatic competence. JALT Journal, 21, 29-50.
Farhady, H. (1980). Justification, development and validation of functional language testing (Unpublished doctoral dissertation). University of California, Los Angeles.
Golato, A. (2003). Studying compliment responses: A comparison of DCTs and recordings of naturally occurring talk. Applied Linguistics, 24(1), 90-121.
Houck, N., & Gass, S. M. (1996). Non-native refusal: A methodological perspective. In S. M. Gass & J. Neu (Eds.), Speech acts across cultures (pp. 45-64). Mouton de Gruyter.
Hudson, T., Detmer, E., & Brown, J. D. (1992). A framework for testing cross-cultural pragmatics. Second Language Teaching and Curriculum Center, University of Hawai’i.
Ishihara, N. (2010). Assessing learners’ pragmatic ability in the classroom. In D. H. Tatsuki & N. R. Houck (Eds.), Pragmatics: Teaching speech acts (pp. 209-227). TESOL.
Jernigan, J. E. (2007). Instruction and developing second language pragmatic competence: An investigation into the efficacy of output (Unpublished doctoral dissertation). Florida State University.
Jianda, L. (2006a). Measuring interlanguage pragmatic knowledge of EFL learners. Peter Lang.
Jianda, L. (2006b). Assessing EFL learners’ interlanguage pragmatic knowledge: Implications for testers and teachers. Reflections on English Language Teaching, 5(1), 1-22.
Jianda, L. (2007). Developing a pragmatics test for Chinese EFL learners. Language Testing, 24(3), 391-415.
Karatza, S. (2009). Assessing C1 KPG candidates’ pragmatic competence in written tasks: Towards the design of task-specific rating scales (Doctoral dissertation). National and Kapodistrian University of Athens.
Kasper, G. (2001). Classroom research on interlanguage pragmatics. In K. R. Rose & G. Kasper (Eds.), Pragmatics in language teaching (pp. 33-60). Cambridge University Press.
Kasper, G., & Rose, K. R. (2002). Pragmatic development in a second language. Blackwell.
Kusevska, M., Ulanska, T., Ivanovska, B., Daskalovska, N., & Mitkovska, L. (2015). Assessing pragmatic competence of L2 learners. Journal of Foreign Language Teaching and Applied Linguistics, 149-158.
Li, S. (2018). Developing a test of L2 Chinese pragmatic comprehension ability. Language Testing in Asia, 8(1), 1-23.
Liu, J. (2004). Measuring interlanguage pragmatic knowledge of Chinese EFL learners (Unpublished doctoral dissertation). City University of Hong Kong.
Liu, G. (2007). A contrastive study of request strategies in English and Chinese: From the perspective of politeness pragmatics. China Higher Education Press.
Martínez-Flor, A., & Usó-Juan, E. (2011). Research methodologies in pragmatics: Eliciting refusals to request. Elia, 11, 47-87.
McLean, T. (2005). Why no tip?: Student-generated DCTs in the ESL classroom. In D. Tatsuki (Ed.), Pragmatics in language learning, theory, and practice (pp. 150-156). Pragmatics Special Interest Group of the Japan Association for Language Teaching.
Nelson, G. L., Carson, J., Al Batal, M., & El Bakary, W. (2002). Cross-cultural pragmatics: Strategy use in Egyptian Arabic and American English refusals. Applied Linguistics, 23(2), 163-189.
Nurani, L. (2009). Methodological issue in pragmatic research: Is discourse completion test a reliable data collection instrument? Journal Sosioteknologi Edisi, 17(8), 667-678.
Oller, J. W. (1979). Language tests at school: A pragmatic approach. Longman.
Pallant, J. (2007). SPSS survival manual: A step by step guide to data analysis using SPSS for Windows. Open University Press.
Purpura, J. (2004). Assessing grammar. Cambridge University Press.
Rose, K. R., & Kasper, G. (2001). Pragmatics in language teaching. Cambridge University Press.
Roever, C. (2005). Testing ESL pragmatics: Development and validation of a web-based assessment battery. Peter Lang.
Roever, C. (2008). Rater, item, and candidate effects in discourse completion tests: A FACETS approach. In A. Martinez-Flor & E. Alcon (Eds.), Investigating pragmatics in foreign language learning, teaching, and testing (pp. 249-266). Multilingual Matters.
Salehi, M., & Isavi, E. (2013). Developing a test of interlanguage pragmatics for Iranian EFL learners in relation to the speech acts of request and apology. Iranian Journal of Language Studies, 1(1), 1-16.
Setoghuchi, E. (2008). Multiple-choice discourse completion tasks in Japanese English language assessment. Second Language Studies, 27(1), 41-101.
Shimazu, Y. M. (1989). Construction and concurrent validation of a written pragmatic competence test of English as a second language (Unpublished doctoral dissertation). University of San Francisco.
Sonnenburg-Winkler, S. L., Eslami, Z. R., & Derakhshan, A. (2020). Rater variation in pragmatic assessment: The impact of the linguistic background on peer-assessment and self-assessment. Lodz Papers in Pragmatics, 16(1), 67-85.
Tada, M. (2005). Assessment of EFL pragmatic production and perception using video prompts (Unpublished doctoral dissertation). Temple University.
Taguchi, N. (2011). The effect of L2 proficiency and study‐abroad experience on pragmatic comprehension. Language Learning, 61(3), 904-939.
Takimoto, M. (2007). The effects of input-based tasks on the development of learners’ pragmatic proficiency. Applied Linguistics, 28(1), 1-28.
Xiangjuan, F., & Jianda, L. (2017). Exploring a method for measuring the interlanguage pragmatic knowledge of learners of Chinese as a foreign language. Language Teaching and Linguistic Studies, 188(6), 9-19.
Yamashita, S. O. (1996). Six measures of JSL pragmatics. Second Language Teaching and Curriculum Center, University of Hawai’i.
Youn, S. (2007). Rater bias in assessing the pragmatics of KFL learners using facets analysis. Second Language Studies, 26, 85-163.
Yoshitake-Strain, S. (1997). Measuring interlanguage pragmatic competence of Japanese students of English as a foreign language: A multi-test framework evaluation (Unpublished doctoral dissertation). Columbia Pacific University.