
Assessment in Online Learning: Issues & Practices

 

By Mohammed Ghusen Almamari & Felix Kwihangana

 

Introduction

 

Current technologies have made online learning more popular and accessible. Yet this development does not always come with a deeper understanding of all the aspects of online learning. One of those aspects is assessment, which is considered an important aspect of teaching and learning. Indeed, Benson (2003) maintains that “Assessment is a key component of any teaching and learning system” (p.69), while Swan, Shen and Hiltz (2006) believe that the value of any ‘instructional system’ depends on it. The particularity of online learning, “in which interaction between people is an important form of support for the learning process” (Goodyear et al., 2001: 68), may suggest that online learning should be assessed differently from traditional learning. Some have even called for ‘effective methods’ to assess online learning (Vonderwell, Liang and Alderman, 2007).

 

As online learners and prospective online teachers, we are interested in online assessment, and we think that understanding it would not only improve our online learning experience but also prepare us to become effective online teachers. Hence, we focus on aspects of assessment that can serve as a foundation for a prospective online teacher. We aim to find out how online assessment is done, what is or could be assessed, and what issues arise from assessment in online learning. Due to time and space limitations, the review does not cover all the issues in the assessment of online learning but focuses on those aspects that are widely discussed in the literature.


 

Particularities of assessing online learning

 

Many scholars have tried to show that the assessment of online learning is different from the assessment of traditional learning. Bauer (2002) cautioned that “Launching the virtual class may be the easy part.” He argued that assessing online learning “poses new twists in traditional assessment methodology” (Bauer, 2002: 31). These new “twists” need to be in line with the principles of online learning, which is generally collaborative and learner-centred. Nevertheless, Swan, Shen and Hiltz (2006) found that traditional, teacher-centred assessment dominates online assessment, denying collaborative learning its due value. This is probably because assessing collaboration is rather difficult: it requires a radical change in traditional practices (Swan, Shen and Hiltz, 2006).

 

However, Benson (2003) argues that online assessment is built on the same principles as those that underlie face-to-face assessment. She maintains that online assessment has added advantages because it relies on technologies that “provide capabilities beyond those provided in the traditional classroom” (Benson, 2003: 71). The affordances of the tools used online, where synchronous and asynchronous teacher-student and student-student communication are possible, constitute the foundation of this understanding. The learner is able to get immediate (corrective) feedback, share ideas, attempt every question, and participate in every discussion initiated on the course using asynchronous tools (Benson, 2003). Asynchronous tools used in online learning therefore support and enhance the assessment of online learning (Vonderwell, Liang and Alderman, 2007). This means that both the process and the product of online learning are made available for assessment (Macdonald, 2003), which may not be possible in traditional learning. Therefore, while online learners may not learn in the same conditions as face-to-face learners, the purpose and roles of assessment should not vary. Instead, online assessment is likely to benefit from technological advances that make available for assessment what cannot be assessed in face-to-face classes.


 

What should be assessed?

 

Anderson (2008) advocates for the assessment of what is useful rather than what is easy. For Benson (2003), online assessment “should measure learning in all relevant learning domains” (p.71), with particular attention to cognitive processes (Anderson, 2008). While all the skills assessed on campus can also be assessed online (Benson, 2003), some scholars have focused more attention on the purpose of the assessment. This is reflected in the two forms of online learning assessment termed ‘assessment of learning’ and ‘assessment for learning’ (Vonderwell, Liang and Alderman, 2007; Elwood & Klenowski, 2002). The difference between the two is that the former is for grading purposes, while the latter’s purpose is “to enable students, through effective feedback, to fully understand their own learning and the goals they are aiming for” (Elwood & Klenowski, 2002: 244).

 

The main issue remains knowing what is ‘useful’. Some authors, like Bauer (2002), find it useful to assess students’ participation in chat rooms and bulletin boards, also referred to as synchronous and asynchronous discussions and contributions (Vonderwell, Liang and Alderman, 2007; Benson, 2003). Understanding these two and how they affect the process and product of online learning could result in “sound assessment practices” (Milam, Voorhees, & Bedard‐Voorhees, 2004: 77). However, the nature of what is really assessed can be understood by looking at the forms the assessment takes, which is what we examine in the next section.


 

Forms of online learning assessments

 

Some scholars have identified different ways in which online assessment may be conducted. Vonderwell, Liang and Alderman (2007) consider that “Peer-to-peer tasks, collaborative buddies, self-assessment activities, and peer assessment activities can be incorporated into the assessment process” (p. 324). Benson (2003) identifies 11 types of assessment (or rather activities) that can be used in online learning. Benson’s list can be seen as falling into the categories of traditional and alternative assessment and, at the same time, formative and summative assessment: it includes traditional tests consisting of fixed-answer, check-box, multiple-choice, or yes/no questions, but also assessments in the form of e-portfolios, project work, collaborative work, etc., which can be administered progressively to support learning or to measure learning at the end of the course (Benson, 2003).


 

Assessing synchronous and asynchronous discussions

 

Online learning is usually characterized by asynchronous and synchronous discussions in forums or chat rooms where students participate and exchange views and opinions on a topic of their course. Student collaboration plays an important role in such forms of learning. Assessing these discussions stems from the very necessity of encouraging students to use discussions as part of their learning (Bauer, 2002; Vonderwell, Liang and Alderman, 2007). Bauer (2002) notes that assessing students’ participation in synchronous discussions (chat rooms) would develop their critical thinking skills, among other benefits. There is empirical evidence suggesting that these benefits also exist for asynchronous discussions. Indeed, Vonderwell, Liang and Alderman’s (2007) “findings imply that writing for a group in the asynchronous environment facilitated reflection, metacognitive processes, and articulation of students’ own learning” (p.323). Bauer sees assessing this participation as a key element in the success of online learning. Indeed, as he argues, “In an online class, professors who place little or no value on participation in chat run the risk of talking to themselves” (p.33). He nevertheless notes that assessing chat-room participation may be at the expense of “slow typists”, whose typing speed prevents them from following the pace of the discussions. Regarding asynchronous discussions, Bauer (2002) likens the activity to essay writing in the classroom and suggests using the same strategies and rubrics to assess them, because students have time to write, proofread and edit before submitting. Some learners may also prefer to have their discussions assessed. For example, in a study by Slaouti (2007), some of the participants (who were student-teachers) suggested coercing students to participate in forums because they considered this to be very important for learning.

 

There is a belief that students’ forum discussions can showcase their learning. For example, Vonderwell, Liang and Alderman (2007) maintain that “Instructors can use the discussion postings to assess student learning and progress” (p. 324). Swan, Shen and Hiltz (2006) propose a way of assessing participation in online discussions, arguing that “Assessment can be done by counting things like the number, regularity, and length of contributions.” They acknowledge, however, that students who know what is being assessed may load the forums with messages that lack quality. There is also the issue of ‘vicarious learners’ who may not post much (Sutton, 2001) and may be disadvantaged by the counting. Indeed, we think that some students may remain passive in chat rooms, and the assumption that “assessment of online learning equals counting the number of messages” is just a myth (Qing and Akins, 2005: 58). Thus, little interaction in forums does not prove that no learning is happening.
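
To make the counting approach concrete, the sketch below computes the three quantities Swan, Shen and Hiltz (2006) mention: the number, regularity, and length of a student’s contributions. It is a minimal illustration only; the post data, and the decisions to measure length as average word count and regularity as the average gap between posts, are our own assumptions rather than anything prescribed by the cited authors.

```python
from datetime import datetime
from statistics import mean

# Hypothetical forum posts for one student: (timestamp, message text).
posts = [
    (datetime(2014, 3, 3, 10, 15), "I think assessment for learning suits forums best because..."),
    (datetime(2014, 3, 5, 18, 40), "Replying to Anna: counting posts alone ignores vicarious learners."),
    (datetime(2014, 3, 10, 9, 5), "A short summary of our group's view on rubrics and feedback."),
]

# Number of contributions.
number = len(posts)

# Length of contributions, measured here as the average word count (an assumption).
avg_words = mean(len(text.split()) for _, text in posts)

# Regularity, measured here as the average gap in days between consecutive posts
# (another assumption; the literature does not prescribe a formula).
timestamps = sorted(ts for ts, _ in posts)
gaps = [(later - earlier).days for earlier, later in zip(timestamps, timestamps[1:])]
avg_gap_days = mean(gaps) if gaps else None

print(f"posts: {number}, average words per post: {avg_words:.1f}, average gap: {avg_gap_days} days")
```

As the paragraph above notes, such counts are easy to game and say nothing about quality or about vicarious learners, so they can only ever be one input into a broader judgement.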


 

Assessing collaborative activities online

 

There are many online activities that involve collaboration and could be assessed, such as group projects or collaborative writing (Benson, 2003; Macdonald, 2003). Building on the premise that assessment shows what is important for learners to know, Macdonald (2003) claims that “assessment related tasks attract student attention at the expense of non-assessed tasks” (p. 378). Therefore, the importance of such collaborative activities in online learning can only be proven by assessing them, because “if you value collaboration as an instructor, you need to find ways to motivate students and to assess collaborative activity” (Swan, Shen, & Hiltz, 2006: 45). A study conducted by Macdonald (2003) concluded that “more students will participate in online collaborative activities if they are linked to assessment” (p.389). Assessing online collaborative activities may therefore encourage students to engage in them.

 

Benson (2003) suggests that rubrics specifying what students are expected to do in asynchronous and synchronous discussions may improve the assessment process. Thus, we believe the role of a rubric is to provide the grading criteria online learners need in order to demonstrate their understanding and to plan how to reach the goals. Swan, Shen and Hiltz (2006) make similar claims and give examples of such rubrics for assessing learning. Macdonald’s (2003) study showed that many students enjoyed the collaborative work, though not all of them, and for different reasons. She concluded that “many are less enthusiastic about collaborative assessment, where it requires them to rely on fellow students for marks” (p.389). Macdonald recounts that some students in her study would have preferred more marks for the individual part than for the collaborative part of the assignment. She contends that linking collaboration with marks constitutes “the ultimate test of mutual trust, and underlines the distinction, for some students, between the pleasure of online collaborative study and the pain of collaborative assessment” (p.388). To avoid conflicts, the teacher has to plan such activities carefully.
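
As a purely illustrative sketch of the kind of rubric discussed above, the snippet below represents a discussion-participation rubric and applies it to one student’s scores. The criteria, weights, and 0-4 scale are hypothetical examples of our own; they are not the rubrics published by Benson (2003) or by Swan, Shen and Hiltz (2006).

```python
# Hypothetical discussion-participation rubric: criterion -> (weight, descriptor).
rubric = {
    "relevance":   (0.3, "posts address the discussion topic and the course readings"),
    "evidence":    (0.3, "claims are supported with sources or concrete examples"),
    "interaction": (0.2, "posts respond to and build on peers' contributions"),
    "timeliness":  (0.2, "contributions are spread across the discussion period"),
}

def grade(scores: dict[str, int]) -> float:
    """Combine per-criterion scores on a 0-4 scale (an assumption) into a weighted mark out of 100."""
    total = sum(weight * scores[criterion] for criterion, (weight, _) in rubric.items())
    return round(total / 4 * 100, 1)

# Example: one student's scores, as judged by the instructor or a peer reviewer.
print(grade({"relevance": 4, "evidence": 3, "interaction": 3, "timeliness": 2}))  # 77.5
```

Sharing the rubric, including its descriptors, with students before the discussion starts is what lets it also serve the ‘assessment for learning’ purpose described earlier.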

 

Haythornthwaite (2006) proposes a solution to the issues raised by Macdonald (2003) by presenting two ways in which collaborative activities can be organized. On the one hand, there is Coordinated Action, in which “students coordinate their activities, doing piecework that is later assembled into a whole, or passing pieces from one person to another in an assembly line model” (p.12). On the other hand, there is Collaborative Action, in which “no single hand is visible in the final product, and thus assessment is of the work of the group as a whole, not of any individual” (p.12). This can be done in assessments that take the form of report writing or e-portfolios. For instance, Macdonald (2003) combined individual and collaborative assessment in an activity that consisted of “an individually produced report, attracting individual marks, together with a summary and conclusion for which everybody in the group receives the same mark” (p.386). In this case, students who like cooperative work and those who do not will each find their share in the assessment.
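
A small sketch of how marks might be combined in a mixed design like Macdonald’s is given below. The 70/30 weighting and the function name are our own assumptions for illustration; the cited study does not specify these numbers.

```python
def final_mark(individual_report: float, group_summary: float,
               individual_weight: float = 0.7) -> float:
    """Weighted combination of an individually marked report and a shared group mark.

    Every member of a group receives the same `group_summary` mark, while
    `individual_report` differs per student (mirroring the mixed design described
    above). The default 70/30 split is a hypothetical weighting.
    """
    return round(individual_weight * individual_report
                 + (1 - individual_weight) * group_summary, 1)

# Two students in the same group: same group mark, different individual marks.
print(final_mark(individual_report=82, group_summary=65))  # 76.9
print(final_mark(individual_report=58, group_summary=65))  # 60.1
```

Making the weighting explicit, and announcing it in advance, responds to the preference Macdonald reports among students for giving more marks to the individual part than to the collaborative part.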


 

Self- and peer-assessment

 

The literature shows many benefits of involving students in assessing their own learning through self- and peer-assessment. McConnell (2002), for example, suggests that involving students in their own assessment prepares them for life and work. Benson (2003) advocates for peer feedback and self-assessment as an important part of online learning assessment. Indeed, self-assessment allows “students [to] measure their own learning and achievement” (Robles and Braathen, 2002: 45). This also seems to reflect students’ preferences. For example, in McConnell’s (2002) study, 94% of students were of the view that “the tutor alone should not be the one to assess their course work” (p.78). Benson (2003) adds that “Opportunities for self-assessment can be valuable to learners”, showing that it is important for students to self-evaluate.

 

Citing previous research, Chen et al. (2009: 284) suggest that involving students in the assessment of their peers impacts positively on their performance and attitudes towards learning. Benson shares this view, maintaining that peer feedback is “an effective assessment technique” that can result in more learning. In practice, “learners can share drafts of writing projects and obtain feedback from each other. When learners are provided with rubrics to structure their feedback, it becomes an opportunity for higher-level learning” (p. 76). Chen et al. (2009) divide peer-assessment into two forms: peer observation and peer feedback. They argue that peer observation does not differ from observational learning, defining it as “learning by observing others’ behaviours and performance” (p.285). They also classify peer feedback as either negative or positive. From their study’s findings, they conclude that “…positive feedback or negative feedback has no significant effect on learners’ reflection levels, but peer feedback could give learners an opportunity to play the role as an instructor: from the peer learning perspective, this is good for learners to improve their own critical thinking abilities by giving comments to their peers” (p.290). These findings suggest that involving students in the assessment process has benefits, although the authors also note that there is no research showing what impact a large amount of negative peer feedback would have on students’ reflection.

 

Regarding the feasibility of these forms of assessment, Anderson (2008) argues that communication technologies, together with the kind of learners found in online courses, who are often already in work contexts, have increased the opportunities for self-assessment. Nevertheless, depending on the nature and objectives of the online course, the type of learners involved, and the instructor’s beliefs about learning, these types of assessment may not be welcome in some online courses.


 

Issues in online assessment

 

Assessment is an important aspect of learning and therefore has to be conducted in a way that ensures credibility. Many issues are linked to the assessment of online learning. Prakash and Saini (2012) suggest that two require particular attention: reliability and credibility. For Benson (2003), all issues in the assessment of online learning fall under one umbrella term: academic dishonesty. The following sections briefly present forms of ‘academic dishonesty’ as they appear in the literature and ways of dealing with them.


 

Fake identities

 

Many scholars consider that one of the biggest issues in online assessment is faked identity (Olt, 2002; Rowe, 2004; Benson, 2003; Milam, Voorhees, & Bedard‐Voorhees, 2004). They argue that it is almost impossible to confirm “that the learner enrolled in online study is the learner who completes the coursework, including assessments” (Benson, 2003: 72). As a solution, Olt (2002) suggests a variety of methods, including increasing teacher-student interaction so that the teacher gets to know the student well, giving more collaborative assessment, changing passwords for every test, and giving students many tasks to limit the chance of having someone else do the work for them all the time. This, however, tends to ignore the impact too much work would have on students’ ability to complete the course. Rowe (2004), on the other hand, suggests the use of “some traditional tests” where identities can be checked. Again, this may not be possible for some learners, depending on their location and availability.


 

Plagiarism and cheating

 

Also related to faked identities is plagiarism, which is seen as another issue because of the ease with which information is found and copied online by students (Benson, 2003), as is students’ ability to bring in, unnoticed, whoever or whatever they like to help with an online assessment, since no one can see them (Rowe, 2004). Heberling (2002) goes even further, arguing that the “cut-and-paste technology makes cheating so easy that the students get both lazy and sloppy.” Although Heberling (2002) sees this technology as negatively affecting students’ motivation to learn, he nevertheless acknowledges that it is also much easier to detect cheating online. Moreover, Stuber-McEwen, Wiseley and Hoggatt (2009), who compared cheating among online and face-to-face students, found that online students reported much less involvement in cheating than their face-to-face counterparts. This implies that cheating, which is feared to be a very big issue in the assessment of online learning, may not be as widespread in online settings as some scholars think, or worse than in face-to-face settings.

 

Despite these growing concerns, solutions to the problem of plagiarism and cheating exist. Some propositions include conducting formal assessments at a specific physical site, such as a local school, where identities can be checked (Benson, 2003; Rowe, 2004); using online plagiarism checkers (Benson, 2003; Heberling, 2002; Olt, 2002; Prakash and Saini, 2012); giving students and discussing with them the “academic integrity/dishonesty policy” (Olt, 2002); or using multiple and diversified assessments that would allow the instructor to detect inconsistencies in a student’s work (Olt, 2002; Benson, 2003). However, some of these suggestions may be difficult to implement. For example, bringing students to an examination centre would undermine the flexibility that online learning prides itself on and could be costly, because online learners are now everywhere, including remote areas, and often choose online courses precisely because they cannot attend physical classes.
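
To illustrate the basic principle behind the text-matching tools mentioned above, the toy sketch below flags overlapping word sequences between a submission and a known source. It is only a sketch of n-gram overlap under our own assumptions (5-word shingles, simple lower-casing); it is not the algorithm of any particular plagiarism-detection service.

```python
import re

def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Return the set of word n-grams in a text (5-word shingles; an arbitrary but common choice)."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(submission: str, source: str, n: int = 5) -> float:
    """Fraction of the submission's n-grams that also appear in the source text."""
    sub = ngrams(submission, n)
    return len(sub & ngrams(source, n)) / len(sub) if sub else 0.0

source = "Assessment is a key component of any teaching and learning system."
submission = ("Many agree that assessment is a key component of any "
              "teaching and learning system in modern education.")
print(f"{overlap(submission, source):.0%} of the submission's 5-grams match the source")
```

Real services work against large indexed corpora and report matches for a human to judge; the point here is simply that this kind of detection is mechanical and scales well, which is consistent with Heberling’s (2002) observation that cheating can be easier to detect online.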

 

Conclusion

 

Advances in online learning technologies provide more options for assessing learning. In this review, we explored arguments about the similarity between online and face-to-face assessment. We have shown that, although the two may operate under the same principles, online learning assessment has an advantage because of the affordances of the tools used. However, a radical change in traditional assessment techniques is still required to suit the particularity of online learning assessment. We have also presented different views on what should be assessed and how this may vary depending on a variety of factors: the skills to be learned or the purpose of a particular online course influences what is assessed. Instructors may choose assessment for learning or assessment of learning, using different forms of assessment which include, but are not limited to, interactions (synchronous and asynchronous), collaborative tasks, and self- or peer-assessment. We finally explored some of the main challenges in online learning assessment. Fake identities, plagiarism and cheating were presented not as the only issues, but because their recurrence in the literature makes them some of the most important. Ways of dealing with these challenges, and their limitations, have been discussed; the solutions include conducting formal assessments at a physical site, using online plagiarism checkers, varying assessments, and so on. In conclusion, we think that exploring these aspects of online assessment has provided us, as language teachers, with a sufficient foundational awareness for assessing online learning work. We also note that there is little literature regarding the assessment of particular skills, such as practical skills; this represents a gap that needs to be filled by research.


 

REFERENCES

 

Anderson, T. (2008). Towards a theory of online learning. In Anderson, T. (Ed.).The theory and practice of online learning. Athabasca University Press. 45-74.

Bauer, J. F. (2002). Assessing student work from chatrooms and bulletin boards. New Directions for Teaching and Learning, 2002(91), 31-36.

Benson, A. D. (2003). Assessing participant learning in online environments. New Directions for Adult and Continuing Education, 2003(100), 69-78.

Chen, N. S., Wei, C. W., Wu, K. T., & Uden, L. (2009). Effects of high level prompts and peer assessment on online learners’ reflection levels. Computers & Education, 52(2), 283-291.

Elwood, J., & Klenowski, V. (2002). Creating communities of shared practice: the challenges of assessment use in learning and teaching. Assessment & Evaluation in Higher Education, 27(3), 243-256.

Goodyear, P., Salmon, G., Spector, J. M., Steeples, C., & Tickner, S. (2001). Competences for online teaching: A special report. Educational Technology Research and Development, 49(1), 65-72.

Haythornthwaite, C. (2006). Facilitating collaboration in online learning. Journal of Asynchronous Learning Networks, 10(1), 7-24.

Heberling, M. (2002). Maintaining academic integrity in online education. Online Journal of Distance Learning Administration, 5(2). Retrieved April 23, 2014 from http://www.westga.edu/~distance/ojdla/spring2002/heberling51.html

Macdonald, J. (2003). Assessing online collaborative learning: process and product. Computers & Education, 40(4), 377-391.

McConnell, D. (2002). The experience of collaborative assessment in e-learning. Studies in Continuing Education, 24(1), 73-92.

Milam, J., Voorhees, R. A., & Bedard‐Voorhees, A. (2004). Assessment of online education: Policies, practices, and recommendations. New Directions for Community Colleges, 2004(126), 73-85.

Olt, M. R. (2002). Ethics and distance education: Strategies for minimizing academic dishonesty in online assessment. Online Journal of Distance Learning Administration, 5(3). Retrieved April 23, 2014 from http://www.westga.edu/~distance/ojdla/fall53/olt53.html

Prakash, L. S., & Saini, D. K. (2012, July). E-assessment for e-learning. In Engineering Education: Innovative Practices and Future Trends (AICERA), 2012 IEEE International Conference on (pp. 1-6). IEEE.

Qing, L., & Akins, M. (2005). Sixteen myths about online teaching and learning: Don’t believe everything you hear. TechTrends, 49(4), 51-60.

Robles, M., & Braathen, S. (2002). Online assessment techniques. Delta Pi Epsilon Journal, 44(1), 39-49.

Rowe, N. C. (2004). Cheating in online student assessment: Beyond plagiarism. Online Journal of Distance Learning Administration, 7(2). Retrieved April 23, 2014 from http://www.westga.edu/~distance/ojdla/summer72/rowe72.html

Slaouti, D. (2007). Teacher learning about online learning: experiences of a situated approach. European Journal of Teacher Education, 30(3), 285-304.

Stuber-McEwen, D., Wiseley, P., & Hoggatt, S. (2009). Point, click, and cheat: Frequency and type of academic dishonesty in the virtual classroom. Online Journal of Distance Learning Administration, 12(3).

Sutton, L. A. (2001). The principle of vicarious interaction in computer-mediated communications. International Journal of Educational Telecommunications, 7(3), 223-242.

Swan, K., Shen, J., & Hiltz, S. R. (2006). Assessment and collaboration in online learning. Journal of Asynchronous Learning Networks, 10(1), 45-62.

Vonderwell, S., Liang, X., & Alderman, K. (2007). Asynchronous discussions and assessment in online learning. Journal of Research on Technology in Education, 39(3).
