
Assessing Foundational Literacy in English and the Potential Role of Technology

The Importance of Foundational Literacy

Not only is reading important in its own right; basic literacy is also the foundation children need in order to succeed in other areas of education. It is important to ensure that students acquire sufficient reading ability at the foundational stage, i.e. by grades 1 and 2. Learning these foundational skills becomes more difficult as students grow older, since classroom instruction moves on to other subjects and more advanced skills instead of focusing on basic reading. Because instruction, studying, and assessment in all subjects involve written elements such as textbooks, worksheets, and exams, students need to be able to read and write in order to learn and succeed in school. Over time, therefore, learning outcome gaps widen between students who have sufficient foundational reading skills and those who do not. Children who do not learn to read in the first few grades are more likely to repeat grades and to eventually drop out of school. A low level of literacy further limits a person’s capacity for self-guided, lifelong learning beyond the classroom walls.

Yet studies have found that in many countries, students who have been enrolled in school for as many as six years are unable to read and understand a simple text. This is why reading assessments play a vital role at this stage: they measure whether each student has attained the necessary reading level so that timely remediation can be provided if needed.

Skills Involved In Reading

There are several skills that students learn in the process of reading, ranging from the very basics of letter recognition to the more complex skill of reading comprehension. By Kindergarten and Class 1, the reading skills students should have acquired include:

  • Letter recognition: the ability to recognise the names of all the individual letters of the alphabet and the sound that each makes
  • Phonics: the ability to identify the relationship between the written combinations of letters and the sounds they make
  • Phoneme segmentation: the ability to break a word into its individual sounds. This is a precursor to reading the entire word together. For example, the word ‘cat’ consists of three sounds: /k/ /a/ /t/.
  • Decoding: the ability to recognise and correctly sound out written words
  • Print concepts: an understanding of how words and sentences are represented on the page. For example, how one sentence is differentiated from another by capitalising the first letter of the first word, by adding a full stop at the end, etc.
  • Comprehension: the ability to put together the meaning of the words and sentences read in order to first understand the literal meaning of the text (as a start, to be able to answer questions like who, what, when, where, why about the text). At the next level, comprehension includes filling in the gaps and making inferences about things not specifically mentioned in the text.

Assessing Foundational Reading Ability

In English-medium schools around the world, different types of tests and tools are used to assess reading ability. Some of the standard tests involve asking the student to identify letters on a page, to identify the right letters needed to form a word, to split given words into their phonemes, to form rhyming words, to complete given sentences, and to read a text and answer questions on it.

Other commonly administered tests involve non-word reading and oral reading fluency. A non-word reading fluency question tests the student’s decoding ability. In this test, the student is presented with a list of non-words: “words” that have no actual meaning but are put together following the conventions that regular words follow (e.g. gub, pote). Since students cannot have seen these “words” before, the question tests whether they can use their knowledge of phonics to sound them out correctly. This is often a timed exercise, in which students have to correctly read out as many non-words as they can in a minute.

In an Oral Reading Fluency (ORF) test, students are asked to read out as much as they can from a given passage in a minute. This tests not only the student’s accuracy, but also their reading speed. Reading pace is an important skill to develop as slow readers will find it increasingly difficult to comprehend and learn in higher classes.
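To make the scoring concrete, here is a minimal sketch (in Python) of how a fluency score might be computed as words correct per minute from a timed reading exercise. The word lists and timing below are purely illustrative; actual instruments such as DIBELS define their own scoring rules.

```python
def words_correct_per_minute(target_words, words_read, seconds_taken):
    """Count how many of the words read aloud match the target list,
    then normalise the count to a per-minute rate."""
    correct = sum(
        1 for target, read in zip(target_words, words_read)
        if target == read
    )
    return correct * 60 / seconds_taken

# Example: a student attempts 8 non-words in 30 seconds and reads 6 correctly.
targets = ["gub", "pote", "lan", "dit", "sume", "rop", "fim", "bave"]
attempt = ["gub", "pote", "lan", "bit", "sume", "rop", "fin", "bave"]
print(words_correct_per_minute(targets, attempt, seconds_taken=30))  # 12.0
```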

Traditionally, these assessments are conducted by teachers in one-on-one sessions with students. Several external assessments of reading, such as DIBELS and ASER, are also conducted in person. The reason is that paper-based tests require children to already have some minimum level of reading fluency and comprehension. If a child cannot read the question, misunderstands the instructions, or has difficulty writing, the test will not accurately measure what they know, even if they do have the skill the question is meant to assess.

Since such tests are administered one-on-one, they are time-intensive. The teacher has to spend a lot of time creating the assessment, conducting it, and then scoring each student. Even if multiple teachers or evaluators are employed to test students, they need to be trained and must work in a coordinated manner to ensure that instructions and scoring are standardised. This makes it harder to conduct such tests on a large scale and to compare data across schools and regions.

Can technology help?

One advantage that technology offers is standardisation. Each student can be given exactly the same instructions and be scored the same way. Another major advantage is that multiple students can take the test at the same time. Instructions on how to answer each question can be given through audio and video for students.

The grading process is also much faster, with systems being able to record and calculate the students’ score immediately after they finish the test. Detailed reports can be generated and provided to teachers. If responses are recorded, teachers can also view their students’ performance on each question to gain further insights. 
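As an illustration of how such instant scoring might work, here is a minimal sketch of grading simple question formats (MCQ, fill in the blanks) against an answer key, with a per-question breakdown that could feed a teacher-facing report. The question IDs and answers are made up for the example.

```python
answer_key = {"q1": "b", "q2": "cat", "q3": "a"}
student_responses = {"q1": "b", "q2": "cot", "q3": "a"}

# Mark each question right or wrong, then total the score.
per_question = {q: student_responses.get(q) == ans for q, ans in answer_key.items()}
total_score = sum(per_question.values())

print(per_question)                                # {'q1': True, 'q2': False, 'q3': True}
print(f"{total_score}/{len(answer_key)} correct")  # 2/3 correct
```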

Most of the necessary skills can be tested through such a digital test, including letter recognition, phonics, sentence completion, syntax, grammar, comprehension, and vocabulary. For instance, Educational Initiatives recently piloted a reading assessment for grades 1–4 with around 18,000 students in the UAE. All instructions were given in audio format, and simple question formats like MCQs, fill in the blanks, and dropdowns were used. Some of the question types used in the pilot, which illustrate how a digital platform can test the above-mentioned skills, were:

  • Phonics (completing a word): students were asked to enter the letter that would complete a word.
  • Phonics (building a word): students were asked to choose the letters that would make a meaningful word.
  • Grammar and syntax: students had to form a meaningful sentence by placing each given option in the right position.
  • Maze/Cloze comprehension, a commonly used test of reading comprehension: students had to identify the right words to complete a passage based on its context.

The pilot showed promising results, with class-wise average performance across schools ranging from 35% to 87%, and most students performing well on letter recognition and phonics questions. A survey of around 25 students was conducted in one of the schools, where students who had attempted the digital assessment were then asked to read out words from a sheet, similar to a word reading fluency test. Students’ scores on the digital and the in-person assessments were compared, and the team found moderate to high positive correlations between the two (grade-wise correlations ranged from around 0.5 to 0.85). This suggests that digital assessments have the potential to provide an estimate of a student’s reading ability similar to that of traditional in-person tests.
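For readers curious how such grade-wise correlations are computed, the sketch below uses Pearson correlation on two hypothetical score lists (these are not the pilot’s actual data). Values near 1 indicate that the digital and in-person tests rank students similarly.

```python
from statistics import correlation  # Pearson correlation, Python 3.10+

# Hypothetical scores for the same seven students on both assessments.
digital_scores   = [42, 55, 61, 70, 78, 83, 90]   # % score on the digital test
in_person_scores = [10, 14, 15, 19, 18, 22, 25]   # words read correctly from the sheet

r = correlation(digital_scores, in_person_scores)
print(f"Pearson correlation: {r:.2f}")
```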

However, a few key skills still present difficulties for a digital approach. Voice recognition technology is far from perfect, so skills like non-word reading or ORF, which can only be assessed through spoken input from the student, are difficult to test digitally.

However, even here, there is a possibility for technology to eventually be used. Machine learning offers a promising avenue: a system can be trained to recognise whether a student is able to read out words correctly. While this may not be accurate for every single word, if the system is able to collect enough input from each student (i.e. if each student reads out a couple of paragraphs’ worth of words), it can provide results that are accurate overall.
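The aggregation idea can be sketched as follows. The transcribe function is only a placeholder for whatever speech-to-text service is used, not a real API, and a production system would align words more robustly (for example with edit distance) rather than comparing position by position.

```python
def reading_accuracy(target_passage: str, audio_file: str) -> float:
    """Estimate what fraction of the passage the student read correctly,
    given an (imperfect) automatic transcript of their recording."""
    transcript = transcribe(audio_file)  # placeholder ASR call, not a real API
    target_words = target_passage.lower().split()
    spoken_words = transcript.lower().split()
    # Simple position-by-position comparison; individual recognition errors
    # average out once the student has read a couple of paragraphs.
    correct = sum(1 for t, s in zip(target_words, spoken_words) if t == s)
    return correct / len(target_words)
```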

So, at present, a mixed approach is perhaps a good option to ensure that all the required skills are covered while also maximising efficiency. A majority of the skills can be covered in an online test, while additional tests such as ORF can be administered in person. This significantly reduces the amount of time needed per student. Further, if teachers can enter these scores into the same digital platform used for the rest of the assessment, a final report can still be generated online to provide an overall evaluation of the student’s reading ability.
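A minimal sketch of how the two sources of scores could be merged into one report is shown below; the students, skill names, and field names are invented purely for illustration.

```python
# Scores auto-marked by the platform (fraction correct per skill).
digital_results = {
    "Student A": {"phonics": 0.90, "comprehension": 0.75},
    "Student B": {"phonics": 0.60, "comprehension": 0.55},
}
# ORF scores (words correct per minute) entered manually by the teacher.
teacher_entered_orf = {"Student A": 48, "Student B": 31}

def combined_report(name):
    """Merge the platform's scores with the teacher-entered ORF score."""
    report = dict(digital_results[name])
    report["orf_wcpm"] = teacher_entered_orf.get(name)
    return report

for student in digital_results:
    print(student, combined_report(student))
```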

Finally, technological interventions could help in bridging the gaps in students’ reading abilities after these have been identified. Digital learning platforms could provide weaker students with extra exposure, practice, remediation and guidance to help them catch up to the appropriate level.

Whether or not it is feasible for all schools to adopt such an approach immediately, it raises interesting possibilities for the future of assessment, where technology can play a large role in making teachers’ lives easier and in covering a range of student needs.

References:

Hoover, W. A., & Gough, P. B. (n.d.). The Reading Acquisition Framework – An Overview. Retrieved from https://sedl.org/reading/framework/overview.html

Lamba, R. (2019, February 23). So What Is Reading Anyway? [Web log post]. Retrieved from https://www.linkedin.com/pulse/so-what-reading-anyway-ritu-lamba/

RTI International. (2015). Early Grade Reading Assessment (EGRA) Toolkit, Second Edition. Washington, DC: United States Agency for International Development.

University of Oregon. (2018). Dynamic Indicators of Basic Early Literacy Skills (DIBELS®), 8th Edition.
