
Summary

This protocol presents a step-by-step method for using a valid and reliable writing assessment application designed for kindergarten-level education. The application operates as a curriculum-based assessment tool, specifically focused on evaluating early writing skills in young learners.

Abstract

Kindergarten writing involves acquiring fundamental skills, such as letter formation, phonemic awareness, and the gradual use of written language to express ideas. In this context, tablet-based curriculum assessments present new opportunities for teaching and evaluating these early writing abilities. To the best of our knowledge, there is currently no tool with these features available for Spanish-speaking children. Therefore, the primary objective of this study was to present a tablet-based protocol for screening at-risk young writers. The protocol is specifically designed for kindergarten-level education and serves as a curriculum-based assessment tool focused on evaluating early writing skills in young learners. The user-friendly application incorporates interactive tasks and exercises, including assessments of phonological awareness, name writing, alphabet letter copying fluency, and oral narrative skills. Together, these tasks cover multiple dimensions of early writing. The application is aligned with kindergarten curriculum objectives, ensuring that assessments adhere to educational standards. By providing educators with a digital platform to assess and enhance students' writing skills, the tool empowers them to make data-driven decisions for effective instruction during the early stages of writing development. Moreover, functioning as a curriculum-based measurement (CBM), it offers valuable support for identifying potential writing challenges in young learners and for continuously monitoring their progress, enabling early intervention and tailored instruction to optimize writing skill development.

Introduction

The primary aim of this study is to introduce a multimedia assessment protocol for kindergarten students designed to identify potential writing difficulties, and to explore its internal structure and diagnostic accuracy.

At the kindergarten level, writing marks the beginning of a child's literacy journey. It goes beyond scribbles and shapes, representing the foundation for communication and creative expression. The acquisition of writing skills is a matter of paramount concern, one that demands the attention of parents, educators, children, and scholars alike. The World Health Organization (WHO)1 (2001) designated writing difficulties as a significant impediment to school participation, a pivotal component of a child's normative developmental trajectory. Writing, serving as an indispensable medium, empowers children to articulate their knowledge and thoughts, enabling their participation across a spectrum of academic endeavors2. However, this deceptively elementary undertaking is far from simple and encompasses many intricate processes. Children must invest time and diligence to acquire and refine this nuanced skill. The transition from ideation to orthographic representation necessitates the orchestration of cognitive, linguistic, and motor processes.

Numerous national reports have shed light on the prevalence of students failing to attain the requisite proficiency levels in writing. For instance, in Spain, the Instituto Nacional de Evaluación Educativa [National Institute for Educational Evaluation] (INEE) unveiled the results of its assessment conducted during the 1999-2000 academic year for Primary and Secondary Education3. The evaluation revealed that students' performance in writing fell below the expected standards and underscored substantial shortcomings in classroom instruction and pedagogical preparation. Additionally, Spain's Ministry of Education issued a General Diagnostic Assessment Report in 2009, which further underscored the low levels of foundational skills among fourth-year primary education students4. Article 20.3 of Organic Law 2/2006, dated May 3, pertaining to Education (LOE), subsequently amended by Organic Law 8/2013, enacted on December 9, aimed at enhancing educational quality (LOMCE), established a diagnostic assessment for third-year primary education5. This legislative provision stipulated that 'educational institutions shall administer an individualized evaluation for all students upon completing the third year of primary education.' The primary objective of this assessment was to ascertain the extent of proficiency in skills, competencies, and abilities encompassing oral and written expression, comprehension, calculation, and problem-solving, thereby gauging the acquisition of competence in linguistic communication and mathematics. The General Framework for this assessment was formulated collaboratively by 14 educational authorities and the Ministry of Education, Culture, and Sport (MECD). Finally, the 2010 general diagnostic assessment for compulsory secondary education (ESO) reported low scores on written expression processes, particularly in the presentation of written work, and underscored a disquieting educational landscape in Spain, particularly concerning language proficiency6.

Mastering the art of writing is a formidable challenge that demands intricate cognitive processing. Two models of writing have received significant conceptual support: the simple view of writing (SVW)7,8 and the not-so-simple view of writing (NSVW)9,10. The latter, a revision of the former, postulates that transcription skills, such as handwriting fluency and spelling, collaborate with self-regulatory functions, including attention, goal setting, and revision, in addition to working memory, in the nuanced process of text generation. This process involves the dual tasks of generating ideas and transforming those ideas into linguistic propositions intricately linked with oral language skills. Both transcription skills11,12 and oral language proficiency13,14,15 are robustly acknowledged as predictive factors in the early development of writing. Due to the intricate nature of writing, certain students encounter difficulties in attaining proficiency. Writing disabilities not only detrimentally impact self-efficacy and academic motivation but can also persist into adulthood, negatively influencing future workplace experiences and emotional well-being16,17. The DSM-5 defines learning disabilities in writing as follows: with impairment in written expression (315.2 [F81.81]), skills in spelling, grammar, punctuation, clarity, and organization of written expression are affected. Additionally, the DSM-5 specifies that in this disorder, the individual may add, delete, or substitute vowels or words. As a result, written work is often illegible or difficult to read. An impairment in writing skills significantly interferes with academic achievement or activities of daily living that require the composition of written text (APA, 2013)18.

The assessment protocol targets key components such as alphabet copying, phonological awareness, name writing, expressive vocabulary, and oral narrative skills. Alphabet copying is defined as the process of having children reproduce letter forms by tracing, copying, or writing over models of individual alphabet letters. This task relies predominantly on motor reproduction skills rather than long-term memory, with enhanced performance correlating with the automation of motor reproduction routines19. Phonological awareness, on the other hand, assesses children's understanding of the sound structure of language. Specifically, it involves the explicit knowledge that words can be segmented into a sequence of phonemes and that phonemes can be blended into words20. Name writing, according to Puranik and Lonigan21, is defined as the ability to reproduce one's own name in writing, which is considered an important developmental milestone in early literacy and the first step toward mastering the alphabetic principle. Expressive vocabulary refers to the words a child can use to communicate their thoughts, feelings, and ideas through spoken language. It is a crucial aspect of language development in early childhood22. Finally, oral narrative skills encompass the ability of young children to understand, produce, and convey stories through spoken language. This involves sequencing events in a logical order, providing relevant details, and using appropriate language structures to recount personal experiences or fictional tales23. These skills are typically assessed by having children retell stories or generate narratives based on picture prompts, with measures evaluating elements such as story structure, use of evaluative language, temporal and causal connections, and overall coherence24.

Early assessment protocols are crucial for detecting students at risk of writing disabilities before the problem develops. Extensive research has shown that early detection leads to a better prognosis. In the context of the Response to Intervention (RTI) model, Curriculum-Based Measurements (CBMs) are often used for universal screening and progress monitoring because they are quick, practical, reliable, valid, accurate, and sensitive to change over time25,26,27. The CBM process involves evaluation at three distinct points during the school year (typically in fall, winter, and spring). This process allows educators to identify students at risk and to decide whether an intervention should be modified or whether progress is acceptable. However, there is limited scientific literature on early detection tools for writing processes compared to other academic areas, such as reading or mathematics28.

Technological advances in education
The use of touchscreen tablets among young children is increasing in the home and in early childhood education centers29. A review of the most recent scientific literature highlights the relevance and advantages of CBMs delivered through digital means (e.g., tablets)30. In recent years, the usability and applicability of CBMs have garnered significant attention, associated with inquiries into the successful implementation of progress monitoring tools within educational practice31. In this context, the significance of modern digital media in the educational domain is gaining recognition32. A curriculum-based assessment tool with the dual purpose of screening and progress monitoring utilizing tablet technology would offer numerous advantages for educators33,34,35.

With this type of digital medium, the advantages outweigh the disadvantages of conducting assessments aimed at early detection of learning difficulties (i.e., screening) in the core academic areas of reading, writing, and mathematics, as well as assessments designed to monitor students' learning progress in these domains. Digital assessments (whether conducted via computers or tablets) increase time efficiency: data collection, presentation, and documentation tasks typically performed by teachers can be automated and executed far more efficiently in terms of resource utilization36. Moreover, providing support for data interpretation becomes much more straightforward. In addition, research indicates that tablet-based assessments are well received by teachers and result in greater learning gains37,38.

In contrast, despite these potential advantages, researchers have highlighted several drawbacks that should be considered. Bremer and Tillmann39 emphasize the significant costs associated with acquiring and maintaining tablets, as well as concerns about exacerbating existing inequalities in access to technology. Additionally, tablets often have limited processing power, storage capacity, and input capabilities compared to laptops, which can hinder their functionality for demanding computational tasks or extended typing39. These limitations may reduce the effectiveness of tablet-based assessments, particularly in scenarios requiring advanced data analysis or extensive written responses; further technical challenges, such as small screen size, connectivity issues, battery limitations, and compatibility concerns with assessment software or platforms, can also hinder usability40,41. Moreover, engagement and distraction risks pose significant concerns, as digital assessments may struggle to maintain children's focus during testing42. Careful consideration of these factors is crucial when utilizing tablets for educational evaluation.

Overall, in light of the information presented in this report, the feasibility of CBMs utilizing tablets in the classroom appears promising and is increasingly needed in a digital society. This assertion is further substantiated by findings indicating that many children are gaining access to tablets both at school and at home43. In recent years, several tablet-based apps have been developed for the assessment of early literacy skills44,45,46. Neumann et al.46 evaluated the psychometric properties of tablet-based literacy assessments, focusing on both validity and reliability. They tested an app designed to assess expressive and receptive literacy skills in a sample of 45 children aged 3 to 5 years. The children used the app on a tablet to complete assessments related to alphabet and word recognition skills. The results indicated that tablet-based assessments utilizing both expressive and receptive response formats offer a valid and reliable way to measure early literacy skills in young children. The purpose of the study conducted by Chu and Krishnan44 was to develop and validate a computerized tool, the Quantitative Assessment of Prewriting Skills (QAPS), for assessing children's copying patterns as a measure of visual-motor skills. The authors demonstrated that the QAPS is feasible and adequate for measuring and distinguishing the drawing skills of typically developing children and children with visual-motor deficits. Similarly, Dui et al.45 designed a novel tablet-based app, Play Draw Write, which was tested among healthy children with mastered handwriting (third graders) and those at a preliterate age (kindergartners). Their findings provide evidence for the effectiveness of tablet technology in quantitatively evaluating handwriting production, and they propose that a tablet-based application holds potential for identifying handwriting difficulties at an early stage. Nevertheless, it is worth noting that none of these studies evaluated the diagnostic accuracy required for identifying children at risk of writing-related learning disabilities. Therefore, precise classification data are needed to validate the effectiveness of such a screening tool. The classification accuracy of a screening tool is determined by how well it identifies students as at risk or not at risk compared to a later writing outcome. Researchers often report the area under the curve (AUC), a measure of overall classification accuracy. The AUC serves as a diagnostic performance index by combining sensitivity and specificity into a single measure; students are classified as at risk or not at risk using their performance on another standardized test as the criterion. The following intervals were used to interpret the AUC: high, AUC > .90; good, AUC = .80-.90; moderate, AUC = .70-.79; and low, AUC = .50-.6947.
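
The AUC interpretation bands cited above map onto a simple lookup. The following sketch is a hypothetical helper (not part of the T-IPAE-K software) illustrating those intervals:

```python
def interpret_auc(auc: float) -> str:
    """Map an AUC value onto the interpretation bands cited in the text."""
    if auc > 0.90:
        return "high"
    if auc > 0.80:
        return "good"
    if auc >= 0.70:
        return "moderate"
    if auc >= 0.50:
        return "low"
    return "below chance"

print(interpret_auc(0.82))  # "good" (cf. Form C in the Results)
```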

Screening tools and their respective cutoff scores for risk determination must balance sensitivity and specificity: increasing sensitivity tends to decrease specificity and vice versa. Sensitivity is the proportion of students identified as at risk by both the screening tool and the outcome measure (true positives) among all students who scored at risk on the outcome measure (true positives plus false negatives). Essentially, a high sensitivity value indicates that fewer at-risk students are missed during screening. If the screening tool lacks sensitivity and fails to identify these students, they will not receive the necessary intervention. Conversely, specificity is the proportion of students accurately identified as not at risk by both the screening tool and the outcome measure (true negatives) among all students classified as not at risk by the outcome measure (true negatives plus false positives). Increased specificity leads to a reduction in false positives. Schools prioritize minimizing false-positive identifications during screening because excessive identification of students as at risk39 (i.e., a large number of false positives) strains resources allocated for intervention services. Although there is no universally agreed-upon standard for acceptable values of these indices, achieving high sensitivity values in universal screening is of paramount importance48,49.
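
As a worked illustration of these definitions, the sketch below computes both indices from confusion-matrix counts (the numbers are illustrative only, not study data):

```python
def sensitivity(tp: int, fn: int) -> float:
    # Proportion of truly at-risk students that the screener catches.
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    # Proportion of not-at-risk students that the screener correctly clears.
    return tn / (tn + fp)

# Illustrative counts: screening decision vs. outcome measure.
tp, fn, tn, fp = 40, 10, 70, 20
print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 0.80
print(f"specificity = {specificity(tn, fp):.2f}")  # 0.78
```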

Over the past few years, various paper-and-pencil screening instruments have been designed to assess writing proficiency among Spanish-speaking students at the kindergarten and elementary school levels. Specifically, the Indicadores de Progreso de Aprendizaje en Escritura (IPAE) serves as a CBM tailored to the elementary grades50. In turn, the Early Grade Writing Assessment in Kindergarten (EGWA-K) specifically targets kindergarten students, providing a Spanish standardized test that assesses fundamental literacy skills. It includes tasks such as transcribing words from images, segmenting pseudowords into phonemes, writing freely chosen words, and narrating a story based on a drawing. These diverse tasks demonstrate high reliability and validity in evaluating early writing abilities51.

However, to our knowledge, no technology-based CBM tools have yet been developed in Spanish for screening and monitoring learning progress in early writing. Recognizing the importance of identifying young children who may struggle with writing and the lack of computerized tools for the Spanish-speaking population, this study aimed to introduce a multimedia assessment protocol for kindergarten students and to explore its internal structure and diagnostic accuracy.

Protocol

The protocol presented here was conducted in accordance with the guidelines provided by the Comité de Ética de la Investigación y Bienestar Animal (Research Ethics and Animal Welfare Committee, CEIBA), Universidad de La Laguna. The data were collected at three different time points, capturing information exclusively from students whose parents, educational administrations, and schools provided consent.

NOTE: The app used in the protocol is Tablet App Indicadores de Progreso de Aprendizaje en Escritura para Educación Infantil [Basic Early Writing Skills for Kindergarteners] (T-IPAE-K). It includes five tasks: 1) Copying Alphabet Letters, 2) Name Writing, 3) Expressive Vocabulary, 4) Phonological Awareness, and 5) Oral Narrative. A pedagogical agent provides instructions for each task, along with one or two trials (depending on the task) and a demonstration before the testing phase begins. An example of the application protocol for each task is provided below:

1. Experimental setup

  1. Use the following inclusion criteria: ensure that students are in the last year of kindergarten, are native Spanish speakers, and can follow instructions properly.
  2. Use the following exclusion criteria: exclude children with special educational needs, i.e., those who require specific educational support and attention due to sensory impairment, acquired neurological problems, or severe behavioral disorders, whether temporarily or throughout their schooling; such conditions are traditionally used as exclusionary criteria for learning disabilities.
  3. Install the application on the children's tablets. The application is installed on tablets using a single file with the .APK extension, the file type used by Android to distribute and install applications.
    NOTE: The file contains an automated installer that allows the user to select the location of the installation. The installer detects previous versions of the tool and warns the user of possible updates. These updates do not delete the saved data of the previous versions.
    1. Upon initial access to the application, users encounter a menu presenting several options: 1) Evaluar [start the assessment], 2) Corregir [score completed assignments], 3) Modificar [update student data], and 4) Nuevo [add new students].
    2. Initiate the process by creating the student profile. Enter essential details such as 'Student Code' [Código], 'Date of Birth' [Fecha de nacimiento], 'Application Date' [Fecha de aplicación], and 'School Name' [Nombre del colegio]. Click Save [Guardar] to finalize the new student profile; update the information later in the Modify [Modificar] section if necessary (see Figure 1).
    3. Before starting tasks, check that everything is working. Click Examiner, then Check Peripherals to confirm touch screen, keyboard, voice key, and recorder functionality. Save the configuration by clicking OK.
      NOTE: Each task is highlighted within a green box. The tasks are the following: Copia de letras [Copying alphabet letters], Vocabulario expresivo [Expressive vocabulary], Escritura del nombre [Name writing 1st part, 2nd part], Conciencia fonológica aislar [Phonological awareness], and Narrativa oral [Oral narrative].
    4. Perform each task highlighted in green. Upon task completion, ensure that the Menu returns to the task section and the box changes to red, signaling to both the examiner and the student that the task has been completed (see Figure 2).
    5. Examine children in the last year of kindergarten in a single session of approximately 25 min. Administer the tasks in a quiet room. Depending on the task, use the external keyboard either to score responses after the task is completed or to record hits and misses during the task.
      NOTE: No headsets are needed, as instructions can be heard through the tablet's built-in speakers, and the program records the student's responses directly on the tablet.
    6. Perform the task.
      1. Provide oral instruction about the task and demonstrate the procedure with the help of an in-built pedagogical agent.
      2. Ensure that the pedagogical agent provides initial instruction to the students on what the game is about and how to play it.
      3. When the pedagogical agent asks the students if they have understood the task, ensure that the student clicks the green tick if the answer is 'yes' or the red cross if the answer is 'no' (see Figure 3).
      4. Depending on the answer, have the student start the task. Ensure they perform the task independently, without any help.
        NOTE: The pedagogical agent is not present when the student is performing the task.
  4. Once the task has been completed, ensure the pedagogical agent provides the correct feedback.
  5. Repeat the initial instructions if the task is not understood.
    NOTE: When a student clicks the "I did not understand" option for the second time, the assignment gets coded in the database as "the student was not able to perform the task" and the task will be marked as completed.

Figure 1: CBM main and new student menu.

Figure 2: Tasks before and after completion of the alphabetic letter copying task.

Figure 3: "Did you understand?" screen.

2. Tasks

Figure 4: Demonstration of a student's performance in the alphabetic letter copying task.

  1. Copying the alphabet
    1. To access the task, click Evaluation [Evaluar] and select Copying the letters [Copia de letras].
    2. Let the pedagogical agent provide instructions and model the task. An example is shown in Figure 4.
    3. Press the Continue key on the external keyboard to proceed.
    4. Ask the student if they understood the task and have them click Yes or No accordingly. Guide them if needed.
    5. Begin the task: ask the student to reproduce, within 1 min, the vowels presented sequentially on the screen.
    6. Use the external keyboard's Continue key after completing each letter.
    7. After finishing the task, press the Exit door as directed by the pedagogical agent.
    8. Open the Correct [Corrección] section of the main menu and click on Copying the letters [Copia de letras].
    9. Record both correct and incorrect letters on the screen.
    10. Click on the box to be filled and use the external keyboard to enter the number of letters typed, correct letters, or errors (misalignment, omission, addition, or inversion).
    11. Using the external keyboard, identify and note the correct and incorrect letters. Also note misalignment, total or partial inversions, and the addition or omission of strokes in the corresponding section.
      NOTE: A letter is considered correct if it is free of the errors described in step 2.1.10. Also record the type of pencil grip the student used. The examiner uses images of different grip types to identify which one the child uses while writing. There are no specific citations for grip type, as it is based on the examiner's visual assessment.
    12. Once the correction is finished, click Save [guardar].
    13. Calculate the number of correct letters (maximum score: 5) and note any errors in alignment, inversions, additions, or omissions (see Figure 5).
      NOTE: Correct letters are defined as those without the following errors: misalignment, reversals, added strokes, and missing strokes. Misalignment refers to the distance between where a letter is intended to be positioned on the baseline and where it is actually placed. Reversals occur when a letter or part of it is reversed or rotated incorrectly (e.g., e/ǝ, a/ɒ, u/n). Added strokes are any strokes not belonging to the original letter in the template, while missing strokes refer to the absence of any stroke of the original letter on the answer sheet52.

Figure 5: Example of the correction of a student's performance in the alphabetic letter copying task.
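
The scoring rule in steps 2.1.10-2.1.13 can be summarized in a few lines. The sketch below is illustrative only; the error flags and their names are assumptions, and the app records these values through its correction menu:

```python
# Hypothetical per-letter error flags, as recorded by the examiner (step 2.1.10).
letters = [
    {"letter": "a", "misalignment": False, "reversal": False, "added": False, "missing": False},
    {"letter": "e", "misalignment": True,  "reversal": False, "added": False, "missing": False},
    {"letter": "i", "misalignment": False, "reversal": False, "added": False, "missing": False},
    {"letter": "o", "misalignment": False, "reversal": True,  "added": False, "missing": False},
    {"letter": "u", "misalignment": False, "reversal": False, "added": False, "missing": False},
]

# A letter counts as correct only if it is free of all four error types (step 2.1.13).
score = sum(
    not any((l["misalignment"], l["reversal"], l["added"], l["missing"]))
    for l in letters
)
print(f"correct letters: {score}/5")  # 3/5 in this example
```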

  2. Expressive vocabulary
    1. To access the task, click Evaluation [Evaluar] and then select Expressive Vocabulary [Vocabulario expresivo].
    2. Ensure that the pedagogical agent provides instructions to the student, followed by the modeling section (refer to Figure 6). Two example items will be presented.
    3. React to the student's performance using the Correct or Incorrect keys on the external keyboard.
    4. If the student gets the examples wrong, offer corrective feedback by showing the correct answer.
    5. Ask the student whether they have understood the task and have them press the Yes or No button on the display. If necessary, guide their answer.
    6. Once the examples are completed, the task starts. Images appear individually for each of the ten items. Record hits (through the Correct key) and misses (through the Incorrect key) using the external keyboard. Synonyms of the image names are also considered correct.
    7. After completing the task, press the Exit door as directed by the pedagogical agent.
      NOTE: The tablet automatically calculates the total number of hits, with 10 being the maximum score.

Figure 6: An example item from the expressive vocabulary task.

  3. Name writing
    1. To access the task, click on Evaluation [Evaluar] and then select Name Writing [Escritura del nombre].
    2. The task begins with its first part. Let the pedagogical agent offer the student the instruction and modeling section. In this first part, the pedagogical agent says, "Let's do an example; I am going to write my name," and the examiner accompanying the child writes his/her own name as an example (refer to Figure 7).
    3. Then, ask the student if they have understood the task by clicking the Yes or No button on the screen. Provide guidance if necessary.
    4. During the first part of the task execution, students write their names within the given guidelines in less than 1 min.
    5. After completing the first part, proceed to the second part following the same procedure. In this second part, the pedagogical agent explains the task and gives an example.
    6. In the second part, students write the names they know within the given guideline in less than 3 min.
    7. After completion, the pedagogical agent asks students about the names they have written. Note the names beneath the written responses.
    8. Press the Exit door as directed by the pedagogical agent.
    9. To correct, open Correct [Corregir] in the main menu and select Names Writing [Escritura del nombre].
    10. Use the external keyboard to enter the total number of correct letters and correctly spelled names in both parts.
    11. To move to the second part, press the Grey box (second part) [Segunda parte]. Also record the pencil grip the student used in both parts.
    12. Choose which of the following 12 grip types the student used in both parts: radial cross palmar, palmar supinate, digital pronation, brush, grasp with extended fingers, cross thumb, static tripod, four-finger, lateral tripod, dynamic tripod, dynamic quadrupod, or lateral quadrupod53.
    13. Click Save [Guardar] when the correction is complete.
      NOTE: A name is considered correct when written with all its corresponding letters, including homophones. Common mistakes include the omission, addition, substitution, or transposition of letters. A letter is considered correct when it is roughly recognizable. Incorrect spellings involving homophonic letters (e.g., writing the letter /b/ instead of /v/) are not counted as errors.
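
A minimal sketch of this scoring rule follows; the homophone mapping is an assumption covering only the b/v example given in the note, and the function is hypothetical rather than part of the app:

```python
# Homophonic letters are collapsed before comparison, so b/v substitutions
# are not penalized (only the b/v pair from the note is modeled here).
HOMOPHONES = str.maketrans({"v": "b"})

def name_is_correct(written: str, target: str) -> bool:
    # A name is correct when all corresponding letters are present,
    # allowing homophone substitutions.
    return written.lower().translate(HOMOPHONES) == target.lower().translate(HOMOPHONES)

print(name_is_correct("Bictor", "Victor"))  # True: b/v substitution not penalized
print(name_is_correct("Vitor", "Victor"))   # False: omitted letter
```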

Figure 7: Example of the correction of a student's performance in the name writing task.

  4. Phonological awareness
    1. To access the task, click on Evaluation [Evaluar] and then select Phonological Awareness - Isolate [Conciencia fonológica aislar].
    2. Allow the pedagogical agent to provide the student with instructions and task modeling. Two example items will be presented. If the student makes mistakes in the examples, the pedagogical agent will provide feedback by revealing the correct answer.
    3. Allow the pedagogical agent to check the student's understanding of the task by prompting the students to press the Yes or No button on the screen.
    4. Ensure the student understands the task and provide additional guidance, if needed.
    5. Complete this final step, and the task will begin.
    6. Present 18 items associated with this task.
    7. Using the external keyboard, mark as 'hit' when the student articulates the sound of the phoneme that needs to be isolated. Mark it as a 'partial hit' when the student names the letter and mark it as 'incorrect' for any other answer.
    8. When the task is completed, instruct students to follow the pedagogical agent's instructions, i.e., to press the Exit door and leave the task.
    9. An icon of a person emitting sound is displayed on screen to indicate that the reaction time is being recorded accurately (see Figure 8).
      NOTE: No corrections are required after the task is completed. The student is awarded 2 points for each correct response, 1 point for each partial correct response, and 0 points for each error. The tablet automatically calculates the total number of hits, both total and partial, with 36 being the maximum score achievable.
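
The 2/1/0 scoring rule in the note reduces to a weighted count; the sketch below is illustrative (the response labels are assumptions, and the tablet performs this computation automatically):

```python
# 2 points per hit, 1 per partial hit (letter name instead of phoneme sound),
# 0 per incorrect answer; 18 items, so the maximum score is 36.
POINTS = {"hit": 2, "partial": 1, "incorrect": 0}

responses = ["hit"] * 10 + ["partial"] * 4 + ["incorrect"] * 4  # illustrative only
total = sum(POINTS[r] for r in responses)
print(f"phonological awareness score: {total}/36")  # 24/36 here
```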

Figure 8: An example item from the phonological awareness task. A pedagogical agent says the word aloud, and the vocal key appears in the upper left corner when the student responds.

  5. Oral narrative
    1. To access the task, click on Evaluation [Evaluar] and then select Oral Narrative [Narrativa oral].
    2. Have the pedagogical agent offer instructions to the student.
    3. Ask the student if they understand the task and have them respond with 'yes' or 'no'.
    4. Give the student 30 s to think about their story (Figure 9).
    5. Start a countdown and begin the task once the countdown is finished.
    6. Allow the student up to 5 min to narrate their story based on the provided topic.
    7. Record the student's narrative on the tablet for later correction.
    8. Instruct the student to press the End button when finished.
    9. Display the Exit door to leave the task.
    10. For corrections, open Correct [Corregir] in the main menu, select Oral Narrative [Narrativa oral], and review the stories using the media player tool.
    11. Press the play button to replay the student's story, which the software recorded during the performance (Figure 10).
    12. Use the external keyboard to note the total number of words, unique words, word sequences, and T-units (a T-unit consists of a main clause plus any dependent or relative clauses directly connected to it); a counting sketch is provided after this list.
    13. Click Save [Guardar] when the correction is complete.
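
Of the four oral-narrative indices, total words and unique words can be counted mechanically from a transcript, whereas correct word sequences and T-units require the examiner's linguistic judgment. The sketch below (hypothetical, with an invented transcript) reflects that split:

```python
# Total and unique words are counted from the transcript; word sequences and
# T-units are examiner-coded and entered by hand via the external keyboard.
transcript = "un día me desperté y podía volar y volé sobre mi casa"  # invented example

words = transcript.split()
total_words = len(words)          # 12
unique_words = len(set(words))    # 11 ("y" repeats)
t_units = 2                       # examiner-coded: main clauses plus attached clauses
word_sequences = 9                # examiner-coded: correct adjacent word sequences

print(total_words, unique_words, t_units, word_sequences)
```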

Figure 9: An example of a prompt in the oral narrative task: "One day I wake up and I can fly."

Figure 10: Correction menu for the oral narrative task.

3. Student evaluation

  1. Following completion of the assessment by all students, evaluate the assignments and subsequently upload data to a server managed by the research team.
  2. Import, merge, and meticulously clean the data in preparation for subsequent analysis.
  3. To evaluate learning during the school year, ask students to perform the tasks and assess using Form A (fall), Form B (winter), and Form C (spring).
  4. Generate data for Form A of the application at the beginning of the academic year (fall).
  5. Generate data for Form B of the application at midyear (winter).
  6. Generate data for Form C of the application at the end of the academic year (spring).
  7. Assess the appropriateness of the data for exploratory factor analysis (EFA) by analyzing the correlations between variables to ensure they are sufficiently intercorrelated54.
  8. Use Bartlett's test to verify that your population correlation matrix differs from an identity matrix, indicating that your variables are not independent55.
  9. Apply the Kaiser-Meyer-Olkin (KMO) test to evaluate the sampling adequacy for including each variable in the EFA56.
  10. Examine the determinant of the correlation matrix to check for multicollinearity.
  11. Determine the number of factors to extract by retaining factors with eigenvalues of 1 or greater and by examining the scree plot to identify the point at which eigenvalues decline sharply, a criterion that works especially well in samples of over 200 participants57,58.
  12. Incorporate parallel analysis in addition to these commonly used criteria, as it is regarded as the most precise method for determining the number of factors to retain59.
  13. Perform the exploratory factor analysis to identify the latent factors underlying the observed variables60 (see the analysis sketch following this list).
  14. Compare the results with the EGWA-K and the Teacher Rating Scale administered to the same students at the end of the school year.
    NOTE: Teachers rated students' language, communication, and representation skills, including oral language, phonological awareness, communicative abilities, and initial writing, on a scale of 1 to 4, where 1 is "poor" and 4 is "excellent." These ratings (Teacher Rating Scale) reflect teachers' perceptions of students' academic competence and progress throughout the academic year.
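
A compact sketch of steps 3.7-3.12 is given below, assuming the Python factor_analyzer package; the data frame is simulated, and the column names follow the indicator abbreviations used in the Results:

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Hypothetical data: one row per student, one column per T-IPAE-K indicator.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(336, 7)),
                  columns=["ACL", "EV", "NW", "PA", "UW", "WS", "TU"])

chi2, p = calculate_bartlett_sphericity(df)   # step 3.8: Bartlett's test
_, kmo_model = calculate_kmo(df)              # step 3.9: KMO sampling adequacy
det = np.linalg.det(df.corr())                # step 3.10: multicollinearity check

fa = FactorAnalyzer(rotation=None)
fa.fit(df)
eigenvalues, _ = fa.get_eigenvalues()         # step 3.11: Kaiser criterion / scree

# Step 3.12: parallel analysis - retain factors whose eigenvalues exceed the
# mean eigenvalues obtained from random data of the same shape.
rand_eigs = np.mean([
    FactorAnalyzer(rotation=None)
    .fit(pd.DataFrame(rng.normal(size=df.shape)))
    .get_eigenvalues()[0]
    for _ in range(100)
], axis=0)
n_factors = int(np.sum(eigenvalues > rand_eigs))

print(f"Bartlett p = {p:.3g}, KMO = {kmo_model:.2f}, det = {det:.3f}, "
      f"factors to retain = {n_factors}")
```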

Results

For this study, 336 Spanish kindergarten students (163 boys, 173 girls; mean age = 5.3 years [63.6 months], SD = 0.29) were recruited from state and private schools in urban and suburban regions of Santa Cruz de Tenerife. Children with special educational needs were excluded, including children with sensory impairments, acquired neurological conditions, or other issues traditionally deemed exclusionary criteria for learning disabilities. This information was sourced from the Department of Education of the Canary Islands Government.

The variables of the application were computed as follows: the total number of correctly copied letters was used for the Alphabetic Letter Copy task; the Expressive Vocabulary task was assessed using the total number of correct items; for the Name Writing task, the total number of correctly written names from both parts of the task was recorded; the Phonological Awareness task considered the total number of correct and partially correct responses; and the Oral Narrative task included three measures: unique words (UW), the total number of correct word sequences, and the total number of terminal units (T-units). Finally, an aggregate variable was derived by averaging all the previously computed variables; this new variable encompassed the transcription and narrative competence measures. Descriptive statistics (mean, standard deviation, minimum, maximum, range, skewness, and kurtosis) for Forms A, B, and C are detailed in Table 1. Results indicated an approximately normal distribution of the data, with kurtosis and skewness indices below 10.00 and 3.00, respectively54.
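
The skewness and kurtosis screen described above can be applied per indicator; a minimal sketch, assuming pandas and the thresholds cited in the text:

```python
import pandas as pd

def roughly_normal(scores: pd.Series) -> bool:
    # Treat a distribution as approximately normal when |skewness| < 3
    # and |kurtosis| < 10, the thresholds cited in the text54.
    return abs(scores.skew()) < 3 and abs(scores.kurtosis()) < 10

scores = pd.Series([0, 1, 1, 2, 3, 3, 4, 5, 5, 5])  # hypothetical indicator column
print(roughly_normal(scores))  # True
```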

Descriptive statistics of T-IPAE-K measures per form

Form A

| Measure | n | M | SD | min | max | range | skew | kurtosis |
|---|---|---|---|---|---|---|---|---|
| ACL | 322 | 1.43 | 1.63 | 0 | 5 | 5 | 0.91 | -0.44 |
| EV | 320 | 7.30 | 2.74 | 0 | 10 | 10 | -1.35 | 1.11 |
| NW | 320 | 0.48 | 0.99 | 0 | 6 | 6 | 2.98 | 10.33 |
| PA | 310 | 4.01 | 7.57 | 0 | 36 | 36 | 2.38 | 5.13 |
| UW | 326 | 8.01 | 11.17 | 0 | 56 | 56 | 1.47 | 1.91 |
| WS | 326 | 9.05 | 15.99 | 0 | 104 | 104 | 2.62 | 8.18 |
| TU | 325 | 2.08 | 2.99 | 0 | 16 | 16 | 1.65 | 2.83 |

Form B

| Measure | n | M | SD | min | max | range | skew | kurtosis |
|---|---|---|---|---|---|---|---|---|
| ACL | 331 | 2.33 | 1.71 | 0 | 5 | 5 | 0.10 | -1.31 |
| EV | 329 | 7.78 | 2.13 | 0 | 10 | 10 | -1.58 | 2.89 |
| NW | 329 | 0.99 | 1.37 | 0 | 7 | 7 | 1.76 | 3.24 |
| PA | 317 | 8.49 | 11.62 | 0 | 36 | 36 | 1.15 | -0.15 |
| UW | 332 | 10.70 | 11.89 | 0 | 53 | 53 | 0.94 | 0.39 |
| WS | 332 | 12.57 | 16.51 | 0 | 91 | 91 | 1.86 | 4.36 |
| TU | 332 | 1.98 | 2.50 | 0 | 13 | 13 | 1.77 | 3.97 |

Form C

| Measure | n | M | SD | min | max | range | skew | kurtosis |
|---|---|---|---|---|---|---|---|---|
| ACL | 334 | 3.07 | 1.83 | 0 | 5 | 5 | -0.48 | -1.22 |
| EV | 331 | 8.42 | 2.35 | 0 | 10 | 10 | -2.33 | 4.80 |
| NW | 331 | 1.95 | 2.37 | 0 | 12 | 12 | 1.59 | 2.50 |
| PA | 322 | 12.64 | 12.79 | 0 | 36 | 36 | 0.67 | -1.09 |
| UW | 332 | 11.72 | 11.25 | 0 | 57 | 57 | 1.08 | 1.35 |
| WS | 332 | 13.53 | 15.71 | 0 | 85 | 85 | 1.89 | 4.46 |
| TU | 333 | 2.87 | 2.94 | 0 | 15 | 15 | 1.31 | 2.03 |

ACL = Alphabetic Copying Letters; EV = Expressive Vocabulary; NW = Name Writing; PA = Phonological Awareness; UW = Unique Words; WS = Word Sequences; TU = T-Units.

Table 1: Descriptive statistics of CBM indicators per form.

The concurrent and predictive validity of this assessment was established by administering it alongside a standardized paper-and-pencil writing test, the EGWA-K, and by gathering teachers' assessments using the Teacher Rating Scale (TRS), on which teachers rate their students' curricular competence in terms of skill acquisition or difficulty levels. The results revealed significant correlations between the CBM forms (A, B, and C) and both the EGWA-K and TRS, as shown in Table 2. Specifically, the form administered at the beginning of the academic year (fall) demonstrated a moderate association with both the EGWA-K (r = .38, p < .001) and the TRS (r = .24, p < .001). The mid-year form (winter) showed a stronger correlation with the EGWA-K (r = .42, p < .001) and a more substantial but still moderate correlation with the TRS (r = .33, p < .001). The form administered at the end of the academic year (spring) exhibited the highest correlations with the EGWA-K (r = .48, p < .001) and the TRS (r = .31, p < .001).

Correlation coefficients of Forms A, B, and C: concurrent and predictive validity

Form A

| Observed variable | 1 | 2 | 3 | 4 | 5 | 6 | 7 | EGWA-K | TRS |
|---|---|---|---|---|---|---|---|---|---|
| 1. ACL | 1.00 | .05 | .06 | .13* | .04 | .02 | .04 | .19*** | .04 |
| 2. EV | | 1.00 | .07 | .12* | .11* | .08 | .11 | .20*** | .18** |
| 3. NW | | | 1.00 | .34*** | .01 | .00 | .00 | .28*** | .21*** |
| 4. PA | | | | 1.00 | .05 | .01 | .04 | .28*** | .11 |
| 5. UW | | | | | 1.00 | .92*** | .97*** | .20*** | .09 |
| 6. WS | | | | | | 1.00 | .92*** | .21*** | .12* |
| 7. TU | | | | | | | 1.00 | .20*** | .09 |

Form B

| Observed variable | 1 | 2 | 3 | 4 | 5 | 6 | 7 | EGWA-K | TRS |
|---|---|---|---|---|---|---|---|---|---|
| 1. ACL | 1.00 | .04 | .05 | .17** | .10 | .08 | .06 | .12* | .02 |
| 2. EV | | 1.00 | .21*** | .20*** | .25*** | .22*** | .21*** | .29*** | .27*** |
| 3. NW | | | 1.00 | .28*** | .07 | .07 | .03 | .35*** | .11 |
| 4. PA | | | | 1.00 | .13* | .12* | .11* | .50*** | .09 |
| 5. UW | | | | | 1.00 | .96*** | .93*** | .19*** | .21*** |
| 6. WS | | | | | | 1.00 | .95*** | .17** | .19** |
| 7. TU | | | | | | | 1.00 | .17** | .18** |

Form C

| Observed variable | 1 | 2 | 3 | 4 | 5 | 6 | 7 | EGWA-K | TRS |
|---|---|---|---|---|---|---|---|---|---|
| 1. ACL | 1.00 | .02 | .24*** | .25*** | .08 | .10 | .05 | .21*** | .17** |
| 2. EV | | 1.00 | .08 | .12* | .10 | .12* | .13* | .14** | .10 |
| 3. NW | | | 1.00 | .42*** | .15** | .13* | .14* | .47*** | .22*** |
| 4. PA | | | | 1.00 | .08 | .07 | .09 | .51*** | .00 |
| 5. UW | | | | | 1.00 | .95*** | .94*** | .26*** | .14* |
| 6. WS | | | | | | 1.00 | .94*** | .25*** | .15* |
| 7. TU | | | | | | | 1.00 | .25*** | .14* |

Note. *p < .05; **p < .01; ***p < .001; ACL = Alphabetic Copying Letters; EV = Expressive Vocabulary; NW = Name Writing; PA = Phonological Awareness; UW = Unique Words; WS = Word Sequences; TU = T-Units; TRS = Teacher Rating Scale.

Table 2: Correlation coefficients of Forms A, B, and C: concurrent and predictive validity.

The results of the EFA using parallel analysis revealed a two-factor solution. Both the scree plot and the parallel analysis indicated that two factors should be retained (see Figures 11-13). All factor loadings were above 0.30 and statistically significant (p < 0.001). One factor was related to transcription skills (TS) (i.e., the number of accurately spelled names, total hits in expressive vocabulary, the total number of copied letters, and the number of correctly isolated phonemes), and the other was related to oral narrative competence (NC) (i.e., the number of unique words, the number of T-units, and the number of correct word sequences in the oral narrative).

This structure was confirmed through confirmatory factor analysis (CFA). Model fit was evaluated using the robust maximum likelihood (RML) estimation method and assessed through the following indices59,61: standardized root mean square residual (SRMR ≤ 0.08), chi-square (χ², p > 0.05), Tucker-Lewis index (TLI ≥ 0.90), comparative fit index (CFI ≥ 0.90), root mean square error of approximation (RMSEA ≤ 0.06), and composite reliability (ω ≥ 0.60). Modification indices (MIs) were also examined.
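
A minimal sketch of this two-factor CFA is shown below using the Python semopy package (an assumption; the study's robust ML estimator may differ from semopy's default, so the sketch is illustrative only). The indicator names follow the abbreviations used in Figures 14-16:

```python
import numpy as np
import pandas as pd
import semopy

# Hypothetical data with one column per indicator.
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(336, 7)),
                  columns=["NW", "EV", "ACL", "PA", "UW", "WS", "TU"])

# TR = transcription factor, NC = narrative competence factor.
desc = "TR =~ NW + EV + ACL + PA\nNC =~ UW + WS + TU"

model = semopy.Model(desc)
model.fit(df)
stats = semopy.calc_stats(model)  # chi2, CFI, TLI, RMSEA, among others
print(stats[["DoF", "chi2", "CFI", "TLI", "RMSEA"]])
```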

The CFA results for each form (i.e., A, B, and C) will be explained separately.

Form A
The CFA results of the Form A two-factor model are presented in Figure 14. The fit indices indicated an excellent fit of the model to the data (χ² = 10.61, df = 13, p = 0.64; χ²/df = 0.81; CFI = 1.00; TLI = 1.002; NFI = 0.99; NNFI = 1.002; MFI = 1.007; RMSEA = 0.00, 90% CI = 0.00-0.04; SRMR = 0.03).

Figure 11: Scree plot for the exploratory factor analysis: Form A.

The model fit evaluation for Form A of the CBM demonstrates a strong alignment between the proposed model and the observed data. The chi-square statistic (χ² = 10.61) with 13 degrees of freedom (df = 13) yields a p value of 0.64, indicating a robust model fit. The chi-square-to-degrees-of-freedom ratio (χ²/df = 0.81) falls within the expected range, confirming a favorable model fit. Additionally, the goodness-of-fit indices, including the comparative fit index (CFI), Tucker-Lewis index (TLI), normed fit index (NFI), nonnormed fit index (NNFI), McDonald's fit index (MFI), root mean square error of approximation (RMSEA), and standardized root mean square residual (SRMR), collectively indicate a precise model fit. In particular, the CFI has a perfect value of 1.00, while the RMSEA has a low value of 0.00, with a 90% confidence interval ranging from 0.00 to 0.04. In summary, the results for Form A of the CBM point to an exemplary model fit: the chi-square statistic, p value, and the array of fit indices collectively underline the close alignment between the model and the observed data, providing robust support for the suitability of Form A within our research context. The coefficient omega was ω = 0.78.

Form B
The CFA results for the Form B two-factor model are presented in Figure 15. The fit indices indicated a good fit of the model to the data (χ² = 28.60, df = 13, p = 0.01; χ²/df = 2.2; CFI = 0.99; TLI = 0.98; NFI = 0.97; NNFI = 0.98; MFI = 0.98; RMSEA = 0.05, 90% CI = 0.02-0.08; SRMR = 0.04).

Figure 12: Scree plot for the exploratory factor analysis: Form B.

The model fit assessment for Form B of the CBM indicates a reasonably good alignment between the proposed model and the observed data. The chi-square statistic (χ² = 28.60) with 13 degrees of freedom (df = 13) yields a p value of 0.01; although this value is statistically significant, the chi-square test is sensitive to sample size, and the chi-square-to-degrees-of-freedom ratio (χ²/df = 2.2) falls within the range conventionally regarded as acceptable. Various goodness-of-fit indices underline a reasonable model fit. In particular, the CFI reported a high value of 0.99, while the RMSEA registered a value of 0.05, with a 90% confidence interval ranging from 0.02 to 0.08. In summary, the results for Form B of the CBM indicate a satisfactory model fit: the chi-square-to-degrees-of-freedom ratio and the fit indices collectively support a reasonably strong alignment between the model and the observed data, providing robust evidence for the suitability of Form B within our research context. The coefficient omega was ω = 0.86.

Form C
The CFA results for the Form C two-factor model are depicted in Figure 16. The fit indices suggest an excellent fit of the model to the data (χ² = 19.85, df = 13, p = 0.09; χ²/df = 1.52; CFI = 0.99; TLI = 0.99; NFI = 0.98; NNFI = 0.99; MFI = 0.99; RMSEA = 0.03, 90% CI = 0.00-0.06; SRMR = 0.03).

Figure 13: Scree plot for the exploratory factor analysis: Form C.

The model fit assessment for Form C of the CBM suggests a strong alignment between the model and the observed data. The chi-square statistic (χ² = 19.85) with 13 degrees of freedom (df = 13) yields a p value of .09; because this value is above the conventional .05 threshold, the chi-square is non-significant, indicating an acceptable model fit. The chi-square-to-degrees-of-freedom ratio (χ²/df = 1.52) falls within the anticipated range, indicating a reasonable model fit. Several goodness-of-fit indices corroborate a strong model fit. Specifically, the CFI has a robust value of 0.99, while the RMSEA has a value of 0.03, with a 90% confidence interval ranging from 0.00 to 0.06. In summary, the results for Form C of the CBM suggest a robust model fit: the chi-square statistic, p value, and fit indices collectively support a strong alignment between the model and the observed data, providing compelling evidence for the suitability of Form C within our research context. The coefficient omega was ω = 0.82.

In general, the omega values, which assess internal consistency while taking into account the multidimensional nature of the CBM, show that all three versions (Forms A, B, and C) demonstrate robust reliability. An omega value above 0.70 is typically considered acceptable in most research contexts. Therefore, in each of the three versions, the CBM appears to be internally consistent, indicating that the indicators reliably measure the constructs the tool is designed to assess. These omega values provide an additional measure of CBM quality, complementing the fit analyses reported above. Collectively, these results support the appropriateness of all three forms of the CBM in the context of the present research.
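
For reference, composite reliability (McDonald's omega) for a factor is computed from the standardized loadings of its indicators and their residual variances:

```latex
\omega = \frac{\left(\sum_{i} \lambda_i\right)^{2}}
              {\left(\sum_{i} \lambda_i\right)^{2} + \sum_{i} \operatorname{Var}(\varepsilon_i)}
```

This is the formula underlying the values of ω = .78, .86, and .82 reported above, all of which clear both the ω ≥ .60 criterion listed in the CFA section and the conventional .70 benchmark.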

The multidimensional structure of the tool was confirmed. The tasks included in the application loaded on two factors: 1) the phonological awareness, name writing, alphabetic letter copying, and expressive vocabulary indicators loaded on the transcription factor, and 2) the T-units, unique words, and word sequences loaded on the narrative competence factor.

Finally, receiver operating characteristic (ROC) analysis was performed to evaluate the diagnostic accuracy of the application based on the two factors derived from the CFA. A composite score, the Omnibus POMP Score, was generated to capture both factors: Transcription and Narrative Competence (TRNC). The standardized EGWA-K was used as the gold standard for testing the accuracy of each diagnostic measure (i.e., factor). The students were classified into two groups: a) at-risk children, with scores at or below the 20th percentile on the standardized writing test EGWA-K51 (n = 147), and b) typically achieving children, with scores above the 20th percentile on the same test (n = 107). The area under the ROC curve (AUC > .70), sensitivity (> .70), and specificity (> .80) were explored60. In terms of diagnostic accuracy, Form A exhibited an area under the curve (AUC) of 71.18%, a sensitivity of 70.47%, and a specificity of 58.69%. Form B had a superior AUC of 75.43%, in conjunction with a sensitivity of 71.02% and a specificity of 70.21%. Moreover, Form C demonstrated a notably robust AUC of 82.03%, with a sensitivity of 75.70% and a specificity of 72.34% (Figure 17). These outcomes collectively underscore an evident increase in the diagnostic accuracy of the CBM over the course of the academic year.
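
A sketch of this kind of ROC analysis using scikit-learn follows (the package is assumed available, and the data below are simulated, not study data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
# Hypothetical labels: 1 = at risk (at or below the 20th percentile on the
# EGWA-K), 0 = not at risk.
y_true = rng.integers(0, 2, size=254)
# Hypothetical composite TRNC scores; at-risk children tend to score lower.
trnc = rng.normal(size=254) - 0.8 * y_true

# Lower scores signal risk, so the negated score is used as the risk signal.
auc = roc_auc_score(y_true, -trnc)
fpr, tpr, thresholds = roc_curve(y_true, -trnc)
print(f"AUC = {auc:.2f}")  # the study reports .71, .75, and .82 for Forms A-C
```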

Figure 14: Confirmatory factor analysis of Form A. Note. NW = Name writing; EV = Expressive vocabulary; ACL = Alphabetic copying letters; PA = Phonological awareness; TU = T-units; UW = Unique words; WS = Word sequence; TR = Transcription factor; NC = Narrative competence factor.

Figure 15: Confirmatory factor analysis of Form B. Note. NW = Name writing; EV = Expressive vocabulary; ACL = Alphabetic copying letters; PA = Phonological awareness; TU = T-units; UW = Unique words; WS = Word sequence; TR = Transcription factor; NC = Narrative competence factor.

Figure 16: Confirmatory factor analysis of Form C. Note. NW = Name writing; EV = Expressive vocabulary; ACL = Alphabetic copying letters; PA = Phonological awareness; TU = T-units; UW = Unique words; WS = Word sequence; TR = Transcription factor; NC = Narrative competence factor.

Figure 17: ROC curve analysis.

Discussion

This study investigated the framework of a CBM in Spanish kindergarten students using a literature-informed model, examining how transcription skills and oral narrative abilities impact observable indicators throughout the academic year. The study highlights the lack of technology-driven CBM tools tailored for Spanish, hindering early writing progress assessment. To address this gap and emphasize the importance of identifying potential writing difficulties in young learners, a multimedia assessment protocol for kindergarten students was introduced.

By employing a comprehensive model that delves beyond a simplistic view of writing, the study conducted a multifaceted examination of early writing abilities. This model, known as the not-so-simple view of writing10,11, recognizes the roles of transcription skills (e.g., letter formation, spelling) and linguistic competencies. The primary aim was to introduce a multimedia assessment protocol designed to identify writing difficulties and to explore its internal structure and diagnostic accuracy, thus enhancing early intervention strategies grounded in a robust theoretical framework.

The early years of formal schooling, especially kindergarten, mark a crucial period for identifying and offering specific support to children facing writing obstacles62. Challenges with writing in the early school years can impede a child's ability to keep pace with peers, leading to academic setbacks and diminished self-esteem63. Consequently, it is imperative for educators and caregivers to possess a comprehensive understanding of potential indicators of writing difficulties and to enact suitable interventions that bolster children's developmental progress64.

This study aimed to examine the factorial structure of the CBM among Spanish kindergarten students. Transcription skills and narrative competence were identified as latent factors potentially accounting for the variance observed in each task across three different time points throughout the academic year. Currently, there is a notable absence of technology-driven CBM tools available in Spanish for screening and monitoring early writing progress. Recognizing the importance of identifying young children who may encounter writing difficulties and the lack of digital resources for Spanish-speaking populations, we introduced a multimedia assessment protocol for kindergarten students.

The present study has shown that the application is a valid and reliable tool. Considering composite reliability, we can conclude that the CBM shows good reliability across the three forms (A, B, and C), with values above 0.70 in all cases. The results from the ROC analyses were promising, as the AUCs ranged from 0.71 to 0.82 across the forms, indicating acceptable to excellent accuracy. It is important to note that we relied on a single measure as the gold standard, specifically focused on writing skills, which is a relatively limited approach. To more fully reflect the content of the criterion being investigated, we believe that classification accuracy could be enhanced by incorporating additional standardized assessments in future studies.

Additionally, the results showed adequate indices of concurrent and predictive validity, with all correlations being statistically significant (p < 0.01). These results emphasize the strong concurrent validity of the CBM, highlighted by consistent correlations among the three forms and the standardized writing measure (EGWA-K). The findings also indicate the significant predictive capacity of the scale, as evidenced by its correlation with teachers' assessments of students' curricular competence (TRS) at the end of the academic year. Several studies have examined the accuracy of teacher ratings. While teacher ratings of emergent literacy skills have shown mixed results65,66, they can still provide valuable insights. Cabell et al.65 reported that teacher ratings differentiated children with lower literacy skills but were insufficient for reliably identifying at-risk children. However, Coker and Ritchey66 showed that teacher ratings were accurate in identifying at-risk children on writing measures. Additionally, Gray et al.67 validated the usefulness of teacher-reported indicators for monitoring oral language skills. Despite these limitations, teacher judgments can offer important information, especially when used alongside other assessments. In summary, the magnitude of the correlations found substantiates the tool's ability to assess and predict writing skill development in kindergarten-aged children, highlighting its potential for implementation in educational research and practice. Diagnostic accuracy was also found to be adequate, as the CBM was able to distinguish between at-risk and non-risk students.

The CBM, designed to detect early signs of writing learning challenges in kindergarten children, faces various constraints. Limited access to technology, especially in resource-limited settings, may hinder its deployment, reducing its effectiveness. Additionally, the application's efficacy relies on the digital literacy of both children and educators; inadequate familiarity with digital tools can undermine its functionality. Moreover, an incomplete representation of cultural and linguistic nuances in the assessment may compromise its validity, potentially affecting result accuracy. Factors like internet connectivity, device calibration, and test standardization could also influence reliability and validity. The absence of direct human interaction may limit the application's ability to understand individual children's needs adequately, hindering the personalized support that is crucial in early learning environments. Additionally, some assessment components may lack cultural sensitivity, hindering equitable detection of learning difficulties across diverse groups.

Regular updates are crucial to meet evolving user needs and sustain effectiveness; neglecting them may render the application outdated or less effective in identifying and addressing writing challenges in kindergarten children. Addressing these limitations during design, development, and implementation is essential to optimize the tool's usefulness and efficacy in educational contexts.

In summary, we conclude that the assessment model shows an excellent fit at the fall, winter, and spring time points for Spanish kindergarten students. These results indicate the good construct validity of the CBM, which allows early writing skills to be measured. This study makes two main contributions. First, the results support the relationship among the proposed measures (i.e., tasks) as observable indicators of the latent factors of transcription and narrative competence. Second, the results support the use of the CBM for assessing transcription skills and narrative competence. The CBM will allow teachers to identify Spanish students struggling in writing and to monitor their progress throughout the school year. Furthermore, teachers can use the information collected through the CBM to determine the most appropriate intervention strategies for an individual child. Future research should explore the growth trajectories and the longitudinal factorial invariance of the CBM.
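For readers who wish to run this kind of analysis themselves, the sketch below specifies a two-factor measurement model (transcription and narrative competence) in lavaan-style syntax using the Python semopy package. The package choice, column names, and file name are assumptions made for illustration; the source does not state which software the authors used.

```python
# A minimal sketch of the two-factor CFA described above, using the
# semopy package (an assumption; not necessarily the authors' tooling).
# Column names are hypothetical stand-ins for the CBM task scores.
import pandas as pd
import semopy

model_desc = """
transcription =~ name_writing + letter_copying + phon_awareness
narrative    =~ story_productivity + story_quality
transcription ~~ narrative
"""

# df would hold one row per child and one column per task score,
# loaded separately for the fall, winter, and spring waves.
df = pd.read_csv("cbm_fall_wave.csv")  # hypothetical file name

model = semopy.Model(model_desc)
model.fit(df)
print(model.inspect())           # loadings and the factor covariance
print(semopy.calc_stats(model))  # fit indices such as CFI and RMSEA
```

Re-fitting the same model description on each wave's data, and then testing longitudinal invariance, would follow the roadmap outlined above.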

Disclosures

The authors listed above certify that there are no financial interests or other conflicts of interest associated with the present study.

Acknowledgements

We gratefully acknowledge the support of the Spanish government through its Plan Nacional I+D+i (R+D+i National Research Plan, Spanish Ministry of Economy and Competitiveness), Grant PID2019-108419RB-100 funded by MCIN/AEI/10.13039/501100011033, with the first author as the principal investigator. We also thank the Unidad de Audiovisuales ULL team for their participation in the production of the video.

Materials

Name: Indicadores de Progreso de Aprendizaje en Escritura para Educación Infantil (T-IPAE-EI)
Company: Universidad de La Laguna ® Copyright, 2023
Catalog Number: C826ED53A500DC82D7A7CD0F03C136CDB8F5A8E41D41750052469CA4CC0E11F8
Comments: none

References

1. World Health Organization (WHO). Mental health: a call for action by World Health Ministers. World Health Organization (2001).
2. Berninger, V. Frames of Reference for the Assessment of Learning Disabilities: New Views on Measurement Issues. Lyon, G. R. (ed.), Paul H. Brookes Publishing Co., 419-439 (1994).
3. Evaluación general de diagnóstico 2010. Educación Secundaria Obligatoria. Segundo curso. Informe de resultados [General diagnostic assessment 2010. Compulsory secondary education. Second year. Results report]. Ministerio de Educación (2011).
4. Evaluación general de diagnóstico 2009. Marco de la evaluación [General diagnostic assessment 2009. Framework for the evaluation]. Ministerio de Educación (2009).
5. Ley Orgánica 8/2013, de 9 de diciembre, para la mejora de la calidad educativa [Organic Law 8/2013, of December 9, for the improvement of educational quality]. Boletín Oficial del Estado [The Official State Gazette] (2023).
6. Panorama de la Educación. Indicadores de la OCDE 2021. Informe español [Education at a Glance. OECD Indicators 2021. Spanish report]. Ministerio de Educación y Formación Profesional (2021).
7. Juel, C., Griffith, P. L., Gough, P. B. Acquisition of literacy: A longitudinal study of children in first and second grade. J Educ Psychol. 78, 243-255 (1986).
8. Juel, C. Learning to read and write: A longitudinal study of 54 children from first through fourth grades. J Educ Psychol. 80, 437-447 (1988).
9. Berninger, V. W., Amtmann, D. Handbook of Learning Disabilities. Swanson, H. L., Harris, K. R., Graham, S. (eds.), 345-363 (2003).
10. Berninger, V. W., Winn, W. D. Handbook of Writing Research. MacArthur, C. A., Graham, S., Fitzgerald, J. (eds.), The Guilford Press, 96-114 (2006).
11. Berninger, V. W. Coordinating transcription and text generation in working memory during composing: automatic and constructive processes. LDQ. 22, 99-112 (1999).
12. Graham, S., Berninger, V. W., Abbott, R. D., Abbott, S. P., Whitaker, D. Role of mechanics in composing of elementary school students: A new methodological approach. J Educ Psychol. 89, 170-182 (1997).
13. Kent, S., Wanzek, J., Petscher, Y., Al Otaiba, S., Kim, Y.-S. Writing fluency and quality in kindergarten and first grade: the role of attention, reading, transcription, and oral language. Reading and Writing. 27, 1163-1188 (2014).
14. Kim, Y.-S., et al. Componential skills of beginning writing: An exploratory study. Learn Individ Diff. 21, 517-525 (2011).
15. Kim, Y.-S., Al Otaiba, S., Wanzek, J. Kindergarten predictors of third grade writing. Learn Individ Diff. 37, 27-37 (2015).
16. Garcia, J. N., de Caso, A. M. Changes in writing self-efficacy and writing products and processes through specific training in the self-efficacy beliefs of students with learning disabilities. LDCJ. 4, 1-27 (2006).
17. Graham, S., Schwartz, S. S., MacArthur, C. A. Knowledge of writing and the composing process, attitude toward writing, and self-efficacy for students with and without learning disabilities. J Learn Disabil. 26, 237-249 (1993).
18. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. American Psychiatric Association (2013).
19. Graham, S., Harris, K. R., Fink, B. Is handwriting causally related to learning to write? Treatment of handwriting problems in beginning writers. J Educ Psychol. 92, 620-633 (2000).
20. Gillis, M. B., Moats, L. Theme editors' introduction: The role of phonology and language in learning to read. Perspect Lang Liter. 46, 7-9 (2020).
21. Puranik, C. S., Lonigan, C. J. Name-writing proficiency, not length of name, is associated with preschool children's emergent literacy skills. Early Child Res Q. 27, 284-294 (2012).
22. Rowe, M. L., Goldin-Meadow, S. Early gesture selectively predicts later language learning. Dev Sci. 12, 182-187 (2009).
23. Boudreau, D. Narrative abilities: Advances in research and implications for clinical practice. Topics in Language Disorders. 28, 99-114 (2008).
24. Rodríguez, C., Jiménez, J. E., Balade, J. The impact of oral language and transcription skills on early writing production in kindergarteners: productivity and quality. Early Child Educ. (2024).
25. Schatschneider, C., Petscher, Y., Williams, K. M. Solving problems in the teaching of literacy. Achieving excellence in preschool literacy instruction. Justice, L. M., Vukelich, C. (eds.), Guilford Press, 304-316 (2008).
26. Glover, T. A., Albers, C. A. Considerations for evaluating universal screening assessments. J Sch Psychol. 45, 117-135 (2007).
27. Lembke, E., Deno, S. L., Hall, K. Identifying an indicator of growth in early writing proficiency for elementary school students. Assessment for Effective Intervention. 28, 23-35 (2003).
28. Balade, J., Rodríguez, C., Jiménez, J. E. Curriculum-based measurement for early writing struggles in kindergarten: a systematic review. Psicol Educ. 30, 101-110 (2024).
29. Neumann, M. M., Neumann, D. L. Touch screen tablets and emergent literacy. Early Child Educ J. 42, 231-239 (2014).
30. Blumenthal, S., Blumenthal, Y. Tablet or paper and pen? Examining mode effects on German elementary school students' computation skills with curriculum-based measurements. IJEM. 6, 669-680 (2020).
31. Gil, V., de León, S. C., Jiménez, J. E. Universal screening for writing risk in Spanish-speaking first graders. Reading & Writing Quarterly. 37, 117-135 (2021).
32. Haleem, A., Javaid, M., Qadri, M. A., Suman, R. Understanding the role of digital technologies in education: A review. Sustainable Operations and Computers. 3, 275-285 (2022).
33. Bremer, C., Tillmann, A. Proceedings of DeLFI Workshops 2014, co-located with the 12th e-Learning Conference of the German Computer Society. CEUR Workshop Proceedings 1227, 156-163 (2014).
34. Genz, F., Bresges, A. Tablets in Schule und Unterricht [Tablets in school and lessons]. Springer VS, 63-86 (2017).
35. Redecker, C., Johannessen, Ø. Changing assessment - toward a new assessment paradigm using ICT. Eur J Educ. 48, 79-96 (2013).
36. Fuchs, L. S. The past, present, and future of curriculum-based measurement research. Sch Psychol Rev. 33, 188-192 (2004).
37. Förster, N., Souvignier, E. Learning progress assessment and goal setting: Effects on reading achievement, reading motivation and reading self-concept. Learning and Instruction. 32, 91-100 (2014).
38. Souvignier, E., Förster, N., Schulte, E. Lernverlaufsdiagnostik [Progress monitoring]. Hasselhorn, M., Schneider, W., Trautwein, U. (eds.). 12, 221-237 (2014).
39. Lissitz, R. Informing Assessment Practice: Insights from Research and Implementation. Information Age Publishing, 1-20 (2014).
40. McHenry, M. S., et al. The current landscape and future of tablet-based cognitive assessments for children in low-resourced settings. PLOS Digit Health. 2, e0000196 (2023).
41. World Health Organization. mHealth: New horizons for health through mobile technologies: second global survey on eHealth. World Health Organization (2011).
42. Vedechkina, M., Borgonovi, F. A review of evidence on the role of digital technology in shaping attention and cognitive control in children. Front Psychol. 12 (2021).
43. Neumann, M. M., Neumann, D. L. The use of touch-screen tablets at home and pre-school to foster emergent literacy. J Early Child Lit. 17, 203-220 (2017).
44. Chu, V., Krishnan, K. Quantitative assessment of prewriting skills in children: the development and validation of a tablet assessment tool. Percept Mot Skills. 129, 554-569 (2022).
45. Dui, L. G., et al. A tablet app for handwriting skill screening at the preliteracy stage: instrument validation study. JMIR Serious Games. 8, e20126 (2020).
46. Neumann, M. M., Worrall, S., Neumann, D. L. Validation of an expressive and receptive tablet assessment of early literacy. J Res Technol Educ. 51, 326-341 (2019).
47. Hosmer, D., Lemeshow, S., Sturdivant, R. X. Applied Logistic Regression. John Wiley & Sons, Hoboken (2013).
48. Johnson, E. S., Jenkins, J. R., Petscher, Y., Catts, H. W. How can we improve the accuracy of screening instruments? Learning Disabilities Research & Practice. 24, 174-185 (2009).
49. Klingbeil, D. A., McComas, J. J., Burns, M. K., Helman, L. Comparison of predictive validity and diagnostic accuracy of screening measures of reading skills. Psychol Sch. 52, 500-514 (2015).
50. Jiménez, J. E., Gil, V. Modelo respuesta a la intervención: Un enfoque preventivo para el abordaje de las dificultades de aprendizaje [Response to intervention model: A preventive approach to addressing specific learning disabilities]. Jiménez, J. E. (ed.), Pirámide (2019).
51. Jiménez, J. E., Rodríguez, C., Balade, J., et al. Identifying kindergarteners at risk of writing difficulties based on foundational literacy skills. Reading and Writing: An Interdisciplinary Journal. Advance online publication. https://doi.org/10.1007/s11145-024-10518-7 (2024).
52. Graham, S., Struck, M., Santoro, J., Berninger, V. W. Dimensions of good and poor handwriting legibility in first and second graders: motor programs, visual-spatial arrangement, and letter formation parameter setting. Dev Neuropsychol. 29, 43-60 (2006).
53. Schneck, C. M., Henderson, A. Descriptive analysis of the developmental progression of grip position for pencil and crayon control in nondysfunctional children. Am J Occup Ther. 44, 893-900 (1990).
54. Kline, R. B. Principles and Practice of Structural Equation Modeling. 2nd edition, Guilford Press (2005).
55. Bartlett, M. S. A note on the multiplying factors for various chi square approximations. J R Stat Soc Series B Stat Methodol. 16, 296-298 (1954).
56. Kaiser, H. F. A second generation little jiffy. Psychometrika. 35, 401-415 (1970).
57. Cattell, R. B. The scree test for the number of factors. Multivariate Behavioral Research. 1, 245-276 (1966).
58. Stevens, J. Applied Multivariate Statistics for the Social Sciences. 5th edition, Routledge (2009).
59. Henson, R. K., Roberts, J. K. Use of exploratory factor analysis in published research. Educ Psychol Meas. 66, 393-416 (2006).
60. Zainudin, A. A Handbook on Structural Equation Modeling. MPWS Rich Resources, 54-73 (2014).
61. Brown, T. A. Confirmatory Factor Analysis for Applied Research. The Guilford Press (2015).
62. Thomas, L. J. G., et al. The early writing skills of children identified as at-risk for literacy difficulties. Early Child Res Q. 51, 392-402 (2020).
63. Dombek, J. L., Al Otaiba, S. Curriculum-based measurement for beginning writers (K-2). Intervention in School and Clinic. 51, 276-283 (2016).
64. Smolkowski, K., Cummings, K. D. Evaluation of diagnostic systems. Assessment for Effective Intervention. 41, 41-54 (2015).
65. Cabell, S. Q., Justice, L. M., Zucker, T. A., Kilday, C. R. Validity of teacher report for assessing the emergent literacy skills of at-risk preschoolers. Lang Speech Hear Serv Sch. 40, 161-173 (2009).
66. Coker, D. L., Ritchey, K. D. Universal screening for writing risk in kindergarten. Assess Eff Interv. 39, 245-256 (2014).
67. Gray, S., et al. Can a teacher-reported indicator be used for population monitoring of oral language skills at school entry? Int J Speech Lang Pathol. 20, 447-457 (2018).
