Method Article
This protocol presents a step-by-step method for using a valid and reliable writing-focused application designed for kindergarten-level education. It operates as a curriculum-based assessment tool that focuses on evaluating early writing skills in young learners.
Kindergarten writing involves acquiring fundamental skills, such as letter formation, phonemic awareness, and the gradual use of written language to express ideas. In this context, tablet-based curriculum assessments present new opportunities for teaching and evaluating these early writing abilities. To the best of our knowledge, no tool with these features is currently available for Spanish-speaking children. Therefore, the primary objective of this study was to present a tablet-based protocol for screening at-risk young writers. The protocol is designed for kindergarten-level education and serves as a curriculum-based assessment tool focused on early writing skills. The user-friendly application incorporates interactive tasks and exercises, including assessments of phonological awareness, name writing, alphabet letter copying fluency, and oral narrative skills, together covering multiple aspects of writing. The application is aligned with kindergarten curriculum objectives, ensuring that assessments adhere to educational standards. By providing educators with a digital platform to assess and enhance students' writing skills, this tool empowers them to make data-driven decisions for effective instruction during the early stages of writing development. Moreover, because it functions as a curriculum-based measurement (CBM), it supports the identification of potential writing challenges in young learners and the continuous monitoring of their progress, enabling early intervention and tailored instruction to optimize writing skill development.
The primary aim of this study is to introduce a multimedia assessment protocol for kindergarten students that identifies potential writing difficulties, and to explore its internal structure and diagnostic accuracy.
At the kindergarten level, writing marks the beginning of a child's literacy journey. It goes beyond scribbles and shapes, representing the foundation for communication and creative expression. The acquisition of writing skills is a matter of paramount concern that requires the attention of parents, educators, children, and scholars alike. The World Health Organization (WHO)1 (2001) designated writing difficulties as a significant impediment to school participation, a pivotal component in a child's normative developmental trajectory. Writing, serving as an indispensable medium, empowers children to articulate their knowledge and thoughts, enabling their participation across a spectrum of academic endeavors2. However, this deceptively elementary undertaking is far from simple and encompasses many intricate processes. Children must invest time and diligence to acquire and refine this nuanced skill. The transition from ideation to orthographic representation necessitates the orchestration of cognitive, linguistic, and motor processes.
Numerous national reports have shed light on the prevalence of students failing to attain the requisite proficiency levels in writing. For instance, in Spain, the Instituto Nacional de Evaluación Educativa [National Institute for Educational Evaluation] (INEE) unveiled the results of its assessment conducted during the 1999-2000 academic year for Primary and Secondary Education3. The evaluation revealed that students' performance in writing not only fell below the expected standards but also underscored substantial shortcomings in classroom instruction and pedagogical preparation. Additionally, Spain's Ministry of Education issued a General Diagnostic Assessment Report in 2009, which further underscored the low levels of foundational skills among fourth-year primary education students4. Article 20.3 of Organic Law 2/2006, dated May 3, on Education (LOE), subsequently amended by Organic Law 8/2013, enacted on December 9, to enhance educational quality (LOMCE), established a diagnostic assessment for third-year primary education5. This legislative provision stipulated that 'educational institutions shall administer an individualized evaluation for all students upon completing the third year of primary education.' The primary objective of this assessment was to ascertain the extent of proficiency in skills, competencies, and abilities encompassing oral and written expression, comprehension, calculation, and problem-solving, thereby assessing the acquisition of competence in linguistic communication and mathematics. The General Framework for this assessment was formulated collaboratively by 14 educational authorities and the Ministry of Education, Culture, and Sport (MECD).
The 2010 comprehensive diagnostic assessment for compulsory secondary education (ESO) yielded low scores in written expression, particularly in the presentation of written work. It also underscored a disquieting educational landscape in Spain, particularly concerning language proficiency6.
Mastering the art of writing is a formidable challenge that demands intricate cognitive processing. Two models of writing have received significant conceptual support: the simple view of writing (SVW)7,8 and the not-so-simple view of writing (NSVW)9,10. The latter, a revision of the former, postulates that transcription skills, such as handwriting fluency and spelling, collaborate with self-regulatory functions, including attention, goal setting, and revision, in addition to working memory, in the nuanced process of text generation. This process involves the dual tasks of generating ideas and transforming those ideas into linguistic propositions intricately linked with oral language skills. Both transcription skills11,12 and oral language proficiency13,14,15 are robustly acknowledged as predictive factors in the early development of writing. Owing to the intricate nature of writing, certain students encounter difficulties in attaining proficiency. Writing disabilities not only detrimentally impact self-efficacy and academic motivation but can also persist into adulthood, negatively influencing future workplace experiences and emotional well-being16,17. The DSM-5 defines learning disabilities in writing as follows: with impairment in written expression (315.2 [F81.81]), skills in spelling, grammar, punctuation, clarity, and organization of written expression are affected. The DSM-5 further specifies that in this disorder the individual may add, delete, or substitute vowels or words; as a result, written work is often illegible or difficult to read. An impairment in writing skills significantly interferes with academic achievement or activities of daily living that require the composition of written text (APA, 2013)18.
The assessment protocol targets key components such as alphabet copying, phonological awareness, name writing, expressive vocabulary, and oral narrative skills. Alphabet copying is defined as the process of having children reproduce letter forms by tracing, copying, or writing over models of individual alphabet letters. This task relies predominantly on motor reproduction skills rather than long-term memory, with enhanced performance correlating with the automation of motor reproduction routines19. Phonological awareness, on the other hand, assesses children's understanding of the sound structure of language. Specifically, it involves the explicit knowledge that words can be segmented into a sequence of phonemes and that phonemes can be blended into words20. Name writing, according to Puranik and Lonigan21, is defined as the ability to reproduce one's own name in writing, which is considered an important developmental milestone in early literacy and the first step toward mastering the alphabetic principle. Expressive vocabulary refers to the words a child can use to communicate their thoughts, feelings, and ideas through spoken language. It is a crucial aspect of language development in early childhood22. Finally, oral narrative skills encompass the ability of young children to understand, produce, and convey stories through spoken language. This involves sequencing events in a logical order, providing relevant details, and using appropriate language structures to recount personal experiences or fictional tales23. These skills are typically assessed by having children retell stories or generate narratives based on picture prompts, with measures evaluating elements such as story structure, use of evaluative language, temporal and causal connections, and overall coherence24.
Early assessment protocols are crucial for detecting students at risk of writing disabilities before the problem develops. Extensive research has shown that early detection leads to a better prognosis. In the context of the Response to Intervention (RTI) model, Curriculum-Based Measurements (CBMs) are often used for universal screening and progress monitoring because they are quick, practical, reliable, valid, accurate, and sensitive to change over time25,26,27. The CBM process involves evaluation at three distinct points during the school year (typically in fall, winter, and spring). This process makes it possible to identify students at risk and to decide whether the intervention should be modified or whether progress is acceptable. However, there is limited scientific literature on early detection tools for writing processes compared with other academic areas, such as reading or mathematics28.
Technological advances in education
The use of touchscreen tablets among young children is increasing both at home and in early childhood education centers29. A review of the most recent scientific literature highlights the relevance and advantages of CBMs delivered through digital means (i.e., tablets)30. In recent years, the usability and applicability of CBMs have garnered significant attention, driven by inquiries into the successful implementation of progress monitoring tools within educational practice31. In this context, the significance of modern digital media in the educational domain is gaining recognition32. A curriculum-based assessment tool with the dual purpose of screening and progress monitoring using tablet technology would offer numerous advantages for educators33,34,35. With this type of digital medium, the advantages outweigh the disadvantages, both for assessments aimed at the early detection of learning difficulties (i.e., screening) in the core academic areas of reading, writing, and mathematics and for assessments designed to monitor students' learning progress in these domains. Digital assessments (whether conducted via computers or tablets) increase time efficiency: the data collection, presentation, and documentation tasks typically performed by teachers can be automated and executed far more efficiently in terms of resource utilization36. Moreover, supporting data interpretation becomes much more straightforward. In addition, research indicates that tablet-based assessments are well received by teachers and result in greater learning gains37,38. Despite these potential advantages, however, researchers have highlighted several drawbacks that should be considered.
Bremer and Tillmann39 emphasize the significant costs associated with acquiring and maintaining tablets, as well as concerns about exacerbating existing inequalities in access to technology. Tablets also often have limited processing power, storage capacity, and input capabilities compared with laptops, which can hinder demanding computational tasks or extended typing39; these constraints may limit the effectiveness of tablet-based assessments in scenarios requiring advanced data analysis or extensive written responses. Additional technical challenges, such as small screen size, connectivity issues, battery limitations, and compatibility concerns with assessment software or platforms, can further hinder usability40,41. Moreover, engagement and distraction risks pose significant concerns, as digital assessments may struggle to maintain children's focus during testing42. Careful consideration of these factors is crucial when utilizing tablets for educational evaluation.
Overall, in light of the information presented in this report, the feasibility of CBMs utilizing tablets in the classroom appears promising and is increasingly needed in a digital society. This assertion is further substantiated by findings indicating that many children are gaining access to tablets both at school and at home43. In recent years, several tablet-based apps have been developed for the assessment of early literacy skills44,45,46. Neumann et al.46 evaluated the psychometric properties of tablet-based literacy assessments, focusing on both validity and reliability. They tested an app designed to assess expressive and receptive literacy skills in a sample of 45 children aged 3 to 5 years. The children used the app on a tablet to complete assessments related to alphabet and word recognition skills. The results indicated that tablet-based assessments utilizing both expressive and receptive response formats offer a valid and reliable way to measure early literacy skills in young children. The purpose of the study conducted by Chu and Krishnan44 was to develop and determine the validity of a computerized tool called the Quantitative Assessment of Prewriting Skills (QAPS) for assessing the pattern of children's copying to measure their visual-motor skills. The authors demonstrated that the QAPS is feasible and adequate for measuring and distinguishing the drawing skills of typically developing children and children with visual motor deficits. Similarly, Dui et al.45 designed a novel tablet-based app, Play Draw Write, which was tested among healthy children with mastered handwriting (third graders) and those at a preliterate age (kindergartners). Their findings provide evidence for the effectiveness of tablet technology in quantitatively evaluating handwriting production. Additionally, they propose that a tablet-based application holds the potential for identifying handwriting difficulties at an early stage.
Nevertheless, it is worth noting that none of these studies have evaluated the diagnostic accuracy required for identifying children at risk of writing-related learning disabilities. Therefore, precise classification data are needed to validate the effectiveness of such a screening tool. The classification accuracy of a screening tool is determined by how well it identifies students as at risk or not at risk compared with a later writing outcome. Researchers often report the area under the curve (AUC), a measure of overall classification accuracy. The AUC serves as a diagnostic performance index by combining sensitivity and specificity into a single measure. It classifies students as at risk or not at risk, using their performance on another standardized test as the criterion. The following intervals were used to interpret the AUC: high, AUC > .90; good, AUC = .80-.90; moderate, AUC = .70-.79; and low, AUC = .50-.6947.
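As a minimal, hypothetical sketch (not part of the original protocol or analysis code), the AUC can be computed nonparametrically from the score distributions of the two groups and then mapped onto the interpretation bands above; the function names and sample scores are our own illustrations.

```python
def auc_from_scores(at_risk_scores, not_at_risk_scores):
    """Nonparametric AUC: the probability that a randomly chosen
    not-at-risk student scores higher than a randomly chosen at-risk
    student, counting ties as half (Mann-Whitney U / (n1 * n2))."""
    wins = 0.0
    for r in at_risk_scores:
        for n in not_at_risk_scores:
            if n > r:
                wins += 1.0
            elif n == r:
                wins += 0.5
    return wins / (len(at_risk_scores) * len(not_at_risk_scores))

def interpret_auc(auc):
    """Map an AUC value onto the bands used in the text."""
    if auc > 0.90:
        return "high"
    if auc > 0.80:
        return "good"
    if auc >= 0.70:
        return "moderate"
    if auc >= 0.50:
        return "low"
    return "below chance"

# Illustrative scores: higher values indicate stronger writing skills
auc = auc_from_scores([2, 3, 4, 5], [4, 6, 7, 8])  # 0.90625
band = interpret_auc(auc)                          # "high"
```

This rank-based formulation is equivalent to integrating the empirical ROC curve, which is convenient when no parametric assumptions about the score distributions are warranted.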
Screening tools and their respective cutoff scores for risk determination must balance sensitivity and specificity, meaning that increasing sensitivity tends to decrease specificity and vice versa. Sensitivity represents the proportion of students identified as at risk on the screening tool and at risk on the outcome measure (true positives) from all students who scored at risk on the outcome measure (true positives plus false negatives). Essentially, a high sensitivity value indicates that fewer students at risk are missed during the screening process. If the screening tool lacks sensitivity and fails to identify these students, they will not receive the necessary intervention. Conversely, specificity represents the proportion of students accurately identified as not at risk by both the screening tool and the outcome measure (true negatives) among all students classified as not at risk by the outcome measure (including true negatives and false positives). Increased specificity leads to a reduction in false positives. Schools prioritize minimizing false-positive identifications during screening because excessive identification of students as at risk39 (i.e., identifying a large number of false positives) strains resources allocated for intervention services. Although there is no universally agreed-upon standard for acceptable values of these indices, achieving high sensitivity values in universal screening is of paramount importance48,49.
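The definitions above can be sketched as two small helper functions; this is an illustrative example with hypothetical counts, not data from the study.

```python
def sensitivity(true_pos, false_neg):
    """Proportion of truly at-risk students the screener catches:
    TP / (TP + FN)."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of truly not-at-risk students correctly cleared:
    TN / (TN + FP)."""
    return true_neg / (true_neg + false_pos)

# Hypothetical screening outcome for 100 students:
# 20 at risk on the outcome measure (16 flagged by the screener, 4 missed),
# 80 not at risk (68 correctly cleared, 12 falsely flagged).
sens = sensitivity(true_pos=16, false_neg=4)   # 0.80
spec = specificity(true_neg=68, false_pos=12)  # 0.85
```

Raising the cutoff score would flag more students (fewer false negatives, so higher sensitivity) at the cost of more false positives (lower specificity), which is exactly the trade-off described in the text.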
Over the past few years, various paper-and-pencil screening instruments have been designed to assess writing proficiency among Spanish-speaking students at the kindergarten and elementary school levels. Specifically, the Indicadores de Progreso de Aprendizaje en Escritura (IPAE) serves as a CBM tailored to the elementary grades50. In contrast, the Early Grade Writing Assessment in Kindergarten (EGWA-K) specifically targets kindergarten students, providing a Spanish standardized test that assesses fundamental literacy skills. It includes tasks such as transcribing words from images, segmenting pseudowords into phonemes, writing freely chosen words, and narrating a story based on a drawing. These diverse tasks demonstrate high reliability and validity in evaluating early writing abilities51.
However, to our knowledge, no technology-based CBM tools have yet been developed for Spanish language assessment for screening and monitoring learning progress in early-age writing. Recognizing the importance of identifying young children who may struggle with writing and the lack of computerized tools for the Spanish-speaking population, this study aimed to introduce a multimedia assessment protocol for kindergarten students and explore its internal structure and diagnostic accuracy.
The protocol presented here was conducted in accordance with the guidelines provided by the Comité de Ética de la Investigación y Bienestar Animal (Research Ethics and Animal Welfare Committee, CEIBA), Universidad de La Laguna. The data were collected at three different time points, capturing information exclusively from students whose parents, administrations, and schools provided consent.
NOTE: The app used in the protocol is Tablet App Indicadores de Progreso de Aprendizaje en Escritura para Educación Infantil [Basic Early Writing Skills for Kindergarteners] (T-IPAE-K). It includes five tasks: 1) Copying Alphabet Letters, 2) Name Writing, 3) Expressive Vocabulary, 4) Phonological Awareness, and 5) Oral Narrative. A pedagogical agent provides instructions for each task, along with one or two trials (depending on the task) and a demonstration before the testing phase begins. An example of the application protocol for each task is provided below:
1. Experimental setup
Figure 1: CBM main and new student menu.
Figure 2: Tasks before and after completion of the alphabetic copying letter task.
Figure 3: "Did you understand?" screen.
2. Tasks
Figure 4: Demonstration of a student's performance in the alphabetic copying letter task.
Figure 5: Example of a student's performance correction in the alphabetic copying letter task.
Figure 6: An example item of the expressive vocabulary task.
Figure 7: Example of a student's performance correction in the name writing task.
Figure 8: An example item related to phonological awareness. A pedagogical agent says the word aloud, and the vocal key appears in the upper left corner when the student responds.
Figure 9: An example of a prompt in the oral narrative task: "One day I wake up and I can fly".
Figure 10: Correction menu for the oral narrative task.
3. Student evaluation
For this study, 336 Spanish kindergarten students (boys = 163, girls = 173; Mage = 5.3 years (63.6 months), SD = 0.29) were recruited from state and private schools in urban and suburban regions of Santa Cruz de Tenerife. Children with special educational needs were excluded. This included children with sensory impairments, acquired neurological conditions, or other issues traditionally deemed exclusionary criteria for learning disabilities. This information was sourced from the Department of Education of the Canary Islands Government.
The variables of the application were computed as follows: The total number of correctly copied letters was used for the Alphabetic Letter Copy task. The Expressive Vocabulary task was assessed using the total number of correct items. For the Name Writing task, the total number of correctly written names from both parts of the task was recorded. The Phonological Awareness task considered the total number of correct and partially correct responses. The Oral Narrative task included three measures: Unique Words (UW), the total number of correct word sequences, and the total number of terminal units. Finally, an aggregate variable was derived by averaging all the previously computed variables. This new variable encompassed the transcription and narrative competence measures. Descriptive statistics (mean, standard deviation, minimum, maximum, range, skewness, and kurtosis) for Forms A, B, and C are detailed in Table 1. Results indicated a normal distribution of data, with kurtosis and skewness indices below 10.00 and 3.00, respectively54.
Descriptive statistics of T-IPAE-K measures per form

Form A
Measure | n | M | SD | min | max | range | skew | kurtosis
ACL | 322 | 1.43 | 1.63 | 0 | 5 | 5 | 0.91 | -0.44
EV | 320 | 7.30 | 2.74 | 0 | 10 | 10 | -1.35 | 1.11
NW | 320 | 0.48 | 0.99 | 0 | 6 | 6 | 2.98 | 10.33
PA | 310 | 4.01 | 7.57 | 0 | 36 | 36 | 2.38 | 5.13
UW | 326 | 8.01 | 11.17 | 0 | 56 | 56 | 1.47 | 1.91
WS | 326 | 9.05 | 15.99 | 0 | 104 | 104 | 2.62 | 8.18
TU | 325 | 2.08 | 2.99 | 0 | 16 | 16 | 1.65 | 2.83

Form B
Measure | n | M | SD | min | max | range | skew | kurtosis
ACL | 331 | 2.33 | 1.71 | 0 | 5 | 5 | 0.10 | -1.31
EV | 329 | 7.78 | 2.13 | 0 | 10 | 10 | -1.58 | 2.89
NW | 329 | 0.99 | 1.37 | 0 | 7 | 7 | 1.76 | 3.24
PA | 317 | 8.49 | 11.62 | 0 | 36 | 36 | 1.15 | -0.15
UW | 332 | 10.70 | 11.89 | 0 | 53 | 53 | 0.94 | 0.39
WS | 332 | 12.57 | 16.51 | 0 | 91 | 91 | 1.86 | 4.36
TU | 332 | 1.98 | 2.50 | 0 | 13 | 13 | 1.77 | 3.97

Form C
Measure | n | M | SD | min | max | range | skew | kurtosis
ACL | 334 | 3.07 | 1.83 | 0 | 5 | 5 | -0.48 | -1.22
EV | 331 | 8.42 | 2.35 | 0 | 10 | 10 | -2.33 | 4.80
NW | 331 | 1.95 | 2.37 | 0 | 12 | 12 | 1.59 | 2.50
PA | 322 | 12.64 | 12.79 | 0 | 36 | 36 | 0.67 | -1.09
UW | 332 | 11.72 | 11.25 | 0 | 57 | 57 | 1.08 | 1.35
WS | 332 | 13.53 | 15.71 | 0 | 85 | 85 | 1.89 | 4.46
TU | 333 | 2.87 | 2.94 | 0 | 15 | 15 | 1.31 | 2.03

ACL = Alphabetic Copying Letters; EV = Expressive Vocabulary; NW = Name Writing; PA = Phonological Awareness; UW = Unique Words; WS = Word Sequences; TU = T-Units
Table 1: Descriptive statistics of CBM indicators per form.
The concurrent and predictive validity of this assessment was established by administering it alongside a standardized paper-and-pencil writing test, the EGWA-K, and by gathering teachers' assessments using the Teacher Rating Scale (TRS), which evaluates their students' curricular competence. Teachers utilized the TRS questionnaire to assess students' competencies in these skills by rating their skill acquisition or difficulty levels. The results revealed significant correlations between the CBM forms (A, B, and C) and both the EGWA-K and TRS, as shown in Table 2. Specifically, the form administered at the beginning of the academic year (fall) demonstrated a moderate association with both the EGWA-K (r = .38, p < .001) and the TRS (r = .24, p < .001). The mid-year form (winter) showed a stronger correlation with the EGWA-K (r = .42, p < .001) and a more substantial but still moderate correlation with the TRS (r = .33, p < .001). The form administered at the end of the academic year (spring) exhibited the highest correlations with the EGWA-K (r = .48, p < .001) and the TRS (r = .31, p < .001), reflecting the pattern across individual forms (A, B, and C).
Correlation coefficients of Forms A, B, and C: concurrent and predictive validity

Form A
Observed variable | 1 | 2 | 3 | 4 | 5 | 6 | 7 | EGWA-K | Teacher Rating
1. ACL | 1.00 | .05 | .06 | .13* | .04 | .02 | .04 | .19*** | .04
2. EV | | 1.00 | .07 | .12* | .11* | .08 | .11 | .20*** | .18**
3. NW | | | 1.00 | .34*** | .01 | .00 | .00 | .28*** | .21***
4. PA | | | | 1.00 | .05 | .01 | .04 | .28*** | .11
5. UW | | | | | 1.00 | .92*** | .97*** | .20*** | .09
6. WS | | | | | | 1.00 | .92*** | .21*** | .12*
7. TU | | | | | | | 1.00 | .20*** | .09

Form B
Observed variable | 1 | 2 | 3 | 4 | 5 | 6 | 7 | EGWA-K | Teacher Rating
1. ACL | 1.00 | .04 | .05 | .17** | .10 | .08 | .06 | .12* | .02
2. EV | | 1.00 | .21*** | .20*** | .25*** | .22*** | .21*** | .29*** | .27***
3. NW | | | 1.00 | .28*** | .07 | .07 | .03 | .35*** | .11
4. PA | | | | 1.00 | .13* | .12* | .11* | .50*** | .09
5. UW | | | | | 1.00 | .96*** | .93*** | .19*** | .21***
6. WS | | | | | | 1.00 | .95*** | .17** | .19**
7. TU | | | | | | | 1.00 | .17** | .18**

Form C
Observed variable | 1 | 2 | 3 | 4 | 5 | 6 | 7 | EGWA-K | Teacher Rating
1. ACL | 1.00 | .02 | .24*** | .25*** | .08 | .10 | .05 | .21*** | .17**
2. EV | | 1.00 | .08 | .12* | .10 | .12* | .13* | .14** | .10
3. NW | | | 1.00 | .42*** | .15** | .13* | .14* | .47*** | .22***
4. PA | | | | 1.00 | .08 | .07 | .09 | .51*** | .00
5. UW | | | | | 1.00 | .95*** | .94*** | .26*** | .14*
6. WS | | | | | | 1.00 | .94*** | .25*** | .15*
7. TU | | | | | | | 1.00 | .25*** | .14*

Note. *p < .05; **p < .01; ***p < .001; ACL = Alphabetic Copying Letters; EV = Expressive Vocabulary; NW = Name Writing; PA = Phonological Awareness; UW = Unique Words; WS = Word Sequences; TU = T-Units.
Table 2: Correlation coefficients of Forms A, B and C: concurrent and predictive validity
The results of the EFA using parallel analysis revealed a two-factor solution. Both the scree plot and parallel analysis indicated that two factors should be selected (see Figures 11, 12 and 13). All factor loadings were above 0.30 and statistically significant (p < 0.001). One factor was related to transcription skills (TS) (i.e., the number of accurately spelled names, total hits in expressive vocabulary, the total number of copied letters, and the number of correctly isolated phonemes), and another factor was related to Oral Narrative Competence (NC) (i.e., number of unique words in the oral narrative, number of T-units in the oral narrative, and correct word sequences).
This structure was confirmed through confirmatory factor analysis (CFA). Model fit was evaluated using the robust maximum likelihood (RML) estimation method and assessed through the following indices59,61: standardized root mean square residual (SRMR ≤ 0.08), chi-square (χ², p > 0.05), Tucker-Lewis index (TLI ≥ 0.90), comparative fit index (CFI ≥ 0.90), root mean square error of approximation (RMSEA ≤ .06), and composite reliability (ω ≥ .60). Modification indices (MIs) were also examined.
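As an illustration (not part of the original analysis pipeline), the cutoffs above can be encoded as a simple check applied to a set of fit indices; the function and its argument names are our own, and the values passed in the example are the Form A results reported in this article.

```python
def fit_acceptable(chi2_p, srmr, tli, cfi, rmsea, omega):
    """Apply the cutoffs listed in the text to a set of CFA fit indices.
    Returns (overall_ok, per-index detail)."""
    checks = {
        "chi2_p": chi2_p > 0.05,   # nonsignificant chi-square desired
        "SRMR": srmr <= 0.08,
        "TLI": tli >= 0.90,
        "CFI": cfi >= 0.90,
        "RMSEA": rmsea <= 0.06,
        "omega": omega >= 0.60,
    }
    return all(checks.values()), checks

# Form A values reported below: p = .64, SRMR = .03, TLI = 1.00,
# CFI = 1.00, RMSEA = .00, omega = .78
ok, detail = fit_acceptable(0.64, 0.03, 1.00, 1.00, 0.00, 0.78)
```

Note that the chi-square test is sensitive to sample size, so in practice (as in this study) a significant chi-square is usually weighed against the other indices rather than treated as disqualifying on its own.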
The CFA results for each form (i.e., A, B, and C) will be explained separately.
Form A
The CFA results of the Form A two-factor model are presented in Figure 14. The fit indices indicated an excellent fit of the model to the data (χ² = 10.61, df = 13, p = 0.64; χ²/df = 0.81; CFI = 1.00; TLI = 1.002; NFI = 0.99; NNFI = 1.002; MFI = 1.007; RMSEA = 0.00; 90% CI = 0.00-0.04; SRMR = 0.03).
Figure 11: Scree plot for exploratory factor analysis: Form A.
The model fit evaluation for Form A of the CBM demonstrates a strong alignment between the proposed model and the observed data. The chi-square statistic (χ² = 10.61) with 13 degrees of freedom (df = 13) yields a p value of 0.64, indicating a robust model fit. The chi-square-to-degrees-of-freedom ratio (χ²/df = 0.81) falls within the expected range, confirming a favorable model fit. Additionally, the goodness-of-fit indices, including the comparative fit index (CFI), Tucker-Lewis index (TLI), normed fit index (NFI), nonnormed fit index (NNFI), McDonald's fit index (MFI), root mean square error of approximation (RMSEA), and standardized root mean square residual (SRMR), collectively indicate a precise model fit. In particular, the CFI reaches a perfect value of 1.00, while the RMSEA has a low value of 0.00, with a 90% confidence interval ranging from 0.00 to 0.04. In summary, the results for Form A of the CBM point to an excellent model fit. The chi-square statistic, p value, and the array of fit indices collectively underline the close alignment between the model and the observed data, supporting the suitability of Form A within our research context. The coefficient omega was ω = 0.78.
Form B
The CFA results for the Form B two-factor model are presented in Figure 15. The fit indices indicated a good fit of the model to the data (χ² = 28.60, df = 13, p = 0.01; χ²/df = 2.2; CFI = 0.99; TLI = 0.98; NFI = 0.97; NNFI = 0.98; MFI = 0.98; RMSEA = 0.05; 90% CI = 0.02-0.08; SRMR = 0.04).
Figure 12: Scree plot for exploratory factor analysis: Form B.
The model fit assessment for Form B of the CBM indicates a reasonably good alignment between the proposed model and the observed data. The chi-square statistic (χ² = 28.60) with 13 degrees of freedom (df = 13) yields a p value of 0.01; although this indicates a statistically significant chi-square, the test is sensitive to sample size, so this result alone does not rule out a good fit. The chi-square-to-degrees-of-freedom ratio (χ²/df = 2.2) suggests an acceptable model fit within the anticipated range. Various goodness-of-fit indices underline a reasonable model fit. In particular, the CFI reported a high value of 0.99, while the RMSEA registered a value of 0.05, with a 90% confidence interval ranging from 0.02 to 0.08. In summary, the results for Form B of the CBM indicate a satisfactory model fit. The chi-square-to-degrees-of-freedom ratio and the various fit indices collectively support a reasonably strong alignment between the model and the observed data, providing evidence for the suitability of Form B within our research context. The coefficient omega was ω = 0.86.
Form C
The CFA results for the Form C two-factor model are depicted in Figure 16. The fit indices suggest an outstanding fit of the model to the dataset (χ² = 19.85, df = 13, p = 0.09; χ²/df = 1.52; CFI = 0.99; TLI = 0.99; NFI = 0.98; NNFI = 0.99; MFI = 0.99; RMSEA = 0.03; 90% CI = 0.00-0.06; SRMR = 0.03).
Figure 13: Scree plot for exploratory factor analysis: Form C.
The model fit assessment for Form C of the CBM suggests a strong alignment between the model and the observed data. The chi-square statistic (χ² = 19.85) with 13 degrees of freedom (df = 13) yields a p value of .09; because this value exceeds the .05 threshold, the chi-square test does not indicate significant misfit. The chi-square-to-degrees-of-freedom ratio (χ²/df = 1.52) falls within the anticipated range, indicating a reasonable model fit. Several goodness-of-fit indices corroborate a strong model fit. Specifically, the CFI has a robust value of 0.99, while the RMSEA has a value of 0.03, with a 90% confidence interval ranging from 0.00 to 0.06. In summary, the results for Form C of the CBM suggest a robust model fit. The chi-square statistic, p value, and various fit indices collectively support a strong alignment between the model and the observed data, providing compelling evidence for the suitability of Form C within our research context. The coefficient omega was ω = 0.82.
In general, after computing omega values, which assess internal consistency while accounting for the multidimensional nature of the CBM, all three versions (Forms A, B, and C) demonstrate robust reliability. An omega value above 0.70 is typically considered acceptable in most research contexts. Therefore, in each of the three versions, the CBM appears to be internally consistent, indicating that its items reliably measure the constructs the tool is designed to assess. These omega values also provide an additional measure of CBM quality, complementing the fit analyses reported above. Collectively, these results support the appropriateness of all three forms of the CBM in the context of the present research.
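For a congeneric factor with uncorrelated errors, McDonald's omega can be computed from standardized loadings as ω = (Σλ)² / ((Σλ)² + Σ(1 − λ²)). The Python sketch below illustrates the calculation with hypothetical loadings; the actual item loadings appear in Figures 14-16 and are not reproduced here.

```python
def mcdonald_omega(loadings):
    """McDonald's omega from standardized factor loadings, assuming
    uncorrelated errors: (sum of loadings)^2 divided by that quantity
    plus the summed error variances (1 - loading^2)."""
    s = sum(loadings)
    error_var = sum(1.0 - l * l for l in loadings)
    return (s * s) / (s * s + error_var)

# Hypothetical standardized loadings, for illustration only
print(round(mcdonald_omega([0.80, 0.75, 0.70, 0.85]), 2))
```

The formula makes explicit why omega rises with both the number of items and the size of the loadings: the numerator grows with the squared sum of loadings while error variance accumulates only additively.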
The multidimensional structure of the tool was confirmed. The tasks included in the application loaded onto two factors: 1) the phonological awareness, name writing, alphabetic copying letters, and expressive vocabulary indicators loaded on the "transcription" factor, and 2) t-units, unique words, and word sequences loaded on the "narrative competence" factor.
Finally, receiver operating characteristic (ROC) analysis was performed to evaluate the diagnostic accuracy of the application based on the two factors derived from the CFA. A composite score, the Omnibus Pomp Score, was generated to capture both factors: Transcription and Narrative Competence (TRNC). The standardized EGWA-K was used as the gold standard for testing the accuracy of each single diagnostic measure (i.e., factor). The students were classified into two groups: a) at-risk children, with scores at or below the 20th percentile on the standardized writing test EGWA-K51 (n = 147), and b) typically achieving children, with scores above the 20th percentile on the same test (n = 107). The area under the ROC curve (AUC > 0.70), sensitivity (> 0.70), and specificity (> 0.80) were explored60. In terms of diagnostic accuracy, Form A exhibited an area under the curve (AUC) of 0.7118, a sensitivity of 0.7047, and a specificity of 0.5869. Form B had a superior AUC of 0.7543, in conjunction with a sensitivity of 0.7102 and a specificity of 0.7021. Moreover, Form C demonstrated a notably robust AUC of 0.8203, with a sensitivity of 0.7570 and a specificity of 0.7234 (Figure 17). Together, these results indicate that the diagnostic accuracy of the CBM improved over the course of the academic year.
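The classification metrics above can, in principle, be reproduced from per-child composite scores. The Python sketch below shows one standard way to compute AUC (as the probability that a randomly chosen at-risk case scores higher than a randomly chosen typically achieving case), together with sensitivity and specificity at a fixed cutoff; the scores and cutoff are hypothetical, not the study data.

```python
def auc_pairwise(pos_scores, neg_scores):
    """AUC via pairwise comparison: the fraction of (positive, negative)
    pairs in which the positive case scores higher (ties count as 0.5)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

def sensitivity_specificity(pos_scores, neg_scores, cutoff):
    """Sensitivity = true positives / all positives; specificity =
    true negatives / all negatives, classifying a case as positive
    when its score is at or above the cutoff."""
    tp = sum(1 for s in pos_scores if s >= cutoff)
    tn = sum(1 for s in neg_scores if s < cutoff)
    return tp / len(pos_scores), tn / len(neg_scores)

# Hypothetical risk-composite scores (higher = greater risk), illustration only
at_risk = [0.9, 0.8, 0.7]
typical = [0.75, 0.3, 0.5]
print(auc_pairwise(at_risk, typical))
print(sensitivity_specificity(at_risk, typical, 0.65))
```

In practice, the cutoff would be chosen from the ROC curve (e.g., by maximizing Youden's index) rather than fixed in advance, and the direction of the composite score determines which group is treated as the positive class.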
Figure 14: Confirmatory Factor Analysis of Form A. Note. NW = Name writing; EV = Expressive vocabulary; ACL = Alphabetic copying letters; PA = Phonological awareness; TU = T-units; UW = Unique words; WS = Word sequence; TR = Transcription factor; NC = Narrative competence factor.
Figure 15: Confirmatory Factor Analysis of Form B. Note. NW = Name writing; EV = Expressive vocabulary; ACL = Alphabetic copying letters; PA = Phonological awareness; TU = T-units; UW = Unique words; WS = Word sequence; TR = Transcription factor; NC = Narrative competence factor.
Figure 16: Confirmatory factor analysis of Form C. Note. NW = Name writing; EV = Expressive vocabulary; ACL = Alphabetic copying letters; PA = Phonological awareness; TU = T-units; UW = Unique words; WS = Word sequence; TR = Transcription factor; NC = Narrative competence factor.
Figure 17: ROC curve analysis.
This study investigated the framework of a CBM in Spanish kindergarten students using a literature-informed model, examining how transcription skills and oral narrative abilities impact observable indicators throughout the academic year. The study highlights the lack of technology-driven CBM tools tailored for Spanish, a gap that hinders the assessment of early writing progress. To address this gap and emphasize the importance of identifying potential writing difficulties in young learners, a multimedia assessment protocol for kindergarten students was introduced.
By employing a comprehensive model, which delved beyond a simplistic view of writing, the study conducted a multifaceted examination of early writing abilities. This model, also known as the not-so-simple view of writing10,11, recognized the roles of transcription skills (e.g., letter formation, spelling) and linguistic competencies. The primary aim was to introduce a multimedia assessment protocol aimed at identifying writing difficulties and exploring its internal structure and diagnostic accuracy, thus enhancing early intervention strategies based on a robust theoretical framework.
The early years of formal schooling, especially kindergarten, mark a crucial period for identifying and offering specific support to children facing writing obstacles62. Challenges with writing in the early school years can impede a child's ability to keep pace with peers, leading to academic setbacks and diminished self-esteem63. Consequently, it is imperative for educators and caregivers to possess a comprehensive understanding of potential indicators of writing difficulties and to enact suitable interventions that bolster children's developmental progress64.
This study aimed to examine the factorial structure of the CBM among Spanish kindergarten students. Transcription skills and narrative competence were identified as latent factors potentially accounting for the variance observed in each task across three different time points throughout the academic year. Currently, there is a notable absence of technology-driven CBM tools available in Spanish for screening and monitoring early writing progress. Recognizing the importance of identifying young children who may encounter writing difficulties and the lack of digital resources for Spanish-speaking populations, we introduced a multimedia assessment protocol for kindergarten students.
The present study provides evidence that the application is a valid and reliable tool. Considering composite reliability, we can conclude that the CBM shows good reliability across all three forms (A, B, and C), with values above 0.70 in all cases. The results from the ROC analyses were promising, as the AUCs ranged from 0.71 to 0.82 across the forms, indicating acceptable to excellent accuracy. It is important to note that we relied on a single measure as the gold standard, specifically focused on writing skills, which is a relatively limited approach. To accurately reflect the content of the criterion being investigated, we believe that classification accuracy could be enhanced by incorporating additional standardized assessments in future studies.
Additionally, the results showed adequate indices of concurrent and predictive validity, with all correlations being statistically significant (p < 0.01). These results emphasize the strong concurrent validity of the CBM, highlighted by consistent correlations among the three forms and the standardized writing measure (EGWA-K). The findings also indicate the significant predictive capacity of the scale, as evidenced by its correlation with teachers' assessments of students' curricular competence (RT scale) at the end of the academic year. Although teacher ratings of emergent literacy skills have shown mixed results65,66, they can still provide valuable insights. Cabell et al.65 reported that teacher ratings differentiated children with lower literacy skills but were insufficient for reliably identifying at-risk children, whereas Coker and Ritchey66 showed that teacher ratings were accurate in identifying at-risk children on writing measures. Additionally, Gray et al.67 validated the usefulness of teacher-reported indicators for monitoring oral language skills. Despite these limitations, teacher judgments can offer important information, especially when used alongside other assessments. In summary, the magnitude of the correlations found substantiates the tool's ability to assess and predict writing skill development in kindergarten-aged children, highlighting its potential for implementation in educational research and practice. Diagnostic accuracy was also found to be adequate, as the CBM was able to distinguish between at-risk and non-risk students.
The CBM, designed to detect early signs of writing difficulties in kindergarten children, faces several constraints. Limited access to technology, especially in resource-limited settings, may hinder its deployment and reduce its effectiveness. The application's efficacy also relies on the digital literacy of both children and educators; inadequate familiarity with digital tools can undermine its functionality. Moreover, an incomplete representation of cultural and linguistic nuances in the assessment may compromise its validity, potentially affecting result accuracy. Factors such as internet connectivity, device calibration, and test standardization could also influence reliability and validity. The absence of direct human interaction may limit the application's ability to respond adequately to individual children's needs, constraining the personalized support that is crucial in early learning environments. Finally, some assessment components may lack cultural sensitivity, hindering equitable detection of learning difficulties across diverse groups.
Regular updates are crucial to meet evolving user needs and sustain effectiveness. Neglecting updates may render the application outdated or less effective in identifying and addressing writing challenges in kindergarten children. Addressing these limitations during design, development, and implementation is essential to optimize usefulness and efficacy in educational contexts.
In summary, we conclude that the assessment model is an excellent fit for fall, winter, and spring for Spanish kindergarten students. These results indicate the good construct validity of the CBM, which allows early writing skills to be measured. This study has two main contributions. First, the results support the relationship among the proposed measures (i.e., task) as observable indicators of the latent factor of transcription and narrative competence. Second, the results support the use of the CBM for assessing transcription skills and narrative competence. The development of the CBM will allow teachers to identify and monitor the progress of Spanish students struggling in writing throughout the school year. Furthermore, teachers can use the information collected through the CBM to determine the most appropriate intervention strategies for an individual child. Future research should explore the growth trajectories and longitudinal factorial invariance of the CBM.
The authors listed above certify that there are no financial interests or other conflicts of interest associated with the present study.
We gratefully acknowledge the support of the Spanish government through its Plan Nacional I+D+i (R+D+i National Research Plan, Spanish Ministry of Economy and Competitiveness), Grant PID2019-108419RB-100 funded by MCIN/AEI/10.13039/501100011033, with the first author as the principal investigator. We also thank the Unidad de Audiovisuales ULL team for their participation in the production of the video.
Name | Company | Catalog Number | Comments |
Indicadores de Progreso de Aprendizaje en Escritura para Educación Infantil (T-IPAE-EI) | Universidad de La Laguna | © Copyright, 2023 | C826ED53A500DC82D7A7CD0F03C136CDB8F5A8E41D41750052469CA4CC0E11F8 |