INDIVIDUALIZED ASSESSMENT OF HOME SCHOOLING EDUCATION

Lyn Boulter and Kristin Macaluso
Department of Psychology
Catawba College
2300 W. Innes St.
Salisbury, North Carolina 28144

Keywords: Homeschooling, home schooling, home education, assessment

The number of children in America meeting state compulsory education requirements at home rather than in school is estimated at 500,000 to one million and continues to grow (Clark, 1994). In the last ten years, 34 states have passed laws or regulations that delineate the implementation and assessment of home schooling (Eaton, 1993). This resurgence of home schooling has led many school officials and academic communities to reconsider the validity of this form of education.
The home schooling educational model is based on the idea that skills and knowledge will be learned through daily life, social activities, and work or community service. Instructional plans emerge from the student’s individual needs and interests. The student is encouraged by the home instructor to pursue his/her own learning interests, rather than follow a generic daily/weekly schedule or a predetermined curriculum. Many educators acknowledge that home schooling may not be educationally harmful, but still do not regard this educational model as legitimate (Lines, 1987). There is a need for outcome-based measurement of the effectiveness of home schooling instruction with assessment instruments appropriately matched to the home schooling educational model.
One of the most common assessments of the effectiveness and validity of home education is the academic success of the students on standardized tests (Lines, 1987). Home educated students, grades K-12, in Washington State consistently scored above the national average on the Stanford Achievement Test (SAT) in reading, language, math and science, with the median score at approximately the 67th percentile on national norms (Ray, 1992). Other studies conducted in Tennessee and New York, as well as a nationwide study of home schooled students, showed that home schooled children scored higher than statewide averages for their age groups in each area of the SAT (Eaton, 1993; Ray, 1990).
Not all studies reported scores for home schooled students that were above the average of their peers. A study of home schooling in Alabama (Rakestraw, 1988) found that home educated students at the 1st and 4th grade levels scored below the national average in math, and that the reading scores of some students at the 1st through 6th grade levels, at the 54th percentile, could be considered marginal. Likewise, the Washington State Superintendent of Public Instruction reported marginal percentiles for math (53rd) and language (56th) (Ray, 1992). Overall, however, the studies assessing the academic outcomes of home education seem to suggest that, in general, students educated at home perform as well as their peers in conventional schools on standardized achievement tests (Eaton, 1993; Ray, 1988).
One limitation of the findings in these studies concerns the home school sample populations. There is little evidence in any of these studies that the sample of home schooled students was matched to a comparable sample of students in conventional schools according to socioeconomic level, culture, or ethnicity (Ray, 1986). Also, standardized group achievement tests have been accused of bias against females, minorities, and individuals with developmental disabilities (Ray, 1988).
A second limitation concerns the use of standardized group achievement tests such as the SAT to assess the academic achievement of home educated students. Reviews of the group standardization sample and other research using the SAT to assess skill development in a variety of content areas demonstrate adequate reliability and validity coefficients for screening academic performance (Ebel, 1978; Salvia & Ysseldyke, 1991). However, the content and administration procedures of the group achievement tests may not be a valid and accurate reflection of an individualized curriculum (Jenkins & Pany, 1978; Salvia & Ysseldyke, 1991) such as the instructional model of home education. Thus, the generalizability of group standardized tests to a population of students who are educated in a highly individualized environment is questionable (Ray, 1988).
A third limitation concerns the way group achievement tests are administered. Group-administered tests prevent the tester from observing individual student performance, and the global scores by content area prohibit diagnostic assessment of student responses for future individualized skill development.
The purposes of the present study were to:
1. Assess the academic achievement of home schooled students with an individualized achievement test to evaluate the effectiveness of the curriculum and instructional model used by home school instructors.
2. Compare mean subtest scores on an individualized achievement test administered to a sample of home schooled students with their scores from matched subtests on a standardized group achievement test to provide concurrent validity of the individualized assessment for the home schooled population.

Method

Materials
The present pilot study assessed the academic ability of home schooled students using the Woodcock-Johnson-Revised (WJ-R) Tests of Achievement. These individually administered tests include Letter-Word Identification, Passage Comprehension, Calculation, Applied Problems, Dictation, Writing Samples, Science, Social Studies, and Humanities. The nine subtests also yield “Cluster Scores” in areas labeled as Broad Reading, Broad Mathematics, Broad Written Language, and Broad Knowledge (Costenbader & Perry, 1990). According to Woodcock and Mather (1990, p. 181), “the WJ-R can be used in a wide variety of settings for the purposes of: (1) diagnosis and in-depth evaluation of development; (2) determination of psychoeducational discrepancies; (3) program placement; (4) planning individual programs in educational and vocational settings; (5) guidance and career counseling; (6) assessing growth in individual abilities over wide time spans; (7) evaluation of program effectiveness; (8) research; and (9) psychometric training.” Test reviews report that reliability and validity coefficients for age groups are considered adequate (Ebel, 1978; Salvia & Ysseldyke, 1991).

Subjects and Procedures
A sample of 28 students (11 males, 17 females) from the southwest region of North Carolina was individually administered the nine WJ-R standard battery tests by a qualified psychologist and a trained psychology student assistant. Ages ranged from 5 years, 10 months to 14 years, 10 months, with a grade (equivalent) range from 1st through 9th grade. Computerized age-equivalent and grade-equivalent scores were obtained for each student. Scores were also obtained for the SAT group achievement tests each student had completed that year. All the families in the sample had voluntarily requested an individually administered assessment of academic progress.
Since passage of P.L. 94-142, students attending conventional public schools are typically screened for learning disabilities if they fall more than 1 1/2 grade levels behind in academic performance despite having at least average intelligence. Students with identified learning disabilities are generally not included in standardization samples for group or individual achievement tests, or in the population of students who take group achievement tests (without modifications) mandated by every conventional public school. Students instructed at home are not typically screened and identified for learning problems unless the student previously attended a public school or the parents had the student formally evaluated by a private psychologist. Thus, the home schooled population in this study included all students (learning disabled students as well as those with typical learning abilities) whose parents requested testing.

Results

The students’ percentile ranks, based on age-equivalent standard scores for each WJ-R subtest score and each cluster score, were used for analyses in order to compare the academic performance of this sample of home schooled students with the WJ-R norm sample. In addition to analyses computed for the total sample and the two gender groups, age groups were compared by splitting the sample in half. The younger group was composed of students between the ages of 5 and 10 years (N = 14), and the older group contained the students between 11 and 14 years (N = 14).
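To make this tabulation step concrete, the short sketch below (in Python) shows how the age-group split and the mean percentile ranks reported in the tables that follow could be computed; the file name and column labels are hypothetical placeholders rather than the study's actual data.

# Illustrative sketch only: hypothetical file and column names, not the study's data.
import pandas as pd

df = pd.read_csv("wjr_percentiles.csv")        # one row per student

subtests = ["letter_word_id", "passage_comp", "calculation", "applied_problems",
            "dictation", "writing_samples", "science", "social_studies", "humanities"]

# Split the sample in half by age: 5-10 years vs. 11-14 years (N = 14 in each group).
df["age_group"] = df["age_years"].apply(lambda a: "5-10" if a <= 10 else "11-14")

# Mean percentile rank per subtest for the total sample and by age group.
print(df[subtests].mean().round(0))
print(df.groupby("age_group")[subtests].mean().round(0))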
Average percentiles of the total sample of students, the two gender groups and the two age groups were computed for each of the nine subtests of the WJ-R (Table 1).

WJ-R Subtest                     Percentile

Letter-Word Identification           53
Passage Comprehension                64
Calculation                          50
Applied Problems                     30
Dictation                            30
Writing Samples                      68
Science                              64
Social Studies                       57
Humanities                           49

Table 1. Mean WJ-R subtest percentiles for the total sample.

Average percentiles of the total sample were at or better than the 50th percentile on Letter-Word Identification, Passage Comprehension, Calculation, Applied Problems, Writing Samples, Science, and Social Studies. The average percentiles for the total sample fell below the 50th percentile on Humanities (49th) and Dictation (30th). T-tests computed for each subtest revealed no significant differences between males and females, but older students scored significantly higher than younger students on Applied Problems and Writing Samples (p < .05; see Table 2).
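A minimal sketch of how such independent-samples t-tests can be computed with standard statistical software is given below; as before, the data layout and column names are illustrative assumptions, not the study's files.

# Illustrative sketch only: hypothetical columns, not the study's data.
import pandas as pd
from scipy import stats

df = pd.read_csv("wjr_percentiles.csv")

subtests = ["letter_word_id", "passage_comp", "calculation", "applied_problems",
            "dictation", "writing_samples", "science", "social_studies", "humanities"]

younger = df[df["age_years"] <= 10]            # ages 5-10
older = df[df["age_years"] >= 11]              # ages 11-14

# Two-tailed independent-samples t-test per subtest (alpha = .05).
for subtest in subtests:
    t, p = stats.ttest_ind(older[subtest], younger[subtest])
    print(f"{subtest}: t = {t:.2f}, p = {p:.3f}")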
Average percentiles of the total sample, the two gender groups and the two age groups were calculated for the four WJ-R subtest clusters: Broad Reading, Broad Mathematics, Broad Written Language and Broad Knowledge (Table 3).

                                     Age Group

WJ-R Subtest                10 years and younger   11 years and older

Letter-Word Identification            51                   56
Passage Comprehension                 62                   65
Calculation                           47                   53
Applied Problems                      37                   71
Dictation                             28                   31
Writing Samples                       57                   79
Science                               68                   61
Social Studies                        58                   56
Humanities                            50                   47

Table 2. Mean WJ-R subtest percentiles by age group.

The total sample of students scored at or above the 50th percentile on all four of the subtest clusters. The younger age group scored above the 50th percentile in all cluster areas except Broad Written Language (43rd), but the older age group scored above the 50th percentile in all four clusters. Females scored above the 50th percentile in all cluster areas, and males scored above the 50th percentile in all clusters except Broad Written Language (43rd). T-tests comparing the gender groups indicated that male students scored significantly higher than female students in Broad Knowledge; t-tests comparing the two age groups were not significant. Examination of raw scores for males, females, and the younger and older groups suggests that the below-average percentiles in Broad Written Language may be due to the low scores of the younger males, but the sample sizes were too small to assess this pattern statistically.

                                   Gender

Cluster                            Male     Female

Broad Reading                       58        58
Broad Mathematics                   55        56
Broad Written Language              43        54
Broad Knowledge                     65        51

Table 3. Mean WJ-R cluster percentiles by gender.

Pearson product-moment correlation coefficients were computed between each subtest of the WJ-R and a matched subtest of the SAT to assess concurrent validity. All correlations (range = .62 to .87) revealed significant positive relationships (p < .05) between the WJ-R subtests and the matched SAT subtests.
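For readers who wish to reproduce this kind of analysis, the brief sketch below shows how subtest-by-subtest Pearson correlations could be computed; the column names and the particular WJ-R/SAT pairings shown are hypothetical assumptions, since the exact matched subtests are not listed here.

# Illustrative sketch only: hypothetical column names and pairings.
import pandas as pd
from scipy import stats

df = pd.read_csv("wjr_sat_scores.csv")         # one row per student

# Each tuple pairs a WJ-R subtest with an assumed matching SAT subtest.
matched_pairs = [("wjr_passage_comp", "sat_reading_comprehension"),
                 ("wjr_calculation", "sat_math_computation"),
                 ("wjr_applied_problems", "sat_math_applications")]

for wjr_col, sat_col in matched_pairs:
    r, p = stats.pearsonr(df[wjr_col], df[sat_col])
    print(f"{wjr_col} vs. {sat_col}: r = {r:.2f}, p = {p:.3f}")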

Discussion

The primary purpose of this study was to determine whether home educated students were performing in academic content areas at least as well as their peers in conventional school systems. This small sample of home schooled students performed as well as or better than average, according to standardization norms, in the content areas of Letter-Word Identification, Passage Comprehension, Calculation, Applied Problems, Writing Samples, Science and Social Studies. This academic success may be explained, in part, by comparing the instructional methods used by home educators with nationally recognized criteria that define effective teaching. After a review of the literature and a survey of teacher education programs, Dwyer and Villegas (1993) formulated a holistic perspective of effective instruction, organized into four task domains. Within each domain is a set of criteria by which teaching can be judged:
1. Domain A stresses teacher knowledge of the content and the students in order to select appropriate and relevant learning goals, methodology, instructional materials and evaluation strategies.
2. Domain B emphasizes creating a challenging, fair, consistent, and safe learning environment that engages students as active learners.
3. Domain C focuses on teaching methods that build knowledge, understanding and application of content adapted to each student’s previous knowledge, interests, abilities, and background.
4. Domain D concerns teacher professionalism, in which materials, activities and insights are shared with colleagues.
Most experts in education agree that there is no single “right way” to teach, and no single set of teaching procedures that meets the needs of all learners (Dwyer & Villegas, 1993). Descriptions of home school education in the literature demonstrate that this method of education satisfies the above criteria of effective instruction. Most home educators use a curriculum approved by their local school district or their State’s Department of Education, and submit standardized test scores to their State Department at the end of each academic year (Lines, 1987). Philosophically, many home schoolers advocate a “free-flow,” community-based curriculum that allows students to determine what they want to learn according to their own interests and abilities (Divoky, 1983). Sometimes two home schoolers will get together and “team teach” their children, but most share materials, methods and other information through home schooling associations and other organized groups. In an extensive report of previous research findings and her own personal interviews, Lines (1987, p. 512) states that “some parents - many of them former teachers - think through their methods very carefully to meet the individual needs of their children. Others have less training, but they usually make an effort to discover an appropriate pedagogical approach for their children, sometimes consulting teachers, experts, or materials on child development and learning.” Thus, home school education does employ effective instructional methods, resulting in the academic success of its students.
This sample of home schooled students did demonstrate below average performance in two academic areas: (1) Dictation (i.e., grammar, syntax and spelling), and (2) Humanities (knowledge of music, art and literature). Younger students also had difficulty with Applied Problems, and the sample as a whole only barely reached average performance in Calculation. These findings replicated previous studies of home schooled students in Alabama and Washington State, where scores for certain specific content areas in math or language were barely at the 50th percentile (Rakestraw, 1988; Ray, 1992).
These findings for specific content areas aid the interpretation of the cluster scores. Percentiles for Broad Reading, Broad Mathematics and Broad Knowledge were above average, but Broad Written Language, which clusters Dictation and Writing Samples, was below average for the overall sample, for males, and for the younger group of students. This pattern may be due to two unique characteristics of this home educated sample. The first characteristic concerns the learning abilities of the sample. Students with specific disabilities such as dyslexia and/or attention deficit disorder would have considerably more difficulty with this particular cluster of subtests. It is important to note that dyslexia and attention deficit disorder occur in males 80% of the time and are more observable in younger students than in older students, who may have taught themselves ways to compensate for their disability. Since the home schooled population included students who may have learning disabilities, it would make sense that their percentile ranks would be lower than those of the normative sample, which did not include students with identified learning disabilities.
The second characteristic concerns the content learned by the students in this sample. True to the philosophy of the home education model, the home schooled students chose curriculum content and activities that interested them. The home educators (typically the parents) waited until the student expressed an interest in any content area or skill. It was not uncommon for parents to tell the examiner that her/his child “did not like to write,” “was not interested in learning spelling rules or grammar yet,” or “thought that just doing math problems for no reason was meaningless.”  These parents emphasized functional reading, “free writing” for fluency, and practical math activities. The arts were not as highly prioritized. Since the WJ-R can also be used as a diagnostic tool, the examiner was able to examine error patterns on the subtests of each student and make specific recommendations to the student’s home educator for curriculum planning and/or instructional modifications.
Follow-up research is planned that will re-test this original sample of students with the supplementary battery of the WJ-R to measure academic progress one year after the original individualized assessment. Long range plans are to conduct longitudinal research that will follow as many of these students as possible through the equivalent of grade 12, and track their success on criteria for college admission, or other postsecondary pursuits. In addition, the WJ-R achievement measures will be supplemented with a social adjustment scale to obtain a measure of social and emotional development.
A second purpose of this study was to determine whether scores on subtests of an individualized achievement test were similar to scores on matched subtests of a group achievement test. The strength of the correlations between subtests of the WJ-R and the SAT provides some concurrent validity for the WJ-R as a measure of the academic achievement of home schooled students. Admittedly, comparing an individual achievement test with a group achievement test is “comparing apples to oranges,” and it provides questionable evidence for validity. Since group achievement tests are the only outcome measures used to evaluate home school education in the research to date, however, the attempt at establishing the WJ-R individual achievement test as a valid outcome assessment of home education is at least a first step. Further research is needed that correlates scores on the WJ-R subtests with subtest scores on a comparable individualized achievement test.
Subsequent studies following up on this preliminary research must also correct the inherent bias in this sample. These students were educated by conscientious parents who requested an individualized achievement test to assess the quality of their carefully individualized curriculum. A larger, randomized sample is needed that would be more representative of home schooled students in general.

References

Clark, Charles S. (1994). Home schooling: Is it a healthy alternative to public education? The CQ [Congressional Quarterly] Researcher, 4(33), 769-792.
Costenbader, V. K., & Perry, Cynthia. (1990). The Woodcock-Johnson Psychoeducational Battery-Revised: Test review. Journal of Psychoeducational Assessment, 8, 180-184.
Divoky, Diane. (1983). The new pioneers of the home-schooling movement. Phi Delta Kappan, 64, 395-399.
Dwyer, Carol Anne, & Villegas, Ana Maria. (1993). Defining teaching. In Praxis series highlights: Foundations for tomorrow’s teachers. Princeton, NJ: Educational Testing Service.
Eaton, Susan. (1993). Hostility fades as home schooling grows. The Harvard Education Letter, 9, 6-7.
Ebel, Robert L. (1978). Stanford Achievement Test, 1973 edition. In Oskar K. Buros (Ed.), The eighth mental measurements yearbook. Highland Park, NJ: The Gryphon Press.
Jenkins, Joseph, & Pany, Darlene. (1978). Standardized achievement tests: How useful for special education? Exceptional Children, 44, 448-453.
Lines, Patricia M. (1987). Overview of home instruction. Phi Delta Kappan, 68, 510-517.
Rakestraw, Jennie F. (1988). Home schooling in Alabama. Home School Researcher, 4, 1-6.
Ray, Brian D. (1986). A comparison of home schooling and conventional schooling: With a focus on learner outcomes. A paper presented to the Department of Science, Math, and Computer Education, Oregon State University, Corvallis.
Ray, Brian D. (1988). Home schools: A synthesis of research on characteristics and learner outcomes. Education and Urban Society, 21, 16-31.
Ray, Brian D. (1990). A nationwide study of home education: Family characteristics, legal matters, and student achievement. Salem, OR: Western Baptist College, National Home Education Research Institute.
Ray, Brian D. (1992). Marching to the beat of their own drum: A profile of home education research. Paeonian Springs, VA: The Home School Legal Defense Association.
Salvia, John, & Ysseldyke, James E. (1991). Assessment. Boston: Houghton Mifflin.
Weaver, Roy A., Negri, Anton, & Wallace, Barbara. (1980). Home tutorials vs. the public schools in Los Angeles. Phi Delta Kappan, 61, 254-255.
Woodcock, Richard W., & Mather, Nancy. (1989). The Woodcock-Johnson Psychoeducational Battery-Revised. Allen, TX: DLM Teaching Resources.
