Monday, April 26, 2010

Letters to the Editor

On April 1 the Doings of Western Springs published this letter to the editor:

Pleasantdale schools must be responsible
Everywhere you look, schools are in financial crisis. Many are cutting back on programs, eliminating positions or increasing class sizes to save money.

La Grange School District 102's superintendent explained the rationale behind its budget-cutting decisions, stating that the district is attempting to make reasonable cuts in hopes of showing the community, especially those without children in the schools, that it is being responsible with taxpayer dollars.

At Pleasantdale Elementary District 107 the opposite is being done.

Taxpayers are forced to subsidize an optional preschool program for about 100 kids to the tune of almost $250,000 a year. With expenses far exceeding revenue, Pleasantdale is not being responsible with taxpayer dollars. Now that Pleasantdale's teacher contract is up for renewal, this deficit will likely climb even higher.

Preschool is not state mandated; yet the entire community is burdened with paying for this program. Parents who choose to enroll their children in programs elsewhere must bear the entire cost of tuition on their own and do not get the benefit of having the community subsidize it.
The Pleasantdale school district should show its taxpayers that it, too, is being fiscally responsible with the public's money by requiring the participants of this program to bear the entire cost rather than saddling the taxpayers with this ever-increasing debt.
Gina Scaletta-Nelson,
Willow Springs




On April 22, the Doings printed a rebuttal from Mark Fredisdorf, found below. It was rather comical, to say the least. Our rebuttal to Fredisdorf's "facts" appears in parentheses after each of his claims.

Letters to the editor: Facts don't back Pleasantdale criticism

A letter submitted by Gina Scaletta-Nelson in the April 1 edition of The Doings criticizes Pleasantdale District 107 for providing an optional preschool program that is taxpayer-subsidized by $250,000. Her criticism is not supported by facts.

The district provides an inclusive preschool allowing special education students to receive required services in their home school. (This "inclusive preschool" is for some children, not all. Children deemed to have more severe needs are routinely shipped out of the district.)


Pleasantdale is one of the few districts to also enroll regular education students, at $3,250 per year, which is commensurate with private preschools for a five-day-per-week program. (Pleasantdale is the only local district that offers a fee-based preschool program in a public school.)


Revenue generated by parent-paid tuition totals $122,657 this year. Total direct costs for the program this year, including salaries, benefits, equipment and materials, are $187,837. (Hmmm, Fredisdorf conveniently excluded the extended-day kindergarten students from his revenues and direct costs. They ARE part of the Bright Beginnings program, and this is the first time he has ever separated the two programs. It is his attempt to make the deficit look smaller.)


The net cost (tuition revenue minus direct costs) is $65,180, not $250,000. (Click on the documents below to enlarge them and see what Fredisdorf published in his February school board packet. They tell a different story. Total direct expenses are $325,374. When you add the indirect costs that Fredisdorf always includes when he calculates costs for this program, the cost climbs even higher, to $439,936. Look at Fredisdorf's projection sheet on the left and you will see that the cost of this program in 2014 will be $556,148. The deficit will increase to $294,641 if enrollment stays the same. If enrollment decreases, the deficit will increase.)

In addition to cost considerations, the educational benefits of a quality preschool are highly valued by parents. (This is actually funny. The argument Fredisdorf uses to support his preschool program is the same argument he uses against a free full day kindergarten program. A full day kindergarten program would also be educationally beneficial and highly valued by parents. If Fredisdorf feels so strongly about the educational benefits, why doesn't he have a full day kindergarten program?)


Enrollments for next school year are already nearing capacity. Scaletta-Nelson referenced actions of other districts that are cutting services due to fiscal problems. She asserts that Pleasantdale should exercise similar fiscal responsibility. (Yes, I do believe that Pleasantdale should exercise fiscal responsibility. We finally agree on something!)


Again, the facts don't support her criticism. Pleasantdale is not experiencing fiscal problems. (I never said you were.)


The Board of Education has perennially earned the highest rating for fiscal management. As a result, there is no immediate need to reduce services. (My argument is not to reduce services. It is to have the families that utilize the preschool programs be the ones to pay for them. This optional program should not be subsidized by the taxpayers.)


Equally important, taxpayers enjoy one of the lowest tax rates in the metropolitan area. Both taxpayers and school children are the beneficiaries of a long and well documented tradition of sound fiscal management. (And we have previous boards and former superintendents to thank for that!)
Mark Fredisdorf, superintendent, Pleasantdale School District 107

Fredisdorf's rebuttal may be about preschool, but we weren't born yesterday.

Saturday, April 24, 2010

Friday, April 23, 2010

The Apples Don't Fall Far From the Trees...

The new middle school assistant principal, Joni Sherman, is none other than a friend of Meg Pokorny and Mark Fredisdorf. She hails from their former district, 102, in La Grange. It's more of the same, just as we predicted. Watch for the middle school gifted teacher position to be filled by another of their buddies in the coming months.

Norm-Referenced Achievement Tests - The ITBS


Human beings make tests. They decide what topics to include on the test, what kinds of questions to ask, and what the correct answers are, as well as how to use test scores. Tests can be made to compare students to each other (norm-referenced tests) or to see whether students have mastered a body of knowledge (criterion or standards-referenced tests). This fact sheet explains what NRTs are, their limitations and flaws, and how they affect schools.

What are norm-referenced tests?

Norm-referenced tests (NRTs) compare a person's score against the scores of a group of people who have already taken the same exam, called the "norming group." When you see scores in the paper which report a school's scores as a percentage -- "the Lincoln school ranked at the 49th percentile" -- or when you see your child's score reported that way -- "Jamal scored at the 63rd percentile" -- the test is usually an NRT.

Most achievement NRTs are multiple-choice tests. Some also include open-ended, short-answer questions. The questions on these tests mainly reflect the content of nationally-used textbooks, not the local curriculum. This means that students may be tested on things your local schools or state education department decided were not so important and therefore were not taught.

Commercial, national, norm-referenced "achievement" tests include the California Achievement Test (CAT); Comprehensive Test of Basic Skills (CTBS), which includes the "Terra Nova"; Iowa Test of Basic Skills (ITBS) and Tests of Academic Proficiency (TAP); Metropolitan Achievement Test (MAT); and Stanford Achievement Test (SAT, not to be confused with the college admissions SAT). "IQ," "cognitive ability," "school readiness," and developmental screening tests are also NRTs.

Creating the bell curve.
NRTs are designed to "rank-order" test takers -- that is, to compare students' scores. A commercial norm-referenced test does not compare all the students who take the test in a given year. Instead, test-makers select a sample from the target student population (say, ninth graders). The test is "normed" on this sample, which is supposed to fairly represent the entire target population (all ninth graders in the nation). Students' scores are then reported in relation to the scores of this "norming" group.

To make comparing easier, testmakers create exams in which the results end up looking at least somewhat like a bell-shaped curve (the "normal" curve). Testmakers design the test so that most students will score near the middle, and only a few will score low (the left side of the curve) or high (the right side of the curve).

Scores are usually reported as percentile ranks. The scores range from 1st percentile to 99th percentile, with the average student score set at the 50th percentile. If Jamal scored at the 63rd percentile, it means he scored higher than 63% of the test takers in the norming group. Scores also can be reported as "grade equivalents," "stanines," and "normal curve equivalents."
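As a rough sketch of how a percentile rank like Jamal's is computed (the function name and the tiny one-student-per-score norming group here are invented for illustration, not taken from any actual test publisher):

```python
from bisect import bisect_left

def percentile_rank(score, norming_scores):
    """Percent of the norming group who scored below `score`."""
    ranked = sorted(norming_scores)
    scored_below = bisect_left(ranked, score)  # count of strictly lower scores
    return round(100 * scored_below / len(ranked))

# Toy norming group: one student at each raw score from 1 to 100.
norming_group = list(range(1, 101))
percentile_rank(64, norming_group)  # 63: higher than 63% of the norming group
```

Real publishers use much larger norming samples and slightly different tie-handling conventions, but the idea is the same: the score only has meaning relative to the group.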

One more question right or wrong can cause a big change in a student's score. In some cases, one more correct answer can cause a student's reported percentile score to jump more than ten points. It is important to know how much the percentile rank would change if the student got one or two more questions right.
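To see why one extra correct answer matters most near the middle of the bell curve, here is an illustrative sketch. The test length, mean, and standard deviation are invented for the example, assuming roughly normally distributed raw scores:

```python
from statistics import NormalDist

# Hypothetical 50-question test whose raw scores are roughly
# normal with mean 30 and standard deviation 5.
norm = NormalDist(mu=30, sigma=5)

def pct(raw_score):
    """Percentile rank implied by the assumed norming curve."""
    return round(100 * norm.cdf(raw_score))

# Near the middle, one more correct answer jumps the percentile:
mid_jump = pct(31) - pct(30)    # 58 - 50 = 8 percentile points
# Near the high tail, the same one-point gain barely registers:
tail_jump = pct(41) - pct(40)   # 99 - 98 = 1 percentile point
```

Because most test-takers are bunched in the middle of the curve, a single raw-score point there leapfrogs many people; out in the tails it leapfrogs almost no one.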

In making an NRT, it is often more important to choose questions that sort people along the curve than to make sure that the content covered by the test is adequate. The tests sometimes emphasize small and meaningless differences among test-takers. Since the tests are made to sort students, most of the things everyone knows are not tested. Questions may be obscure or tricky in order to help rank-order the test-takers.

Tests can be biased. Some questions may favor one kind of student or another for reasons that have nothing to do with the subject area being tested. Non-school knowledge that is more commonly learned by middle or upper class children is often included in tests. To help make the bell curve, testmakers usually eliminate questions that students with low overall scores might get right but those with high overall scores get wrong. Thus, most questions which favor minority groups are eliminated.

NRTs usually must be completed within a time limit. Some students do not finish, even if they know the material. This can be particularly unfair to students whose first language is not English or who have learning disabilities. This "speededness" is one way testmakers sort people out.

How accurate is that test score?
The items on the test are only a sample of the whole subject area. There are often thousands of questions that could be asked, but tests may have just a few dozen questions. A test score is therefore an estimate of how well the student would do if she could be asked all the possible questions.

All tests have "measurement error." No test is perfectly reliable. A score that appears as an absolute number -- say, Jamal's 63 -- really is an estimate. For example, Jamal's "true score" is probably between 56 and 70, but it could be even further off. Sometimes results are reported in "score bands," which show the range within which a test-taker's "true score" probably lies.
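A band like 56 to 70 around an observed 63 is what you get from a standard error of measurement (SEM) of about 7 score points at roughly 68% confidence. This sketch, with hypothetical numbers (real publishers report each test's SEM in its technical manual), shows the usual calculation:

```python
from statistics import NormalDist

def score_band(observed, sem, confidence=0.68):
    """Range within which the test-taker's 'true score' probably lies,
    given the test's standard error of measurement (SEM)."""
    z = NormalDist().inv_cdf((1 + confidence) / 2)  # z-multiplier for the band
    return observed - z * sem, observed + z * sem

# An observed score of 63 with an assumed SEM of 7 score points:
low, high = score_band(63, 7)   # roughly (56, 70)
```

Note what the width of that band means: even at modest confidence, the "true" score could plausibly sit anywhere across a 14-point range.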

There are many other possible causes of measurement error. A student can be having a bad day. Test-taking conditions often are not the same from place to place (they are not adequately "standardized"). Different versions of the same test are, in fact, not exactly the same.

Sub-scores on tests are even less precise, mostly because there are often very few items on the sub-test. A score band for Juanita's math sub-test might show that her score is between the 33rd and 99th percentile because only a handful of questions were asked.

Scores for young children are much less reliable than for older students. This is because young children's moods and attention are more variable. Also, young children develop quickly and unevenly, so even an accurate score today could be wrong next month.

What do score increases mean?
If your child's or your school's score goes up on a norm-referenced test, does that mean she knows more or the school is better? Maybe, maybe not. Schools cannot teach everything. They teach some facts, some procedures, some concepts, some skills -- but not others. Often, schools focus most on what is tested and stop teaching many things that are not tested. When scores go up, it does not mean the students know more; it means they know more of what is on that test.

For example, history achievement test "A" could have a question on Bacon's Rebellion (a rebellion by Black slaves and White indentured servants against the plantation owners in colonial Virginia). Once teachers know Bacon's Rebellion is covered on the exam, they are more likely to teach about it. But if those same students are given history test "B," which does not ask about Bacon's Rebellion but does ask about Shays' Rebellion, which the teacher has not taught, the students will not score as well.

Teaching to the test explains why scores usually go down when a new test is used. A district or state usually uses an NRT for five to ten years. Each year, the score goes up as teachers become familiar with what is on the test. When a new test is used, the scores suddenly drop. The students don't know less, it is just that different things are now being tested.

Can all the children score above average?
Politicians often call for all students to score above the national average. This is not possible.
NRTs are constructed so that half the population is below the mid-point or average score. Expecting all students to be above the fiftieth percentile is like expecting all teams in a basketball league to win more than half their games. However, because the tests are used for years and because schools teach to them, there are times when far more than half the students score above average.

Why use norm-referenced tests?

To compare students, it is often easiest to use a norm-referenced test because they were created to rank test-takers. If there are limited places (such as in a "Gifted and Talented" program) and choices have to be made, it is tempting to use a test constructed to rank students, even if the ranking is not very meaningful and keeps out some qualified children.

NRTs are a quick snapshot of some of the things most people expect students to learn. They are relatively cheap and easy to administer. If they were used only as one additional piece of information, and not much importance were put on them, they would not be much of a problem.

The dangers of using norm-referenced tests

Many mistakes can be made by relying on test scores to make educational decisions. Every major maker of NRTs tells schools not to use them as the basis for decisions about retention, graduation or placement. The testmakers know that their tests are not good enough to be used that way.

The testing profession, in its Standards for Educational and Psychological Testing, states, "In elementary or secondary education, a decision or characterization that will have a major impact on a test taker should not automatically be made on the basis of a single test score."

Any one test can only measure a limited part of a subject area or a limited range of important human abilities. A "reading" test may measure only some particular reading "skills," not a full range of the ability to understand and use texts. Multiple-choice math tests can measure skill in computation or solving routine problems, but they are not good for assessing whether students can reason mathematically and apply their knowledge to new, real-world problems.

Most NRTs focus too heavily on memorization and routine procedures. Multiple-choice and short-answer questions do not measure most of the knowledge that students need to do well in college, qualify for good jobs, or be active and informed citizens. Tests like these cannot show whether a student can write a research paper, use history to help understand current events, understand the impact of science on society, or debate important issues. They don't test problem-solving, decision-making, judgment, or social skills.

Tests often cause teachers to overemphasize memorization and de-emphasize thinking and application of knowledge. Since the tests are very limited, teaching to them narrows instruction and weakens curriculum. Making test score gains the definition of "improvement" often guarantees that schooling becomes test coaching. As a result, students are deprived of the quality education they deserve.

Norm-referenced tests also can lower academic expectations. NRTs support the idea that learning or intelligence fits a bell curve. If educators believe it, they are more likely to have low expectations of students who score below average.

Schools should not use NRTs

The damage caused by using NRTs is far greater than any possible benefits the tests provide. The main purpose of NRTs is to rank and sort students, not to determine whether students have learned the material they have been taught. They do not measure anywhere near enough of what students should learn. They have very harmful effects on curriculum and instruction. In the end, they provide a distorted view of learning that then causes damage to teaching and learning.
Attachment: norm refrenced tests.pdf (497.6 KB)

Wednesday, April 21, 2010

Grants and More

About a week ago, I sent an email to Pleasantdale School District's technology coordinators, John McAtee and Judith Stevens, and business manager Catherine Chang in regard to a technology grant that was available to schools. I also sent it to the superintendent of another local school district.


This is the email I sent to Pleasantdale. Click on the photo to make it bigger.

I received an almost immediate response from the other school's superintendent. You can read his response and the email I sent to them below.

Yesterday I received a written response in the mail from Pleasantdale. Click on the photo below to read the response for yourself.

Seriously, maybe Mr. Fredisdorf should view some of Dr. Wick's tutorials because it is evident that he has issues with reading comprehension.


Nowhere in this letter did I ask the district to obtain voice amplification equipment to benefit my daughter. She already has an FM amplification system due to hearing loss. I asked the district's technology coordinators to look into a grant that will benefit all the district's children, hearing impaired or not.


According to the link posted in the email above, "While a classroom amplification system is not a matter of technology integration or technology literacy, it clearly represents a technology solution that provides an optimal learning condition in the classroom. There is an abundance of independent research that shows the need for amplification in the K-12 classroom, both for hearing-impaired students and students with normal hearing. That research is clear, consistent, and conclusive."


If my intention was to obtain a voice amplification system for my daughter, why would I send this email to another district? Do I expect them to provide one for my daughter too? No. It was because this is something that is FREE and can benefit children.


Wait a minute...did I say I want Fredisdorf to look into a grant that will benefit all children? Silly me, what was I thinking! Everyone knows that children are low on the totem pole at Pleasantdale, especially if they have special needs!


Mark Fredisdorf does not want anything in his school that might help children unless, of course, it will raise his test scores. He absolutely refuses to apply for any grants for our district because he would rather use your tax dollars.


He wanted no part of the Safe Routes to School Sidewalk Grant that would allow kids to walk and bike safely to school. Willow Springs Mayor Alan Nowaczyk told us Fredisdorf said he doesn't want sidewalks because he would lose money he receives for buses. He cares more about money than the health and safety of your children.


He refused to apply for the Preschool For All grant because he has no problem pillaging taxpayers and sucking money out of unsuspecting parents, and he couldn't care less about the district's middle income families.


Apply for a grant? Are we crazy? Why would he want anyone's eyes on the finances at Pleasantdale?? Hmmm, makes you wonder...


In the future, I will gladly advance any inquiries not concerning the education services of my daughter to Mr. Fredisdorf and the Board of Education. Hopefully they, like other school districts, will appreciate the fact that community members care about children and their education.


Maybe they will understand the benefits of seeking out grant money and enhancing our children's education, but I won't hold my breath.

Status Quo

Thursday, April 15, 2010

Surly

That would be one way to describe Pleasant Dale Park District director Katherine Parker's demeanor at last night's park board meeting. Cantankerous would be another.

She probably gets that way when board president Brad Martin and vice president Colleen Pettrone leave her in the lurch.

Wednesday's board meeting didn't happen because neither the president nor the vice president showed up, so there wasn't a quorum. It wasn't the first time it's happened, and it won't be the last.

In fact, Brad Martin has missed FIVE meetings in the last EIGHT months and was late for TWO. Wow! Bet you're glad you voted for him to represent (or misrepresent) you on the park board. Here is his track record for the last eight months:

Late: September 8, 2009, February 10, 2010
Absent: October 13, 2009, October 20, 2009, November 10, 2009, March 4, 2010, April 14, 2010
Present but did not conduct meeting: March 10, 2010 because "he might have to leave early."

Katherine claimed he was sick again. That man is sick A LOT! Maybe this will help with his "problem."

We are not sure when the meeting will be rescheduled. Don't look for it on the website or on the marquee outside the main park building, because Katherine said that she didn't know if she would be able to put it up there. Translation: if Brad and Colleen will let her. Parker went on to say she is not required (by law) to post it on the marquee. Well, here's a little tip for you, Ms. Parker: You ARE required BY LAW to post it inside the main park building 48 hours PRIOR to a meeting. And, wouldn't you know it, that was not done. It was not online and it wasn't posted in the main building as of 1:40 p.m. on April 13th. Tsk, tsk.


We hear the mayor of Burr Ridge is a tad bit PO'd at the Pleasant Dale Park District and its shenanigans, and now that village wants the park district to pony up almost $20K for law enforcement on the Fourth of July this year.

Adding to the growing list of park district pains is a mountain of missing paperwork that was recently asked for through a FOIA request. The park district is under not one, not two, not three, but at least FOUR requests for review of various OMA and FOI violations at the Illinois Attorney General's Office.

That would be a reason to be surly AND cantankerous.

And you probably thought your life was complicated...

Sunday, April 11, 2010

Beauty and the Beast

Come on out to the middle school on Friday or Saturday night (April 16th and 17th) to see the musical Beauty and the Beast. Performances will be at 7 p.m. both nights.

Open Letter to Dr. Fredisdorf

I would like to clarify my comments during open forum at the March school board meeting. It seems that, once again, whoever writes the highlights and meeting minutes cannot accurately recall what transpires each month.

I did not state that the preschool program should be eliminated. I stated that former board members (Mrs. Uckerman and Dr. Hallman) indicated that when they were on the school board and created the program, the premise was that it would only run if it was self-sustaining. This means that community tax dollars were not to be used to subsidize this program. The preschool program was created to run on the revenue received from participants, and if it could not sustain itself, the program was to be discontinued. Once again, I did not come up with this premise; the former school board did. During the February meeting, I stated that I agreed with the former board that the community should not subsidize this money-losing program. Parents who choose to send their children to a different preschool bear the entire cost and DO NOT have the benefit of the community paying for it. It is the same when children attend private or parochial schools; parents must bear the cost, not the community.

During the February open forum, I stated that since I videotape all meetings, they could be made available to the board for clarification purposes, because it seems each month no one can recall what was actually stated the month before. Not once have you asked for the tapes to clarify what transpired in order to draft accurate meeting minutes.

I also said these videotapes are available to community members who would like to view meetings they cannot attend. I do this as a service to the community. Many local school districts do this as a service to their communities because they value open and honest communication. It is more than evident that you do not. There is so little emphasis on communication in our district that it's no surprise the meeting highlights and minutes are inaccurate each month.

In the second session of open forum, I did not ask the board to review the staffing report; I pointed out an error in the staffing recommendations. This staffing recommendation sheet went through two board meetings and numerous people, yet no one caught the error until I brought it to your attention. This mistake could have led to a proposed staffing budget error of almost $100,000.

Finally, the summary of a former board member who also spoke in open forum was inaccurate. He said one of the things Pleasantdale fails to look at and break down is the results of students who have had outside services. Pleasantdale looks at how students progressed through the Bright Beginnings program and how they did in first and second grade. There are a lot of other preschool programs out there, and Pleasantdale should identify how those kids do compared to the kids who go through Pleasantdale's program if it really wants to see whether Bright Beginnings has some benefit. He went on to say that comparing Pleasantdale's program to no program is a waste of time: you expect you're going to have some benefit, but you should compare your program to outside programs where you can identify the students who have gone through them and look at a historical perspective instead of just one year. He also said the board should go back and review the balanced budget that was presented when they raised the price of Bright Beginnings by 40%. At that time the board said that if they raised the price by 40% they would have a balanced budget, but what he heard at the meeting was that they still have a $250,000 shortfall. He didn't know how that could be possible given the increases they put forth to the community.

When you draft or approve the board meeting highlights, maybe you should check them for truth and accuracy before you distribute them to the school community.

Communication is a concern in this district, and what this community continues to get from you is inaccurate, twisted or filtered information.

Gina Scaletta-Nelson

Saturday, April 10, 2010

Sensational Summer School!!


These summer school offerings are for both residents and non-residents of Hinsdale School District 181. District 181 is ranked in the top 10 in suburban Chicagoland!

Classes meet 16 times between June 16 and July 7 and the cost is $175 whether you are a resident or not.

Some of their offerings include reading, writing, math, Lego robotics, guitar, art, keyboarding and much, much more. Click on the link below for more information on Hinsdale's summer offerings.

Friday, April 9, 2010

Second Grade Testing: A Position Paper - Part 1


Second Grade Testing: A Position Paper

 
by Brenda S. Engel, Lesley College

This position paper outlines reasons to oppose standardized testing of second
graders and then suggests a viable alternative.

A. Primary school children and standardized testing

1. Tests of children in grade two are likely to be unreliable. Walt Haney of Boston College's Center for the Study of Testing, for instance, says, "Test results for young children are much less reliable than for older children. Research clearly shows that for children below fourth grade, the mechanics of taking tests and answering on specialized answer sheets can prove more difficult than the cognitive tasks the tests are asking them to address. Thus the test results are too much influenced by children's ability to fill in bubbles and handle pieces of paper; too little determined by their ability to read."

2. Related to the above point is the evident fact that standardized tests are scary for primary school children, bad for their morale and confidence. Overwhelmed by the test situation, they often don't show what they do know and can do. Instances of children breaking down, crying, unable to face school, becoming literally sick with anxiety in the face of standardized tests, are common. Most teachers in the early grades understand the importance of maintaining their students' level of interest and high morale, both of which tend to be undone by tests. The National Association for the Education of Young Children has, for a number of years, come out against standardized testing of young children for some of these same reasons.

3. Most seven-year-olds are still in the process of acquiring the complex skills involved in learning to read and write. They need a chance to consolidate these skills, which, at first, are fragile and inconsistent. Premature testing, no matter how well intentioned, is discouraging to the learner, like having a work-in-progress exposed to summary judgment. And no matter how well intentioned the tests, no matter what the disclaimers or reassurances, the results will be understood by the children as judgment.

4. Differences in background show up vividly in the early years of schooling: some children arrive in school never having actually handled a book or in some cases seen one close up; others have had books read to them since infancy. These differences tend to diminish in the face of their common school experience. Narrowing the gap between the more and less advantaged students is one of the great potentials of the public school system. Premature testing, however, by highlighting differences, will reinforce them in the minds of children. Young children are not likely to have the kind of perspective that allows them to see the possibility of "catching up." Since they always know who did well and who did badly, children will sort themselves out accordingly. They will be likely to characterize themselves relative to their classmates as "good readers" (like "fast runners") or "bad readers" (like "slow runners"). The early identification some poor testers will make of themselves as academic losers will be difficult, at the very least, to undo later.

B. Effects on teachers and schools
1. Teachers of kindergarten, first, second and third grades know very well, from their ordinary classroom activities, which children are learning to read and write with relatively little difficulty and which need extra help. Evaluation is part and parcel of daily instruction, a built-in function. When an outside agency takes over the responsibility for evaluation, however, the teacher loses both autonomy and confidence in his or her own expertise and trustworthiness. We convey to the teacher the disrespectful message that we do not trust her/him to evaluate student progress. The hazard, then, is that teachers abdicate responsibility for assessing learning and rely for instructional guidance on the relatively thin, out-of-context and delayed information contained in the test results.

2. Some teachers, in order to prepare students for answering questions on short reading passages, will use more work sheets and drill students on skills and vocabulary out of context. In competition for good scores on the reading tests, teachers will feel pressure to improve students' testable skills. The curriculum in reading, then, is likely to become dry and mechanical, with little time given to the kinds of rich reading and writing experiences that can hook children on books forever, and little effort made toward developing true literary cultures in the classroom. Reading will become a boring, meaningless, academic performance for most children, although, again, less so for those fortunate enough to have had an early introduction to the pleasures of literature.

In sum, the proposed second grade testing is the result of a pervasive and, in our opinion, mistaken belief that the solutions to perceived low school achievement are more testing, longer hours and more homework, all of which are likely to be felt by children as burdens. These presumed solutions are not only inappropriate for young children but will prove counter-productive for both teaching and learning.

That, then, is the bad news. There is some good news, however, since we do believe it important to keep close track of children's reading ability in the early grades. The Early Literacy Assessment (ELA) developed in the Cambridge, Massachusetts Public Schools, provides an alternative to which the above objections don't pertain and which still meets the need for valid, reliable information on second grade reading.

Second Grade Testing: A Position Paper - Part 2

Brenda S. Engel, Lesley College

Early Literacy Assessment
A. History of the Early Literacy Assessment
The Early Literacy Assessment has evolved over a period of a dozen years. The initial impetus for the assessment came out of the Cambridge-Lesley Literacy Project in the mid-eighties, which introduced the theory and practice of meaning-based reading and writing instruction into the Cambridge schools:

The primary motivation for seeking new methods of assessment has been to obtain more constructive, reliable information than that yielded by traditional standardized testing: information that teachers can then put to direct use in the classroom and that will have a positive effect on actual practice.

Theory, forms and protocols were worked out and further developed in courses and workshops within the school system and at Lesley College. The actual project in the schools was directed by a three-person steering committee working with successive groups of participating specialists and teachers. It was piloted in the Cambridge public schools and revised, from 1994 to 1998, according to teachers' recommendations. In the fall of 1998, the Cambridge School Committee recommended adoption of the Early Literacy Assessment citywide.

B. Content and Implementation
The new methods are simple and direct: essentially, reading is assessed by having children read a whole (though brief) text; writing, by examining samples of children's in-class written work. The oral reading, a Running Record, is recorded and analyzed. Protocols, scoring and details of implementation (such as teacher training) are more complex and were worked out over time.

Stages in reading ability are defined by six developmental levels from early emergent to fluent. A child is assigned a level according to the difficulty of the text he/she is able to negotiate successfully (which includes giving an account of the overall content). Writing is assessed through a parallel developmental matrix similarly divided into six levels.

Expectations for the different grade levels are clear: what levels students should attain, for instance, in grade two. Although the Assessment is administered at specified times of the year, teachers, if they see a need, can do an interim assessment: if a teacher, for example, is uncertain about a particular student's assessed level, she can check it at any point. (In the case of standardized tests, administered once a year by an outside agency, if a teacher is puzzled by a result, she has to wait until the following year to see whether the score might have been influenced by outside factors such as illness, loss of a pet or other common, only indirectly related events.)

Teachers, with support from in-school specialists, are responsible for administering and scoring Running Records of the oral reading samples and for collecting writing samples. The writing samples are scored through a planned exchange among teachers so no one person is scoring the writing of his/her own students. In case of disagreement, a third scoring is done.

C. Results of the Early Literacy Assessment
Outcomes of the Assessment are recorded in both words (levels from "emergent" to "fluent") and numbers (levels 1-6). In addition, analysis of the Running Records themselves yields useful information on how a student is reading - what strategies he or she has available, what kinds of errors he makes, and so on. This information gives the teacher clues to instruction. The same benefits result from the writing sample; the teacher can adjust assignments and instruction to the needs of the students.

The results are accurate in part because of the simplicity, openness and directness of the process: oral reading. The results are also easily verifiable: anyone with a genuine interest in the outcomes can spot-check and confirm a teacher's assessment of the children's oral reading levels.

The recorded outcomes thus serve several purposes: feedback of qualitative information to teachers and students that helps guide teaching and learning; clear, understandable information for families and caretakers ("Maria is able to read and understand this text"); quantified information for administrators, from school principals to the State Department of Education. The results can be aggregated and graphed for individual classrooms, schools, or districts.

D. Benefits of the Early Literacy Assessment
1. Teachers take on the primary responsibility for assessment within a well thought out process with established protocols. This assumption of responsibility will enhance teachers' sense of themselves as capable professionals, able not only to teach but also to think about their teaching and become researchers in their own classrooms. This should result in more thoughtful, intelligent, and effective teaching. In addition, "teaching to the test" in this case means simply teaching reading, i.e., attending to all aspects of students' progress in becoming literate.

2. The assessment process and results are visible and understandable to children. They can see their own learning in terms of the increasing difficulty of the texts they can read, becoming aware of change and progress. Thus children can participate in evaluating their own learning, which is never a possibility with standardized testing. Because they are taking on a measure of responsibility, children are likely to be more motivated towards improving their abilities.

3. Families and caretakers can, like the children, understand the continuum of six levels and see their children move from one level to the next. They can also see when significant progress is not being made and ask for explanations. Thus the ELA will empower parents rather than mystify them with numbers out of context, as is often the case with standardized test results.

4. The ELA has another significant advantage, related to the above point: the way the scores are conceived and presented. The numbered levels represent steps towards fluency. The numbers in most test scores, in contrast, are judgments at a point in time which depend on comparisons with a larger, unseen population. A child could get a high test score in grade two and a low one in grade four, which does not mean, however, that he or she is reading less well in grade four. In fact, it is not always clear to the layman what it does mean.

5. School administrations and school districts will benefit in several ways: they will have aggregatable, reliable information on second grade reading which can be verified through random sampling. Costs of the Early Literacy Assessment are a consideration only in the initial phase of teacher training. Thereafter, needs are minimal, with funds required only for materials (i.e., books) and duplicating (forms), as well as some training of new teachers unfamiliar with the methods and occasional refresher workshops.

The basic and most important benefit of the Early Literacy Assessment lies in the message it conveys to teachers, administrators, families and the children themselves: rather than looking to the outside, to some external agency, to tell them how well children are learning, schools take on this responsibility themselves as part of the educational process. This kind of evaluation has an immediate feedback function, improving education and dignifying teachers and children. School reform then becomes the project of those closest to the action. Responsibility, in addition to accountability, is the key word.