Get It Together, Baby

01 March 2010

As we move forward with an assessment process for 21st century skills, some aspects are straightforward...and some are squirrelly. Constructing a task, for example, follows a fairly regular pattern: there are parts that should be included, ways to check for alignment, and so forth. It's the evaluation portion that isn't always predictable.

Let me give you a little piece that I'm wrestling with: Organize ideas.

What does that mean to you? What qualities come to mind when you think about a student who has organized their ideas? Would your answer be different if I told you to consider only what an organized 8-year-old would look like---or are organized ideas more universal in concept (you're either organized in your thinking or not, regardless of age)?

Suppose you were constructing a rubric for this...what would you include? Does organization need an audience---in other words, does your style only have to make sense to you, or do others have to be able to ferret out the method to your madness? Do there have to be levels of detail or an evident hierarchy, regardless of whether the organization is text-only or mindmappish? Are there aspects of organization that transcend the medium used---if I organize using notecards, a task list in Outlook, or a flowchart in Webspiration, can I still identify the essence of organized ideas?

What would you suggest? What would you like to see?


And In Other News...

23 January 2010

Not only did I have to deal with coming down off of a ScienceOnline 2010 high this week, but I also had to kick off the assessment project that is the primary reason for my employment...all while nursing a rather nasty case of laryngitis. However, I had been looking forward to this week for months, and even sounding like a 12-year-old boy undergoing puberty was not going to stop me from enjoying the work.

There is a stunning group of educators working on this project. You never know, when you put out a call for help, who will respond---and even sifting through a pile of applications is no guarantee that you will get the cream of the crop. I am sure that I am not the only one who has been burned in the past by an applicant who looked beautiful on paper and was nothing but heartbreak in the flesh. This time, however, there was a perfect storm of events, and I have roughly a dozen superstars from all walks of education to help guide this process.

Their presence comes at a time when I need them most---not simply for the task at hand, but as I wrestle with various ideas about educational technology and what happens in a classroom. This week I had to listen to talk about the worthlessness of public schools and teachers from someone who has never worked in one (nor places any value on my lifelong passion for them and experiences within them). Public education is far from perfect, but it is not a useless social experiment either. How and where the most recent advent of educational technology fits remains to be seen. There are plenty of predictions out there about how these tools will transform education in the next 10 years. I don't agree, although I have nothing to base that opinion on other than anecdotal evidence. Over the last 20 years, we've seen computers and the Internet move into classrooms, but I am unconvinced that instruction has undergone any significant changes as a result of these tools. I think more change has been driven by policy than by tools.

I was thinking this week about the various stakeholders in the educational process and their buy-in for educational technology. It's simpler to think about those associated with higher SES; but if I'm working a minimum-wage job at Wal-Mart, should I care that my child is able to create a Voicethread or collaborate on a wiki when those tools have no impact on my world? Is that what I want schools teaching my child? Does a migrant worker care more about whether an interactive whiteboard is in a classroom or whether his child feels safe at school? I don't know the answers to these questions, but I think it is a mistake not to know what these voices would say. Not knowing contributes to the ever-increasing divide between haves and have-nots.

I know that there is a lot of instructional power in educational technology. I know that the tools are engaging for students and can create opportunities for learning that did not previously exist. I also know that they aren't necessary in order to develop students who think critically and creatively...who can collaborate and organize information...who can read and write. As I move forward with the assessment group that I have, I will be looking for some answers as to how we justify change.


Universal Design

10 January 2010

In most public school classrooms in the US, it isn't unusual to have at least one student on an IEP (Individualized Education Plan) or 504 Plan. These plans identify accommodations for students with one or more disabilities so that they may fully participate in the educational program offered at the school. Over the years, I've learned a lot about how to adjust curriculum, instruction, and assessment in the classroom for students with these plans---but I have to admit that until recently, I hadn't thought about accessibility on a large scale. In many circles (both inside and outside of education), the term Universal Design is used to refer to "solutions...that are usable and effective for everyone, not just people with disabilities."

What are the costs and benefits of using technology to achieve Universal Design?

As our state moves to a computer-based testing model, some have pointed out that there are great possibilities for Universal Design. It is relatively simple for all students (not just those who are blind or have reading disabilities) to plug in headphones and listen to the test. Although not currently under discussion, color options for text and graphics, the ability to magnify text, and question layouts that encourage focus are all examples of ways we could change the testing experience for students. (I thought this idea on color-coding for the color blind was intriguing, and I believe the symbols would be useful for nearly all students.) None of this changes the content or structure of the test---only the way it is presented.

I have already had several inquiries from the special education community in our state about our upcoming technology assessments. And why not? They have not always been included in these conversations, perhaps due to the view that students' IEPs could cover any accommodations, as opposed to the test itself being flexible. I cannot guarantee that we will develop assessments that can be used by every possible group, but I will guarantee that Universal Design will be a consideration throughout the process.

With the possibilities that come with technology, there are also costs to consider. One of the most interesting articles I've run across in this regard was in the New York Times this week. It asks, "With New Technologies, Do Blind People Lose More Than They Gain?" The article centers on the illiteracy developing among the blind because Braille is no longer as necessary as it once was. When a computer can read all your text to you, why learn to read yourself? Beyond that is an interesting cultural commentary from within the blind community about the "elite" who use Braille vs. those who don't (and who tend to suffer economically).

This makes me wonder about other possible pitfalls to increasing access and where the balance is. In our zeal to design universally, are we neglecting other considerations along the way?


Makin' a List...Checkin' It Twice

08 January 2010

In my recent search to build a better rubric, I have run across the idea of using a checklist several times. Assessment gurus offered a checklist as an alternative to using a rubric. I wasn't convinced that this was a viable option for me in my current situation. It felt too binary (present/absent)---and if that was going to be the case, why not just give a test made of objective items?

And then I was pointed to an article on National Public Radio (NPR) this week about The Checklist Manifesto by Atul Gawande. Although the book is written by a surgeon about the world of medicine, I am wondering what the applications might be for education.

"Our great struggle in medicine these days is not just with ignorance and uncertainty," Gawande says. "It's also with complexity: how much you have to make sure you have in your head and think about. There are a thousand ways things can go wrong."

At the heart of Gawande's idea is the notion that doctors are human, and that their profession is like any other.

"We miss stuff. We are inconsistent and unreliable because of the complexity of care," he says. So Gawande imported his basic idea from other fields that deal in complex systems.

"I got a chance to visit Boeing and see how they make things work, and over and over again they fall back on checklists," Gawande says. "The pilot's checklist is a crucial component, not just for how you handle takeoff and landing in normal circumstances, but even how you handle a crisis emergency when you only have a couple of minutes to make a critical decision."

This isn't the route medicine has traveled when dealing with complex, demanding situations.

"In surgery the way we handle this is we say, 'You need eight, nine, 10 years of training, you get experience under your belt, and then you go with the instinct and expertise that you've developed over time. You go with your knowledge.' "

Might this be true for the classroom, too? The closest thing to a checklist I have ever seen in education was really more like a flow chart. We had it at an elementary school and used it for developing reading groups for students. If a kid scored X on the latest DIBELS test and the teacher had observed Y, then the kid was placed into Z group and given a particular curriculum. For kids who were behind, the flowchart guided a teacher toward which intervention materials should help eliminate the deficiency. For kids who were at or above standard, there were suggestions as to how to move them forward.
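
If you want to see the bones of that flowchart, here's a minimal sketch in Python of the sort of rule it encoded. (The cut scores, observation, group names, and materials below are hypothetical placeholders---I'm not reproducing the actual chart.)

    # A hypothetical placement rule in the spirit of that reading flowchart.
    # All thresholds, labels, and materials are invented for illustration.
    def place_student(dibels_score, decoding_trouble_observed):
        if dibels_score < 20 and decoding_trouble_observed:
            return "intensive group", "phonics intervention materials"
        elif dibels_score < 40:
            return "strategic group", "guided fluency practice"
        else:
            return "benchmark group", "enrichment and extension materials"

    group, materials = place_student(18, True)
    print(group, materials)  # intensive group phonics intervention materials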

Teachers are diagnosticians, of a sort. We are expected to determine each child's abilities and then tailor our curriculum, instruction, and assessment to meet students' individual needs. Might a checklist of some sort help us along? I understand that every child is unique and that we aren't making widgets---but teachers are juggling either 25 kids across several content areas at elementary or 150+ kids in one or more content areas at secondary. It isn't reasonable to assume that we can be an expert on every student in every subject area. Perhaps a checklist might provide some guidance.

Here is a sample one for surgeons from the World Health Organization (click to embiggen):


What would be included in a version for education? Who are the stakeholders? Would time for other classroom pursuits be freed up if checklists were available? I don't believe that there will ever be a checklist for instruction---just like we don't see a step-by-step sort of thing in the list shown above. This is more of a pre/post idea. The "during" is still quite flexible.

At the other end of the spectrum is the assessment piece, which is where I originally started. I'm still not 100% convinced that a checklist is appropriate for the kind of assessment and evaluation I want to build, but I am no longer going to rule it out. Perhaps by giving teachers another way to identify what a student can and cannot do in terms of using technology (and some ideas about interventions), a large-scale assessment might gain additional functions. This alone makes checklists worth a second look.


Descriptipated

02 January 2010

I've been collecting a variety of rubrics recently, along with various bits and pieces of research and advice on their construction. It's not that I haven't written them before. I just haven't had to write them for standards that are like the one below.
Generate ideas and create original works for personal and group expression using a variety of digital tools.
  • Create products using a combination of text, images, sound, music and video.
  • Generate creative solutions and present ideas.
I've been feeling a little "descriptipated"---that is to say, having trouble cranking out what I think would represent the levels of a rubric for the standard shown above (and others like it). As I mentioned in my last post, this sort of standard reminds me of something you might see in the arts---there is a creative process involved. A few arts rubrics were suggested to me. Here is an example of one:


This type of rubric makes me a little sad. Why? First of all, it's about quantity---not quality. Even if every child is not a Monet or Picasso, I would like to think that their understanding of the basic principles should be assessed, as opposed to how many of the principles show up in the product. Under this rubric, a student product that demonstrates only 4 attributes at an expert level counts for less than a product that shows 7 poorly executed attributes---because, hey, 7 is better than 4, right? I think there's something wrong with this approach.
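
To put numbers on the absurdity, here's a toy comparison of count-based and quality-based scoring (the ratings are invented, purely for illustration):

    # Invented attribute ratings on a 0-4 scale (4 = expert execution).
    student_a = [4, 4, 4, 4]           # 4 attributes, all at expert level
    student_b = [1, 1, 1, 1, 1, 1, 1]  # 7 attributes, all poorly executed

    def count_score(ratings):
        # Quantity rubric: how many attributes show up in the product?
        return len(ratings)

    def quality_score(ratings):
        # Quality rubric: how well are the attributes executed, on average?
        return sum(ratings) / len(ratings)

    print(count_score(student_a), count_score(student_b))      # 4 7     -> B "wins"
    print(quality_score(student_a), quality_score(student_b))  # 4.0 1.0 -> A wins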

Although I have not included this information with the graphic above, a score of "3" is at standard for this product (as described in its directions). I would like to see detailed descriptors for every level---but if you're only going to write one, make it at the standard, not at Level 4. I'm also a little concerned about the number of standards each part of the rubric ostensibly addresses. How do you give effective feedback to kids when a holistic rubric contains a melting pot of standards?

I have to say that quite a few of the rubrics I'm running across suffer from one or more similar issues. This is especially bothersome when I see things like this:

Why? Because there is no requirement that the student actually consider the validity of the source. If s/he comes up with any three sources and lists the basic information...it's "Excellent." We are missing opportunities for asking for critical thinking from our students in favor of something more rote. Apparently, reading three things is good enough (regardless of veracity), as long as you include the title, author, type of source, and date in a list. I have other beefs with this rubric, including the "Minimal" to "Excellent" labels and the whole "passing/not passing" thing at the top, but that is another rant for another time.

These sorts of examples are clogging up my thinking. I've been needing a healthy dose of brain fiber...a mental cleanse and a new starting point for writing descriptors. What I'm starting with now is the question: What is involved in creating a multimedia product? There's likely some research...some understanding of which tool is best for developing the product you're after (e.g., ppt, Voicethread, wiki, etc.)...the ability to make original content (as opposed to just pulling it from others into a single product)...a sense of how to use the various elements (graphics, text, audio) to enhance the overall message. Now I can start thinking about how a beginner might approach such a task (probably borrows all content) vs. an expert (records own audio and video) and use these to write some descriptors. It's not about quantities---did the ppt have 10 slides with three bullet points each? are there 3 graphics and two outbound links on the webpage?---but the actual characteristics of a performance. This is admittedly a much more difficult thing to do, but I think it will be more meaningful in the end in terms of the kinds of feedback students get and the instructional steps teachers can take next.

Onward we will go with this task this month. I will share what I am allowed to float along the way. I'm hoping not to feel the mental bloat of descriptipation much longer.


Crossing the Rubricon

28 December 2009

Next month, an intrepid group of educators from around the state will be joining me to help construct our assessments for Educational Technology. While I can't say much about them individually (oh, those pesky confidentiality agreements...), I can say that collectively, they are a "dream team" of teachers from all walks of K-12. They have significant experience with developing, rangefinding, and scoring large-scale assessments. A few are nationally recognized for their contributions to the profession. I am totally stoked about meeting them and working with them over the next eighteen months, in part because we have some big issues to hash out. I will share what I can along the way as I will be needing your help, too.

As I plot, plan, and prepare for this project, I am struggling with thinking about how the rubrics will shake out. Take a standard like this:
Generate ideas and create original works for personal and group expression using a variety of digital tools.
  • Create products using a combination of text, images, sound, music and video.
  • Generate creative solutions and present ideas.
This standard is not about a tool. We aren't interested in whether or not a student can make a PowerPoint presentation. This is a little like asking a student to create a picture. The kid might choose watercolors or charcoal or pastels or pen and ink or...the list goes on. The same is true for digital products. A student might choose PowerPoint, but they could also choose Voicethread or Zuiprezi or GoogleApps or...the list goes on. So part of the challenge is to develop a way to score student products when there are no parameters around the media used.

The bigger challenge, however, is that these standards don't fit nicely into a rubric. I have been trying for a while, and you know what? I've decided not to try anymore, at least for now. If I am trying to make a square peg fit in a round hole---doesn't it make more sense to go find the square hole rather than keep pounding away at the round one in impotent frustration? (Okay, that sounds naughtier than intended.)

What are the alternatives to using a rubric to evaluate student performance tasks? Are there other scales of performance out there? I've been looking around...and there isn't much. The Council of Chief State School Officers (CCSSO) is working on a project called EdSteps that makes some attempts to do so, but they are some distance from showing off their efforts.

Or maybe we just need to get back to the roots of rubric-ness. I was reading something recently that reminded me that a Level One performance is not about identifying the worst characteristics of a product or listing what is lacking---it is about describing what the work of a beginner looks like. This is an excellent perspective. I know that I have been guilty of building a rubric by identifying "at standard" performance and then taking away from it to get to Level One. Instead, the approach should be individual for each level: here is what a student at standard looks like...and here is what a student who is just beginning to engage with the standard looks like. It is more about identifying what is present, rather than what is absent.

I'm glad that I will have a constellation of superstars joining me in a few weeks for some real-time conversation about these issues. However, for those of you reading this who have your own ideas about how you would evaluate standards like the one described above, leave a comment for me to pass along. Suppose you could create whatever system you wanted to score student performance---would it include rubrics? Or are there other/better ways?


Taskmaster

14 October 2009

In the beginning, there was Bloom's Taxonomy for categorizing types of thinking. And it was---and continues to be---good. It provides a framework for educators to consider the rigor of the work provided to students. Generally speaking, Bloom's tends to be all about the verbs: identify, describe, explain, state, choose, evaluate, and so on.

But the assignments we provide in classrooms are more than verbs. They are also about objects: either the tasks we assign or the items students produce. And this is where Norman Webb with his Depth of Knowledge framework offers an alternative to Bloom's arrangement. It is a more holistic look at a learning target before determining cognitive demand.

For example, "identify" doesn't have to be part of the slacker Knowledge group of Bloom's. It would be if I ask a kid to identify the location of Ireland on a map of Europe. But, if I ask a student to identify a strategy which might resolve the civil conflict in Ireland, I've asked for something far more involved...something beyond mere Knowledge.

I am thinking about using Webb with the new standards for Educational Technology. Some targets are simple to assign to a classification (Recall, Skill/Concept, Strategic Thinking, Extended Thinking)...but I am struggling with others. For example, "Participate in an online community to understand a local or global issue." Is this a Level One target---because "understand a local or global issue" is the only cognitive piece represented...or is there some amount of demand on the student implied by "Participat[ing] in an online community..."?

How does one classify those targets and tasks involving intangibles like participation? Should these be included? Participation is one of those classroom values which is nearly impossible to standardize. What it looks like from grade to grade, teacher to teacher, and content area to content area can be very different. And while we might come to some sort of consensus about qualities of "good" participation, I still have to ask if there is any cognitive demand involved in the process. Could you write a task for it?

I don't expect any sort of elegant resolution to these questions. I may have to set them aside for now and concentrate on other issues. But if you have some insight into how we determine the depth of thinking associated with participating, engaging, and/or collaborating, I hope you'll share it in the comments.


Curiouser and Curiouser

08 October 2009

You might recall that I am on the hunt for rubrics and other tools that support the evaluation of student skills in educational technology (and/or "21st century learning"). In my opinion, a lot of the problem with developing these sorts of things is that we are trying to capture and deliver feedback on skills that defy quantification. Can one consistently rate how well a student innovates? collaborates? thinks critically or creatively?

I am not any closer to developing these kinds of resources; however, two pieces I read last week are prodding my thinking along. The first came from the Harvard Business Review: an interview with the authors of a "six-year study surveying 3,000 creative executives and conducting an additional 500 individual interviews" to identify five discovery skills these innovators have in common: associating, questioning, observing, experimenting, and networking. Of these, the ability to associate (make connections between disparate pieces of information) was seen as the most important; but it's really the synergy among these skills that leads to inquisitiveness.
We think there are far more discovery driven people in companies than anyone realizes. We've found that 15% of executives are deeply innovative, meaning they've invented a new product or started an innovative venture. But the problem is that even the most creative people are often careful about asking questions for fear of looking stupid, or because they know the organization won't value it...

If you look at 4-year-olds, they are constantly asking questions and wondering how things work. But by the time they are 6 ½ years old they stop asking questions because they quickly learn that teachers value the right answers more than provocative questions. High school students rarely show inquisitiveness. And by the time they're grown up and are in corporate settings, they have already had the curiosity drummed out of them. 80% of executives spend less than 20% of their time on discovering new ideas. Unless, of course, they work for a company like Apple or Google.

Is this true for schools---both the adults within them, as well as the students? As much as I hate to admit it, we do drum out curiosity and value conformity over time. I don't know that technology will change that, but I do think it will return some of the power of learning to students. The more tools a student has at hand to demonstrate their knowledge, the greater value we place on variety. That being said, not everyone is going to grow up to be Steve Jobs...but not everyone will have to grow up to be Bubba, either.

More intriguing was this image from Inverting Bloom's Taxonomy by Sam Wineburg and Jack Schneider:


They report on a task given to two groups of history students. One group was made up of AP US History students...the other of graduate students in the field of history. Each participant was provided "a document and asked...to read it 'historically,' articulating what he thought the piece was about, raising questions about its historical circumstances, and sharing insights about the text...The document was a proclamation by President Benjamin Harrison in 1892."

As you might imagine, the two groups of students approached the task differently. AP students "marshaled background knowledge about Columbus and worked [their] way toward the Bloomian peak, eventually challenging President Harrison’s praise for Columbus with his own critical alternative. [The] response, though unpolished and in need of elaboration, seems like critical thinking. And that’s how the teachers we interviewed generally saw it." As for the graduate students...

From the start, it was clear what the young historians were doing differently. As one began his reading: “OK, it’s 1892.”

Our high school student Jacob knew the story of Columbus. But he didn’t know how to read a document as the product of a particular time and place. To the historians, critical thinking didn’t mean assembling facts and passing judgment; it meant determining what questions to ask in order to generate new knowledge.

Why, the young historians wanted to know, did Harrison make this particular declaration at this particular moment? Over and over, as they puzzled through the document, they asked “why?” In our dozens of interviews with high school students, not a single one ever did so.

Light bulbs soon started popping for the young historians. “The 1890s, the beginning of the Progressive Era, end of the century, closing of the frontier, Frederick Jackson Turner, you’ve got the Columbian Exposition coming up the following year. Biggest wave of immigration in U.S. history.” This one was on the scent. And then …

“That’s it!”

At the end of the 19th century, America was getting a makeover. Seemingly overnight, immigration had transformed the country’s look, bringing “Slavs,” “Alpines,” “Hebrews,” “Iberics,” and “Mediterraneans” to the United States. Among these newcomers were millions of Irish and Italian immigrants who formed a new political interest group—urban Catholics. Harrison, in honoring Columbus, was pandering. “Discovery Day” appealed to millions of new voters by bringing them, along with a hero who was one of their own, into the fold.

Mystery solved.

Now that’s critical thinking...

To the historians, questions began at the base of the pyramid: “What am I looking at?” one asked. “A diary? A secret communiqué? A government pronouncement?” They wanted to know when it was written and what else was going on at the time. For them, critical thinking meant determining the knowledge they needed to better understand the document and its time. Faced with something unfamiliar, they framed questions that would help them understand the fullness of the past. They looked up from the text curious, puzzled, and provoked. They ended their reading with new questions, ready to learn. The high school students, on the other hand, typically encountered this document and issued judgments. In doing so, they closed the book on learning.

Does this illustrate how curiosity gets shut down in some classrooms? In our zeal to teach facts and figures, have we emphasized the right answer too much...and the right question not enough?

While I may be no closer to knowing how to evaluate curiosity and innovation in the classroom, I am appreciative of these reminders to build in supports for these skills along the way. Perhaps the instructional resources I gather and share will be grounded there. Maybe the answer to evaluating students' use of instructional technology will be the questions they create.


Measuring Up

28 September 2009

Catching Up or Leading the Way is the most recent tome sent to me through my ASCD membership. Written by Yong Zhao, who was educated under the Chinese system, the book examines the whole "grass is always greener" machinations happening between the U.S. and China/India when it comes to education. In the West, we tend to believe that the hours, discipline, and testing present in the East represent a better system. After all, the Chinese are kicking ass and chewing bubble gum when it comes to international comparisons of student achievement. Zhao points out that the Chinese, on the other hand, are working to implement a more American approach because it allows for a workforce with more critical and creative thinking skills.

If you've been around the educational block, then the early chapters of the book will hold no surprises for you. Zhao does a nice job of summarizing the current American NCLB situation and how we got here. I'm curious to see where he goes from here in promoting "what schools can---and must---do to meet the challenges and opportunities brought about by globalization and technology."

What has me intrigued at this point in the book is Zhao's comparison between the benefits of biological diversity and diversity of talent in the workforce. He mentions the strength of populations which are not genetically identical. (Go, sex, go!) They are able to better adapt to changing environments. So, too, can countries adapt to changing economic times. I find this concept interesting, but Zhao has left out two important considerations.

First of all, while sexual reproduction results in variation and adaptability, asexual reproduction also has advantages. My students could never get past the "What fun would that be?" objection; however, the benefits include being able to become established in a new area quickly and jack up your population numbers in short order. You also save a lot of energy this way. No need for pesky mating dances or other displays. People who think lack of diversity is a species killer obviously haven't had to deal with dandelions in their yards.

If we take this a step further and try to place it into Zhao's comparison between genetics and schools/workers, what does that get us? Is the standards-based education movement the amoeba of models?

Which brings me to my second thought on all of this. The argument that Zhao is making is that the standards movement is stamping out individuality and diversity of thinking---that in our bid to become more China-like in our systems we are losing the one thing that makes American education different: the belief in the individual...the can do. I believe there is some truth there---that the constant comparison by the US to other countries is leading to more of a focus on what we aren't, as opposed to building on strengths. An emphasis on testing is not a replacement for an emphasis on thinking. However, these are outcomes and are not the only possibilities. I also think that most teachers would claim that the standards movement is eliminating individuality and creativity in their instruction---not student thinking.

I do not believe that "[insert country of choice which outperforms the US on international comparisons of student achievement] does it this way, therefore it must be the better way to teach X" is the right starting place. It's knee-jerk and not purposeful. (And it makes about as much sense as the Obama administration saying that we should lengthen the school day/year because that's what other countries do. Talk to me about what's best for kids, would you?) I do, however, think that the standards-based movement has the ability to ensure that students end up with choices. A student who is not expected to read, do math, write, and/or think scientifically ends up with very few choices as an adult. That is not the kind of diversity we're after, and it will do nothing to break the cycle of poverty.

Zhao is right that academic tests are not the only measure of a student's proficiency and talents; but standards are not inherently evil, and not all testing is bad. It's what we do with them and why we do it that makes the difference. In the end, I keep coming back to instruction---that critical link between standards and assessment, and the aspect most often ignored. Instruction is where the magic happens with learning. Instruction is where the diversity of both teachers and students is honored. And until it becomes part of the conversation, the rest of this discussion is no different than a "Mine's bigger than yours!" argument among nations. Everyone knows that it's not the size of your (test scores; population) that matters, it's what you do with it. What's your position?


Modern Problems

21 September 2009

Part of my job includes guiding the fulfillment of the following legislative requirement:
Within funds specifically appropriated therefor, the superintendent shall obtain or develop education technology assessments that may be administered in the elementary, middle, and high school grades to assess the essential academic learning requirements for technology. The assessments shall be designed to be classroom or project-based so that they can be embedded in classroom instruction and be administered and scored by school staff throughout the regular school year using consistent scoring criteria and procedures. By the 2010-11 school year, these assessments shall be made available to school districts for the districts' voluntary use.
Kind of exciting, don't you think? I do. My mind has been abuzz with all sorts of ways these "classroom or project-based" assessments could look. (The tech standards are here, in case you're interested in seeing what we will attempt to assess.) My goal is to make sure that these assessments rock so hard that teachers will just have to have them, even though use is voluntary. Most of my focus right now is on gathering resources that might be useful for the task ahead. Some things I've learned along the way:
  • NCLB requires that every school with 8th graders report a measure of those students' technology literacy. This does not mean a formal assessment is required---most states are sliding along using a simple survey or reporting tool.
  • According to the most recent version of Education Week's annual Technology Counts report, only 13 states had some sort of assessment of technology skills. Of those, 6 are using a canned online test, 4 have their own online versions of a test (I couldn't see what was behind the curtain), and 3 are a complete mystery---nary a shred of evidence on the state department of education websites (most of which are painful, at best, to navigate).
  • Bottom line: I'm hanging out on my own here. Sigh.
To that end, I've been scouring the interwebs, looking for any classroom examples of assessments and rubrics targeting educational technology and/or "21st Century Learning Skills." The good news is that there are lots of nice examples of assessments/projects (unlike the NY ones I shared last week). The bad news is that the rubrics are useless in nearly every case. Keep in mind that I am required by law to develop something with "consistent scoring criteria and procedures."



The problem is that most projects which ostensibly use educational technology end up with rubrics that assess other things, such as writing or speaking skills. These rubrics aren't bad. I have no beef with them, other than that they supply no way to measure students' understanding and use of technology---the real targets we're after. I find this gap not only frustrating but careless. With all the passion being poured into the educational mindset about 21st century skills---why doesn't anyone at least make some sort of effort to measure them? If we believe that the sorts of tools and thinking that occur in a "modern" learning environment are important...why do we have no way to provide feedback to students about them? I don't buy the argument that only the product matters. When we say we are placing value on innovation and creativity using educational technology---then there must be some better guidance than "I'll know it when I see it."

I do think that I'm on the right track with rubrics that incorporate thinking skills or focus on the qualities of educational technology products (e.g., What makes for a good podcast?), but this all feels like very new territory. That is odd, given that I am more or less late to this game. Many others have been focusing on educational technology far longer and more deeply than I have. I have no doubts whatsoever as to the high quality of lessons and instruction out there. I just wonder if kids are getting the scores and feedback that they should.


Kickin' It Old School

17 September 2009

I've been on the hunt recently for high-quality examples of assessments and rubrics for educational technology. So far, these items appear to be as rare as "ghosts, goblins, virgins, and other mythical creatures." I've found several multiple-choice tests for tech literacy. Yawn. It's far more amusing to find examples of assessments past their expiration date.

Consider Exhibits A and B (shown below) from Standard 5 of the Math, Science, and Technology curriculum from New York. But before we get there, note the cautionary tale posted on the website:
Some of the learning experiences sections are very graphically intensive in order to show the detail of student work. As an example, the 28 Learning Standards file (1310K) took 10.5 minutes to download on a 486/66 PC using a 28.8 modem and Windows 3.11. It took 35 minutes to print on a Canon 600 InkJet printer. It took less than 5 minutes on a laser printer. Your experiences may vary. If you have lower end equipment, your experience will be considerably slower. Many older printers with limited graphics capabilities may not be able to print these sections. Other printers may run out of memory. You may be able to get around this by printing in smaller pieces.
Yes, friends...these tech lessons/targets/assessments are brought to you fresh from the year 1996. They are vintage tasks. Antiques, as it were.

So, assuming that you've dusted off your 28.8k modem...here are a couple of things for your students to do.


I got the giggles with this one. A coworker was convinced the book title was "Moose Code," until I corrected her. Is that a walkie talkie I see in the top set of, um, art? And keyboard keys attempting to escape the tech ghetto they're in? Why is there a ballpoint pen in the same set as the bongos? Do you think the floppy disk bay in the computer is for a 3.5" disk...or is really old school and awaiting a 5.25" version? I love the lines around the clip art. Somebody really did physically cut and paste these pictures. (Wonder if the images are/were copyright-free?)

But the best was yet to come:


If you can't read the task (and don't want to "click to embiggen" the graphic), you're missing the following suggestion: "...design a plan for the construction of a homemade radio speaker for the eight ohm speaker jack on an inexpensive transistor radio or cassette recorder."

We have a veritable museum of technology options.

Hey, I understand that websites have a tendency to grow beyond their original borders. It's easy to forget what pages are live and the paths they take. There's probably a lot of information from 1996 still floating around on state department of education websites.



But do you see what I see in the bottom right corner of the page? It says "Last Updated: May 27, 2009." Someone looked at this a few months ago and considered it current enough to keep. Does this mean these standards and assessments are still in use? Please, NY, please tell me you have something less than 13 years old for 13-year-olds to work with...something a bit less old school.


Speaking of Unjust Rewards

30 June 2009

A few days ago, I posted about the continuing saga of paying middle school students for "good" scores on standardized tests. Here's another take on the issue:

For as long as students have had to take state assessment tests, middle school students have been bombing on them.

Even students who scored well in elementary school and those who go on to ace the high school Regents exams tend to get caught in the middle school slump.

Locally, a growing number of school administrators think they have come up with a solution: bribery.

Some schools base final exam grades on students’ scores on the state assessments. Others exempt students who score a 3 or 4 on a state test—on a scale of 1 to 4—from having to take the final exam in a subject.

For students at Hamburg Middle School, that means not having to come to school on exam day.

“Telling an eighth-grader you get an extra day off is a pretty good motivator,” said Gregg J. Davis, assistant superintendent of information services in the Hamburg School District.

“I’ve seen the scores go up, so there’s a lot of positives in that. Three years ago, I think our eighth-grade scores were in the 60s. Now they’re in the 80s,” he said of the percentage of students scoring at proficiency. “That’s a pretty good leap.”

Other schools offer equally glowing reports about their students’ improvements.

But some experts say the results don’t justify using student scores in a way the state never intended.

“The state assessments were designed to gauge student progress toward the [state learning] standards, not as individual student achievement measures,” said Ann K. Lupo, an assessment consultant to the state Education Department who teaches at Buffalo State College.

“The assessments are being debased if used in this fashion, contrary to their intent. The English language arts test is given in January, and the math test is in March — not at the end of the year, on purpose, to discourage using them as finals."...

Local school officials acknowledge that they’re using the state tests in a way that was never intended.

But by the time students reach eighth grade, the educators say, they’ve realized that there’s not much of a consequence for them if they get a low score on the state assessments. Generally, the worst that happens is that students with low scores are assigned extra help in whatever subjects they’re struggling with.

For schools, teachers and administrators, though, low scores can mean much more. If enough students do poorly on a test, a school can find itself on one of the state’s warning lists, a designation that can haunt a school for years.

Educators complain that the media have contributed to the situation by publishing scores released by the state Education Department and comparing schools, based on the percentage of students who pass each test.

“A lot of the fiddling around with how to use scores, and creating incentives for students to do well, is pure politics,” Lupo said. “Districts are very, very concerned not only about student performance, but how they will be perceived when the scores hit the paper.”...

“While giving them a break from not taking a final is a feel-good thing, I don’t know that it gets to the crux of the issue — how do I help you improve your knowledge base and your skills?” he said. “As a district, we don’t believe grades motivate students. We have to find other ways to motivate students.”

I don't believe that standardized tests are evil; however, I do think that their results can be used in unreasonable ways. For me, the "unreasonable" part here is that the adults are admitting they are using the carrot of a day off/no final to boost public perception of the school via test scores. It's not about student learning at all. And we can pass the buck up the food chain---perhaps it's really the government's fault via NCLB, etc.---but at the end of the day, school administrators are making a choice that they don't have to make. I'm not willing to absolve them of using children.

Standardized tests should not be looked at as being all that (and a bag of chips), but I also think that school administrators are diminishing the usefulness of the information for students and parents. If a student doesn't do well on the state assessment...then they get another test---where are the built-in supports and interventions? How does "Because you failed it the first time, we're going to let you fail it again" help families understand what is happening in terms of learning?

This kind of testing is not going to go away. I will not be surprised if NCLB is renamed (and retooled), but standardized tests are here to stay. We just need to find a way to repurpose them.


No Thank You, I'm Full

29 March 2009

I went to a conference this past week where there were lots of people like me who wanted to geek out about assessment, grading practices/evaluation, and data. I understand that most people might not find such an event to be their cup of tea; but for me, this was about as good as it gets. In my day-to-day work, I don't get to have these kinds of conversations---and they are the ones I'm most interested in having right now. So this little convention will buoy me up for a few months.

There are two items that I am still chewing on. See what you think...

Is there such a thing as "too much assessment"? I haven't completely decided. I really think it depends on how the results are going to be used. If we're just talking about a classroom teacher monitoring the learning of his/her students---then, I don't think you can overassess. As a teacher, you are constantly gathering information and responding to students. It's the way the classroom works. But step outside of that, and my answer changes. When we start talking about district assessments, diagnostic tests (e.g. DIBELS), and/or state tests---then, I do think it's possible to go overboard in a hurry. Because here, teachers/kids are often not the users of that information. It's not as meaningful and I think there's a good argument to be made about these sorts of assessments taking away from instructional time. However, would we pay as much attention to equity issues without these?

My second thing to chew on is about the focus of teacher collaboration time. In December, I heard a national expert in assessment state that he thinks the development of department/district assessments is a waste of time because the sample size will never be large enough to achieve any reliability or validity. Even at the state level, developing high-quality items takes a lot of time and money...so why waste precious resources at the district level on such things? And then there was the national expert last week who had a very different view---perhaps the more "popular" one these days. His idea is that the process of planning school and/or district assessments provides rich opportunities for conversations about curriculum and instruction. So perhaps validity and reliability don't matter as much, because ending up with the highest-quality assessment isn't the point. I find my own thoughts somewhere in between these two views. I do agree that collaborative conversations are important, but perhaps it's not the assessment that needs to be the focus. Perhaps it's just looking at student work that matters. Even if teachers don't give the same items to students, couldn't conversations about what the work shows and what instructional practices were used be just as rich?

I'll continue to try to digest these two chunks of information. Right now, though, my brain is full.


Is That Your Final Answer?

05 January 2009

Every school I've worked in has had a "finals week." Sadly, I have to admit that until now, I never stopped to consider why they do this.

What is the purpose of a course final?

If assessment informs instruction---how would a final accomplish this? The class is over. It isn't as if the teacher can use the information for remediation purposes.

If the assessment is "practice" for college (for high school kids) or high school (for junior high tots), then is that a suitable purpose? Most kids aren't going to college...and while one might argue that a test may be required here and there for vocational certifications, can we really claim that taking a final is a life skill?

If we claim that it is a rite of passage of some sort, a tradition, or "just the way it is," are those valid reasons?

Last year, I used The Final as a last-ditch opportunity for kids. They identified which standards still needed mastery and then addressed only those. They could choose an in-class opportunity on the allotted day of the final...or identify an alternative assessment that was due at the appointed time. All of this could only help them.

Is there a legitimate educational purpose (even for colleges) to have a final exam for a class?


Good Things Come to Those Who Wait

03 January 2009

At long last, it finally happened. I did not have to engage in anything morally ambiguous, as it turned out. Some patience and good fortune scored me a beta testing login to Zuiprezi. I stayed up well past my bedtime last night to learn and play a bit. I took a chunk of a presentation on grading and used it as source material to see how this new software might be used.


So far, I'm generally satisfied. The interface (the "paw"-looking thing in the upper lefthand corner) is easy to navigate. It's just a more visual way to display the contents of a toolbar than what we typically have. I like being able to easily resize text and graphics...position things however I like...and then connect the pieces in any sequence. The only drawback I can see at this point is that any graphics which aren't of very high resolution appear quite pixelated when the presentation is running---far more so than in PowerPoint. I won't say that the screenshot above represents fine design, but for a first attempt, I'm feeling pretty good about the possibilities.

I really hope that the developers of this tool are able to make a go of things, considering current economic conditions. I have to say that I would definitely be willing to pay for access. I think it's an excellent tool with some great potential for the classroom. My plan is to first use it for a grant-writing workshop I'll be doing in the coming weeks. If this style of presentation is better suited to text, then perhaps this will be the perfect opportunity to give things a try.

Mind you, my job assignment is shifting a bit. In fact, I was cc'ed on an e-mail yesterday requesting that the keepers of the website add my credentials to the "Science Ass Main Page." I didn't have the heart to tell them that I'm really only half-ass(essment), according to my contract. I am grateful to have better job security, a raise, and access to better benefits. So if that means doing some big ass science, count me in. And with a tool like Zuiprezi in my back pocket, perhaps some of the other good things I've been waiting for will appear.


AWOL

28 September 2008

I really don't know where the last week has gone. It's a blur of meetings, road-warrior activities, and the occasional stab at sleep---some of it interesting, but mostly not. In other words, it doesn't make for very good blog fodder. And while I've never been 100% sure what purpose this blog would serve, I know that I don't want it to be simply a catalog of each day's minutiae. Most of the time I'm not interested in it, and I don't think anyone else would be, either. Hence, I've been AWOL from the blog.

Amongst the hodgepodge of my days, I have been trying to ponder something a bit larger. I'm just grasping at it for now, but perhaps my always astute Readers might have some direction for me.
What is the purpose in teaching science in public schools?
I think that when I was in the classroom, the answer to this question was much clearer to me. But from the level I operate at now, the answer is mushy. That comes from the difference between being someone trying to shape policy and my old life, where I just had to carry it out; however, I can't help but think that at a state or national agency, there is an even greater need for a clear vision. The reason I am wrestling with this now more than ever comes down to the issue of accountability. Here are the two driving questions:
Should adults and students in the public schools be held accountable for what students learn in science? If so, what should that accountability look like?
Let's talk about kids for a moment. If we hold students accountable, then what should that look like? Is earning credit for high school courses enough---and if so, how many credits? Should we direct what kinds of courses would be eligible, or leave it up to school districts? If we increase requirements, what do we do about schools which don't have enough lab space or can't find highly qualified teachers? Do we, instead, insist on using standardized tests as a measure for kids? What does this mean if the number of credits required for graduation would be completed after the test? Do we need a second accountability factor? I've been pondering what types of accountability might make sense and how those might be implemented and monitored. I actually like our standardized test for science in this state---but I can't say that I like that it's tied to graduation (or will be in a few years). When I read something like What Does Educational Testing Really Tell Us? over on Eduwonkette's blog, I can't help but nod in agreement...and yet I'm hard-pressed to suggest alternatives.

As for adults, that's a more difficult issue in some ways. At my place of work, we've had a few discussions about the time students (especially in the elementary grades) have to engage with science content. It's no secret that with the increased pressure on schools to raise achievement in math and reading, science and other content areas are being squeezed out. (see previous posts on studies of time spent on elementary science and its push-pull with literacy) But this brings up another question: How much time is "enough" for each content area? I know that the answer really isn't simple---every child's capabilities are different and every school serves a different population. However, can we make some general observations? Education Week seems to think we might be able to draw a few conclusions on the Effects of Extra Time for Learning. Yes, quantity can help, but quality is more important. "More" does not automatically equal "Better."

The heart of this whole problem is that without an accountability measure (e.g. AYP), schools won't teach (very much) science to kids...which gets me back to my original question: What is our purpose? I think that if this was well-defined, it would be easier to determine whether or not accountability should be required and what that looks like. Instead, we're trying to figure out all of these things at once. It seems disrespectful not to give each part of this issue its own bit of attention.

So, if things have been a bit quiet around ye olde blog, just know that I'm trying to find a way to balance the noise and pressure of my day with what I think my job should really be about. What do you think I should be doing?


Can I Get A Ruling from the Judges?

23 June 2008

On my most recent certification test, there was a question that asked something akin to "Which planet has the longest year?" The answer choices were Mercury, Venus, Earth, and Pluto.

As I see it, the "right" answer to this question comes down to two things: (1) When was this question added to the test bank? and (2) What is the definition of "planet" being used here?

If the question is pre-2006 vintage (which is entirely possible, considering the time frame of test development), then "Pluto" is the right answer---its year runs about 248 Earth years. If not, then Earth would be the best choice, since Pluto's demotion to "dwarf planet" leaves Earth's 365 days to beat Venus's 225 and Mercury's 88.

For the record, I picked Pluto. I figured that what the test-makers were really after was whether or not I understood the relationship between a planet's distance from the Sun and the length of its year. However, I'm left wondering if this is one of those times when I should have reported this "bad question" to the testing service. Is it possible that some test-taker somewhere is going to be denied passage because s/he read the question differently? "Earth" is technically the correct answer in this day and age...but I doubt that it is the one the machine will want to see bubbled in on the score sheet.
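For the curious, here's a quick back-of-the-envelope check of that relationship. Kepler's third law says T^2 = a^3 when the period T is measured in Earth years and the semi-major axis a in AU. A minimal sketch in Python, using rounded textbook values for the orbits:

    # Kepler's third law: T^2 = a^3, with T in Earth years and a in AU.
    # Semi-major axes are rounded textbook values.
    semi_major_axis_au = {
        "Mercury": 0.39,
        "Venus": 0.72,
        "Earth": 1.00,
        "Pluto": 39.5,
    }

    for body, a in semi_major_axis_au.items():
        period = a ** 1.5  # T = a^(3/2)
        print(f"{body}: ~{period:.2f} Earth years per orbit")

Pluto comes out around 248 Earth years per trip around the Sun---farther out means a longer year, so Pluto wins handily...if it still counts as a planet.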

So, should I...

a. Contact the testing service and let them know about the question
b. Assume someone has already alerted them to the issue
c. Let some schmo whose score may hinge on this question deal with the problem
d. Do nothing because I'm not the schmo

Judges?

Labels:

Two for Two

22 June 2008

I spent a chunk of yesterday afternoon taking my other elementary certification test---the backdoor test. Assuming that I passed this one (and I think I did well enough for that, although I certainly won't be garnering any accolades for high scoring), then the rest of the dominos can fall and I will be a genuine elementary teacher. Truth be told, this is little more than a line on my resume; however, I am of a mind that whatever one can do to get Opportunity knocking is a good thing.

There were several interesting points of comparison between yesterday's test (the Texas version) and the Washington one from a week ago. Both are developed by ETS, so the script the test proctors used was identical, as were the printed directions and answer sheets. There was even a math question that was the same on both tests. Plenty of opportunities for deja vu. Yesterday's test was more in depth, requiring pedagogical knowledge in addition to content knowledge. I'm still not convinced that a test can really allow anyone to determine whether or not someone can teach. Take me, for example. Just because I know a bit about phonemes and decodable books does not mean I'm ready for someone to toss me in with a group of kindergartners to teach them to read.

I'm wondering if teacher certification shouldn't be a bit more like those graduated drivers' licenses some states are using with teenagers now. The number and age of passengers you can have in your car change as you log more actual road practice. Perhaps teachers need some type of "learner's permit" and would gradually work up to a full-fledged cert on down the road. Along the way, there would be opportunities for mentorship and co-teaching.

At the moment, I'm just relieved that the tests are done, that I feel like I'm "two for two" on doing well (enough), and summer is on the horizon.

Labels: ,

I Submit to You

08 December 2007

For those of you who read the header of this post and immediately imagined fuzzy handcuffs, you're going to be disappointed with the rest of this. Just click the "Back" button on your browser and move on with your day.

The header is taken from a presentation by Rick Stiggins. He used the phrase no fewer than 11 times in one hour. I was more of a purist in tracking this than the people at the next table---who also counted all of his "I suggest to you..." and similar sentence starters and therefore had a much higher tally at the end of the speech. Stiggy has not been a fav of mine for a long time (long story), but his keynote yesterday morning really ticked me off.

His basic call to arms was around report cards. In his mind they are hopeless and outdated because they don't communicate the depth and quality of information all possible stakeholders might need. Grades and report cards are dinosaurs.

Okay---I would agree that a single report card is highly unlikely to tell kids, parents, teachers, admins, community members, etc. everything possible about where a student is in terms of achievement. Ricky-baby, they aren't meant to do so. A report card is one of myriad ways schools communicate with stakeholders about student progress. Every time a teacher provides supportive feedback, every time kids peer edit work, every time a teacher calls home to a parent or writes a letter of recommendation---communication happens. There is no need for a report card to be everything to everyone. (I shiver to picture what it would look like if it were.) Maybe the "communication system" Stiggy was wailing about needing (he really does like to yell into the microphone) is already available. We have way more data at our fingertips than what is found in report cards alone.

My second issue with His Stigginsness was his view of motivation. One of his basic assumptions is that every classroom environment is a performance environment---and he made not a single reference to mastery-oriented classrooms. Achievement motivation theory has been the preeminent framework for studying student motivation for more than 20 years. Don't stand up there and claim some expertise in assessment and motivation and then give your listeners only half the picture because the other half undermines the point you want to make. Maybe it isn't a question of what report cards do or don't communicate. Maybe the big picture is really about what happens in the classroom environment on a daily basis that supports student learning. "Assessment for Learning" is all well and good---but you have to give it a context.

So, I submit to you that Stiggins' grasp of standards-based environments is rather flawed. Those of us in those environments are going to have to raise our voices.

Labels: , ,

Adventures in Marking

05 October 2007

A friend and I were comparing experiences in our journey through Standards-Based Gradingland. We're swimming in the deep end of the pool this year, and instead of feeling like we're drowning, we're finding the waters surprisingly inviting. This includes marking tests kids have taken.

Certainly the test items are designed with a particular right answer in mind, but with this sort of grading, there's a rubric. As a teacher, I'm looking for the gestalt in terms of the standard(s). Does the kid get it or not? I'm not totaling points and calculating percentages. I'm not fussing over the weighting of items. This is not only an enormous time-saver for me, but also good support for kids. It's more about credit than penalties.

I will say that the rubric development is where most of my think-time has to go. And yes, there is some fussing with the idea of "How many (and what kind of) items can a kid miss and still be at standard?" I'm not assigning this a particular point value---I plan to mark all of the tests first and then quickly sort them into two groups (at standard, not at standard) based on what I've seen. If there are 1's and 4's to assign from there, I'll work those out in a similar fashion. The forest first...trees later. :)
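If it helps to picture the "forest first" approach, here's a rough sketch of that two-pass sort in Python. The student names and holistic calls are invented, and the calls stand in for my own read of each test against the rubric---there's no point formula hiding in here:

    # Forest first: split the marked tests into two piles.
    # Trees later: refine each pile into final 1-4 scores.
    # The "call" values are placeholders for holistic judgment.
    marked_tests = [
        ("Aiden", "exemplary"),
        ("Bree", "at standard"),
        ("Carlos", "approaching"),
        ("Dana", "well below"),
    ]

    SCORE = {"exemplary": 4, "at standard": 3, "approaching": 2, "well below": 1}

    # Pass 1: the forest---at standard or not.
    at_standard = [name for name, call in marked_tests if SCORE[call] >= 3]
    not_yet = [name for name, call in marked_tests if SCORE[call] < 3]

    # Pass 2: the trees---settle the finer-grained score within each pile.
    final_scores = {name: SCORE[call] for name, call in marked_tests}

    print("At standard:", at_standard)
    print("Not yet:", not_yet)
    print("Scores:", final_scores)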

What will happen to my kids with 1's and 2's? A few options. Kids can reattempt the test provided they engage in some additional learning. For others who don't choose that pathway, I'm looking at two things. One is to sit down with them for individual conferences and help them come up with a course of action. The other is to pair off with another teacher who has biology the same period(s) that I do. We can split our classes into two groups---those who are at standard can have an enrichment lesson with one teacher while the other works on remediation with the other group. Should be quite the adventure for all of us.

Labels: ,

Book of the Moment Club

01 October 2007

I'm a card-carrying member of the Association of Supervision and Curriculum Development (ASCD). Of the various professional groups I've been a part of over the years, this one---more than any other---has provided me with the best source material for growing as an educator. I have a "comprehensive" membership, which is their middle tier. I get copies of Educational Leadership each month and the various updates throughout the year---and I get books hot off the presses, too. Kind of a "Book of the Moment" club.

This moment's title is Checking for Understanding: Formative Assessment Techniques for Your Classroom by Douglas Fisher and Nancy Frey. After scanning it, I would have to say that for the "with it" classroom teacher, there isn't much new territory here. Differentiated instruction, questioning techniques, and authentic assessment are all summarized. I did like a protocol they included for looking at student work. They also have a fabulous "Checklist for Creating Common Assessments" adapted from work by Linn and Miller (2005). It lists the desirable qualities for every kind of item you might use (true/false, matching, performance...) as well as a checklist for the assessment as a whole. The reality is that no teacher is going to be exhaustive with this checklist, but I plan to make a copy to glance at from time to time. It's a good reminder to keep my testing congruent with my goals for student learning.

The teacher who shares an office with me thinks I'm quite the nerd when it comes to staying current with best practices. He jokes about it in a nice way, mind you, and I certainly can't claim that he's saying something that isn't true. :) I do enjoy having time and opportunity to look at the latest publications and see what I can tweak in my repertoire to make things better. ASCD's Book of the Moment is good for prodding my thinking and my work in the classroom.

Labels:

Teaching to the Test

29 September 2007

I have a love-hate relationship with the phrase "teaching to the test." On one hand, I don't see a problem. If I'm teaching kids things which are different from what they will be tested on, what the heck am I doing? Shouldn't I be using class time to help kids learn the concepts I expect them to know when we get to the test?

It's the use of the phrase within the context of state-level exams that gets my goat. I can't speak for other states, but here in Washington, we don't know what's going to be on the test from year to year. We can only "teach to the standards," meaning that we help kids learn all the concepts...knowing that they won't be asked to demonstrate mastery of every single one. Teaching to the test gets a bad rap much of the time in this case. It conjures up visions of "drill and kill" in the classroom---something which can and does happen, but not in my classroom. When I work with kids on their expository writing skills (they are in love with But-Man this year), I am teaching to the test, in a sense. When they take the science WASL, they will need to be able to write a thorough scientific conclusion. They aren't marked on their writing skills (there is a writing test for that), but on the ideas they communicate. The "trick" is to help kids learn what information is important in a conclusion. This is the standard. Yes, I'm training them to demonstrate something for a test, but it is not drilled into them over and over out of context.

My students are about to take the first test of the year in my class. I use various summative forms of assessment---exams are just one piece of the puzzle. Several years ago, I started making my tests resemble the format kids would see on the science WASL. There is a scenario, consisting of a few sentences; a diagram or picture which relates to the scenario; and then some multiple-choice, short-answer, and/or extended-response items which ask kids to use their knowledge within the context of the scenario. Am I "teaching to the test" by using this format? I suppose I am; however, I think it's unfair to expect students to be successful with the state test if the format is completely alien to them.

In the grand scheme of things, I am still learning (after 16+ years in the classroom) how to design good tests. The ones which come with the ancillary materials are often poor in quality---either because of the low cognitive demand (only knowledge and basic comprehension questions) or because the items don't target the most important concepts. In building tests, I am getting better at organizing the items in terms of difficulty, balancing the points between selected-response and short-answer items (to avoid gender bias), and targeting higher levels of thinking. I'm not just teaching to the test anymore---I'm using the test to teach me how to better prepare my kids.
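For what it's worth, here's the sort of quick blueprint audit I have in mind when I talk about balancing points and ordering items by difficulty. The items below are invented placeholders, not a real test:

    # Audit a test blueprint: tally points by item type and check that
    # items run roughly easy-to-hard. All items are invented examples.
    items = [
        # (item type, points, difficulty: 1 = easy ... 3 = hard)
        ("multiple choice", 2, 1),
        ("multiple choice", 2, 1),
        ("multiple choice", 2, 2),
        ("short answer", 3, 2),
        ("short answer", 3, 2),
        ("extended response", 4, 3),
    ]

    points_by_type = {}
    for item_type, points, _ in items:
        points_by_type[item_type] = points_by_type.get(item_type, 0) + points
    print("Points by type:", points_by_type)

    difficulties = [d for _, _, d in items]
    print("Easy to hard?", difficulties == sorted(difficulties))

Nothing fancy---just enough to catch a test that's all multiple choice or that front-loads the hardest items.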

Labels: ,