Assessment and Instruction


 * Name || Wiki page designer || Conversation starter || Reference/spell checker || Color Text ||
 * Breanne ||  || X ||   || Red ||
 * Dana ||  ||   ||   || Green ||
 * Rhonda ||  ||   || X || Grey ||
 * Emily ||  || X ||   || Black ||
 * Mollie ||  ||   || X || Orange ||
 * Yvonne || X ||  ||   || Purple ||
 * Kerri ||  ||   ||   || Pink ||
 * Jennifer || X ||  ||   || Blue ||

McMillan, J. H., Cohen, J., Abrams, L., Cauley, K., Pannozzo, G., & Hearn, J. (2010). Understanding Secondary Teachers’ Formative Assessment Practices and Their Relationship to Student Motivation. Virginia Commonwealth University. 1-17. Retrieved from [].

The purpose of this study was to identify the formative assessment practices of secondary teachers, the extent to which these practices are used, and whether there is a correlation between the use of formative assessment and student motivation. Researchers distributed a self-report survey to 161 experienced secondary teachers to determine which assessment practices they use. An additional self-report survey was then administered to 3,242 sixth- through twelfth-grade students to measure student motivation, engagement, self-efficacy, and goal orientation, and to evaluate student perceptions of teachers’ grading and assessment practices. Results indicated that 70% of secondary teachers use formative assessment, although they could not name specific strategies, and only 32% reported using formative assessment to diagnose weaknesses in an effort to reinforce concepts. Students indicated that teacher feedback in the form of praise, stressing the importance of learning, and written comments motivated their improvement. Based on the research, teachers should use formative assessments as indicators of concept mastery; formative assessments therefore need to be targeted at providing teachers with quantitative and qualitative data to help guide their instruction. Teachers should also aim to provide students with relevant and timely feedback to motivate their improvement. While this study makes connections between student motivation and formative assessment, more research is needed to establish the strength of the correlation between the two.

Misco, T., Patterson, N., & Doppen, F. (2011). Policy in the Way of Practice: How Assessment Legislation is Affecting Social Studies Curriculum and Instruction in Ohio. International Journal of Education Policy & Leadership, 6(7). 1-10. Retrieved from [].

Social Studies education is often marginalized within the context of high-stakes testing legislation, which places greater emphasis on more readily quantified subjects including reading, writing, math, and science. The value of Social Studies education lies in its emphasis on citizenship education, that is, teaching students to interact within a global community. The purpose of this study was to explore how teachers are responding to new legislation as it relates to classroom instruction. Surveys and interviews were conducted with 230 secondary teachers across the state of Ohio. Results included both positive and negative responses to high-stakes testing legislation. On the negative side, teachers reported that they are only able to cover topics at a surface level although they would like to spend more time on discussion, and that districts have come to dominate curriculum decisions, restricting teacher creativity. Teachers were generally supportive of the standards but critical of high-stakes testing because it encourages content memorization, whereas citizenship education is not as easily quantified. High-stakes testing has caused teachers to alter their instruction in the Social Studies classroom to its detriment. Citizenship education, the primary purpose of Social Studies education, suffers as a result because teachers feel pressured to cover the curriculum at a rate that cannot accommodate in-depth discussion, which in effect interferes with critical thinking standards. The ultimate question is whether high-stakes testing helps or hinders curriculum coverage.

Montgomery, J. (2008). Uses of Formal and Informal Assessments of English Language Learners in a Language Experience Class, School Year 2007-2008. University of Phoenix. 1-16. Retrieved from [].

The purpose of this study was to identify methods of informal assessment that best help ELL students improve on the Measures of Academic Progress (MAP) test, which measures yearly progress in language acquisition. Participants included ELL students at an Illinois middle school during the 2007-2008 school year. Students participated in a series of informal assessments including computer programs such as “Compass Learning,” “Skills Tutor,” and “Kids College Contests and Workbooks,” and other strategies including Web Quests, problem-based learning, observations, and electronic student portfolios. Overall, it was concluded that the use of a variety of informal assessment strategies, especially those that are software based and so provide immediate feedback, helped students demonstrate significant gains on the MAP test in math, reading, and writing. This study did not address the use of data gathered from informal assessment to guide future instruction; rather, it focused on strategies that promoted content mastery overall. It can nevertheless be concluded that students learn best when presented with a variety of informal assessments.

Volante, L. & Beckett, D. (2011). Formative Assessment and the Contemporary Classroom: Synergies and Tensions Between Research and Practice. Canadian Journal of Education, 34(2). 239-255. Retrieved from [].

The purpose of this study was to explore teacher perceptions of formative assessment. Participants included 20 teachers from two school districts in southern Ontario, Canada. One district had an assessment consultant who supported effective assessment practices within schools; the other had no consultant. Data were collected through individual interviews lasting about 60 minutes each. Teachers were asked open-ended questions such as, “What does formative assessment mean to you and what does it look like in your classroom?” Other questions centered on feedback, self-assessment, peer assessment, and how data are used to plan instruction. Conclusions were drawn from themes that emerged in the participants’ responses. The study found that teachers disliked self-assessment and peer assessment because of students’ lack of familiarity with the content, and that they emphasized the importance of collaboration between teachers when planning and analyzing assessments. Overall, teachers agreed that state testing does guide formative assessment, but they favored collaboration and independent study as means of improving, in lieu of ‘meaningless’ professional development sessions hosted by disconnected administrators. The general consensus is that teachers need more incentive to collaborate on formative assessment and to analyze data that further guides instruction. Survey data demonstrate that informal analysis is done inconsistently among teachers, and it is not clear whether this is due to a lack of training or a lack of follow-through on the part of their administrators.

Yeh, S. (2006). High-Stakes Testing: Can Rapid Assessment Reduce the Pressure? Teachers College Record: Columbia University, 108(4). 621-661. Retrieved from [].

This article explores the negative consequences of No Child Left Behind (NCLB) and the impact of high-stakes testing on instruction. As the act has increased pressure on schools and teachers to perform, it has produced negative effects such as a ‘dumbed-down’ curriculum. The study aims to decrease the pressure to perform and thus increase the positive impact of informal assessment. It examines the effectiveness of a strategy called curriculum-embedded assessment in reading, writing, and math, in which students are tested and given frequent feedback to help them increase achievement. The idea is that the faster students receive feedback, the faster they can make changes; when students master content quickly, they can spend more time on critical thinking activities that offer greater depth. Participants included 3,649 elementary and secondary students in McKinney ISD, 37 teachers, and 11 administrators. Student progress was measured using quantitative data, and teachers and administrators were interviewed. The study concluded that rapid assessment allowed teachers to identify areas of student weakness quickly, increased collaboration between teachers, encouraged continuous adjustment and improvement of instruction, made teachers more reflective about their teaching, increased consistency and accountability, and created less dependence on extraordinary teachers. Student motivation and self-esteem were also positively impacted: when students received instant feedback, they could move on to the next task at their own pace. A major complaint regarding NCLB is that teachers only have enough time to cover their content standards minimally although they would like to teach with greater depth. Rapid assessment might be a solution to this problem: students master the content quickly, leaving time to spend on critical thinking activities that require more depth.

Varlas, L. (2013). How we got grading wrong, and what to do about it. ASCD, 55(10), 5-7. [] The author of this article focuses on grading learning instead of grading students on effort and work. Varlas claims that this will mean teachers have to clarify learning targets for students, remove disciplinary grades, take fewer grades on student work, and learn to assess better in order to ensure students are graded on the content they master. Teachers should help students change their mindsets about assessment by motivating them to understand content instead of fearing failure or a specific grade. To do this, teachers should give students feedback on formative work instead of a grade, which will redefine how students view the purpose of formative work. They will see formative work as the process of learning, while summative assessments will be the proof that they have learned and mastered the curriculum.

Frey, N., & Fisher, D. (2013). A formative assessment system for writing improvement. English Journal, 103(1), 66-71. [] Frey and Fisher explore how to use formative assessment and feedback to help students improve their writing. Instead of focusing so heavily on the summative or final product, they use formative assessments to drive instruction and to develop students’ writing skills. By shifting the teacher’s time and energy toward more feedback on formative work, Frey and Fisher were able to find patterns of errors to drive their instruction. This purpose-driven instruction allows teachers to analyze errors and catch mistakes before students complete their summative work, so that students learn more and write better.

Clidas, J. (2010). A laboratory of words: Using science notebook entries as a preassessment creates opportunities to adapt teaching. Science & Children, 48(3), 60-63. [] The teacher, Jeanette Clidas, used quick-writes about her classroom topics as a means of preassessing students’ knowledge. A quick-write is an open-ended, content-related question that students can respond to in five minutes or less. After she models some quick-writes at the start of the year, they become a norm for students at the start of each unit or before new information is presented. By examining what her students already knew about science concepts, Clidas was able to use the inquiry method to expand their knowledge and understanding by having them create hypotheses and discover evidence to prove or refute them. Soon, students’ notebooks contained their preassessment, their learning, and their post-assessment understandings based on their research and findings.

Deeley, S. (2014). Summative co-assessment: A deep learning approach to enhancing employability skills and attributes. Active Learning in Higher Education, 15(1), 39-51. [] To make summative assessment more meaningful to college students, Susan J. Deeley created a co-assessment, a cooperative and collaborative assessment, that allowed students to connect their academic study with community service. To analyze the effects of this practice, students gave a reflective oral presentation on how they used the course content and employability skills while participating in their community service. They were then evaluated by themselves, peers, colleagues, and their instructor based on the summative presentation, interviews, and other reflections done throughout the process. The study was conducted in a class focused on service; however, the researchers found that this type of assessment would be applicable to other content areas as well.

McLaughlin, C. (2013). Simply performance assessment. Science and Children, 51(3), 50-55. [] McLaughlin used her fourth-grade science curriculum and the Next Generation Science Standards (NGSS) to create a performance task that her students could complete over two class periods. On the first day, students were introduced to six different types of simple machines, their uses, and their descriptions. On the second day, students were put into small groups and asked to use their knowledge from the previous day to build one of the simple machines as their performance assessment. They were given everyday materials and assigned roles within their groups, and were expected to demonstrate their knowledge of the content, their ability to work together, and a final working product. The results showed that students were not only more engaged in science but also had a better understanding of simple machines and of problems facing the 21st century.

1. Classroom Assessment Techniques (CATs) are used by students and teachers to evaluate their progress. Feedback is key to making sure students are learning, rather than simply being used to assign grades and tests. This technique works because students like having a say in how they are taught, and consequently they enjoy the class more. Another method within this technique is having students create their own goals for the class so that the teacher can adjust the class’s goals to meet their needs. Prior knowledge assessment also helped students and teacher alike: students had a chance to let the teacher know what they already knew and what they needed to review, making class time valuable for everyone. “Minute papers” are suggested after each class period so that students can tell the teacher what they did or did not understand before leaving, giving them an opportunity to think about the class from a learning perspective. When only a couple of students seem to be having difficulties, it is a good idea to see them outside of class time to work on their needs without taking valuable class time away from the other students. The goal of CATs is to have students and teachers work together to improve learning. This means both the teacher and the student are active, as real learning requires.
 * (Yvonne)

Carduner, Jessie. Using Classroom Assessment Techniques to Improve Foreign Language Composition Courses. Foreign Language Annals, ISSN 0015-718X, 09/2002, Volume 35, Issue 5, pp. 543 – 553. []

2. Portfolios are a great way for students to see where they are, decide where they want to be, and acknowledge where they have been. However, some teachers dislike the portfolio method because it can be a challenge to grade. Both the student and the teacher take on new roles when preparing a portfolio because they need to work together to make sure improvement is made over the duration of the class. The portfolio should be viewed as “an assessment tool rather than a grading device,” and there is a framework to help teachers prepare a portfolio assignment. The first step is to plan the purpose of the portfolio and give the expectations for each part. The second step is to set the assignment outcomes, which give students the overall goal of the assignment; these need to be open enough to encourage learning but limited enough to be easily assessed. The third step is to make sure that the portfolio expectations match those of the classroom, so the student can see progress from day one. The fourth step is determining the actual portfolio materials, which tells students what they can and cannot use; once this is decided, the teacher must adopt an organization for the materials that will be accepted. The fifth step is to choose what will and will not meet the criteria, which helps both student and teacher know how grading will take place. The sixth step is to monitor: both the student and the teacher must track progress over time and make adjustments as needed. Evaluation is the final step. The teacher evaluates the students’ work, which allows the students to evaluate themselves as well; in addition, the teacher must evaluate how well the portfolio itself functioned as an assessment. The goal of the portfolio is to literally see the progress and make changes for improvement when necessary. It seems unfair for the article to suggest that teachers might avoid this valuable chance to assess students simply because it takes a while to grade.

Delett, Jennifer S; Barnhardt, Sarah; Kevorkian, Jennifer A. A Framework for Portfolio Assessment in the Foreign Language Classroom. Foreign Language Annals, ISSN 0015-718X, 11/2001, Volume 34, Issue 6, pp. 559 – 568. []

3. Formative assessments are compared to video games. When kids play a game and see that they are not doing well, they restart it and do better. Using failure to inspire rather than hurt students is the main goal of this assessment style. The assessments teachers give not only provide students feedback on their work but also evaluate the instructor. This feedback should allow students to “restart” and understand what they need to do better next time rather than continue on without a chance to improve. Observation and asking questions are the easiest ways to assess the class; they also let students get direct and quick responses before moving on. Assignments that would be given anyway help as well: the teacher can give students a chance to improve on the next project by looking at the feedback from the previous one. Group work is also described as a way to evaluate the class; this exchange of ideas can help the teacher see who understands the material and who needs clarification. A weekly writing assignment can also be useful, giving students a chance to tell the teacher what they enjoyed learning and what helped them better understand the lesson; it can likewise let the instructor know what might need to be reinforced the next week. Journaling is another suggestion for having students put their thoughts on paper. Bringing past lessons into the new lesson reinforces the idea that students can “restart” what might have been a failure before, thus creating success. Formative assessment is the “reset” button in education, used for understanding before grading.

Dirksen, Debra J. Hitting the Reset Button: Using Formative Assessment to Guide Instruction. The Phi Delta Kappan, ISSN 0031-7217, 04/2011, Volume 92, Issue 7, pp. 26 – 31. []

4. The article begins by describing the difference between formative and summative assessment: formative assessment “provides feedback to the learner through which he or she can improve his or her learning capacity,” while summative assessment “measures the level of education achieved by the child.” State tests are designed so that well-prepared students choose the correct answer while poorly prepared students will most likely get the response wrong, and the questions are reviewed many times to ensure they are reliable. Yet there are problems with these exams. First, the state may require a great deal of information to be covered during the school year even though only one question on the test addresses a given topic. Second, for economic reasons, the tests need to be simple and cheap to create. Third, the test needs to be state-specific to give students a fair chance, which is slowly happening. Further traps include ethnicity, social class, and background. The main solution suggested is formative assessment: assessment should be year-long and offer feedback, teachers should also receive feedback to improve their lessons, students should be given more than one type of assessment, and students, parents, and the community should work together. The author believes that state tests meet none of these resolutions. One unique point is that in the Netherlands schools are not punished for low scores; instead, the school receives extra help. Since teachers cannot change state tests alone, implementing the formative assessment suggestions could give students the best chance of doing well on these exams.

Pahl, Ronald. Assessment Traps in K-12 Social Studies. The Social Studies, ISSN 0037-7996, 09/2003, Volume 94, Issue 5, pp. 212 – 215. []

5. Peer assessment and feedback are becoming a popular subject of study. The outcomes of peer and teacher assessments do not differ greatly, since students expect honest reviews. One problem that arises, however, is that students may be grading friends’ work, or may not have the heart to be completely honest, so the grading falls on an unknown scale for particular students. When this is not an issue, students benefit from receiving responses from many people rather than a single teacher’s one reply. Peers also need to give justification for their assessments, which teachers tend not to do; this helps students gain better knowledge of their work. Studies show that students prefer peer evaluation over evaluation by an expert in the subject because the group has had an equal chance to learn the material and discuss it together. Those who support peer assessment use these benefits to promote more studies on the topic, which had been largely ignored until recently. As any student knows, comments made by peers can be much more inspiring than those made by the teacher, so this type of assessment could change the way students respond to learning.

Strijbos, Jan-Willem and Sluijsmans, Dominique. Unravelling peer assessment: Methodological, functional, and conceptual developments. Learning and Instruction, ISSN 0959-4752, 2010, Volume 20, Issue 4, pp. 265 – 269. []


1. Baniabdelrahman, A. A. (2010). The effect of the use of self-assessment on EFL students’ performance in reading comprehension in English. //The Electronic Journal for English as a Second Language//, //14//(2), 1-22. http://libproxy.library.unt.edu:2368/?id=EJ899764

Baniabdelrahman explores whether there are significant differences in students’ performance in English reading due to self-assessment versus traditional assessment, in an attempt to determine the effect of self-assessment on reading. The study was conducted with 11th-grade English as a Foreign Language students in a Jordanian public school. A section of male students and a section of female students were assigned to the control group that used traditional assessments. The other two sections, one male and one female, were assigned to the experimental group using self-assessment. The sections were taught by two trained instructors who used one-minute papers at the end of each reading class period and scored those papers with a rating scale. Students were given a reading test prior to treatment and a post-test after treatment to measure reading achievement. Results indicated that there is not a significant interaction between the method of assessment and gender, but there is a significant difference between the reading achievement of the experimental and control groups. This indicates that self-assessment is more effective than traditional methods of assessment. These results seem to affirm peer and self-assessment as valuable tools for improving students’ learning. They show that students are capable of reflecting metacognitively on their own learning and can actually use this information to improve and further their learning.

2. Birjandi, P., & Tamjid, N. H. (2012). The role of self-, peer and teacher assessment in promoting Iranian EFL learners' writing performance. //Assessment and Evaluation in Higher Education//, //37//(5), 513-533. http://libproxy.library.unt.edu:3772/ehost/pdfviewer/pdfviewer?vid=13&sid=22e4c7f1-e6ca-4b95-a896-04e6a0f94803%40sessionmgr111&hid=101

Birjandi and Tamjid’s (2012) study compared self, peer, and teacher assessment with one another. The main research questions were whether self-assessment improved writing performance compared to teacher assessment and whether peer assessment improved writing performance compared to teacher assessment. The participants were 157 Teaching English as a Foreign Language juniors who had already passed two writing courses. The researchers made use of five intact groups: the first used journals and teacher assessment, the second self-assessment and teacher assessment, the third peer and teacher assessment, the fourth self and peer assessment, and the final group solely teacher assessment. A pre-test and post-test, both teacher-made along with the rating scales, were used to measure writing performance. The study found no significant difference in writing scores between the journal-plus-teacher-assessment group and the purely teacher assessment group. Statistically significant differences were found between the self-plus-teacher-assessment group and the teacher assessment group, with the former outperforming the latter, and between the peer-plus-teacher-assessment group and the teacher assessment group. No statistically significant difference was found between the self-and-peer-assessment group and the teacher assessment group. These results illustrate the value of alternative assessment, including self-assessment, and indicate that writing performance is improved by teacher feedback. They confirm other research suggesting writing ability is constructed and reconstructed, and they confirm my own belief that alternative assessments are perhaps more valuable for students than other forms. The study also shows that students are capable of reflecting metacognitively and are aware of their learning and progress. This awareness means that we can help students reflect on and improve their learning as a result.

3. Chang, C. C., & Wu, B. H. (2012). Is teacher assessment reliable or valid for high school students under a web-based portfolio environment? //Educational Technology & Society//, //15//(4), 265-278. http://libproxy.library.unt.edu:3772/ehost/pdfviewer/pdfviewer?vid=15&sid=22e4c7f1-e6ca-4b95-a896-04e6a0f94803%40sessionmgr111&hid=101

Chang and Wu (2012) conducted a study of web-based portfolio assessment, seeking to determine whether the results of these types of assessment are consistent among teachers, consistent within an individual teacher, and appropriate for determining students’ learning achievement. Participants included 79 11th graders in a computer application course taught by one teacher and three online assistants. Over 12 weeks with 3 hours of instruction, students created their portfolios and participated in peer and self-assessment using the web-based portfolio assessment system. In addition, students participated in course activities and online discussions outside of class. A rubric, created by the teacher and researchers, was used to score the portfolios; it contained six different components with 27 individual items. The data showed that teachers’ scores had a high degree of consistency, both with one another and individually. The study also showed that portfolio scores and student achievement test scores were highly and significantly correlated, indicating that rubrics are appropriate for detecting learning achievement. This research suggests that web-based portfolio assessment is reliable and valid, confirming previous research. However, the researchers noted that portfolios must be scored by experienced and trained raters who use clear criteria, understand the purpose, and have a deep understanding of the expected performance in order to be accurate and meaningful measures. This study seems to confirm the power of rubrics to achieve a number of instructional purposes. Not only can they guide instruction, but they can also guide learning, because students know ahead of time what they are supposed to be able to achieve. 
The correlation between the portfolio scores and learning achievement also seems to make a strong case for using alternative assessments in the classroom alongside standardized achievement tests, given their ability to accurately measure students’ mastery of skills.

4. Goldstein, J., & Behuniak, P. (2012). Can assessment drive instruction? understanding the impact of one state's alternate assessment. //Research & Practice for Persons with Severe Disabilities//, //37//(3), 199-209. http://libproxy.library.unt.edu:3772/ehost/pdfviewer/pdfviewer?sid=22e4c7f1-e6ca-4b95-a896-04e6a0f94803%40sessionmgr111&vid=7&hid=101

Goldstein and Behuniak (2012) examined the Connecticut state alternative assessment used with students with cognitive disabilities. The researchers wanted to determine whether students receiving 0s on the skills checklist, which indicates that a student did not demonstrate the skill, really had not demonstrated the skill, or whether a 0 meant that the teacher had not introduced the skill. The participants included 240 special education teachers in Connecticut at grades four, six, and ten who submitted alternative assessment data. The researchers used an instructional survey identical to the skills checklist and an online survey to collect data from teachers. They then determined the proportion of teachers who instructed students in the areas on the skills checklist, reviewing the least frequently used skills. In both Language Arts and math, two-thirds of the 50% of students who had received 0s had not been instructed on the content. This suggests that a 0 on a skill more accurately indicates that the teacher had not introduced the skill, calling into question the accuracy of the state assessment. These results suggest that teachers modify the content of assessments to fit their students’ needs and that these assessments drive instruction. The researchers indicated a need to train teachers on how to use assessment data for instruction. This study seems to suggest that standardized assessments, even when alternative in nature, are overwhelming because they cover so much content. It is impossible to teach the entire content of the test in meaningful ways, especially since students’ needs are so varied when they arrive in our classrooms. These results are also reassuring to me because they show that editing the content of instruction to meet needs is happening in many different classrooms, and that the assessments are the problem, not necessarily the teachers making these decisions.

5. Gottheiner, D. M. N., & Siegel, M. A. (2012). Experienced middle school science teachers’ assessment literacy: Investigating knowledge of students' conceptions in genetics and ways to shape instruction. //Journal of Science Teacher Education//, //23//, 531-557. http://libproxy.library.unt.edu:3772/ehost/pdfviewer/pdfviewer?vid=9&sid=22e4c7f1-e6ca-4b95-a896-04e6a0f94803%40sessionmgr111&hid=101

Gottheiner and Siegel (2012) sought to examine experienced life science teachers’ ability to use assessment tools to monitor student difficulty. Their study examined which assessment tools these teachers use to determine students’ preconceived ideas, collected the teachers’ predictions about student difficulties, and compared those predictions to students’ actual difficulties. The study also looked at what instructional strategies teachers then recommended to respond to student difficulty. The participants included five experienced life science teachers who completed a background questionnaire and participated in focus groups. Students were given a four-question pre-instructional assessment and were interviewed using state assessment tools after instruction. The researchers transcribed and coded video from the teacher focus group, coded the student interviews, and compared the results of the two. The results indicated that teachers primarily used small group discussions to determine students’ ideas and difficulties. While teachers were able to make predictions about students’ difficulties, they identified only the most typical of them. The study also indicated that the teachers had trouble using students’ initial conceptions diagnostically for instruction and assessment. The researchers suggested these results indicate that true assessment literacy requires that teachers know how to use data gathered from assessment to modify instructional plans. Like much of the other research I have read on assessment, this study furthers my belief that assessment literacy is a huge issue that must be addressed by teacher education programs and professional development opportunities. The need for training on assessment goes well beyond providing information on how to create instruments; teachers must also learn how to interpret them and use the data in a meaningful way that will improve instruction and student learning.

American Federation of Teachers. (2008, June). The appropriate use of student assessment. Retrieved from http://files.eric.ed.gov/fulltext/ED511578.pdf.

<span style="color: #e46c0a; font-family: 'Times New Roman','serif'; font-size: 16px;">This article was submitted by the American Federation of Teachers. In it, the AFT states that too many states and districts are testing inappropriately. Reasons for this include large-scale assessments that do not provide diagnostic information, measuring student performance through benchmarks that can be unreliable, and narrowing of the curriculum by requiring teachers to spend time teaching to the test. The AFT offers alternative measures of assessment and provides recommendations for how each type of test should be used. These include norm- and criterion-referenced tests, formative assessment, summative assessment, adaptive tests, value-added assessments, and diagnostic assessments. States and districts need to provide teachers with assessments that give them useful and timely information that can be used to guide instruction.

<span style="color: #e46c0a; font-family: 'Times New Roman','serif'; font-size: 16px;">[]

<span style="color: #e46c0a; font-family: 'Georgia','serif'; font-size: 13px;">Even, R. (2005). Using assessment to inform instructional decisions: How hard can it be? //Mathematics Education Research Journal//, 17(3), 45-61.

<span style="color: #e46c0a; font-family: 'Times New Roman','serif'; font-size: 16px;">This article discusses two main problems with the expectation that teachers use formal assessments to guide instruction in mathematics. The first problem is how teachers make sense of the data given to them, because teachers have difficulty interpreting student understanding, knowledge, and learning of mathematics. The second problem is how to get teachers to use contemporary forms of assessment. In the contemporary form of assessment, teachers assess students daily during regular instruction. This assessment is less formal and should be incorporated into daily routines: instruction and assessment happen together, at the same time, unlike traditional assessments, which are separate from instruction. Traditional assessments are typically paper-and-pencil tests with closed question formats, while contemporary assessments are rich, comprehensive assignments such as projects, portfolios, journals, and conversations. The article gives examples of how teachers use contemporary assessment and how teachers can misinterpret assessments.

<span style="color: #e46c0a; font-family: 'Times New Roman','serif'; font-size: 16px;">[]

<span style="color: #e46c0a; font-family: 'Georgia','serif'; font-size: 13px;">Greenlees, J. (2011). The Fantastic Four of mathematics assessment items. //Australian Primary Mathematics Classroom//, 16(2), 23-29.

<span style="color: #e46c0a; font-family: 'Times New Roman','serif'; font-size: 16px;">In this article the author examines four components of mathematical assessment and the need for explicit instruction in each if students are to succeed. The author likens the four components to the Fantastic Four comic book heroes: like those heroes, the components cannot stand alone; they work together as a team. The first component is mathematical content, the fundamental basics students learn. The second is literacy demand, which refers to reading math questions thoroughly and carefully, looking for important information. The third is contextual understanding, which requires students to understand how to apply concepts in various contexts. The final component is reading and understanding graphics. If students are to be successful, they must be taught all of the components of mathematical assessment.

<span style="color: #e46c0a; font-family: 'Times New Roman','serif'; font-size: 16px;">[]

Herman, J. L., Osmundson, E., & Dietel, R. (2010). //Benchmark Assessment for Improved Learning. An AACC Policy Brief//.

<span style="color: #e46c0a; font-family: 'Times New Roman','serif'; font-size: 16px;">Benchmarks are assessments which are done periodically throughout the school year to see where students are in relation to a long term goal. Benchmarks typically serve four purposes. First is to communicate expectations about what concepts are important to learn. Second is to plan curriculum and drive instruction. Benchmarks help teachers adjust instruction to meet the needs of his/her students. Third, benchmarks monitor and evaluate instructional program effectiveness by giving insight into which curriculum, resources, and educational programs are beneficial to students’ learning. Fourth, benchmarks predict future student performance by letting teachers know which students are on track to meet end of year goals.

<span style="color: #e46c0a; font-family: 'Times New Roman','serif'; font-size: 16px;">Good benchmark assessments can be an important addition to a comprehensive assessment system. When choosing a benchmark, one must consider the validity, reliability, and fairness of the assessment. The benchmark should also be aligned with the curriculum and with the purpose it is to serve. If the purpose of the benchmark is to guide instruction, it should provide information that shows students’ strengths and weaknesses.

<span style="color: #e46c0a; font-family: 'Times New Roman','serif'; font-size: 16px;">[]

<span style="color: #e46c0a; font-family: 'Georgia','serif'; font-size: 13px;">Kaftan, J., Buck, G., & Haack, A. (2006). Using Formative Assessments to Individualize Instruction and Promote Learning. //Middle School Journal//, 37(4), 44-49.

<span style="color: #e46c0a; font-family: 'Times New Roman','serif'; font-size: 16px;">This article discusses an action research project about a sixth-grade science teacher and the way she assessed student learning. The teacher wanted to know how to improve student learning. At the beginning of the project, she assessed student understanding by grading a student’s worksheet about a concept being taught in class. For nine weeks a team of researchers interviewed students on the same concepts. They found that a student could answer 11 of 11 questions correctly on a worksheet yet be unable to explain the concept or answer the research team’s questions. The teacher worked to create new worksheets that more accurately assessed student understanding; instead of using ready-made worksheets, she created a more valid assessment of student learning. Once materials were created to accurately assess student learning, the teacher could use the assessment to effectively guide instruction.

<span style="color: #e46c0a; font-family: 'Times New Roman','serif'; font-size: 16px;">[]

Au, W., & Gourd, K. (2013). Asinine assessment: Why high-stakes testing is bad for everyone, including English teachers. //English Journal, 103//(1), 14-19. Retrieved from []

Abstract

This article focuses on the history of high-stakes testing: where it began and why. It began with a top-down approach, reflecting the government’s presumption of knowing what is best for children, under which students in grades 3-8 and again in high school are tested, with federal funding augmented for schools that do well on the tests. The article is persuasive in nature, taking a side on who should be tested and how test results should be reviewed in school ratings and teacher initiatives. It also shares a writing grader’s specific insight into how the tests are scored and how score expectations are set based on previous years. The article concludes with examples of how ELA curriculum is being changed and refocused yearly on the types of writing these tests demand rather than on real-world connections to writing.

Minarechová, M. (2012). Negative impacts of high-stakes testing. //Journal of Pedagogy / Pedagogický Casopis//, //3//(1), 82-100. Retrieved from [|http://libproxy.library.unt.edu:3799/ehost/pdfviewer/pdfviewer?vid=4&sid=ff5763e7-30f3-4307-bf35-4b84b1d3620b%40sessionmgr4002&hid=4102]

Abstract

High-stakes testing is testing in which districts, administrators, and teachers are held accountable for improved academic standards for children. The scores are used to award or reallocate funds and to share with parents information about how well students are learning the standards and to what degree learning is taking place. The article connects testing in England with testing done in the United States, tracing historical implications and the decisions parents face in choosing a school when the curriculum is focused primarily on scores rather than on academics as a whole. Through the //A Nation at Risk// report, the government has sought to control standards and curriculum based on a one-day test of students. The research reported that a hidden curriculum is maintained based on which group children will fall into and the degree to which they are labeled as test takers.

Johnson, C. S., & Gelfand, S. (2013). Self-assessment and writing quality. //Academic Research International, 4//(4), 571-580. Retrieved from []

Abstract

Using qualitative and quantitative analysis, these researchers in Alberta, Canada, focused on the ideas behind student motivation and self-assessment in writing. Using co-created rubrics and guidelines, a positive correlation was found between writing improvement and motivation. The article also suggests that when students are involved in structuring the writing, in terms of topic and format, a more positive collaboration occurs. Growth in self-assessment and in student writers’ self-concepts was shown to have positive impacts on the students and their writing abilities. The second focal point of the research was a positive relationship between student outcomes and how teachers structure the classroom to allow for student voice and self-assessment.

Brimi, H. (2012). Teaching writing in the shadow of standardized writing assessment: An exploratory study. //American Secondary Education, 41//(1), 52-77.

Abstract

This research article focuses on writing instruction generally and writing instruction for the TCAP assessment. The exploratory research found that the five English teachers who were followed and interviewed, under pressure to prepare students for the TCAP’s five-paragraph essay, were bound to stray from real writing and to focus instead on the tested genres and the formulaic assessment. The report also found that the writing instruction was geared more toward mimicking famous writers than toward authentic writing from the students. The teachers also reported that the writing process was taught, but not explicitly, and that ample time was not dedicated to revision, since the TCAP does not allow that amount of time in the testing setting.

Cope, B., Kalantzis, M., McCarthey, S., Vojak, C., & Kline, S. (2011). Technology-mediated writing assessments: Principles and processes. //Computers and Composition, 28//(2), 79-96. Retrieved from []

Abstract

This research paper focuses on the idea that writing assessment can and should happen, but for a variety of reasons and in a variety of disciplines. The authors offer six transformative measures for writing across the curriculum that embed good writing techniques and strategies and support the purposes for writing. They argue that formative assessment of writing needs to feed directly into the learning process of writing. They also argue that writing is a form of complex idea making and should be taught as such, with an emphasis on knowledge building and responsive feedback built on the content of the writing. Socio-technical possibilities in writing will change the way that students learn and show their learning.