The Consequences of Experimentalism in Formulating Recommendations for Policy and Practice in Mathematics Education

In this response to Foundations for Success: The Final Report of the National Mathematics Advisory Panel (2008), the authors argue that the Panel's assumption that only experimental research studies can produce scientific evidence limits the power of the Panel's recommendations to improve mathematics teaching and learning. The authors first discuss the theoretical underpinnings, potential contributions, and limitations of experimental studies. Against this background, they focus on three issues that are central to improving mathematics learning and teaching: those of equity, the nature and content of textbooks, and graduate education. In doing so, the authors illustrate the limitations of developing implications for policy and practice by relying exclusively on research conducted using a single methodology.

Keywords: equity; mathematics education; research methodology

The National Mathematics Advisory Panel's (2008) recent report sought to synthesize the "'best available scientific evidence' to [recommend] ways to foster greater knowledge of and improved performance in mathematics among American students" (p. xiii). We support the effort to develop recommendations for policy makers and teachers that are based on high-quality research. The Panel produced a comprehensive report with an impressive array of supporting documents. Unfortunately, the Panel took an overly narrow view of what counts as scientific evidence, thereby failing to capitalize on much of what is known about mathematics learning and teaching.

As a consequence, the Panel's report is less effective than it would otherwise have been in supporting policy makers and teachers to make substantial improvements. In this response to the report, we first discuss the theoretical underpinnings and the potential contributions and limitations of experimental research studies. We go on to argue that other methodologies produce different forms of knowledge that complement the findings of the Panel and would have increased the usefulness of the Panel's report for policy and practice. Against this background, we then focus on three issues that are central to improving mathematics learning and teaching: equity, the nature and content of textbooks, and graduate education.

The Approach the Panel Used to Produce Its Recommendations

The trustworthiness of research findings depends on the soundness of the method used to produce those findings (Bernstein, 1983; Lakatos, 1970; Popper, 1972). Similarly, the value of the Panel's recommendations depends on the soundness of the method the Panel used to (a) discriminate between trustworthy and suspect research findings and (b) synthesize findings judged to be trustworthy. In our view, there is good reason to be concerned about both of these steps in the Panel's approach.

The Panel used three categories to discriminate between trustworthy and suspect studies. The first category of high-quality scientific evidence is reserved for "studies that test hypotheses, meet the highest methodological standards (internal validity), and have been replicated with diverse samples of students under conditions that warrant generalization (external validity)" (p. 7-4). The Panel assumed that only one methodology can produce high-quality scientific evidence: experimental and quasi-experimental studies. The studies in the Panel's second category of promising or suggestive findings "represent sound, scientific research that needs to be further investigated."

 


The Theoretical Grounding of Experimental Research 
Slavin (2004) succinctly described the forms of knowledge experimental studies produce under the best of circumstances when he clarified that well-designed studies of this type are not limited to x versus y comparisons but can "also characterize the conditions under which x works better or worse than y, the identity of the students for whom x works better or worse than y, and often produce rich qualitative information to supplement the quantitative comparisons" (p. 27). The key point to note for our purposes is that the knowledge claims associated with experimental studies reflect a particular conception of the individual. The knowledge claims refer to an abstract, collective individual or statistical aggregate that is constructed by combining measures of psychological attributes of the participating students (e.g., measures of mathematics achievement). This statistically constructed individual is abstract in the sense that it does not correspond to any particular student.

The Panel also assumed that only experimental studies can contribute to the establishment of causal claims about the effectiveness of instructional interventions. We follow Maxwell (2004) in arguing that this assumption is unwarranted. Maxwell distinguishes between two complementary treatments of causal explanation. The first of these two treatments, which Maxwell terms the regularity type of causal description, is central to the experimental methodology and is based on observed regularities across a number of cases. Maxwell calls the second treatment process-oriented explanation and clarifies that it "sees causality as fundamentally referring to the actual causal mechanisms and processes that are involved in particular events and situations" (p. 4). Process-oriented explanations are therefore concerned with "the mechanisms through which and the conditions under which that causal relationship holds" (Shadish, Cook, & Campbell, 2002, p. 9, cited in Maxwell, 2004, p. 4). In contrast to the regularity conception of causality, viable explanations of this type can be developed based on a relatively small number of purposefully selected cases (Maxwell, 2004). For example, studies employing the design research methodology have been conducted to develop process-oriented causal explanations of the relations between teachers' instructional practices, instructional tasks as they are actually enacted in the classroom, the learning opportunities that arise for students as they engage in the tasks, and students' resulting learning in particular mathematical domains (P. Cobb, McClain, & Gravemeijer, 2003; Confrey & Smith, 1995; Lehrer & Schauble).

 


 
