Investigating the role of model-based reasoning while troubleshooting an electric circuit

We explore the overlap of two nationally-recognized learning outcomes for physics lab courses, namely, the ability to model experimental systems and the ability to troubleshoot a malfunctioning apparatus. Modeling and troubleshooting are both nonlinear, recursive processes that involve using models to inform revisions to an apparatus. To probe the overlap of modeling and troubleshooting, we collected audiovisual data from think-aloud activities in which eight pairs of students from two institutions attempted to diagnose and repair a malfunctioning electrical circuit. We characterize the cognitive tasks and model-based reasoning that students employed during this activity. In doing so, we demonstrate that troubleshooting engages students in the core scientific practice of modeling.


I. INTRODUCTION
Recently, there have been national calls to study [1] and improve [2] lab instruction in the sciences. Along these lines, the American Association of Physics Teachers (AAPT) released guidelines for learning outcomes in undergraduate physics lab courses [3]. The AAPT guidelines focus on skill-based learning outcomes that align with the cognitive tasks involved in, for example, tabletop experimental physics research [4]. In the present work, we investigate the overlap of two major learning goals of instructional physics lab environments: (1) the ability to model experimental systems, and (2) the ability to troubleshoot a malfunctioning apparatus. While these two abilities are sometimes presented as distinct, we aim to show that they are in fact overlapping both in theory and in practice. We show that, for some students, model-based reasoning plays a key role in the troubleshooting process.
Modeling is the nonlinear, recursive process of constructing, testing, and refining models [5]. Modeling has been identified as an important physics practice at both secondary [6] and post-secondary levels [3]. While traditional introductory physics lab courses have been criticized as rote and inauthentic [4,7], there nevertheless exist innovative approaches that engage students in the iterative process of modeling at the introductory level, such as ISLE [8] and Modeling Instruction [9]. At the upper-division level, the Modeling Framework for Experimental Physics (hereafter, "the Modeling Framework") has been developed to characterize students' model-based reasoning [5] and to inform development of instructional lab environments that engage students in the practice of modeling [10][11][12].

Like modeling, troubleshooting is also a nonlinear, recursive process, though the goal is more narrow: to repair (or revise) a malfunctioning apparatus [13][14][15]. Indeed, Zwickl et al. [5] noted similarities between modeling and troubleshooting in their work characterizing student reasoning on an experimental optical physics activity. They found that, while troubleshooting, students sometimes engaged in "what appeared to be a rapid modeling cycle involving a series of qualitative predictions and qualitative measurements . . . in order to identify the source of the problem" (Ref. [5], p. 8). It is precisely this overlap that we interrogate in the present work.
Three factors make electronics courses an ideal context for studying physics students' troubleshooting abilities. First, the physical systems and models with which students interact are relatively simple. Second, the electric circuits that students build during lab activities consist of low-cost, easy-to-replace components, thus facilitating multiple revisions to the experimental system. Finally, students often construct circuits that don't initially work, and the need to troubleshoot arises naturally in most lab activities. Previous work in the domain of electronics courses for physics students has focused on: characterizing college students' understanding of electric circuits [26][27][28][29][30]; characterizing expertise-related differences among high school students troubleshooting simulated circuits [18,19]; and designing teaching strategies to develop college students' conceptual understanding [31][32][33], engage college students in model-based reasoning [12], and improve high school students' troubleshooting ability [20][21][22][23]. However, we are not aware of work that focuses on physics students' ability to troubleshoot physical (as opposed to simulated) electric circuits, or of work that focuses on the troubleshooting processes employed by post-secondary physics students.
In this paper, we report on a study that explores the overlap of modeling and troubleshooting in the context of an activity that is typical of an upper-division electronics course for physics students. In this study, eight pairs of students attempted to diagnose and repair a malfunctioning electric circuit. Using audiovisual data collected from two institutions, we characterize the cognitive tasks and model-based reasoning that students employed during this activity. The work herein builds on preliminary analyses of a subset of our data, which have been reported elsewhere [34,35].
Our work expands current knowledge of instructional physics laboratory environments in two ways. First, we apply frameworks for both modeling and troubleshooting to a new domain in physics education, namely, upper-division electronics. Second, we examine the synergies of two nationally-recognized learning outcomes for lab courses, namely, (1) the ability to model experimental systems and (2) the ability to troubleshoot a malfunctioning apparatus. In doing so, we demonstrate that electronics courses, whose content is sometimes dismissed as not "real physics," can engage students in important experimental physics practices.

This paper is organized as follows. In Sec. II, we describe the two theoretical perspectives that inform our work: a cognitive task analysis of troubleshooting and a framework for describing the modeling process. In Sec. III, we provide institutional context for our study and a description of our study participants. In Sec. IV, we describe our research methods, including a detailed description of the troubleshooting activity. Our results are presented in Sec. V and discussed in Sec. VI. Finally, we summarize our findings and discuss future directions for our work in Sec. VII.

II. THEORETICAL PERSPECTIVES
Throughout this work, we define troubleshooting as the process of repairing a malfunctioning system. Troubleshooting is a type of problem-solving for which the solution state is known, but the troubleshooter must determine what information is needed for problem diagnosis [15]. Our goal is to identify and describe examples of how students use (or don't use) model-based reasoning while troubleshooting. We grounded our design and analysis in two different, complementary theoretical perspectives: a cognitive task analysis of troubleshooting [14,15,17], and the Modeling Framework, which describes physicists' use of models when conducting physics experiments [5]. The motivations for using these two perspectives are twofold. First, we are able to map the Modeling Framework onto existing analyses of the troubleshooting process. Second, when analyzing students' approaches to troubleshooting, we use the cognitive troubleshooting tasks and the Modeling Framework to provide complementary coarse- and fine-grained descriptions of students' thought processes. In this section, we elaborate on each of these perspectives and identify areas of overlap. Because the system we consider here is a circuit, we provide examples from electronics to help clarify ideas throughout the discussion.

A. Cognitive Task Analysis of Troubleshooting
Cognitive task analysis is "a family of methods used for studying and describing reasoning and knowledge" (Ref. [36], p. 3). Our summary of various cognitive task analyses of troubleshooting [14,15,17] explicates both the types of knowledge and the types of tasks that facilitate effective troubleshooting.

Types of troubleshooting knowledge
Other work [15,18] has identified six kinds of knowledge that facilitate competent troubleshooting: Domain, System, Procedural, Strategic, Metacognitive, and Experiential. Domain Knowledge consists of the theories and principles upon which the system was designed [15]. In the case of a circuit, Domain Knowledge may include underlying principles like electron transport or conservation of charge as well as models like Ohm's Law or the Golden Rules for op-amps. These principles enable the troubleshooter to both represent the problem and identify relevant problem-solving operations [18], such as which voltages to measure in order to determine whether or not a particular component is functioning properly.
System Knowledge includes understanding of the structure and function of the system and the components within the system [37]. In a circuit, this may involve recognizing that a complex circuit is composed of multiple subsystems or identifying a particular resistor as a "feedback resistor." System Knowledge further includes understanding of spatial representations of the system and flow control within the system [15]. For circuits, diagrams and schematics of the configuration of subsystems and components are common representations that are used to trace current through the system.
Procedural Knowledge refers to the appropriate use of test equipment and procedures [15]. For electronic systems, this includes understanding how to use oscilloscopes, multimeters, and power supplies.
Strategic Knowledge includes heuristic techniques and systematic approaches to troubleshooting the system [18]. Strategic Knowledge is an essential part of competent troubleshooting [13]. For example, experts employ particular sequences of operations to reduce the problem space by reducing the number of potential locations for faults [17]. Three commonly used troubleshooting strategies are [15]:
1. Exhaustive, which involves identifying all possible faults and testing them one-by-one until the actual fault is discovered;
2. Topographic, which involves performing a series of tests that follow a trace through the system, either moving "downstream" from a point where the system behaves correctly or "upstream" from a point of malfunction; and
3. Split-Half, which involves checking the functionality of the system at a midpoint in order to reduce the problem space by isolating the fault in one half or the other.
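The Split-Half Strategy amounts to a bisection search over the signal path. As a minimal illustration (our own sketch, not part of the original study), the code below assumes a chain of stages with a single fault that corrupts every downstream output; the function names and the four-stage chain are hypothetical.

```python
def split_half_diagnosis(stages, probe):
    """Locate the first faulty stage in a signal chain by bisection.

    `stages` is a list of stage labels; `probe(i)` returns True if the
    signal is still correct at the output of stage i. Assumes exactly one
    fault, and that the fault corrupts all downstream outputs.
    """
    lo, hi = 0, len(stages) - 1        # the fault lies somewhere in stages[lo..hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if probe(mid):                 # signal OK at the midpoint: fault is downstream
            lo = mid + 1
        else:                          # signal bad at the midpoint: fault is here or upstream
            hi = mid
    return stages[lo]

# Hypothetical four-stage chain in which the third stage (index 2) is broken.
chain = ["preamp", "filter", "gain", "output"]
print(split_half_diagnosis(chain, lambda i: i < 2))  # → gain
```

Each probe halves the remaining problem space, so the number of tests grows only logarithmically with the number of stages, which is why the strategy is effective on long signal chains.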
Metacognitive Knowledge "is used to monitor [the troubleshooting process] by keeping track of the progress toward the goal state" (Ref. [18], p. 237). Such monitoring is required in order to evaluate the effectiveness of a strategy and, if needed, to switch to a different strategy [13]. In ongoing work [35], we are exploring the role of socially mediated metacognition in troubleshooting [38].
Finally, Experiential Knowledge is the historical information accumulated by experienced troubleshooters [15]. Experiential Knowledge enables troubleshooters to propose likely faults by recalling historical information that links symptoms to likely causes. This process can be faster than relying on other types of knowledge to create logical connections between symptoms and causes based on the function of the system and its components. Recall of historical information has been shown to be a frequent diagnosis strategy among technicians in manufacturing [39], machining [40], and maintenance [16] contexts. In an electronics course, neglecting to properly power a circuit is a common mistake. Thus, when students encounter a malfunctioning circuit whose output voltage is zero, students' experience might prompt them to immediately check power connections rather than speculate about the configuration or misbehavior of the circuit.
These six types of knowledge-Domain, System, Procedural, Strategic, Metacognitive, and Experiential-are brought to bear when attempting to repair a malfunctioning system, though they play different roles at different stages of the troubleshooting process.

Cognitive troubleshooting tasks
Several models of the troubleshooting process exist, all of which describe the process as recursive and nonlinear [14,15,17]. In the present work, we draw on the cognitive task analysis proposed by Schaafstal et al. [14], which subdivides troubleshooting into four iterative subtasks: Formulate Problem Description, Generate Causes, Test, and Repair and Evaluate. A graphical representation of these tasks is provided in Fig. 1.

FIG. 1. Cognitive troubleshooting tasks. These tasks describe the iterative process of repairing a malfunctioning apparatus. Performing each task requires up to six types of knowledge: domain, system, procedural, strategic, metacognitive, and experiential. This figure is based on the cognitive task analysis proposed by Schaafstal et al. [14].
Formulate Problem Description refers to the early stage of the troubleshooting process, during which the troubleshooter determines both what the system is doing wrong and what it is doing right [14]. During this stage, the troubleshooter performs initial checks, measurements, and inspections of the apparatus. In an electronic system, this process involves orienting to the circuit [19] by building mental representations of the system structure and functions or using external representations [15], such as schematics, datasheets, and equations.
Generate Causes involves generating causal hypotheses, either by recognition of common symptoms (typical of experts) or by using reasoning skills, functional thinking, and external documentation (typical of troubleshooters encountering a problem for the first time) [14]. In addition to generating hypotheses that propose explanations for symptoms, this phase of troubleshooting also involves the use of Strategic Knowledge to propose procedures to facilitate identification of faults [15].
Test involves performing measurements, tests, or checks to determine whether or not a proposed cause is indeed the actual fault that needs to be repaired. According to Schaafstal et al. [14], this task includes "choosing the right testing methods and the right testing [equipment]" as well as "correctly setting up and operating the testing [equipment] and correctly reading the outcome of the test" (p. 79). In the case of a malfunctioning circuit, testing requires correct use of oscilloscopes, multimeters, and power supplies. Performing tests further involves evaluating and interpreting the outcome [14,19], which requires that the troubleshooter form expectations about the behavior of a functional system and compare those expectations to the actual performance of the system. If the observed and expected outcomes of a test are in alignment, the proposed cause that informed the test must be rejected and the troubleshooter must generate additional causes. Alternatively, if a fault is identified, the next task is to repair the system.
Lastly, Repair and Evaluate includes generating, enacting, and verifying solutions, in direct service to the goal of returning the system to its normal working state [13][14][15]. Simple repairs involve replacing a component, though other types of repair are possible (e.g., soldering a broken connection). After performing a repair, evaluative measurements must be performed to determine whether the system is functioning normally. If not, the troubleshooter may conclude either that the repair did not address the fault or that the malfunction is due to multiple faults. In either case, the troubleshooter must return to the task of generating causes. If, on the other hand, the system behaves normally, then the troubleshooter may conclude that the repair is complete.
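The four tasks can be read as a control loop: generate candidate causes, test them, repair a confirmed fault, and evaluate whether the system now behaves normally. The sketch below is our own schematic rendering of that loop, not an algorithm from the troubleshooting literature; every function name is a placeholder.

```python
def troubleshoot(symptoms, generate_causes, test, repair, evaluate, max_iter=10):
    """Schematic rendering of the four cognitive troubleshooting tasks.

    generate_causes(symptoms) -> iterable of candidate faults (Generate Causes)
    test(cause) -> True if the candidate is an actual fault (Test)
    repair(cause) -> enact a fix for the confirmed fault (Repair...)
    evaluate() -> True if the system now behaves normally (...and Evaluate)
    """
    for _ in range(max_iter):                    # bound the recursion
        for cause in generate_causes(symptoms):  # Generate Causes
            if test(cause):                      # Test: diagnostic measurement
                repair(cause)                    # Repair the confirmed fault
                break
        if evaluate():                           # Evaluate: has normal behavior returned?
            return True                          # system restored
    return False                                 # give up after max_iter passes

# Toy run with two seeded faults, mirroring a two-fault circuit: the loop
# must iterate twice before the evaluative check succeeds.
faults = {"bad_opamp", "wrong_resistor"}
fixed = troubleshoot(
    symptoms="dc output stuck at -15 V",
    generate_causes=lambda s: ["bad_opamp", "wrong_resistor", "loose_wire"],
    test=lambda c: c in faults,          # stubbed diagnostic measurement
    repair=lambda c: faults.discard(c),  # "replace" the faulty component
    evaluate=lambda: not faults,         # stubbed evaluative measurement
)
print(fixed)  # → True
```

The outer loop captures the recursive character of the process: an unsuccessful evaluation sends the troubleshooter back to generating causes, exactly as in the multiple-fault case described above.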
In this paper, our theoretical understanding of troubleshooting is partially informed by the six types of knowledge and the four subtasks described above. However, one goal of the present work is to understand the troubleshooting process through the lens of the Modeling Framework, which we describe in the following subsection.

B. Modeling Framework
The Modeling Framework describes the dynamic process through which experimental physicists develop and refine models and apparatus. A diagram of the Modeling Framework is provided in Fig. 2. To explicate the framework, we define both "models" and the process of "modeling." In doing so, we draw heavily on the work of Zwickl et al. [5].

Models
Models are abstract representations of the real world. A well-defined model is associated with a target system or phenomenon of interest, and the model can be used for explanatory purposes, predictive purposes, or both. Models are embedded in underlying principles and concepts relevant for understanding the target system. In addition, models are externally articulated through equations, diagrams, descriptions, and other representations. These representations are often informed by the topography of the target system and the flow of matter, energy, or information through the system. A circuit diagram, for instance, is a graphical representation of a circuit that shows how components are connected to one another and how charges flow through the circuit.
Importantly, models contain simplifying assumptions that yield tractable mathematical, graphical, and other representations. These assumptions limit the applicability of a model, meaning that users of the model must understand whether and when it can be accurately applied. Moreover, model limitations give rise to the possibility of model refinement by eliminating some assumptions, thus increasing the complexity of the model and broadening its scope of applicability. The iterative improvement of models to make them more accurate and sophisticated is one path in the process of modeling.

Modeling
Modeling is the process through which models and systems are brought into better agreement, either by refining the model or the target system itself. The Modeling Framework subdivides the target system into two parts, each with its own corresponding model (Fig. 2): the physical system and the measurement system. This subdivision reflects the fact that experimental physicists often operate measurement equipment in regimes where the limitations of that equipment become important. Such limitations must be accounted for either by making modifications to existing equipment or by developing an understanding of the tools' performance in new parameter regimes.
In many cases, the division between physical and measurement systems is fuzzy. For example, in circuits such as the one considered here, the physical system consists of the circuit itself (wires, resistors, and other components) whereas the measurement system comprises voltmeters, ammeters, and other measurement tools. Whether power supplies and other "test equipment" are included in the physical or measurement system reflects an arbitrary choice on the part of the modeler. Here, we include power supplies in the measurement system.
Modeling is a dynamic and iterative process, involving the following phases: model construction, interpretation, prediction, comparison, proposal, and revision. Model construction refers to the development of models of the measurement and physical systems, depicted at the top left and right of Fig. 2, respectively. This process involves: identifying general principles and concepts that underlie the model; making assumptions that simplify the model and identifying the corresponding limitations on the model's applicability; and choosing realistic values for model parameters.
While the model of the physical system is used to make predictions about the performance of the physical apparatus, the model of the measurement system is used to interpret raw data output by the measurement apparatus. In an optical system, this might involve using a known calibration factor to convert the output voltage of a photometer into a measurement of optical power. In electrical systems, when using a digital multimeter to measure the voltage of an oscillating signal, it is important to know whether the multimeter is displaying the amplitude of the signal or its root-mean-square value.
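Interpreting a raw meter reading thus requires knowing which quantity the instrument displays. The snippet below (a hypothetical illustration, not from the study) converts an RMS reading of a pure sinusoid into an amplitude; confusing the two quantities introduces a factor of √2, roughly a 41% error.

```python
import math

def amplitude_from_rms(v_rms):
    """Convert a multimeter's RMS reading of a pure sinusoid to its amplitude.

    Valid only for sine waves, for which V_amp = sqrt(2) * V_rms. A meter
    that reports amplitude directly would need no conversion, so misreading
    which quantity is displayed skews every downstream comparison.
    """
    return math.sqrt(2) * v_rms

print(round(amplitude_from_rms(1.0), 3))  # a 1.0 V RMS sine has amplitude → 1.414
```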

FIG. 2. The Modeling Framework. This Framework describes the iterative process of constructing models of the measurement and physical systems, comparing measurements to predictions, proposing explanations for discrepancies, and revising models and/or apparatus. Darker shades of gray correspond to phases common in the troubleshooting process. Bold phrases indicate aspects of the Framework that informed our a priori analysis scheme. This figure is adapted from the visualization presented by Zwickl et al. [5].
Comparison is the act of comparing predictions to interpreted measurements. Discrepant measurements and predictions prompt physicists to propose potential explanations for, and/or solutions to, those discrepancies. Resolving discrepancies requires a revision to either the models or apparatus. The framework describes four pathways of revision: refine the measurement system model, the measurement system apparatus, the physical system apparatus, or the physical system model. Prioritization of one particular revision pathway over others depends on many factors, including the nature of the task. For example, based on the definition of troubleshooting used here, a troubleshooting activity will likely result in revision to the physical system apparatus.

C. Synthesizing the Frameworks
While the cognitive task analysis of troubleshooting and the Modeling Framework provide two distinct perspectives through which to understand the troubleshooting process, they are nevertheless connected. In this subsection, we synthesize these two perspectives by describing how both the types of knowledge and the cognitive tasks involved in troubleshooting relate to the modeling process.

Modeling and types of troubleshooting knowledge
Domain, System, and Procedural Knowledge can be directly connected to modeling. For example, these types of knowledge are required for the construction of models of the physical and measurement systems. Strategic and Metacognitive Knowledge, on the other hand, are only implicitly connected to modeling. For example, the process of modeling involves deciding which measurements to perform, in what order, and for what purpose. Alternatively, in response to a discrepancy between measurement and prediction, a physicist must decide which of the four revision pathways to enact. While such strategic and metacognitive decisions are necessary parts of the modeling process, they are not explicitly represented in the Modeling Framework. Rather, they are implicitly embedded in the arrows of Fig. 2: each arrow represents different possible metacognitive and strategic choices on the part of the experimentalist while navigating between different phases of the modeling process.
Depending on the circumstances, Experiential Knowledge can also be implicitly embedded in the Modeling Framework. For example, a troubleshooter may rely on historical information when making decisions about what to measure or what to revise. In this sense, the role of Experiential Knowledge in modeling is similar to the roles of Strategic and Metacognitive Knowledge. In other cases, however, Experiential Knowledge may limit the relevance of the framework for understanding a particular instance of troubleshooting. Experienced troubleshooters call on event schemas based on their historical experience with a system and its specific fault tendencies, often shortening their diagnostic process [15]. Using schemas to solve problems quickly is a common feature of expert problem-solving in physics and other contexts [41]. Thus, Experiential Knowledge may facilitate direct connections between symptoms and diagnoses without the need to engage in the recursive, nonlinear processes the Modeling Framework was designed to describe. In these situations, the framework may not be the most appropriate tool for characterizing the troubleshooting process.

Modeling and cognitive troubleshooting tasks
The cognitive troubleshooting tasks provide a taxonomy for some of the modeling phases. For example, consider the role of measurement in troubleshooting. Measurements can be classified into three types according to the cognitive task with which they are affiliated: formative measurements, which serve to formulate the problem description during initial stages of the troubleshooting process; diagnostic measurements, used to test causal hypotheses during the testing phase; and evaluative measurements, used to determine whether the system has been restored to its functional state after a revision has been made. Similarly, the cognitive troubleshooting tasks discriminate between two types of proposals: proposed explanations for discrepancies between measurement and prediction, which facilitate generation of causes; and proposed solutions for resolving those discrepancies, which inform repairs to the system.
Conversely, the Modeling Framework provides a taxonomy of repair types: any of the four revision pathways in the Modeling Framework could constitute a type of repair during the troubleshooting process. In our study, however, repairs primarily consisted of revisions to the physical system apparatus, ultimately privileging one recursive pathway in the Modeling Framework (shaded in dark gray in Fig. 2).
One major goal of the present work is to identify and describe examples of how students use (or don't use) model-based reasoning while troubleshooting. To help us unpack the mapping between the Modeling Framework and the troubleshooting process, we designed an observational study in which pairs of students were tasked with repairing a malfunctioning circuit. In the following section, we describe the institutional context and the participants involved in our study.

III. CONTEXT AND PARTICIPANTS
Our study was carried out at two universities, the University of Colorado Boulder (CU) and the University of Maine (UM). Both institutions are predominantly white four-year public research universities with high undergraduate enrollment. CU is a large, more selective institution with very high research activity; UM is a medium, selective institution with high research activity [42]. Demographic information about the physics programs at each institution is summarized in Table I. These demographics reflect the makeup of the students enrolled in the Electronics Courses at CU and UM as well as those who participated in our study.
The Electronics Courses at CU and UM share many similarities. Each course is required for all physics majors, with students typically completing the course during their third year of instruction. Both courses convene three times per week: twice for one-hour lectures and once for a multi-hour lab (three hours at CU, two at UM). Both Electronics Courses consist of 2-3 lab sections, with 15-20 students per section at CU and 5-8 at UM. Lectures and labs are taught by tenured or tenure-track physics faculty members. Teaching and/or learning assistants support instruction at each institution. Both courses focus on analog components (e.g., op-amps, diodes, and transistors) and circuits (e.g., dividers, filters, and amplifiers). To learn this material, students work in pairs on guided lab activities. There is no formal instruction about troubleshooting in either course; instead, discussion about troubleshooting is limited to impromptu conversations between students and instructors in response to problems that inevitably arise during lab.
The CU and UM courses differ in several ways. At CU, for example, the course is offered every semester and enrollment varies from 30 to 60 students per term. Students have keycard access to the lab room at all hours of the day, including weekends. In addition, the CU course culminates in a five-week final project. Finally, the CU Electronics Course was recently redesigned to engage students in modeling of canonical measurement equipment and analog circuits [12], in alignment with consensus learning goals for upper-division labs identified by physics faculty members at CU [10]. Additional learning goals for this course were identified through interviews with graduate students who use electronics as part of their experimental physics research [43]. At UM, on the other hand, the course is offered only in the fall, with roughly 10 to 15 students per term. Moreover, the UM Electronics Course is designated a "writing intensive" laboratory course, which means that students are required to complete formal lab write-ups that are critiqued by an outside technical writing expert (in addition to the Electronics instructor).
Study participants were physics or engineering physics majors enrolled in the Electronics Course at either CU or UM during Fall 2014. During that time, two of the authors (HJL and MRS) taught lab and lecture sections for the CU and UM courses, and one of the authors (KLVDB) was a teaching assistant for the UM course. We solicited participation in the study via email and in-person requests during the last few weeks of Fall 2014 and the first few weeks of Spring 2015. The study was not an official part of either the CU or UM course, and no course credit was associated with participation in the study. Participants were consenting volunteers who received small monetary incentives for their participation.
In total, 16 students participated in the study, 8 each from CU and UM. We interviewed students in pairs, forming four pairs at each institution. Two pairs consisted of students who were lab partners during their Electronics Course, and six pairs consisted of students who were not lab partners. The latter six pairs were formed by the research team by pairing students who had expressed interest in participating in the study. Fifteen participants earned grades ranging from A to B− in their Electronics Course, which required students to work with the components and systems used in our study. One student did not receive a passing grade due to a failure to submit all of the lab reports and lab notebooks, per the grading policy of the course. During the interviews, all eight student pairs attempted to repair a malfunctioning electrical circuit, as described in the following section.

IV. METHODS
To probe whether and how students engaged in model-based reasoning while troubleshooting, we conducted Think-Aloud Pair Problem Solving (TAPPS) interviews with eight pairs of students. TAPPS interviews involve students working on an activity while concurrently verbalizing their thoughts aloud [44,45], providing the research team with information on student reasoning about actions and outcomes [46]. In the troubleshooting literature, TAPPS interviews have been used in both training [24] and research [47] contexts. During our TAPPS interviews, student pairs were tasked with repairing a malfunctioning electrical circuit, namely, an inverting cascade amplifier. In this section, we describe the design of the TAPPS interview and elaborate on data collection and analysis methods.

A. Troubleshooting activity
We designed an inverting cascade amplifier that contained two subsystems, or stages: a noninverting amplifier (Stage 1) and an inverting amplifier (Stage 2). Each stage consisted of an op-amp and two resistors: R1 and R2 in Stage 1, and R3 and R4 in Stage 2. A diagram of the circuit is provided in the left panel of Fig. 3.

Functioning circuit behavior
In a functional circuit, each stage would have amplified its input voltage by a multiplicative factor called the gain. The theoretical gains for Stages 1 and 2 were G1 = (1 + R2/R1) and G2 = −R4/R3, respectively. Because the two stages were connected in series, the overall gain of the cascade amplifier was the product of the gains of Stages 1 and 2: Gtot = G1 G2. For a given input voltage, VIN, a functional circuit's output voltage, VOUT, would be given by:

VOUT = Gtot VIN = G1 G2 VIN. (1)

Equation 1 is valid under the following two conditions: first, the magnitude of the input voltage is sufficiently small that the outputs of Stages 1 and 2 are smaller in magnitude than the corresponding supply voltages, VS1± and VS2±; and second, the frequency of the input is sufficiently low that bandwidth limitations of the op-amps can be ignored.
The resistances and voltages characteristic of a functioning circuit are given in Table II. Nominal resistances and supply voltages were communicated to interview participants via the schematic and datasheet, which were provided to them during the activity. For these nominal values, G1 = 2, G2 = −10, and Gtot = −20. That is, in a functional circuit, Stage 1 would double the input voltage, Stage 2 would both invert the output of Stage 1 and amplify it by a factor of 10, and the circuit as a whole would invert the input voltage and amplify it by a factor of 20. In the context of alternating current (ac) signals, "inverting" is equivalent to shifting the phase of the signal by 180°. Plots (a) and (b) in the right panel of Fig. 3 show the outputs of Stages 1 and 2 when an ac signal is input to a functioning inverting cascade amplifier.
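As a quick numerical illustration of these gain relations, consider the sketch below. This is our own construction, not code from the study; since Table II is not reproduced here, the resistor values are assumptions chosen to match the quoted nominal gains (R1 = R2 gives G1 = 2, and R4 = 10·R3 gives G2 = −10; the 460 Ω value for R1 is taken from the description of Pair A's revisions later in the text).

```python
# Assumed resistor values consistent with the quoted nominal gains (see lead-in).
R1, R2 = 460.0, 460.0   # Stage 1 (noninverting); R1 = R2 yields G1 = 2
R3, R4 = 1e3, 10e3      # Stage 2 (inverting), nominal values

G1 = 1 + R2 / R1        # gain of the noninverting stage
G2 = -R4 / R3           # gain of the inverting stage
G_tot = G1 * G2         # stages in series: gains multiply

V_IN = 0.1              # volts
V_OUT = G_tot * V_IN    # Eq. (1)
print(G1, G2, G_tot, V_OUT)  # 2.0 -10.0 -20.0 -2.0
```

A 100 mV input thus yields a 2 V inverted output in the functional circuit, consistent with the overall gain of −20.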
Finally, each of the op-amps in a functional circuit would obey the following "Golden Rule" for op-amps in a closed loop with negative feedback: there is zero voltage difference between the two input terminals.
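The Golden Rule and the Stage 1 gain expression are connected by a short calculation. As an illustrative sketch of ours (not part of the study): model the op-amp as having a large but finite open-loop gain A; as A grows, negative feedback forces the two input terminals to the same voltage, and the closed-loop gain approaches 1 + R2/R1.

```python
def closed_loop_gain(A, R1, R2):
    """Closed-loop gain of a noninverting amplifier with finite open-loop gain A.

    The feedback divider returns beta = R1/(R1 + R2) of the output to the
    inverting input, so V_out = A*(V_in - beta*V_out); solving for the ratio
    V_out/V_in gives A/(1 + A*beta).
    """
    beta = R1 / (R1 + R2)
    return A / (1 + A * beta)

# As A grows, the Golden Rule limit 1 + R2/R1 is recovered
# (equal resistors give a gain of 2):
print(closed_loop_gain(1e7, 460.0, 460.0))  # approximately 2
```

In the Golden Rule limit the gain depends only on the feedback resistors, which is why the faulty 100 Ω resistor described below changes the Stage 2 gain so dramatically.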

Malfunctioning circuit behavior
To ensure that students engaged in more than one iteration of troubleshooting, we introduced two faults in the circuit, as shown in the left panel of Fig. 3. First, the resistor R3 had a value of 100 Ω rather than the nominal value of 1 kΩ, increasing the actual gain of Stage 2 by an order of magnitude compared to the nominal gain. Second, we used a malfunctioning op-amp in Stage 2, which manifested in a direct current (dc) output voltage of −15 V regardless of the input voltage. Both faults were localized in Stage 2 so that the cascade amplifier consisted of both a functional subsystem (Stage 1) and a malfunctioning one (Stage 2), making it possible for students to use the Split-Half Strategy. The output of the malfunctioning circuit is shown in Plot (c) of Fig. 3. Additional characteristics of the malfunctioning circuit are provided in Table II.
If students were to replace the 100 Ω resistor with a 1 kΩ resistor but leave the faulty op-amp in the circuit, the output of the circuit would remain unchanged. On the other hand, if students were to replace the faulty op-amp with a functioning chip but leave the 100 Ω resistor in the circuit, the circuit would effectively be a functioning inverting cascade amplifier with an overall gain of −200. In this case, the output of the circuit could potentially be a clipped signal. A "clipped signal" refers to a sinusoid with flattened peaks, as shown in Plot (d) of Fig. 3. The flattening is due to limitations of the op-amp, which cannot output voltages larger than about 13 V in magnitude. A gain of −200 would result in clipping for any ac input signals with amplitude larger than only about 65 mV. Even in a functioning circuit, clipping may arise for input ac signals with amplitude larger than about 650 mV.
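These clipping thresholds follow from dividing the op-amp's approximately 13 V output limit by the magnitude of the overall gain. A minimal check (variable and function names are ours):

```python
V_CLIP = 13.0  # approximate maximum output magnitude of the op-amp, in volts

def clip_threshold(gain_magnitude):
    """Largest ac input amplitude (in volts) that still avoids clipping."""
    return V_CLIP / gain_magnitude

print(clip_threshold(200))  # 0.065 -> 65 mV (100-ohm R3, functioning op-amp)
print(clip_threshold(20))   # 0.65  -> 650 mV (fully repaired circuit)
```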
In the malfunctioning state, the op-amp in Stage 2 did not obey the Golden Rule for op-amps. There was a nonzero dc voltage difference between the input terminals of the faulty op-amp, as indicated in Table II. To troubleshoot the circuit, student pairs had access to test and measurement equipment that was typical of the equipment used in their Electronics Course. All students had access to an oscilloscope, digital multimeter, low-voltage dc power supply, ac function generator, pliers, wire strippers, various types of cables, and extra resistors, op-amps, and wire. CU students used a breadboard that needed to be connected to external voltage sources. UM students, on the other hand, used a commercial prototyping board that had on-board ac and dc voltage sources. In both cases, the malfunctioning circuit was pre-built on the board by the research team. A photograph of the setup used at CU is provided in Fig. 4.
We designed the think-aloud troubleshooting task to closely mimic the types of troubleshooting events that students typically encounter during the Electronics Course. After students completed the task, we asked them to comment on the extent to which the task felt like a typical Electronics Course activity to gain insight into the ecological validity of the task [48]. Student responses indicated that the think-aloud activity was similar to their typical course experiences along several dimensions, including the components used, the equipment used, the faults they encountered, and the processes they used to troubleshoot the circuit. We conclude that the activity was similar to students' in-class experiences.

B. Data collection
We conducted TAPPS interviews in which pairs of students were tasked with diagnosing and repairing the malfunctioning inverting cascade amplifier shown in Fig. 3. At the start of each task, the interviewer provided students with the following materials: a schematic diagram of the circuit, a datasheet for the op-amp, and a pre-built malfunctioning circuit. The circuit schematic and corresponding text are shown in Fig. 5. No expressions for the gains of either of the two individual stages were provided to the students.
The interviewer read a short prompt to the students before they began the task. The prompt framed the activity as follows: For this activity, you will be repairing a malfunctioning circuit. Specifically, you'll be working with an inverting cascade amplifier, described on this page here [Fig. 5]. For context, let's imagine that some of your peers built this circuit as part of class. They built the circuit using the same chip you've been using in class this semester. Here's the standard data sheet for that chip. Your tasks are to diagnose any issues and make the circuit work properly.
This interview is very similar to what you've been doing in class. You'll have access to much of the equipment from class, including power supplies, measurement tools, and a limited selection of electrical components.
One difference from class is that you're working with a circuit someone else built. Another difference is that I'm interested in what you say to yourself as you perform this task, so I will ask you to talk aloud as you work on the circuit.
What I mean by talk aloud is that I want you to say out loud everything that comes into your mind while doing the task. Put another way, I want you say out loud what you might otherwise say to yourself silently. Of course, you should also feel free to ask each other questions and interact as you would when working together in [the Electronics Course]. But the more you both say out loud what you're thinking in your head, the more helpful it will be.
Act as if I am not in the room. Just keep talking. If you are silent for any length of time, I will remind you to keep talking aloud.
After reading the prompt, the interviewer asked the students to begin working on the task. During this time, the interviewer interacted only minimally with the students. The activity ended when either the students repaired the circuit or an hour had passed. After the activity was over, the interviewer asked students a few short follow-up questions, including a question about the extent to which the activity felt typical of students' experiences in their Electronics Course.
The interviewers' prompts and follow-up questions accounted for only 5-10 minutes per activity. In six interviews, students spent 40-45 min troubleshooting the circuit. In the other two interviews, students repaired the circuit in 20-25 min. Together, all eight TAPPS interviews lasted a total of six hours, with about five hours devoted to troubleshooting the circuit. Video and audio data were collected for all interviews, and audio data were fully transcribed.

C. Data analysis
Our approach to data analysis involved two parts. First, we used the cognitive troubleshooting tasks to code successive two-minute intervals that spanned the duration of each of the interviews. Second, we used the Modeling Framework to code two types of events present in most interviews: (1) isolation of the second stage as the source of faults, and (2) evaluation of the circuit after replacing the faulty op-amp.
Both the cognitive troubleshooting tasks and the Modeling Framework were used as a priori analysis schemes. For each scheme, we initially developed operational code definitions based on global definitions from the troubleshooting and modeling literature [5,14] and a review of the content logs. Operational definitions were refined through iterative cycles of collaborative coding and discussions with the research team. Whereas codes related to the cognitive troubleshooting tasks were applied to a total of 5 hours of troubleshooting activity across all 8 interviews, the Modeling Framework codes were applied only to a total of about 30 minutes across all eight TAPPS interviews. Below, we describe the coding schemes in greater detail.

Cognitive troubleshooting tasks
To characterize students' approach to troubleshooting the malfunctioning cascade amplifier, we developed codes corresponding to the four cognitive troubleshooting tasks described in Sec. II A 2: Formulate Problem Description, Generate Causes, Test, and Repair and Evaluate. Each code was associated with three or four subcodes that were generated through an emergent and iterative process, starting with a review of the audiovisual recordings. The subcodes are listed in Table III. To apply the subcodes to our data, we divided each video into successive two-minute intervals. We collaboratively coded each interval through three iterations of coding, discussion, and refinement of subcode definitions and applications. Depending on the nature of student activity during a given time interval, we assigned no subcodes, one subcode, or multiple subcodes to the interval.
As an example of our coding scheme, the Formulate Problem Description subcodes (italicized font) and their operational definitions (normal font) were:
• Map circuit onto schematic and/or datasheet: Students orient themselves to the circuit topographically by mapping the circuit onto the schematic or datasheet. This typically involves looking back and forth between the circuit, schematic, and datasheet, articulating which chip corresponds to which stage, which resistors correspond to R1-R4, which pins are input and output, and/or where the power and ground connections are located.
• Discern functions of systems, components: Students do at least one of the following: identify the circuit or one of its subsystems as an inverting or noninverting amplifier; discuss the function of a component (e.g., "this is a feedback resistor"); or rationalize the absence of capacitors (e.g., "capacitors are just needed to clean up high-frequency noise from the signal").
• Perform formative measurements: Students perform initial checks of the circuit configuration, resistor values, pin voltages, or the performance of the test and measurement equipment. These measurements are typically accompanied by statements like, "I'm just trying to figure out what's going on."

According to our subcode definitions, a measurement of, say, voltage could be an example of either Formulate Problem Description, Test, or Repair and Evaluate depending on whether it was performed in a formative, diagnostic, or evaluative capacity. For example, an initial check that the op-amps are powered would be an example of a formative measurement. On the other hand, measuring the midpoint voltage as part of a Split-Half Strategy would constitute a diagnostic measurement. Finally, checking the output signal after replacing the op-amp in Stage 2 would be an evaluative measurement.
Although Schaafstal et al. [14] include setup and operation of test and measurement equipment in their global definition of "performing the test", we did not include these actions in our final definition of Test. In our dataset, students were adjusting settings on the oscilloscope, multimeter, power supply, and/or function generator throughout the activity, which effectively contributed a "constant background" of this aspect of testing to our analysis.

Modeling Framework
To characterize students' model-based reasoning during the troubleshooting process, we developed codes based on the Physical System half of the Modeling Framework. We applied these codes to two types of events: (1) isolation of the second stage as the source of faults, and (2) evaluation of the circuit after replacing the faulty op-amp. For both types of events, we used the Modeling Framework codes to perform line-by-line analyses of the corresponding transcribed student dialogue. A detailed example of this approach is provided elsewhere [34].
In total, we identified 12 excerpts (5 isolation-type events and 7 evaluation-type events) which lasted about 2-3 minutes each. Excerpts for isolation-type events started when one or both students suggested measuring the output of Stage 1 and ended when the students concluded that Stage 1 was functioning as expected and the faults could therefore be isolated in Stage 2. Excerpts for evaluation-type events started just after the students replaced the faulty op-amp and ended once the students concluded that the circuit as a whole had been repaired and was now behaving as expected. Isolation-type events, when they occurred, took place between half and two-thirds of the way through the think-aloud activity. Evaluation-type events, on the other hand, spanned the last three minutes of the activity.
The Modeling Framework codes (bolded font) and their operational definitions (normal font) were:
• Model Construction (Physical System): Students do any of the following: model the circuit as being comprised of two abstract subsystems, each with its own gain; identify relevant principles and concepts from electronics, such as the Golden Rule for op-amps in a closed loop; or identify limitations of the transfer function, such as the gain-bandwidth product or voltage-related limits on the amplitude of the output signal.
• Prediction: Students compute expected outputs, such as: the phase, amplitude, or frequency of the signal at various points in the circuit; or the gain of a subsystem or the system as a whole. This includes articulating expectations about what would happen if a component had a different value (e.g., they compute the gain of a hypothetical circuit with, say, R4 = 1 kΩ).
• Comparison: Students compare expected and measured values of amplitude, phase, or frequency of a signal (e.g., "We see 100 mV, but it should be 4 V."). This includes making relational statements about the size of a signal (e.g., "This signal is too small.") and making evaluative judgements about the observed signal (e.g., "This is strange," or "This isn't what it's supposed to do.").
• Proposal: Students suggest explanations for a discrepancy between measurement and prediction. Alternatively, students suggest solutions for bringing the actual performance of the circuit into alignment with expectations.
• Revision (Physical System): Students change the circuit configuration, replace a resistor, or replace an op-amp in response to a discrepancy between measurement and prediction.
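To make the Comparison code concrete, one can imagine a student's judgment as a tolerance check of measured amplitude and phase against predictions. The function below is a hypothetical sketch of ours, including the 10% tolerance, and is not an instrument that was used in the study:

```python
def agrees(predicted_amp, measured_amp, predicted_inverted, measured_inverted,
           rel_tol=0.1):
    """Crude stand-in for a Comparison judgment: amplitude within a relative
    tolerance and matching phase (inverted or not)."""
    amp_ok = abs(measured_amp - predicted_amp) <= rel_tol * abs(predicted_amp)
    phase_ok = (predicted_inverted == measured_inverted)
    return amp_ok and phase_ok

print(agrees(10.0, 10.0, True, True))  # True: "inverted, which is good"
print(agrees(4.0, 0.1, True, True))    # False: "We see 100 mV, but it should be 4 V."
```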
The definition of Prediction does not require that students' computations or expectations are correct. Neither does the definition require that computations or expectations are carried out or articulated before a measurement is performed. Indeed, in our dataset, many instances of Prediction occur after a measurement takes place in an effort to determine whether or not the measured value "makes sense." While Zwickl et al. [5] include "identify parameters" as part of constructing models, we did not address this aspect in our final definition of Model Construction. In the episodes we chose to analyze with the Modeling Framework, there were no instances of students identifying parameters in service of constructing a model of the circuit.

V. RESULTS
We describe the troubleshooting processes of eight pairs of students. Since we are not performing a comparative analysis, we do not distinguish between students at CU and those at UM. Pairs of students are labeled A-H and individual students are labeled according to their pair membership. For example, the students in Pair A are labeled A1 and A2. When providing examples of students' verbalizations, we indicate the speaker as well as the time interval during which they were speaking.
In the following subsections, we describe the students' troubleshooting process and the changes they made to the circuit. Using results from two coding schemes (detailed in Secs. IV C 1 and IV C 2), we show that each pair of students engaged in all of the cognitive troubleshooting tasks and demonstrated model-based reasoning during strategic and/or evaluative stages of the troubleshooting process.

A. Cognitive tasks
The results of coding for the four cognitive troubleshooting tasks are shown in Fig. 6. Formulate Problem Description, Generate Causes, Test, and Repair and Evaluate are represented as orange, blue, green, and yellow bands, respectively. Based on these codes, several patterns can be discerned. For example, all eight pairs engaged in all four cognitive troubleshooting tasks. Most pairs transitioned from formulating the problem description to generating causes about halfway through the activity, though the transition was not always clear-cut. Testing happened almost continuously throughout the duration of the activity, whereas repairs and evaluations were performed more sporadically.
The filled black triangles and stars in Fig. 6 correspond to times when students replaced the 100 Ω resistor and the faulty op-amp, respectively. These symbols reveal additional patterns. For example, all pairs replaced the resistor and all-but-one replaced the op-amp. In each case, the resistor was replaced before the op-amp was replaced. Indeed, most pairs replaced the resistor early in the troubleshooting process, while they were still formulating the problem description and before they started generating causes. In addition to replacing the 100 Ω resistor and/or the faulty op-amp, many pairs performed additional, unnecessary revisions to the circuit. In Fig. 6, such revisions sometimes manifest as yellow blocks which contain neither a triangle nor a star.
The filled black circles in Fig. 6 indicate the times when students employed the Split-Half Strategy. Pairs D-H employed this strategy at some point during the second half of the activity, often within a few minutes after the onset of generating causes. Pairs who used the Split-Half Strategy did not repair the circuit significantly faster or slower than those who did not. For example, Pairs A and E repaired the circuit about twice as quickly as other groups, but only E employed the Split-Half Strategy.
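The Split-Half Strategy generalizes to a binary search over a chain of subsystems. The sketch below is our own illustration, with `measure_ok` standing in for a diagnostic measurement at a midpoint of the signal chain:

```python
def split_half(stages, measure_ok):
    """Return the index of the first faulty stage in a serial signal chain.

    measure_ok(i) -> True if the signal measured *after* stage i matches
    predictions (i.e., every stage up to and including i is functional).
    """
    lo, hi = 0, len(stages) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if measure_ok(mid):   # signal good after `mid`: fault lies downstream
            lo = mid + 1
        else:                 # signal bad: fault lies at or before `mid`
            hi = mid
    return lo

# Two-stage cascade with faults in Stage 2 (index 1), as in the interview task:
print(split_half(["Stage 1", "Stage 2"], lambda i: i < 1))  # 1
```

For a two-stage circuit a single midpoint measurement suffices, which is precisely the measurement of the output of Stage 1 described in this section.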
Below, we elaborate on the results from our cognitive task coding scheme.

Formulate Problem Description
Formulate Problem Description refers to the early stage of the troubleshooting process, during which students oriented themselves to the activity and performed formative measurements to determine what was working and what was not. Students engaged in this task throughout the first half of the activity. We report verbalizations that reflect three different aspects of problem formulation: (1) mapping the circuit to the schematic, (2) discerning the intended function of the circuit, and (3) performing initial measurements and inspections to check various aspects of the circuit.
All eight pairs mapped the schematic to the circuit almost immediately after the interviewer finished giving instructions. For example, as soon as the activity started, Student B1 said: "So, we should identify which one [stage] is which to start with. . . . This is a noninverting [stage] to start with. This part only. And the next one is an inverting [stage]." (B1; 2:35-3:09) Here B1 correctly mapped the components in the circuit to their symbolic representations in the schematic. In addition, B1 simultaneously recognized the existence of noninverting and inverting stages, an important part of discerning the intended function of the circuit and its subsystems. Seven pairs (B-H) identified the existence of the two stages and discerned their intended functions. For example, soon after examining the schematic for the first time, Student E1 said: "That makes sense, just like inverting and non-inverting smashed together." (E1; 2:45-2:49) Thus, E1 parsed the circuit as comprising two subsystems early in the activity, before performing any measurements. Pairs B and C did the same. Pairs D and F-H, on the other hand, discussed the intended function of the circuit after they had begun performing measurements and generating hypotheses about potential faults. In these cases, discerning the function of subsystems was a crucial part of understanding the results of diagnostic measurements. For example, after measuring the output of Stage 1, F2 discerned the function of the subsystems in response to a diagnostic measurement. In Fig. 6, this and similar occurrences in Pairs D, G, and H are depicted by instances of Formulate Problem Description (orange bands) that occur after the onset of Generate Causes (blue bands).
To formulate the problem description, students performed formative checks of configuration, resistances, and voltages. All eight pairs began checking the circuit configuration within a few minutes of receiving instructions. For example, after mapping the circuit to the schematic, Student A1 said: "Let's see if everything's connected right first off." (A2; 2:50-2:53) Here A1 suggested checking the circuit configuration as one of the first steps in the troubleshooting process. Six pairs (B-E, G, and H) also measured the resistances of all four resistors during this phase, leading them to identify and replace the 100 Ω resistor early in the activity. Students employed the Topographic Strategy when checking circuit configuration and resistor values, starting from the input and tracing through the circuit to the output, or vice versa. For example, G1 suggested checking the resistors in order of where they occur along the path from the input to the output of the circuit. Each pair also performed formative voltage measurements to ensure that the circuit was properly powered and grounded. In all cases, measurement of the output signal triggered students to begin generating causes and testing the circuit.

Generate Causes
Generate Causes involves making hypotheses about potential faults. Students generated causes throughout the second half of the activity, starting after measuring the output signal for the first time. Students proposed many different potential explanations for the malfunctioning behavior of the circuit, including short circuits, saturation, and faulty chips. We provide three examples of proposals: one that was dismissed, one that resulted in a revision to the circuit, and a third proposal that gave rise to diagnostic measurements.
An explanation proposed by Student C1 is an example of a hypothesis that was dismissed. C1 suggested that the observed dc output signal was caused by output limitations of the op-amps; when the expected output voltage exceeds the limitations of the op-amp, the circuit is sometimes referred to as being "saturated." This idea was immediately dismissed by C2, who noted that the output was a dc signal rather than the clipped ac signal characteristic of saturated amplifier circuits, as in Plot (d) of Fig. 3.
Not all hypotheses were dismissed. For example, during a brainstorming session, Student F1 simultaneously proposed an explanation and a revision: "Could the op-amps be faulty? Should we just replace them with new ones? For the second op-amp, let's just replace it with a new one." (F1; 40:18-40:26) Here F1 suggested that faulty op-amps might be the cause of the dc output signal, and then immediately proposed replacing the op-amp in Stage 2 with a new chip. F1 and F2 went on to replace the op-amp in Stage 2 as suggested.
Many proposed causes informed follow-up diagnostic measurements. For instance, upon seeing a dc output signal equal to the negative supply voltage, Student B1 said: "Maybe this red one [wire], the power is somehow touching the output." (B1; 27:57-28:03) Here, B1 suggested there might be a short circuit connecting the negative dc supply voltage and the output of the circuit. After making this suggestion, B1 went on to perform a diagnostic visual inspection of the circuit for such a short.

Test
Test involves performing diagnostic measurements to determine whether or not a proposed fault is an actual fault. Test further includes prioritizing measurements, making plans, and making predictions about expected outcomes. Students engaged in testing throughout the duration of the think-aloud activity, focusing on prioritizing, planning, and predicting during the first half of the activity. During the second half of the activity, students began performing diagnostic measurements to check proposed causes. We report verbalizations that reflect the planning, prioritizing, and predicting aspects of testing.
The following exchange is an example of students prioritizing measurements. Immediately after observing the erroneous dc output of the circuit for the first time, Pair E discussed their plans for diagnostic measurements. Student E1 outlined the following plan: "What we could do is get out a probe and we can just go through the first one [stage] and measure VOUT, and we could see if that's what we expect it to be." (E1; 15:41-15:52) Here, E1 suggested performing diagnostic voltage measurements at various points in the first stage as well as of the output. E2 agreed with this plan, and suggested performing diagnostic measurements of the supply voltages as a follow-up. Pair H similarly made a plan, which they carried out, that allowed them to test Stage 2 as an isolated system. In addition to making a plan, H2 also made a new prediction: H2 correctly predicted that the second stage, if functioning properly, would invert the input signal and amplify it by a factor of 10. Through testing, students were able to determine that some of their proposed causes were indeed actual faults in the circuit. For example, after isolating the second stage to test the performance of the second op-amp, Pair H concluded that the second op-amp was indeed faulty and in need of replacement. Testing thus paved the way for repairs to the circuit.

Repair and Evaluate
Repair and evaluation involves proposing, enacting, and evaluating revisions to the circuit. Repairs and evaluation typically happened in short bursts (Fig. 6). Here we focus on three aspects of repairing and evaluating the circuit: replacing the 100 Ω resistor, making erroneous revisions to the circuit, and replacing the faulty op-amp.
All eight pairs correctly identified the 100 Ω resistor as a fault and replaced it with a 1 kΩ resistor. Six pairs (B-E, G, and H) identified the resistor early in the activity, while checking that the circuit was constructed properly. In these cases, there could be no evaluative measurement of the revised circuit's performance because the students hadn't yet observed the output signal. Pairs A and F, on the other hand, identified the resistor as part of the testing process. Both of these pairs performed a quick evaluative measurement of the output voltage to determine whether their revision repaired the circuit. For example, after replacing the 100 Ω resistor with a 1 kΩ resistor, Student F2 measured the output signal; F2's statement that the circuit was "still not working" frames the measurement of the output signal as an evaluative measurement.

Five pairs made unnecessary or erroneous changes to the circuit. There were three types of erroneous revisions. First, Pair A replaced R1, nominally 460 Ω, with a resistor whose measured resistance was closer to the nominal value than the original resistor. Second, Pairs B, D, and H changed the circuit configuration. In each case, these changes were due to incorrect mapping of the circuit to the schematic or datasheet; the students eventually realized their mistake and restored the original circuit configuration. Finally, Pairs A-C replaced the op-amp in Stage 1 with a new chip.
For Pairs A and B, the decision to replace the first op-amp was based on an Exhaustive Strategy in which both op-amps were replaced simultaneously. For example, after Pair A had checked the circuit configuration, measured the resistor values, and replaced R3, Student A2 suggested replacing both op-amps at once. After making repairs, some pairs performed only cursory evaluative measurements while others adopted a more extensive approach to evaluation. We used the Modeling Framework to gain insight into students' evaluation of the repaired circuit, as described below.

B. Modeling
All eight pairs engaged in model-based reasoning during employment of the Split-Half Strategy to isolate Stage 2 as the source of faults (Pairs D-H), during evaluation of the circuit's performance upon replacement of the op-amp in Stage 2 (Pairs A-C and E-H), or during both. A summary of students' model-based reasoning during these episodes is given in Table IV. While students engaged in a variety of Modeling phases (Sec. IV C 2) during one or both of these episodes, there are nevertheless differences in the nature of students' model-based reasoning in each case. For example, Model Construction and Prediction were more common, and Proposal and Revision less common, during isolation of Stage 2 as a fault source than during replacement of the faulty op-amp.

Isolating Stage 2 as the fault source
Five pairs (D-H) employed the Split-Half Strategy to isolate Stage 2 as the source of faults. All five pairs made explicit statements in which they correctly identified Stage 1 as functional and/or isolated Stage 2 as the source of faults. For example, D2 concluded that "the first one [Stage 1] is giving us a good voltage" and F1 said, "The problem is in the second one [Stage 2]." All five pairs engaged in Model Construction, Prediction, and Comparison (Table IV). In all cases, Model Construction involved recognizing that the cascade amplifier consisted of two distinct stages, each characterized by a unique gain. Because Model Construction included recognizing that the overall gain of the circuit was the product of the gains of the two stages, Model Construction was intertwined with Prediction. For example, after measuring the output of Stage 1, E1 suggested that the measurement was in good agreement with the prediction ("that seems right") because the expected gain of Stage 1 was two. While Pair E identified the existence of noninverting and inverting subsystems early in the troubleshooting process, they did not identify algebraic expressions for the gains of either subsystem until this utterance. Thus the statement "the gain is one plus R2 over R1" is an example of Model Construction, which was intertwined with both Comparison and Prediction. Two pairs (D and F) engaged in Proposal. In addition to successfully eliminating Stage 1 as a potential source of faults, these pairs offered potential explanations for the discrepancy between expected and actual performance of the cascade amplifier: D2 suggested that the 1 kΩ and 10 kΩ resistors had accidentally been switched, and F1 suggested that the inputs to the op-amp in Stage 2 were incorrectly wired. While neither of these examples of Proposal yielded correct explanations for the observed discrepancies, both focused on the existence of potential faults in Stage 2.

Replacing the faulty op-amp
Seven pairs (A-C and E-H) successfully repaired the circuit by replacing the op-amp in Stage 2. Here, we describe how these pairs engaged in all five aspects of the Modeling Framework listed in Table IV. Each of the seven pairs engaged in both Proposal and Revision: the students first proposed that the op-amp in Stage 2 was faulty and/or needed to be replaced, and then revised the circuit by replacing the faulty op-amp with a functional chip. For example, while brainstorming potential faults, Student F1 proposed replacing the op-amp in Stage 2 with a new chip. C2, for their part, relied on Domain Knowledge (i.e., knowledge of the Golden Rule for op-amps in a closed loop) to identify the op-amp in Stage 2 as faulty.
Each of the seven pairs also engaged in Comparison: after replacing the faulty op-amp, the students performed evaluative measurements during which they compared the actual performance of the repaired circuit to their expectations. Four pairs (C and E-G) also engaged in Prediction by explicitly stating their expectations during the evaluation process. The depth of evaluation varied among pairs of students. For example, after replacing both op-amps, Pair B performed evaluative measurements of the amplitude and phase of the output of Stage 2. Upon performing these measurements, Student B1 said: "That was it. Nice. And now the output is 10 volts, maximum. And it's [the output signal is] inverted, which is good." (B1; 46:35-46:46) B1 concluded that the op-amps were indeed a fault source ("That was it."). B1 further articulated that the measured amplitude ("10 volts") and phase ("inverted") of the output signal were "good," indicating satisfactory agreement with the expected amplitude and phase. In addition to verifying the amplitude and phase of their output signal, Pair G also checked that the new chip satisfied the Golden Rule for op-amps in a closed loop: before measuring V2− of the new op-amp in Stage 2, G1 predicted that the voltage should be zero volts. G1 determined that the measured value, 0.9 mV, was "basically zero" and that the circuit had been repaired ("everything's behaving as it should now"), indicating that the expected and actual performance of the circuit were in agreement.
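Pair G's check can be stated compactly: for an ideal op-amp with negative feedback, the Golden Rule says the two input terminals sit at the same potential, so with the noninverting input at ground the inverting input should read approximately zero volts. The tolerance below is our assumption about what counts as "basically zero"; it is not a value the students stated.

```python
# Sketch of Pair G's Golden-Rule check. The 5 mV tolerance is an
# assumed threshold for "basically zero", chosen for illustration.

def golden_rule_satisfied(v_minus, v_plus=0.0, tol=5e-3):
    """True if the op-amp input terminals agree to within `tol` volts."""
    return abs(v_minus - v_plus) < tol

print(golden_rule_satisfied(0.9e-3))  # True: 0.9 mV is "basically zero"
```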
Whereas Pair G performed a more rigorous evaluation than Pair B, Pair A performed a more cursory evaluation. Like all other groups, Students A1 and A2 measured the output of Stage 2 immediately after enacting their Revision. Because the amplitude of their ac input signal was larger than 650 mV, the output of the repaired circuit was a clipped ac signal, as shown in Plot (d) of Fig. 3. A2 observed that the signal was being clipped ("it's railing"), but did not reduce the amplitude of the input signal in order to produce a sinusoidal output signal. This prevented Pair A from determining the actual gain of the repaired circuit, which in turn prevented a quantitative comparison of the predicted and measured gains. Instead, A2 was satisfied ("That's fine.") with basic qualitative features of the output signal: it was an ac signal that was inverted with respect to the input. A2 concluded that the circuit had been repaired ("It works.").
Four pairs (A, C, E, and G) engaged in Model Construction during the evaluation of the repaired circuit. In all four cases, students articulated limitations of Eq. (1). For example, after determining that their circuit had been repaired, Pair E briefly discussed whether the observed behavior of the circuit made sense given the voltage limitations of the op-amps in the circuit: E1 wondered how it was possible for the repaired circuit to provide an output of 20 V given that the positive and negative power rails of the op-amps were only ±15 V. E2 noted that the output signal was 20 V from peak to peak, so the output was "only going from minus 10 to plus 10." Thus, E1 and E2 engaged in Model Construction through articulation of model limitations.
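The "railing" that Pair A observed is exactly the model limitation Pair E articulated: the linear gain relation of Eq. (1) holds only while the output stays inside the supply rails. The sketch below illustrates this with an assumed overall gain of −20 and an assumed saturation level of ±13 V (the text states only that the rails were ±15 V and that inputs above roughly 650 mV produced clipping).

```python
# Why a large input "rails": a linear amplifier model hard-clipped near
# the supply rails. Gain (-20) and saturation level (+/-13 V) are
# assumptions consistent with, but not stated in, the text.

def amp_output(v_in, gain=-20.0, v_sat=13.0):
    """Amplifier output: linear in v_in, clipped at +/- v_sat."""
    v_out = gain * v_in
    return max(-v_sat, min(v_sat, v_out))

print(amp_output(0.5))  # -10.0: within the linear range
print(amp_output(0.8))  # -13.0: clipped at the (assumed) rail
```

Reducing the input amplitude until the output is sinusoidal again, as A2 declined to do, is what restores the linear regime in which a measured gain can be compared to a predicted one.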

VI. DISCUSSION
Our results demonstrate that each pair of students in our study engaged in all four cognitive troubleshooting tasks and used model-based reasoning during key strategic and/or evaluative phases of the troubleshooting activity.
Not only did all pairs engage in all four cognitive troubleshooting tasks, but the students were engaged in one or more tasks during almost every two-minute time interval throughout the activity. Moreover, the emergent patterns of engagement give rise to a sensible troubleshooting narrative with a beginning, middle, and end. In the beginning of the activity, students got their bearings on the problem by discerning the intended function of the circuit, performing visual inspections of the configuration, checking component values, replacing the faulty resistor, and making plans for how to test the circuit. Halfway through the activity, students began proposing potential faults, performing diagnostic measurements, and-in some cases-isolating the source of faults to the second stage of the circuit. Finally, at the end of the activity, almost all pairs identified the faulty op-amp, replaced it with a new chip, and favorably evaluated the performance of the repaired circuit.
While the cognitive tasks give us a coarse picture of students' approach to troubleshooting a malfunctioning circuit, the Modeling Framework allows us to look at two types of episodes in finer detail. All five pairs who employed the Split-Half Strategy engaged in model construction, prediction, and comparison. The Modeling Framework allows us to tell a sub-narrative about the Split-Half Strategy: the students first constructed a model of the circuit which consisted of two subsystems, each with its own gain; they were then able to form expectations about the outputs of the first and second stages; and, finally, by comparing their predictions to their measurements, the students successfully identified Stage 1 as functional and isolated Stage 2 as the source of faults.
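The Split-Half Strategy generalizes beyond two stages: measure at the midpoint of a serial chain and recurse into the half whose output disagrees with the model's prediction. The sketch below is our generic rendering of that procedure, not a protocol the students articulated; the two-stage circuit in this study is the simplest case, requiring a single midpoint measurement.

```python
# Generic sketch of the Split-Half Strategy as a binary search over a
# serial chain of stages. Assumes exactly one faulty stage.

def split_half(n_stages, measure, predict):
    """Return the index of the faulty stage.

    measure(i): actual output after stage i.
    predict(i): model-predicted output after stage i.
    """
    lo, hi = 0, n_stages - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if measure(mid) == predict(mid):
            lo = mid + 1   # upstream half checks out; fault is downstream
        else:
            hi = mid       # fault is at or before the midpoint
    return lo

# Two-stage example: Stage 1 behaves as predicted, so the fault is Stage 2.
gains_expected = [2, -10]
gains_actual = [2, 0]   # hypothetical broken op-amp: Stage 2 amplifies nothing

def cumulative(gains, i):
    out = 0.1               # assumed 100 mV input signal
    for g in gains[: i + 1]:
        out *= g
    return out

faulty = split_half(2,
                    measure=lambda i: cumulative(gains_actual, i),
                    predict=lambda i: cumulative(gains_expected, i))
print(faulty)  # 1 -> Stage 2 isolated as the source of faults
```

The model-based character of the strategy is visible in the signature of `split_half`: it cannot run without `predict`, i.e., without a constructed model that assigns an expected output to each subsystem.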
Similarly, the Modeling Framework gives rise to a subnarrative about the students' approach to evaluating the repaired circuit. In this case, all seven pairs who repaired the circuit engaged in proposal, revision, and comparison. The students first proposed a revision to the physical apparatus, namely, replacing one or both of the op-amps with new chips; they then enacted this proposed revision; and, finally, the students compared the performance of the revised circuit to their expectations, thus ensuring that the circuit had indeed been repaired. In order to more thoroughly understand the performance of the repaired circuit, four pairs engaged in model construction by articulating constraints on the amplitude of the output voltage and checking to see that those constraints were being satisfied. For these students, model construction facilitated evaluation of their repairs.
The narratives above demonstrate the nonlinear, recursive nature of modeling. During the testing phase of the troubleshooting process, some students engaged in a modeling cycle to isolate the second stage as the source of faults. Later, another modeling cycle was needed to repair and evaluate the circuit. Furthermore, each modeling cycle had its own particular signature: pairs predominantly engaged in construction, prediction, and comparison in the former case compared to proposal, revision, and comparison in the latter. Thus, study participants engaged in multiple, distinct iterations of model-based reasoning while navigating the cognitive tasks required to troubleshoot a malfunctioning electrical circuit.
Because our participant pool was both small and homogenous (e.g., most participants were white men and both CU and UM are selective research-intensive institutions), our findings do not represent a comprehensive picture of students' approaches to troubleshooting electric circuits, nor do they necessarily speak to common or typical student responses to a troubleshooting activity. Rather, our findings show that the process of troubleshooting can engage students in the core scientific practice of modeling.

VII. SUMMARY
We designed a think-aloud activity in which pairs of students attempted to repair a malfunctioning electrical circuit. The circuit was designed such that several troubleshooting strategies could be employed. Audiovisual data were collected for eight pairs of students from two different institutions. We used both a cognitive task analysis of troubleshooting and the Modeling Framework as a priori schemes to analyze the data. Two types of episodes were chosen for in-depth analysis using the Modeling Framework: (1) isolation of one subsystem of the circuit as the source of faults, and (2) repair and evaluation of the circuit.
We found that all eight pairs engaged in all four cognitive troubleshooting tasks. Furthermore, in each of the two episodes chosen for in-depth analysis, we found a good mapping between students' actions and the Modeling Framework. Thus, we have shown that model-based reasoning facilitates the cognitive tasks required for troubleshooting. We have also demonstrated that the process of troubleshooting can engage students in the core scientific practice of modeling.
In ongoing work [35], we are analyzing the data described here using a framework for socially mediated metacognition. Ultimately, we aim to use our understanding of the cognitive, metacognitive, and modeling-oriented aspects of troubleshooting to inform the design of activities that develop and assess students' troubleshooting abilities.