Systematic study of student understanding of the relationships between the directions of force, velocity, and acceleration in one dimension

We developed an instrument to systematically investigate student conceptual understanding of the relationships between the directions of net force, velocity, and acceleration in one dimension, and report on data collected with the final version of the instrument from over 650 students. Unlike previous work, we simultaneously studied all six possible conditional relations between force, velocity, and acceleration in order to obtain a coherent picture of student understanding of the relations among all three concepts. We present a variety of evidence demonstrating the validity and reliability of the instrument. An analysis of student responses from three different course levels revealed three main findings. First, a significant fraction of students chose "partially correct" responses, and from pre- to post-test, many students moved from "misconception" to partially correct responses, or from partially correct to fully correct responses. Second, there were asymmetries in responses to conditional relations. For example, students answered questions of the form "Given the velocity, what can be inferred about the net force?" differently than converse questions of the form "Given the net force, what can be inferred about the velocity?" Third, there was evidence of hierarchies in student responses, suggesting, for example, that understanding the relation between velocity and acceleration is necessary for understanding the relation between velocity and force, but the converse is not true. Finally, we briefly discuss how these findings might be applied to instruction.


I. INTRODUCTION
One of the earliest and most studied areas in physics education research is student understanding of force, velocity, and acceleration. For example, perhaps the most widely known and documented phenomenon in this field is the (incorrect) student belief that the net force on an object and its velocity must be in the same direction [1-5]. It is also well documented that students often have difficulty distinguishing between the velocity and acceleration of an object [6,7].
Nonetheless, even though this topic is relatively well studied, there remain many unanswered questions that are critical both to advancing our knowledge of student difficulties with force, velocity, and acceleration and to applying this knowledge to improve student learning of these fundamental concepts. For example, empirically speaking, to what extent does correct understanding of the relationship between, say, force and acceleration depend on correct understanding of another relation, say, between force and velocity? Does the path to correct understanding of these relations empirically occur in steps? If so, what are the steps? Furthermore, it is important to point out that when assessing student understanding of the relations between force, velocity, and acceleration, the questions posed typically involve conditional relations, though this has not been explicitly acknowledged or systematically studied in previous work. For example, in a landmark paper, Viennot posed questions of the form "given the velocity of an object, what is the (net) force on the object?", which is a conditional relation of the form "given x, what is y?" There were no questions in Viennot's study probing the converse conditional relation "given a net force on an object, what is its velocity?", nor were there any questions regarding the relations between velocity and acceleration or between acceleration and force [4]. Certainly, in other studies that followed Viennot's paper, other conditional relationships were studied. However, as can be seen from Table I, which summarizes the relationships studied in many of the existing research papers on students' conceptual understanding of the directional relationships of force, velocity, and acceleration, there has been no systematic study of student understanding of all six possible paired conditional relations between the concepts of force, velocity, and acceleration. Furthermore, there has been an abundance of work on some of the six relations and little, if any, on others.
A systematic study of all possible pairs of conditional relations between force, velocity, and acceleration is important for two reasons. First, a within-student study of all possible pairs of relations will allow for a more holistic picture of student understanding of all relations and the possibility of determining whether understanding one relation may affect (or predict) the understanding of another relation. Second, it is not unreasonable to expect that for a given pair of variables, a conditional relation between the pair and its converse may not be answered similarly by the same student. For example, the question "An object is accelerating in a certain direction; what can you infer about the object's velocity?" may be answered differently than the question "An object has a velocity in a certain direction; what can you infer about the object's acceleration?" Furthermore, if there is a causal relation between the variables (real or believed), such as between force and acceleration, then making inferences about the effect of a given cause may be different than making inferences about the cause of a given effect [11].
Therefore, in this paper we investigate student understanding of all possible pairs of relations between force, velocity, and acceleration. To focus the investigation more precisely, we study only student understanding of the relations between the directions of force, velocity, and acceleration in one dimension, and leave the investigation of multiple dimensions and of the relations between the magnitudes of these variables for other studies.
While this investigation included a significant number of student interviews and open-ended written answers, the bulk of the analysis is based on a multiple-choice test that we developed for this study. The multiple-choice test allows, in principle, for the identification of reliable patterns based on a large number of students. On the other hand, such a test can lack subtlety and depth compared to a more qualitative study; nonetheless, the validity and reliability of the results claimed here were corroborated by the interviews and written answers of students. Clearly an in-depth study using more qualitative data would also yield interesting results, but here we focus on some of the important, replicable patterns found via the carefully constructed instrument.
Finally, we have one further introductory comment before proceeding. In a relatively recent study, Alonso and Steedle [12] investigated middle-school students' (12-14 years old) understanding of force and motion. They hypothesize increasingly expertlike levels of understanding of force and motion through which middle-school students pass in a progression towards mastery of these concepts. Specifically, they construct a formal "learning progression" of force and motion for this population. The topic of learning progressions has recently generated significant interest in the science education community (e.g., see [13]) and is somewhat relevant to the study in this paper, since we examine longitudinal and cross-sectional data on student performance and we are interested in the steps and hierarchies in understanding the relations between the directions of force, velocity, and acceleration. While the topic of learning progressions is not the focus of this paper, we briefly comment on this topic and on Alonso and Steedle's study in the final discussion section.
The paper proceeds as follows. We first briefly describe the careful construction of the short, multiple-choice assessment instrument and report on its validity and reliability. Next we present test results pre- and post-instruction, and results of students at different levels of physics knowledge. These results include an analysis of within- and between-student answering patterns for all six conditional relations and how answering patterns change both over one course and from first- to second-year university physics students. Finally, we summarize and discuss how the findings might be applied to the design of instruction aimed at improving student understanding of the relations between the directions of force, velocity, and acceleration.

II. DEVELOPMENT AND VALIDATION OF THE FVA TEST
A. Development of assessment
We constructed a 17-item multiple-choice test, called the FVA test, designed to assess student understanding of all six conditional relationships between the directions of force, velocity, and acceleration in one dimension. Each item in the test presents a simple scenario indicating the direction of one of the vectors for an object, say, the acceleration a, and asks what this implies about the direction of one of the other vectors, say, the velocity v. We label such a question as a → v, which is shorthand for "given the acceleration, what can be inferred about the velocity?" (See Table II for examples of an a → v and an F → v question.) Ten of the 17 items comprise two questions for each of the conditional relations F → v, v → F, a → v, and v → a, and one question each for a → F and F → a. These 10 items directly probe the six conditional relations between force, velocity, and acceleration, which are of particular interest in this paper.
The specific results of the remaining seven items also provide additional interesting information. However, except for being part of the reported total score results and item statistics of the FVA test, a detailed analysis of response patterns on these seven items is not reported here, as they are not the focus of this paper. Nonetheless, it is worth mentioning that these seven items were included in the FVA test for several reasons. First, they provide variety in answering, so that the correct answer is not always "a, b, and c are possible" (see Table III), which is the case for the eight v → F, F → v, a → v, and v → a items. We have found that if the answer choice is always (or often) the same, then students start thinking more about the "tricks" of the item format rather than the content of the question. Second, these items probe different aspects of understanding the directional relations of force, velocity, and acceleration and as such are part of a more valid and reliable FVA test. For example, two of the seven items provide situations in which an object is explicitly at rest (item 12) or has zero net force acting on it (item 3). Furthermore, three of the items (2, 7, and 8) provide (or ask for) information about both the velocity and the change in speed. Finally, two of the items (2 and 15) are very familiar and easy, and help to establish a baseline of student understanding. Detailed analysis of response patterns on these items is a topic of further study. The complete instrument is reproduced in the appendix.

Item construction occurred over a period of several years, beginning with open-ended pencil and paper questions and over 40 individual student interviews in a think-aloud format. This was followed up by over 60 individual debriefings for students completing the final versions of the FVA test. The test development involved two major iterations, as explained in more detail in Ref. [14]. This process revealed that there were several possible student response choices for the questions posed, and it was important to include all of these possibilities as response choices in the multiple-choice format.

TABLE II. Explanation and examples of the x → y notation. An x → y question is designed to probe a student's understanding of how a given vector's direction x is related to another vector's direction y. For example, an a → v question provides a simple scenario indicating the direction of the acceleration of an object and asks the student what this implies about the direction of the object's velocity. Two specific examples from the developed test are provided below.

Example of an a → v question: "A car is on a hill and the direction of its acceleration is uphill. Which statement best describes the motion of the car at that time?"
a) it is moving uphill
b) it is moving downhill
c) it is not moving
d) both a and b are possible
e) both a and c are possible
f) a, b, and c are possible

Example of an F → v question: "At a particular instant of time, there are several forces acting on an object in both the positive and negative direction, but the forces in the negative direction (to the left) are greater. Which statement best describes the motion of the object at this instant?"
a) it is moving to the right
b) it is moving to the left
c) it is not moving
d) both a and b are possible
e) both b and c are possible
f) a, b, and c are possible

TABLE III. Available student response choices for each question. Almost all possible choices for the relationship between the two vectors are available as responses for a student to choose from when answering a question. The other possible choices were almost never chosen and were thus excluded. Consider the a → v question as an example: "A car is on a hill and the direction of its acceleration is uphill. Which statement best describes the motion of the car at this time?"

Response choice               | Symbolic representation | Description of most common choices
a) it is moving uphill        | a ↑↑ v                  | common "misconception"
b) it is moving downhill      | a ↑↓ v                  |
c) it is not moving           | a ↑ 0 (v = 0)           |
d) both a and b are possible  | a (↑↑, ↑↓) v            | "cannot-be-zero" (partially correct)
e) both a and c are possible  | a (↑↑, ↑ 0) v           | "cannot-be-opposite" (to a) (partially correct)
f) a, b, and c are possible   | a (↑↑, ↑↓, ↑ 0) v       | correct
Of these seven possible combinations, we found that students rarely if ever considered the physically unnatural possibility "can only be opposite or zero"; thus, usually only six response choices were provided. Table III provides an example of an item and the six possible response choice "models."
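The count of seven combinations follows from simple counting (our gloss; the original does not spell this out): for a given direction of the given vector, the other vector can be aligned, opposite, or zero, and a response model corresponds to a nonempty subset of these three possibilities,

```latex
\binom{3}{1} + \binom{3}{2} + \binom{3}{3} \;=\; 3 + 3 + 1 \;=\; 7 \;=\; 2^3 - 1 .
```

Dropping the rarely chosen "opposite or zero" pair leaves the six response choices (a)-(f) offered on each item.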

B. Reliability and validity of the FVA test
The construct validity of the items, including the question and answer choice format, was supported through several stages of interview- and testing-based modifications, as reported above. We report here on other measures of validity and reliability, including correlations of the test with other measures such as course level, course grade, and the Force Concept Inventory (FCI) (all measures of student knowledge), as well as various reliability measures of the instrument. Finally, we determine the extent to which the story context of the questions might affect student response choices.

Increases in FVA score with increasing course level and instruction
We administered the FVA test post-instruction to four different class levels: standard calculus-based mechanics, honors calculus-based mechanics for engineering majors, honors calculus-based mechanics for first-year physics majors, and mechanics for second-year physics majors. See Table IV for a description of each course and the enrolled students. Figure 1 reveals that the average score on the FVA test tended to increase with course level, such that the average score of the second-year physics majors was 0.9 standard deviations above that of the standard mechanics course. One exception was the higher score of the first-year honors physics majors compared to the second-year majors course. This difference may be due to a slightly different population, since the second-year course enrolls some students who are neither physics majors nor honors students. Note also that the increase in average score with course level was not an artifact of an increase in score on a small number of questions; rather, the increase was spread among all the question types. Similarly, for all questions, the percentage of "misconception" responses decreased as class level increased. Thus, the average student in the higher level class did better on the FVA test than the average student in the lower level course, by both decreasing his or her misconception responses and increasing his or her correct responses.
In addition to measuring post-test score differences between different course levels, we also administered pre- and post-tests to measure any changes in scores within a given course. For two courses, Fig. 1 reveals a within-student pre- to post-test increase in correct responses. In particular, a paired t test reveals significant gains from pre- to post-test for the honors engineering introductory calculus-based mechanics class [t(229) = 16.50, p < 0.001, effect size d = 0.95] and for the honors physics majors introductory calculus-based mechanics class [t(48) = 7.13, p < 0.001, effect size d = 0.71]. Furthermore, it is interesting to note that we pre- and post-tested students in the standard calculus-based mechanics course with a very similar, earlier version of the FVA test and found no significant difference between pre- and post-test averages. This suggests that for the standard calculus-based course there may be little gain or evolution in the concepts the FVA test assesses. This lack of significant gain in the traditional course is consistent with previous research on student conceptual understanding of force and motion [1-5].
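As a reminder of the statistics quoted here (standard definitions, not spelled out in the original), the paired t statistic is computed from the within-student pre- to post-test differences, and the effect size d expresses the mean gain in standard deviation units:

```latex
t \;=\; \frac{\bar{D}}{s_D/\sqrt{n}}, \qquad d \;=\; \frac{\bar{D}}{s},
```

where $\bar{D}$ is the mean of the $n$ within-student differences, $s_D$ is the standard deviation of those differences, and $s$ is the standardizer for the effect size (e.g., $s_D$ itself or a pooled pre/post standard deviation; the paper does not state which convention is used).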

Psychometric properties and correlations with course grade, course level, and FCI

The FVA test has a reasonably high Kuder-Richardson reliability coefficient, KR-20 = 0.7-0.85, indicating that the correct responses to the FVA items are fairly well correlated. Furthermore, there were moderate (0.3-0.4) correlations between FVA score and final course grade. Likewise, the FVA misconception responses were negatively correlated with grade in the class, with correlations of about -0.4 on average. These correlations tended to be larger for the higher level classes. These data are consistent with the FVA test assessing a portion of the skills necessary to do well in the class. The correlations of FVA score with course level, and the gains from pretest to post-test, suggest that the FVA-final-grade correlation is not simply caused by something more general such as intelligence but rather by gained knowledge of force, velocity, and acceleration.
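For reference, the Kuder-Richardson formula 20 (a standard definition, not given in the original) for a test of $k$ dichotomously scored items is

```latex
\mathrm{KR20} \;=\; \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i\, q_i}{\sigma_X^2}\right),
```

where $p_i$ is the proportion of students answering item $i$ correctly, $q_i = 1 - p_i$, and $\sigma_X^2$ is the variance of the total test scores. Higher values indicate greater internal consistency of the items.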
Furthermore, we administered the Force Concept Inventory to the Winter 2009 calculus-based mechanics class in order to compare the FVA test to a standard benchmark and further assess the validity of the FVA test. The FCI is a multiple-choice concept inventory developed to assess understanding of basic concepts in force and motion. It has been widely used and generally accepted as a standard and reasonably reliable assessment, and has also been used to evaluate instructional interventions at the high school and university levels [1,15]. We found a relatively strong correlation between FVA score and FCI score (r = 0.569), while the correlation of FCI score with final grade was 0.387, about the same as the FVA test-final grade correlation.
In summary, the positive correlations of FVA score with other measures (or expectations) of force and motion conceptual understanding, such as course level, pre or post instruction, course grade, and FCI score, help to support the validity of the FVA test.

Effect of story context on responses
One significant threat to the validity of a particular item is its potential sensitivity to construct-irrelevant changes to the item. Thus far, we have only addressed issues of potential sensitivity to the item structure and format.
Here we address the issue of potential sensitivity to the story context of the item. For example, a force and motion question about a playground ball might be regarded differently by a student than an analogous question (from the perspective of the expert) about satellites in space. In order to limit the test to a reasonable length, the FVA test has at most two different story contexts for each question category (F → v, a → v, v → F, etc.). Therefore, if the effects of story context are significant, this could severely limit the generalizability of any conclusions based on student response patterns on the FVA test. We constructed a series of tests and analyzed the results in two ways to investigate the possibility that our results were simply an artifact of story context.
We constructed and administered three separate "multiple context" tests to assess consistency of responses across a variety of story contexts for each of the three major question categories with which students have the most difficulty. Specifically, each multiple context test consisted of ten questions: six were all of one category (either a → v, F → v, or v → F), and, for variety, four were a → F or F → a questions. Students were randomly assigned to complete one of the three multiple context tests, with 40 students from the standard mechanics course per test. We analyzed the results in two ways.
First, we analyzed the data to determine whether there were consistent within-student response patterns for a given question category. We found that, on average across the three tests (see Fig. 2), 37% of students consistently chose the same answer choice for all six of the questions, and 61% of students answered at least five of the six questions with the same answer choice.
It is also worth noting that each of the major answer choice models (correct, misconception, cannot-be-zero, and cannot-be-opposite) was consistently chosen on all six, or five out of six, questions by at least some students. This suggests that these four answer choices were not just random distractors that were occasionally attractive to students in certain question contexts; rather, they were consistently chosen. In contrast, on the regular FVA test, only 3% of students consistently answered all six of the a → v, F → v, and v → F questions with the same answer choice, and only 24% used the same answer choice on five of these six questions. Overall, these results suggest that for a given question type, across a variety of story contexts, within-student responses follow a specific model, such as cannot-be-opposite, but students do not necessarily use this model for other question types.
FIG. 2. Percentage of students in the standard mechanics course who responded using only one model (correct, cannot-be-zero, cannot-be-opposite, or misconception) for 3, 4, 5, or 6 out of the 6 questions of the same question type (v → F, F → v, or a → v) on the "multiple context" tests. Note that a majority of students used only one model for 5 or 6 out of 6 questions.

Second, we compared the answering patterns on the multiple context tests with the answering patterns on the FVA test to determine whether the response patterns for a given question category depend on the test format. A χ² test for independence reveals that there are no significant differences between the answer patterns on the multiple context tests, which each focus on one question category, and the answer patterns on the corresponding questions of the FVA test [χ²(3) = 3.47, p = 0.325 for v → F questions; χ²(3) = 0.11, p = 0.991 for F → v questions; and χ²(3) = 1.05, p = 0.789 for a → v questions]. In summary, the results of both kinds of analysis of the focused tests reveal that the averaged FVA test responses for each question type are relatively insensitive to story context, and in that sense the results are fairly generalizable.
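The test of independence quoted above (a standard definition; the paper reports only the results) compares the observed response counts $O_{ij}$ with the counts $E_{ij}$ expected if response model and test format were independent:

```latex
\chi^2 \;=\; \sum_{i,j} \frac{\left(O_{ij} - E_{ij}\right)^2}{E_{ij}}, \qquad E_{ij} \;=\; \frac{(\text{row total}_i)(\text{column total}_j)}{N},
```

with $(r-1)(c-1)$ degrees of freedom for an $r \times c$ table. The reported df = 3 is consistent with a $4 \times 2$ table of the four response models (correct, misconception, cannot-be-zero, cannot-be-opposite) crossed with the two test formats.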

III. ANALYSIS OF FVA TEST RESULTS
The previous two sections have focused on the development and validation of the FVA test. The rest of the paper focuses on analyzing pre- and post-FVA test data from students enrolled in different levels of physics courses. This analysis allows for an investigation of the possible structure and hierarchy of student understanding of the relations between the directions of force, velocity, and acceleration, as well as an investigation of the evolution of this understanding.
The FVA test was administered to students either during an extra session (counted as part of the total homework grade, with full credit for participation) in which students came to our lab to complete the test, or as an in-class activity completed during the regular laboratory or lecture for the course. In both situations, students had plenty of time to finish the test and appeared to take the activity seriously.

A. General response patterns for different course levels
Figure 3 presents average student response patterns for all six question types for three class levels. There are four important observations about the response patterns presented in Fig. 3, as described below.

B. Evidence of intermediate levels of understanding
There was a small but significant fraction of students (20%-30%) who displayed intermediate levels of understanding of the relations between the directions of force, velocity, and acceleration, as suggested by their choice of partially correct responses. By partially correct, we mean that the response included some physically valid possibilities not considered in the common misconception response. For example, for a v → F question, the common misconception response assumes that the inferred force must be nonzero and aligned with the velocity. In contrast, the somewhat common response choice that includes the possibilities that the net force is aligned with the velocity or is zero (i.e., the "cannot-be-opposite" model) is more accurate than the common misconception response, and could be considered an intermediate, partially correct response. As seen in Fig. 3, intermediate levels of understanding occurred for all of the conditional relations between force, velocity, and acceleration.
Interviews with students further revealed that those choosing partially correct answers were often confident about their answers; for example, allowing for the possibility that a moving object can have a net force aligned with the motion or a zero net force, but remaining certain that the net force cannot be opposite to the motion.

C. Asymmetry in response patterns between x → y and y → x

Figure 3 also reveals two significant asymmetries in response patterns between a given conditional relation x → y and its converse y → x. First, there were often asymmetries in scores, depending on the course level and the question types. For example, while there were only small differences between the v → F scores and the F → v scores (effect sizes less than 0.12 standard deviations), there were significant differences between the a → v and v → a scores for the standard calculus-based physics course [16] [31% versus 57% correct, paired t test, t(110) = 5.78, p < 0.001, effect size d = 0.54] and for the honors physics majors course [39% versus 74% correct, paired t test, t(85) = 7.49, p < 0.001, effect size d = 0.74]. Clearly, most students correctly understand that a moving object can have an acceleration in any direction or zero acceleration, but many students also incorrectly believe that an accelerating object must be moving in the direction of its acceleration.
Another, perhaps more surprising, asymmetry in scores occurs for the F → a versus a → F questions. While there were no significant differences in responses for the first- and second-year physics majors courses (perhaps because they were answering at ceiling), there was a difference in responses to these two question categories for the students in the standard calculus-based course. Specifically, the average score for the F → a question was 21% lower than the 82% score for the a → F question, a significant difference [paired t test, t(227) = 5.50, p < 0.001, effect size d = 0.36]. Interestingly, a similar asymmetry in scores occurs in the pretest results for the first-year physics majors course [58% correct for F → a and 84% correct for a → F, paired t test, t(227) = 6.94, p < 0.001, effect size d = 0.46], but not in the post-test. This asymmetry in responses might be considered somewhat surprising, since the relation F = ma is a central relation in these physics courses (and readily recited by all students), but the results of the FVA test demonstrate that students in lower level courses often did not consider the conditional relationships between (net) force and acceleration to be symmetric.
A second significant kind of response asymmetry occurred in the kinds of intermediate, partially correct responses chosen by students. For example, for v → F questions, the partially correct response chosen tended to be cannot-be-opposite, while for F → v questions the partially correct response chosen tended to be cannot-be-zero. Therefore, it appears that if it is given that an object is moving, students more readily accepted that it may have zero net force acting on it than that it could have an opposing net force acting on it. On the other hand, if it is given that an object has a net force acting on it, students more readily accepted that it can move opposite to the net force than that it is not moving at all. For v → a versus a → v questions, there are significant differences in all response choices, with the v → a questions tending to be answered correctly significantly more often. Similar to the v → F versus F → v questions, students tended to choose the cannot-be-opposite partially correct response for v → a questions and cannot-be-zero for a → v questions.

D. Other differences in scores between question types
In addition to differences in scores between a given conditional relation and its converse, there were also significant differences between other combinations of relations. For the standard calculus-based physics course, the question types can be ranked as v → F, F → v, a → v, v → a, F → a, a → F, in order of increasing average score. The scores varied the most for the standard calculus-based physics course, but there were similar, though reduced, differences for the higher level courses. We investigate the possible hierarchy of understanding of these relations in more detail in Sec. IV.

E. Difference in course levels: Evidence of evolution of understanding
While there were qualitative similarities between the patterns of the different course levels for each question type, there appears to be an "evolution" of the patterns from lower to higher course levels. However, while the percentage of correct responses increased as the class level increased, the change in the misconception score between two classes was not always equal in magnitude to the change in the correct score. This appears to have been caused by a significant fraction of students choosing the partially correct cannot-be-zero and cannot-be-opposite responses, depending on the course level. For example, comparing the standard mechanics course to the honors for engineers mechanics course, the decrease in misconception responses was greater than the increase in correct responses, and the difference was comprised of students choosing one of the partially correct responses. Furthermore, when the difference between the honors and second-year courses is considered, it is apparent that the increase in correct responses is greater than the decrease in misconception responses, and the balance is comprised of a decrease in the partially correct responses. These differences in response patterns between course levels suggest that a significant number of students evolved from an initial high level of misconceptions to the correct answer by passing through a partially correct response "state," which indicates more knowledge than the common misconception but lacks the completeness of the correct response.

FIG. 4. Note that roughly half of the students did not change their answer from pre- to post-test; these students are represented in the three diagonal columns from back left to front right of each graph. Also, note that a little less than half of students improved by moving into or out of a partially correct response, or directly from the misconception response to the fully correct response (these students are represented in the three columns behind and to the right of the diagonal columns), and roughly 10% of students answered less correctly from pre- to post-test (represented in the three columns to the front and left of the diagonal). (A few examples from the honors for engineers v → F plot: 23% of students responded with a misconception on the pretest and a misconception on the post-test; 15% of students responded with a misconception on the pretest and a partially correct response on the post-test; 8% of students responded with a partially correct response on the pretest and a correct response on the post-test.)
However, there is a danger in interpreting these data as evidence of evolution of understanding, since they are cross sectional rather than longitudinal, and sometimes represent different kinds of students (e.g., physics majors versus engineering majors). Nonetheless, these data are consistent with the interesting possibility that students evolve through a partially correct state on the path to fully understanding the relations between the directions of force, velocity, and acceleration. We will investigate longitudinal data in the next section. (Much of the data presented here in Secs. III A–III E was shown and discussed in greater detail in Ref. [17].)

F. Pre- and post-FVA responses: Evidence of progression through intermediate levels
Pre- and post-FVA test data (i.e., longitudinal data) were analyzed in order to more closely investigate the evolution of student understanding of the relations between force, velocity, and acceleration. We were especially interested in determining whether the progression of student understanding involved passing through an intermediate, partially correct level of understanding. We administered the FVA test both pre- and post-instruction to 230 students in an honors calculus mechanics class for engineers and 49 students in a first-year honors calculus mechanics for physics majors class. As mentioned in Sec. II B and presented in Fig. 1, there were significant gains in the average scores for both classes. Perhaps more interesting, we examined within-student pre- versus post-test shifts in response choices for each item in the FVA test. Figure 4 presents a cross tabulation of within-student pre- and post-test responses on a select set of FVA test items for the two courses. There are two important observations about the data represented in these figures.
First, it is helpful to describe the general patterns of shifts in student answering. Considering the honors mechanics for engineers course for the v → F, F → v, and a → v questions, on average 51% of students did not change their answers, 43% answered ''more correctly'' on the post-test versus the pretest (by changing from the misconception response to either a partially correct response or the correct response, or from a partially correct response to the correct response), and conversely 6% answered less correctly. The results for the first-year physics majors course were somewhat similar: 52% of students did not change their answers, 38% answered more correctly, and conversely 5% answered less correctly from pre- to post-test.
Second and more importantly, on average approximately 15% of students moved from the misconception response to a partially correct response and approximately 10% of students moved from a partially correct response to the correct response. These averages are representative of the v → F, F → v, and a → v questions for both courses. This is to be compared with approximately 20% of students who moved directly from the misconception response to the correct response.
These results provide strong evidence that for many students the progression of understanding involves passing through an intermediate, partially correct level of understanding. Specifically, over half of the students who changed their answer changed either to or from an intermediate, partially correct response.
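As a concrete sketch, within-student pre/post shifts of the kind tallied above can be classified by ordering the response categories. Three of the cell counts below (23, 15, 8) are the examples quoted in the Fig. 4 caption for the honors-for-engineers velocity-to-force item; the remaining counts are hypothetical, chosen only so the list totals 100 students.

```python
# Tally within-student pre/post shifts on the ordered scale
# Miscon < Part.Correct < Correct (cf. Fig. 4).  The counts 23, 15,
# and 8 are taken from the Fig. 4 examples; the other three cells
# are hypothetical, for illustration only.
LEVEL = {"Miscon": 0, "Part.Correct": 1, "Correct": 2}

pairs = ([("Miscon", "Miscon")] * 23 + [("Miscon", "Part.Correct")] * 15
         + [("Part.Correct", "Correct")] * 8 + [("Correct", "Correct")] * 30
         + [("Miscon", "Correct")] * 14 + [("Correct", "Part.Correct")] * 10)

improved  = sum(LEVEL[post] > LEVEL[pre] for pre, post in pairs)
unchanged = sum(LEVEL[post] == LEVEL[pre] for pre, post in pairs)
worse     = sum(LEVEL[post] < LEVEL[pre] for pre, post in pairs)
```

With these illustrative counts, 37 of 100 students improve, 53 are unchanged, and 10 answer less correctly, mirroring the roughly half / slightly-less-than-half / 10% split described for Fig. 4.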

IV. INVESTIGATING POSSIBLE HIERARCHIES IN STUDENT RESPONSES
In this section we are interested in investigating the question, ''Does correctly answering a given conditional relation necessarily imply that another specific conditional relation was also answered correctly?'' For example, does correctly answering v → F questions necessarily imply that v → a questions were also answered correctly? Note that, while one might make reasonable physical arguments to answer this question from an expert point of view, we are first interested in this as a strictly empirical question.
If such patterns in answering do exist, then one can proceed to make inferences as to the causes of these patterns. There are some standard analysis practices, such as Guttman scaling or scalogram methods, for determining the hierarchical-like structure of items for a unidimensional instrument. In fact, a full item response theory analysis of the FVA test can be used to find such hierarchies. However, here we are interested in the hierarchical relations between a number of different dimensions, such as understanding v → F or v → a, probed by different items within the FVA test. Therefore, we will examine cross-tabular results between pairs of question types within the FVA test. A full Guttman scaling and/or item response theory analysis could also be informative from a more global perspective and is worth further study, but here we will focus on hierarchies within the six conditional relations of interest.
Table VIII provides an example of a simple method to rule out, or provide supporting evidence for, the existence of hierarchies in response patterns for pairs of question types. In this hypothetical example, all of the students that answered v → F questions correctly also answered v → a questions correctly, but only half (25 out of 50) of the students that answered the v → a questions correctly answered the v → F questions correctly. These hypothetical data are consistent with the statement ''correctly answering v → F questions necessarily implies correctly answering v → a questions'' (the data are also consistent with the logically equivalent statement ''incorrectly answering v → a questions implies incorrectly answering v → F questions''). Furthermore, these hypothetical data provide evidence to disprove the converse statement ''correctly answering v → a questions implies correctly answering v → F questions,'' since 25 out of 50 students are counterexamples to this statement. Consider the generic contingency table on the top. Here, for example, a is the number of students answering both x and y incorrectly. If c = 0, d ≫ 1, and a ≫ 1, then this is consistent with the statement ''answering x correctly implies answering y correctly'' and the logical equivalent, ''answering y incorrectly implies answering x incorrectly.'' One could reasonably also use the conditions d ≫ c and a ≫ c, since c will not be zero in practice due to random guessing, unusual students, etc. Furthermore, if these conditions on c are violated, then these statements are disproved, since c represents the number of counterexamples to these statements (and b represents the number of counterexamples to the converse of these statements). The table on the bottom presents a hypothetical example for v → F versus v → a questions. In this case one can claim these hypothetical data are consistent with the statements ''correctly answering v → F questions implies correctly answering v → a questions'' and ''incorrectly answering v → a questions implies
incorrectly answering v → F questions.'' On the other hand, the relatively high count in the ''b'' cell disproves the converse statement ''correctly answering v → a questions implies correctly answering v → F questions.'' (In the hypothetical table there are no counterexamples to the statement ''correctly answering v → F questions implies correctly answering v → a questions.'')

TABLE IX. Within-student cross tabulations of scores between various question types. Cells represent numbers of students. Data reported are from the standard calculus mechanics course. A cell count is shown in bold face for tables which roughly satisfy the conditions (discussed in Table VIII) consistent with significant hierarchies between the indicated question types. Some question types had two questions posed; in this case the label ''Correct'' indicates that at least one question of that type was answered correctly, and ''Incorrect'' indicates that zero questions of that type were answered correctly. Note that φ is the mean squared contingency coefficient between the question types, equivalent to the correlation coefficient for a 2 × 2 table.

Therefore, one can analyze pairs of question types in this manner to either provide evidence disproving or supporting (but not proving) the existence of a particular hierarchy in answering, namely, that answering question type x correctly requires answering question type y correctly (but not the converse). We analyzed within-student response patterns for all 15 possible pairs of question types on the FVA test (see Table IX) using the simple method shown in Table VIII, and found that there were no cases in which there were zero counterexamples for the statement ''answering relation x correctly requires answering relation y correctly.'' However, there were a number of cases in which there were relatively few counterexamples.
These few counterexamples may be due to uninteresting causes such as random guessing. Thus, when there is only a relatively small number of counterexamples (rather than zero), this can still suggest the existence of a hierarchy in the answering pattern.
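The Table VIII criterion can be sketched in code. The tolerance below (counterexamples at most ~20% of the supporting cells) and the cell count a = 50 are our own illustrative choices; the paper only requires that c be small relative to a and d, and its hypothetical example fixes b = 25, c = 0, d = 25.

```python
# Cell meanings follow the paper's generic 2x2 table for question
# types x and y:
#   a = both incorrect, b = only y correct,
#   c = only x correct (counterexamples to "x correct => y correct"),
#   d = both correct.

def suggests_hierarchy(a, b, c, d, tol=0.2):
    """True if the table is consistent with 'answering x correctly
    implies answering y correctly' (equivalently, 'answering y
    incorrectly implies answering x incorrectly'): the counterexample
    cell c is small relative to the supporting cells a and d.
    The tolerance tol is an illustrative choice, not from the paper."""
    return c <= tol * a and c <= tol * d

# Hypothetical Table VIII data: every student who answered v -> F
# correctly also answered v -> a correctly (c = 0), but only 25 of
# the 50 students who answered v -> a correctly also answered v -> F
# correctly (b = 25, d = 25).  a = 50 is an assumed value.
a, b, c, d = 50, 25, 0, 25
```

Checking the converse amounts to swapping the roles of x and y, which transposes the table (b and c trade places); with these counts the forward statement is supported while the converse is disproved by the 25 students in cell b.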
Using the constraint of a small number of counterexamples, rather than zero counterexamples, to indicate a hierarchy in answering, inspection of Table IX reveals a trend: for most pairs of relations, if the average score of a question type x was significantly less than the score for question type y, then it is also the case that correctly answering question type x implied correctly answering question type y, but not the converse. More specifically, if a student correctly answered the question types with the lowest average scores (the ''difficult'' question types), namely, v → F or F → v, then most of the time this student also correctly answered question types with high average scores (i.e., ''easy'' question types), namely, v → a, F → a, or a → F.
For example, when comparing student responses to both F → v and v → a questions for the standard calculus-based physics course, we found that the scores are 20% and 56% for F → v and v → a questions, respectively. As shown in Table IX(G), when comparing within-student responses, over 90% of students answering F → v questions correctly answered v → a questions correctly, 32/(32+3) ≈ 91%, and over 90% of students answering v → a questions incorrectly answered F → v questions incorrectly, 28/(28+3) ≈ 90%. In contrast, only about 40% of students answering v → a questions correctly answered F → v questions correctly, 32/(32+44) ≈ 42%. These results are consistent with (but do not prove) the statement ''correctly answering F → v questions necessarily implies correctly answering v → a questions,'' and the logical equivalent ''incorrectly answering v → a questions implies incorrectly answering F → v questions.'' Furthermore, this contingency table disproves the converse statement: ''correctly answering v → a questions necessarily implies correctly answering F → v questions.'' Each table also includes the φ coefficient, which is a measure of correlation between the scores of each question type in the table. For example, in Table IX(G) discussed above, φ = 0.314, denoting a medium-level correlation between the scores on F → v and v → a questions.
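These quoted fractions, together with the φ coefficient, can be reproduced from the four cell counts of Table IX(G), which are inferred here from the fractions in the text (28, 44, 3, and 32 students):

```python
from math import sqrt

# Cell counts for Table IX(G) (standard calculus course), inferred
# from the fractions quoted in the text, with x = F -> v, y = v -> a:
#   a = both incorrect, b = only v -> a correct,
#   c = only F -> v correct, d = both correct.
a, b, c, d = 28, 44, 3, 32

# Conditional fractions quoted in the text
frac_y_given_x = d / (d + c)        # F->v correct who also got v->a: 32/35
frac_notx_given_noty = a / (a + c)  # v->a incorrect who missed F->v: 28/31
frac_x_given_y = d / (d + b)        # v->a correct who also got F->v: 32/76

# Phi coefficient (mean squared contingency) for a 2x2 table
phi = (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))
```

Running this recovers the quoted values: 32/35 ≈ 91%, 28/31 ≈ 90%, 32/76 ≈ 42%, and φ ≈ 0.314, matching Table IX(G).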
Finally, we use the method described in Table VIII on the data patterns in Table IX to present a summary of potential hierarchies in Table X. Note that these hierarchies are only suggested by trends in the data tables. Nonetheless, these tables do provide strong evidence that statements converse to those in Table X are not true. For example, there is strong evidence (via a significant number of counterexamples) that the statement ''correctly answering v → a questions implies correctly answering v → F questions'' is not true.

A. Comments on hierarchies in responses
There are three points we would like to address concerning the determination of hierarchies of understanding. First, from the perspective of traditional design constraints on item statistics of a valid and reliable instrument (i.e., items have high internal consistency), it is not altogether unexpected that an answering hierarchy is aligned with increasing relative item score. Specifically, the constraint of choosing only items with a relatively high discrimination index implies that, for a given student, if items with low averages are answered correctly then items with high averages are also answered correctly. Nonetheless, this does not diminish the significance of the finding that answering patterns of some pairs of question types, such as F → v versus v → a, are not independent and have a hierarchy (i.e., x implies y, but y does not imply x). Second, it is worth pointing out that this analysis of hierarchies of question types can be viewed from the perspective of diagnostic assessment. Specifically, if students answer the difficult v → F and F → v questions correctly, then they are very likely to answer all other questions on the FVA test correctly. Therefore, to the extent that the FVA test measures understanding of the relations between force, velocity, and acceleration, one could view the v → F and F → v questions as the most diagnostic for determining understanding, at least for the level of students tested in this study.
Finally, while we have found evidence of hierarchies in student responses, our claims about hierarchies of student understanding of force, velocity, and acceleration are more qualified. It is important to keep in mind that ''evidence of understanding,'' and any inferences of hierarchies that follow from such evidence, depend on a careful characterization of how ''understanding'' is operationally defined. For example, when judging whether a student adequately understands the relation between force and velocity, it could be considered reasonable to require that a student correctly and explicitly distinguish the differences between velocity and acceleration as part of their explanation of (or answers to questions about) the relation between force and velocity. However, one might not expect the converse; namely, one might not require a student to explicitly distinguish the relation between force and velocity in order to demonstrate an understanding of the relation between velocity and acceleration. In this case, the evidence for a hierarchy of understanding v → a before F → v is strongly determined by the nature of the definition of understanding.
In this paper we have instead asked questions about a specific relation, say, v → F, without any explicit reference to other relations, such as v → a. To the extent that specific items in the FVA test measure understanding of each relation individually, without reference to other relations, observed hierarchical answering patterns suggest hierarchies in student understanding of the relations between force, velocity, and acceleration. This finding is not simply an inevitable result of the operational definition of understanding of the relations, but appears to suggest that students at least implicitly connect different relations.

B. Hierarchies and evolution of responses
If there are hierarchies in responses to questions about the relations between force, velocity, and acceleration, then, in the course of learning, the evolution of responses should follow paths consistent with these hierarchies. Generally speaking, if correctly answering x necessarily implies correctly answering y, then gains in scores on y should precede gains in scores on x. For example, in the previous section we provided evidence that correctly answering F → v questions implied correctly answering v → a questions, but not the converse. Therefore, for students initially performing poorly on both v → a and F → v questions, we would expect that within-student gains in answering F → v questions correctly would not occur without gains in answering v → a questions correctly (though one might expect to see the converse).
The pre- and post-FVA test data described in Sec. III F allow for such a comparison of within-student gains in correct answering of the various question types. We analyzed all 15 possible pairs of question types (Table XI) using the simple hierarchy method shown in Table VIII, and found that there were no cases in which there were zero counterexamples for the statement ''a gain in score for relation x necessarily implies a gain in score for relation y.'' However, for a few pairs of relations we did find cases in which there were relatively few counterexamples to such a statement, and these cases were exactly the ones that one would expect from the evidence of hierarchies of understanding described in the previous section.
Specifically, as shown in Table XI(G), over 84% of students who improved their score on F → v questions also improved their score on v → a questions, 38/(38+7) ≈ 84%, and over 80% of students who did not improve their score on v → a questions also did not improve their score on F → v questions, 29/(29+7) ≈ 80%. Note that these patterns are not found for the converse. For example, less than 45% of students who improved their score on v → a questions also improved their score on F → v questions, 38/(38+47) ≈ 45%. As before, a relatively small number of counterexamples notwithstanding, these findings are consistent with the evidence of a hierarchy of understanding between the relations F → v and v → a.
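The same bookkeeping applies to the gains table. The cell counts below are inferred from the fractions quoted in the text for Table XI(G) (29, 47, 7, and 38 students):

```python
# Cell counts for Table XI(G) (honors for engineers course), inferred
# from the quoted fractions; here the 2x2 table cross tabulates
# pre-to-post GAINS, with x = F -> v and y = v -> a:
#   a = no gain on either, b = gain only on v -> a,
#   c = gain only on F -> v, d = gain on both.
a, b, c, d = 29, 47, 7, 38

gain_y_given_gain_x = d / (d + c)      # F->v gainers who also gained v->a: 38/45
nogain_x_given_nogain_y = a / (a + c)  # v->a non-gainers with no F->v gain: 29/36
gain_x_given_gain_y = d / (d + b)      # v->a gainers who also gained F->v: 38/85
```

This reproduces the quoted values: 38/45 ≈ 84%, 29/36 ≈ 81% (quoted as ''over 80%''), and 38/85 ≈ 45%, so gains on F → v rarely occur without gains on v → a, while the converse is common.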
In Table XII, we compiled the data even further to demonstrate the general pattern that a gain on either a v → F or an F → v question necessarily implies a gain on either a v → a or an a → v question. That is, over 92% of students who improved their score on either F → v or v → F questions also improved their score on either v → a or a → v questions, and over 77% of students who did not improve their score on v → a or a → v questions also did not improve their score on F → v or v → F questions. This supports the finding discussed earlier that correctly answering questions about the relations between acceleration and velocity tends to be required in order to correctly answer questions about the relations between force and velocity. We did not find significant hierarchies for gains involving F → a or a → F questions, most likely because the scores on these questions were already near ceiling, leaving little room for gain.
In summary, the contingency tables of the gains in student scores from pretest to post-test are in agreement with the hierarchies deduced from the contingency tables of within-student answering at a single time. This provides yet more evidence that specific hierarchies exist in student understanding of the relations between force, velocity, and acceleration and that these hierarchies affect the evolution of student understanding.

V. SUMMARY
We have developed a 17-item multiple choice test, the ''FVA test,'' designed to probe students' understanding of the relationships between the directions of net force, velocity, and acceleration in one dimension. Our analysis revealed three main findings. First, we consistently found evidence of an intermediate, partially correct level of understanding of the relations between force, velocity, and acceleration held by up to 30% of the students pre- or post-instruction. This is in addition to finding that a significant number of students answer consistently with the well-known and common student misconception that the vector quantities should always point in the same direction. Specifically, we found two intermediate models. The first is the belief that two vectors, such as force and velocity, need not be aligned and may even point in opposite directions, but one cannot be zero (the ''cannot-be-zero'' model). The second is the belief that the two vectors need not be aligned, and one of them could be zero, but one cannot point in the direction opposite to the other (the ''cannot-be-opposite'' model). Furthermore, we found that about half of the students who improved their understanding of the relations between the directions of force, velocity, and acceleration did so by evolving through these partially correct ''states.'' Roughly speaking, from pre- to post-test in the honors physics sections, we found that about half of the students did not change their responses, about 1/4 changed from the misconception answer to the correct answer, and about 1/4 either changed from the misconception answer to the partially correct answer or from the partially correct answer to the correct answer.
Second, we found an asymmetry in student responses to conditional relations. That is, students often treated questions that probe the concept motion implies acceleration differently than the concept acceleration implies motion. Likewise, they often treated questions about motion implies force differently than force implies motion, and, perhaps surprisingly, they often treated questions about force implies acceleration differently than acceleration implies force. The differences are reflected in the response frequencies of each answer choice. For example, for v → a versus a → v questions, there were differences in the number of correct and misconception responses as well as in the kinds of partially correct responses (cannot-be-opposite versus cannot-be-zero). For the v → F versus F → v questions, there were no differences in the correct and misconception response frequencies, but there were differences in the partially correct cannot-be-opposite versus cannot-be-zero responses.
Third, we found evidence of specific hierarchies in correct responses to different question types. The evidence included both within-student scores at one point in time and within-student gains in scores from pre- to post-test. For example, we found evidence that if students correctly answered F → v questions, then they were very likely to also correctly answer v → a questions, but not the converse. Further, if v → a questions were answered incorrectly, then it was very likely that F → v questions were also answered incorrectly (but not the converse). Likewise, we found that for a given student, gains in F → v scores most likely occurred in the presence of gains in v → a scores, but not the converse. These findings suggest that understanding the relationship between the directions of velocity and acceleration may be required in order to understand the relationship between the directions of force and velocity. However, to more firmly establish a possible causal link between understanding these relations, one must first be careful to explicitly define what is meant by understand, and, second, a controlled intervention (for example, manipulating the amount of velocity-acceleration instruction) is needed.

VI. COMMENT ON LEARNING PROGRESSIONS
As mentioned in Sec. I, Alonzo and Steedle have hypothesized successive levels of understanding of force and motion [12]. There are several major differences between our study and theirs. For example, they studied middle-school students, they studied understanding of the magnitude (including change in magnitude) of quantities of force and motion, and only to a lesser extent did they also study understanding of the relative direction of force and motion. Furthermore, they did not systematically study student understanding of the concept of acceleration (including direction) and its relations to velocity. Instead they focus on ''motion'' and only occasionally make explicit references to ''acceleration.'' Nonetheless, an examination of their hypothesized levels of understanding reveals that, in their model, students tend to understand issues concerning the relations between the direction of force and motion before they come to understand issues about the relations between the magnitude (and changes in magnitude) of force and motion. This result is certainly worth confirming in further focused empirical studies.
In contrast, our study is focused solely on the understanding of the relations between the directions of force, velocity, and acceleration, and as such our results do not contradict or confirm their results. Instead, our results may add more detail and depth to a portion of a larger learning progression framework for force and motion that may also include Alonzo and Steedle's work. While the term ''learning progression'' has not been uniquely defined (for example, see the discussion in Alonzo and Steedle's work [12]), the general idea is one of successive stages of student understanding of a concept or topic, starting from incomplete or incorrect knowledge and ending with some defined level of mastery, usually described by a particular science education standard. We did not set out a priori to construct a learning progression; rather, we found that a consistent progression emerged out of our longitudinal and cross-sectional data. This is somewhat in contrast to typical work on learning progressions (including that of Alonzo and Steedle), which, rather than being primarily empirically driven, is typically constructed by an expert as some logical progression (from an expert point of view) toward mastery, with only some input from empirical data on how students are thinking or how they might progress toward mastery. As Alonzo and Steedle's article states, ''the learning progression represents a hypothesis about student thinking, rather than a description'' [12].
Indeed our approach is more empirical. We carefully designed questions to probe student understanding of logically and scientifically relevant dimensions (from an expert's perspective), namely, the six conditional relations. Nonetheless, understanding these relations could also be seen as subgoals of understanding force and motion in general, and while an expert might logically order how these subgoals would best be learned, this does not determine the order in which students actually learn them, which is an empirical question investigated here.
In summary, some of the results in this paper could be used to link with recent efforts to identify learning progressions of force and motion. Specifically, our results could be used to construct a more formalized, empirically based learning progression of student understanding of the directions of force, velocity, and acceleration, and this could be useful for instruction. Other implications for instruction are discussed in the next section.

VII. IMPLICATIONS FOR INSTRUCTION
We will focus on some of the most important implications of the three major findings summarized in the previous section. First, instruction may be more effective if it accounts for the existence of intermediate states of understanding, especially since these intermediate states vary depending on the specific conditional relation. For example, if an instructor focuses on the point ''an object moving at constant velocity must have zero net force acting on it,'' this may help some students move from the misconception level into the somewhat common ''cannot-be-opposite'' intermediate state for that conditional relation, but it is unlikely to help the significant population of students who were already in that intermediate state to advance to a fully correct understanding. Instead, instructors should be aware of the importance of focusing on the point that ''an object which is moving may have a net force opposite to its direction of motion.'' Furthermore, if instructors are not careful in their assessment, they may incorrectly infer that students in the intermediate, partially correct state have a complete understanding.
Another implication for instruction stems from the asymmetry in responses to conditional relations between force, velocity, and acceleration. This implies that students may consider conditional examples differently during instruction. That is, a student who sees an example demonstrating that an object with a given instantaneous velocity can have any value of net force acting on it may perceive this differently than an example in which an object with a given net force can have any value of velocity. Furthermore, these two different examples may address different intermediate levels of understanding, as mentioned earlier. Therefore, attention must be given to both kinds of examples in order to fully address student difficulties with understanding these relations.
Finally, evidence for potential hierarchies in understanding the relations between the directions of force, velocity, and acceleration naturally has important implications for the order of instructional units and the priorities for their mastery. For example, the results of this study imply that instructors must first ensure that students understand the relation between the directions of velocity and acceleration, as well as force and acceleration, in order to ensure that the students understand the relation between velocity and force, which is the source of common, compelling misconceptions. While from an expert point of view this order seems quite reasonable and perhaps expected, it is important to keep in mind that this study implies that teaching in the reverse order will not be as effective. Namely, teaching students first about the common misconceptions involving the relations between velocity and force may not be effective in preparing them to learn about the relations between velocity and acceleration or force and acceleration. While this and other implications for the order of instruction following from evidence of hierarchies of understanding are interesting results, clearly more carefully controlled intervention studies are needed in order to better establish their validity.
In summary, we have found that the carefully designed FVA test provides more comprehensive insight into student understanding of the relations between the directions of force, velocity, and acceleration in one dimension. Clearly the levels of understanding of these concepts have a rich and interesting structure, and the results of this study can help to inform careful decisions about the order and priorities of instruction, as well as the identification and use of critical types of example questions, to improve student understanding of this fundamental topic.

FIG. 1 .
FIG. 1. Summary of FVA scores. The graph on the left shows a trend of increasing post-test score with increasing class level for four different calculus mechanics courses ranging from first-year introductory to second-year physics majors. The graph on the right shows pre- and post-test scores for the two different honors introductory mechanics courses.

FIG. 3 .
FIG. 3. Mean student response percentages for all six conditional relation question types and for three course levels. Black: standard calculus mechanics, N = 228; white: honors for engineers calculus mechanics, N = 86; gray: second-year physics majors, N = 65. Error bars are ±1 standard error.

FIG. 4 .
FIG. 4. Within-student, pretest versus post-test response choice percentages for the three lowest scoring question types. Responses were categorized as either Correct, Part. Correct (partially correct response), or Miscon. (misconception-like response). The data presented are from the honors calculus mechanics course for engineers, N = 230, and the first-year honors physics majors calculus mechanics course, N = 49. Note that roughly half of the students did not change their answer from pre- to post-test; these students are represented in the three diagonal columns from back left to front right of each graph. Also note that a little less than half of the students improved by moving into or out of a partially correct response, or directly from the misconception response to the fully correct response (these students are represented in the three columns behind and to the right of the diagonal columns), and roughly 10% of students answered less correctly from pre- to post-test (represented in the three columns to the front and left of the diagonal). (A few examples from the honors for engineers v → F plot: 23% of students responded with a misconception on the pretest and a misconception on the post-test. 15% of students responded with a misconception on the pretest and a partially correct response on the post-test. 8% of students responded with a partially correct response on the pretest and a correct response on the post-test.)

TABLE I .
Question types investigated in a sample of previous studies. (The x → y notation indicates a question of the form: Given x, what can be inferred about y?)

TABLE IV. Outline of test administration and description of courses and populations.
Table V reports overall test statistics for several class levels, and Table VI reports individual item statistics for those same levels.

TABLE V. Summary of FVA test statistics.

TABLE VI. Summary of individual FVA test item statistics for three course levels. Reported are the response percentages for each available response on all 17 items. Correct response choices are in bold. (Note that question 9 required students to ''circle all that apply.'' Thus, response percentages reflect the percentage of students circling that response. The correct response was circling a, b, and d.) Pt-Bis. is the point-biserial coefficient.
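The point-biserial coefficient reported for each item is the Pearson correlation between a dichotomous item score (1 = correct, 0 = incorrect) and the total test score, and is a standard measure of item discrimination. As a minimal sketch (using hypothetical scores, not the study's data), it can be computed as follows:

```python
import statistics

def point_biserial(item_scores, total_scores):
    """Point-biserial coefficient: r_pb = (M1 - M0) / s * sqrt(p * q),
    where M1 and M0 are the mean total scores of students who answered the
    item correctly and incorrectly, s is the population standard deviation
    of total scores, p is the fraction answering correctly, and q = 1 - p."""
    n = len(item_scores)
    correct_totals = [t for i, t in zip(item_scores, total_scores) if i == 1]
    wrong_totals = [t for i, t in zip(item_scores, total_scores) if i == 0]
    p = len(correct_totals) / n
    q = 1 - p
    m1 = statistics.fmean(correct_totals)
    m0 = statistics.fmean(wrong_totals)
    s = statistics.pstdev(total_scores)
    return (m1 - m0) / s * (p * q) ** 0.5

# Hypothetical data for six students: item right/wrong and total test score
item = [1, 1, 0, 1, 0, 0]
total = [15, 13, 8, 14, 9, 7]
print(round(point_biserial(item, total), 3))
```

A high positive value indicates that students who answer the item correctly also tend to score well on the test overall.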

TABLE VII. Comparison of average response choice percentages for the FVA test and the ''multiple context'' tests (which include six different story contexts) for each question type. Reported are averages ± standard error across the questions.

Table VII reports the average percentages and standard errors across the six different questions for each test and compares them to the averages from the two questions used for each question type on the FVA test. Both inspection of Table VII and a corresponding χ² test suggest that the question types agree.

TABLE VIII. A simple method for ruling out or finding supporting evidence for possible hierarchical structure in answering.

TABLE X. A summary of hierarchy trends suggested by Table IX.

TABLE XII. Summary of hierarchy in gains for v → F and F → v with v → a and a → v.

TABLE XI. Within-student cross tabulations of gains in scores between various question types. Cells represent numbers of students. Data reported are from the honors-for-engineers course. A cell count is represented in bold face for tables which roughly satisfy conditions (discussed in Table VIII) consistent with significant hierarchies between the indicated question types. Cases in which the score was 2 out of 2 on both pre- and post-tests on a specific question type were removed; this helps to remove the less interesting ''ceiling cases'' that would register as ''no gain.'' Some question types had two questions posed; in this case the label ''Gain'' indicates an increase of at least one correct response for that question type, and ''No gain'' indicates either no increase in correct responses or a loss of correct responses for that question type. Note that φ is the mean square contingency coefficient between the question types, equivalent to the correlation coefficient for a 2 × 2 table.
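The φ coefficient for a 2 × 2 gain/no-gain cross tabulation can be computed directly from the four cell counts. A minimal sketch (the counts below are hypothetical, not the study's data):

```python
import math

def phi_2x2(a, b, c, d):
    """Phi (mean square contingency) coefficient for a 2x2 table
        [[a, b],
         [c, d]],
    equivalent to the Pearson correlation between two binary variables:
    phi = (ad - bc) / sqrt((a+b)(c+d)(a+c)(b+d))."""
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom

# Hypothetical cross tabulation: rows = gain / no gain on one question type,
# columns = gain / no gain on another question type
print(round(phi_2x2(30, 10, 5, 25), 3))
```

A φ near zero would rule out a strong association between gains on the two question types, while a large positive φ together with an asymmetric off-diagonal cell pattern is the kind of evidence used here to support a hierarchy.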

While previous research has examined one or two of these relationships at a time, the goal here was to holistically examine answering patterns for all six possible pairs of conditional relations in order to obtain a more coherent picture of student understanding of the relations among the concepts of force, velocity, and acceleration. The development of the instrument included multiple stages of revision with feedback via interviews and testing with standard and honors calculus-based introductory university students as well as second-year physics majors. The test has been shown to have significant statistical reliability as well as validity for the population tested, as demonstrated, for example, by significant correlations of FVA test score with course grade, level of the student, and Force Concept Inventory score. The overall test scores indicate that traditional calculus-based physics students performed poorly on the test, with an average score of about 40%, and even second-year physics majors found these questions somewhat challenging, with an average score of 70%. Furthermore, detailed patterns in student responses to the FVA test were analyzed, and several interesting findings were reported, as summarized below.