# [LON-CAPA-cvs] cvs: modules /gerd/correlpaper correlations.tex

www lon-capa-cvs@mail.lon-capa.org
Wed, 09 Aug 2006 18:43:41 -0000

This is a MIME encoded message

--www1155149021
Content-Type: text/plain

www		Wed Aug  9 14:43:41 2006 EDT

Modified files:
/modules/gerd/correlpaper	correlations.tex
Log:
Per reviewer comment:
* clarify who did the classification
* explicitly state hypotheses

--www1155149021
Content-Type: text/plain
Content-Disposition: attachment; filename="www-20060809144341.txt"

Index: modules/gerd/correlpaper/correlations.tex
diff -u modules/gerd/correlpaper/correlations.tex:1.6 modules/gerd/correlpaper/correlations.tex:1.7
--- modules/gerd/correlpaper/correlations.tex:1.6	Wed Aug  9 10:25:59 2006
+++ modules/gerd/correlpaper/correlations.tex	Wed Aug  9 14:43:39 2006
@@ -72,9 +72,9 @@

\section{\label{measures}Measures and Instruments}
\subsection{\label{discussion}Discussion Analysis}
-We analyzed the online student discussions that were associated with the online homework given in the course, using the scheme first suggested in Ref.~\cite{kortemeyer05ana}. There were a total of 2405 such online discussion contributions over the course of the semester.
+The author analyzed the online student discussions that were associated with the online homework given in the course, using the scheme first suggested in Ref.~\cite{kortemeyer05ana}. There were a total of 2405 such online discussion contributions over the course of the semester.

-Each contribution was classified according to the classification scheme of Ref.~\cite{kortemeyer05ana}, however, with the additional refinement that each contribution could be member of more than one class, and that the contributions were weighted by their length. For example, a certain contribution might include both a procedural solution-oriented question and a surface-level mathematical answer, and would thus receive 50\% membership in both classes, weighted by its total length.
+Each contribution was classified by the author according to the classification scheme of Ref.~\cite{kortemeyer05ana}, with the additional refinement that each contribution could be a member of more than one class and that contributions were weighted by their length. For example, a contribution might include both a procedural solution-oriented question and a surface-level mathematical answer; it would thus receive 50\% membership in each of the two classes, weighted by its total length. Student names were not available during classification, in order to avoid bias.

The analysis was carried out based on discussion superclasses~\cite{kortemeyer05ana}, for example, all conceptual classes were combined, independent of their features. A given contribution can thus belong to more than one superclass.
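The length-weighted, multi-class tallying described above can be sketched as follows; this is a minimal illustration, and the class names, word counts, and contributions are invented for the example rather than taken from the actual coding data:

```python
from collections import defaultdict

def tally(contributions):
    """Accumulate length-weighted fractional class memberships.

    Each contribution is (length, [classes it belongs to]); its
    membership is split evenly among its classes, so a contribution
    in two classes counts 50% toward each, weighted by its length.
    """
    totals = defaultdict(float)
    for length, classes in contributions:
        share = length / len(classes)  # fractional, length-weighted membership
        for c in classes:
            totals[c] += share
    return dict(totals)

# Invented example: a 120-word contribution that is both a procedural
# solution-oriented question and a surface-level mathematical answer,
# plus an 80-word purely physics-related contribution.
example = [(120, ["solution-oriented", "surface math"]),
           (80, ["physics-related"])]
weights = tally(example)
```

The same tallies can then be aggregated over superclasses, since a contribution may belong to more than one superclass as well.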

@@ -208,10 +208,19 @@

\subsection{\label{performance}Measures of Student Learning}
As a measure of student conceptual understanding and learning, we deployed the revised Force Concept Inventory (FCI)~\cite{fci} at the beginning and the end of the course, again with voluntary participation. As an additional measure of student performance, the performance on the final exam and the course grade for each student were taken into consideration. For the grade, we used the raw percentage score, not the number grade, since it provides finer-grained information about the overall student performance in the course.
+
+\section{\label{hypo}Hypotheses}
+Among the measures described in section~\ref{measures}, a number of correlations are to be expected:
+\begin{enumerate}
+\item Student responses to clusters on the MPEX should correlate with corresponding discussion behavior patterns
+\item Student performance on the FCI should correlate positively with desirable and negatively with undesirable discussion behavior
+\item Student performance on the FCI should positively correlate with performance on the MPEX
+\end{enumerate}
+Somewhat less generally, since they depend on the grading mechanism implemented by the instructor, corresponding correlations should exist with the final exam and the course grade.
\section{\label{results}Results}
In this section, we present the correlations among the different instruments and measures.
\subsection{Correlation Table}
-Table~\ref{fullresults} shows the complete results of the study. In the columns of the table, we listed:
+Table~\ref{fullresults} shows the complete correlation results of the study. In the columns of the table, we listed:
final exam performance;
final (post) FCI score;
@@ -311,7 +320,7 @@

\begin{figure}
\includegraphics[width=9cm]{fcipostmpexpost}
-\caption{\label{mpexfci}Correlation of the final FCI score with the MPEX score ($R=0.24$; $n=97$).}
+\caption{\label{mpexfci}Correlation of the final FCI score with the MPEX score ($R=0.24~[0.04, 0.42]$; $n=97$).}
\end{figure}
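The bracketed intervals reported in the figure captions can be reproduced with the standard Fisher z-transformation; a minimal sketch, assuming two-sided 95\% normal intervals and bivariate normality (the paper does not state its exact computation, so this is an assumption):

```python
import math

def pearson_ci(r, n, z_crit=1.959964):
    """Approximate 95% confidence interval for a Pearson correlation
    via the Fisher z-transformation; z_crit is the two-sided 95%
    normal quantile. Assumes bivariate normality."""
    z = math.atanh(r)                # Fisher z of the sample correlation
    se = 1.0 / math.sqrt(n - 3)      # standard error on the z scale
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)  # back-transform to the r scale

# The MPEX/FCI correlation above: R = 0.24, n = 97
lo, hi = pearson_ci(0.24, 97)        # ≈ (0.04, 0.42), matching the caption
```

The same function reproduces the solution-oriented/FCI interval, $R=-0.58$, $n=57$, as approximately $[-0.73, -0.38]$.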

\begin{figure*}
@@ -323,12 +332,25 @@
\includegraphics[width=9cm]{fcipostsolutionT}
\caption{\label{solutionfci}Correlation of percentage solution-oriented discussions with final FCI score ($R=-0.58~[-0.73, -0.38]$; $n=57$).}
\end{figure}
-\section{Discussion of the Results}
+\section{Discussion of the Correlation Results}
Correlations between Grade, Final Exam, FCI, MPEX, and student discussion behavior turned out to be lower than expected. The strongest correlations exist with the final score on the FCI, namely $R=0.56$ with the grade percentage in the course, $R=0.51$ with the prominence of physics-related discussions, and $R=-0.58$ with the prominence of solution-oriented discussions.

An unexpected result was the low correlation between the MPEX cluster scores and student discussion behavior. We thus cannot conclude that student discussion behavior is strongly correlated with student attitudes and expectations as measured by the MPEX. Student discussions and the MPEX also correlate differently with measures of learning: student discussion correlates more strongly with the FCI, and the MPEX more strongly with course grades and the final exam.

-The relative weakness of many of the expected correlations with the MPEX might indicate that maybe -- in spite of the efforts of the author -- the students did not take the MPEX very seriously or did not carefully read the statements. An argument for this possible explanation is that the overall scores of the students on the MPEX were low (Independence 42\%; Coherence 46\%; Concepts 48\%; Reality Link 55\%; Math Link 40\%; Effort 47\%). Also, students relatively frequently chose the answer "3" ("Neutral") on the MPEX Likert scale, which is by definition never correct --- answering that way could indicate true indifference, or confusion regarding the statement, or simply "don't care." By the same token, students appear to be taking the FCI more seriously, probably because it more closely matches the other (grade-relevant) assessments they encounter in the course, and students tend to based their relative value system regarding a subject area on the assessments used~\cite{lin}. The FCI seems to be fairly robust in ungraded settings, see for example Henderson~\cite{henderson}, who found only 0.5 points difference between graded and ungraded administration of the FCI --- the MPEX, which is never graded, may in fact be far less robust to the perception of  not counting."
+Regarding the hypotheses stated in section~\ref{hypo},
+\begin{enumerate}
+\item a correlation between performance on MPEX clusters and discussion behavior exhibited online could not be confirmed
+\item a medium negative correlation between the prominence of solution-oriented discussions and the FCI score, and a medium positive correlation between the prominence of physics-related discussions and the FCI score, could be confirmed, while correlations with other discussion characteristics could not be confirmed at the 95\% confidence level
+\item a correlation between FCI and MPEX scores could not be confirmed at the 95\% confidence level
+\end{enumerate}
+Medium correlations exist between performance on the final exam and the course grade on the one hand, and FCI performance on the other, but the same could not be confirmed for the MPEX scores.
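The phrase "confirmed at the 95\% confidence level" used here can be read as the 95\% confidence interval excluding zero; a trivial sketch of that criterion (the first interval is taken from the figure captions, the second is an invented non-significant interval for contrast):

```python
def confirmed_95(ci):
    """A correlation is taken as confirmed at the 95% confidence
    level when its 95% confidence interval excludes zero."""
    lo, hi = ci
    return lo > 0 or hi < 0

confirmed_95((-0.73, -0.38))  # solution-oriented vs. FCI: interval excludes zero
confirmed_95((-0.10, 0.30))   # illustrative interval straddling zero: not confirmed
```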
+\section{Discussion of Possible Causal Relationships}
+A purely correlational study does not allow any conclusions regarding causal relationships.
+\subsection{Discrepancy in the Correlational Power of the MPEX and the FCI}
+
+The relative weakness of many of the expected correlations with the MPEX might indicate that -- in spite of the efforts of the author -- the students did not take the MPEX very seriously or did not carefully read the statements. An argument for this explanation is that the overall scores of the students on the MPEX were low (Independence 42\%; Coherence 46\%; Concepts 48\%; Reality Link 55\%; Math Link 40\%; Effort 47\%). Also, students relatively frequently chose the answer "3" ("Neutral") on the MPEX Likert scale, which is by definition never correct --- answering that way could indicate true indifference, confusion regarding the statement, or simply "don't care."
+
+By the same token, students appear to take the FCI more seriously, probably because it more closely matches the other (grade-relevant) assessments they encounter in the course, and students tend to base their relative value system regarding a subject area on the assessments used~\cite{lin}. The FCI seems to be fairly robust in ungraded settings; see, for example, Henderson~\cite{henderson}, who found only a 0.5-point difference between graded and ungraded administrations of the FCI --- the MPEX, which is never graded, may in fact be far less robust to the perception of "not counting."

\section{Conclusions}
In this introductory calculus-based course, correlations between different performance and attitude indicators were found to be lower than expected. Student discussion behavior generally correlates more strongly with student performance (FCI, final exam, grade) than MPEX results do. In particular, the prominence of solution-oriented and physics-related discussions correlates relatively strongly with the FCI.

--www1155149021--