[LON-CAPA-cvs] cvs: modules /jerf/tests About_LON-CAPA_Testing.html Utils.pm

bowersj2 lon-capa-cvs@mail.lon-capa.org
Mon, 28 Jul 2003 16:07:20 -0000


This is a MIME encoded message

--bowersj21059408440
Content-Type: text/plain

bowersj2		Mon Jul 28 12:07:20 2003 EDT

  Removed files:               
    /modules/jerf/tests	About_LON-CAPA_Testing.html 

  Modified files:              
    /modules/jerf/tests	Utils.pm 
  Log:
  Moved docs into perldoc.
  
  
--bowersj21059408440
Content-Type: text/plain
Content-Disposition: attachment; filename="bowersj2-20030728120720.txt"

Index: modules/jerf/tests/Utils.pm
diff -u modules/jerf/tests/Utils.pm:1.12 modules/jerf/tests/Utils.pm:1.13
--- modules/jerf/tests/Utils.pm:1.12	Thu Jul 17 16:32:19 2003
+++ modules/jerf/tests/Utils.pm	Mon Jul 28 12:07:20 2003
@@ -1,7 +1,7 @@
 # The LearningOnline Network with CAPA
 # testing utilities
 #
-# $Id: Utils.pm,v 1.12 2003/07/17 20:32:19 bowersj2 Exp $
+# $Id: Utils.pm,v 1.13 2003/07/28 16:07:20 bowersj2 Exp $
 #
 # Copyright Michigan State University Board of Trustees
 #
@@ -29,6 +29,209 @@
 
 =pod
 
+=head1 NAME
+
+Utils - utility functions for testing
+
+=head1 SYNOPSIS
+
+Utils.pm - provides convenience functions for LON-CAPA unit testing,
+    such as easily creating and destroying test courses, and provides
+    objects representing users and courses.
+
+=head1 OVERVIEW
+
+LON-CAPA's biggest problem has been bugs.
+
+The problem, of course, is that any change to the code has the
+opportunity to introduce new bugs into the system, I<and> to cause old
+bugs to reassert themselves. A simple analysis shows that over time it
+becomes impossible to add features to a design without adding more
+bugs than the change fixes; at that point the software is dead.
+
+Software engineers have known about this problem for decades but
+surprisingly only in the last few years have answers to this problem
+started to crystallize. Unfortunately, the answers to this problem
+have typically come from radical methodologies such as "eXtreme
+Programming", which bring excessive baggage along with the solutions
+to the real problems we all face. Fortunately, it is possible to
+extract these solutions and use them independent of the radical
+philosophy.
+
+By far the most important thing to come out of eXtreme Programming is
+its testing methodology. We all agree in the abstract that "testing"
+is important, yet we (as in programmers in general, not LON-CAPA
+specifically) do little to none of it; what testing we do typically
+consists of just poking at the system and trying to break it, which is
+not systematic and will always fail to exercise the system
+completely. eXtreme Programming's contribution to testing is a
+practical methodology and framework for testing, along with some
+requirements for that testing framework.
+
+However, this is my (Jeremy's) synthesis of it to date, so don't
+expect perfect correspondence with XP. The advantage of this is that I
+can vouch for how well this works from personal experience, which I
+can't do with XP.
+
+=head2 What Is A Test?
+
+In XP, there are two types of tests, I<unit tests> and I<acceptance tests>. 
+
+Unit tests are tests written to exercise a particular module of the
+system, as independently from the rest as possible. They are written
+to ensure the module works correctly, and should test as many success
+and I<failure cases> as possible. For each test, you specify whether a
+given action will succeed or fail, and preferably specify, as
+precisely as possible, exactly how it succeeds (what it returns, or
+what side-effects it has).
+
+An acceptance test is specifically designed to test how well the
+system conforms to some user requirement. Personally, I don't think
+these are worth a separate categorization, because in the final
+analysis that is what I<all> testing is; it's just that the "user" of
+a module is the programmer, while the user of the program is what we'd
+traditionally call the user.
+
+A test is specifically designed to be "fire-and-forget"; it should be
+one simple command to fire a given test, or to fire all
+tests. Moreover, they should run in a reasonable amount of time. The
+idea is that as you are developing, you can frequently run the tests
+without feeling like you're always sitting and twiddling your thumbs.
+
+=head2 Why Test?
+
+Despite the fact that we all feel bad about it, it is obvious that
+"finding bugs" is not a sufficient justification for testing, or
+everybody would be doing it all the time. What else can testing offer
+us?
+
+=over 4
+
+=item * B<A clear specification of what the module should
+do>. Generally, a given concept should be in the code precisely once;
+it is this idea that underlies every single code structure proposal to
+date (OO, AOP, Agile Programming, metadata-based programming, the list
+goes on). Testing methodology modifies this to say that a concept
+should appear precisely twice: once in the code, once in the tests.
+
+Generally, one of the side-effects of testing is to cause you to
+organize your code into easily testable chunks, which may then end up
+being recombined, but do not generally need to be completely
+re-written, so this "extra code" is not generally an issue in
+practice.
+
+=item * B<Confidence>. If the code passes the tests, you can be
+confident it works. If you need to make a change or add capabilities
+to the module, you can re-run the tests to ensure all the old
+functionality still works. When a bug arises, you can code a test case
+for it and be confident that once you squash the bug it will never
+come back without you knowing.
+
+=item * B<Enough confidence to increase layering>. This is an
+important enough consequence of confidence that it's worth its own
+heading. Because you have confidence in the working of the various
+modules, you are much more confident that you can use the module as a
+part of a larger system, so you are much less inclined to build
+monolithic systems that touch each other only on explicit join
+points. Instead you can create a much more agile system that
+interconnects deeply, with the confidence that not only will it all
+work, but that most of the time if there is a problem, it will not be
+terribly difficult to find which module is causing it. (Of course,
+subtle errors will always exist that will be hard to localize. But
+once you do localize one, you can write a test for it and make sure it
+never comes back unnoticed.)
+
+=item * B<Enough confidence to re-factor>. This is also important
+enough to be worth its own heading. Because of the confidence
+mentioned in the previous point, you feel more capable of re-factoring
+as necessary to grow the code cleanly. Support from unit tests and a
+dedication to doing it right can keep you out of the trap, mentioned
+in the intro, where timid hacks here and there eventually strangle the
+product.
+
+B<In the long-term, this is the most important aspect of testing>,
+even more so than mere bug finding. The ability to refactor, sometimes
+even quite violently, and still be confident you have all the old
+capability you had before can be a huge market advantage, as you'll be
+able to safely add new capabilities while those with inferior testing
+methodologies remain stuck in that infinite-bug tweaking
+scenario. Every aspect of testing should be bent towards making sure
+this works out correctly; all the other benefits are incidental
+side-effects.
+
+=item * B<Increased development speed, both long- and short-term>. A
+persistent myth is that you can't afford to test, because you don't
+have time. The exact opposite is true. Good unit test support speeds
+long-term development by allowing you to perform radical re-factorings
+as quickly as you could add a timid hack. Good unit test support
+speeds development by helping enforce excellent modularization along
+fairly natural boundaries. Good unit test support helps pinpoint bugs
+by making it easy to run testing code over a problem section and gather
+a lot of data quickly. Good unit test support helps ensure squashed
+bugs stay down, even in the short-term.
+
+=back
+
+In the end, adding all the benefits of unit testing together reaps a
+multiplicative increase in development speed and an increase in
+development quality, simultaneously.
+
+So why haven't I (Jeremy) already started doing this? Unit testing in
+a web environment is non-trivial, especially when the environment was
+not written with testing in mind. Also, I have only begun to realize
+the benefits over the last few months, as I've applied it to my
+personal projects.
+
+=head2 The Unit Test Framework
+
+Despite three or four decades of "knowing" we should be doing testing,
+nobody has done it up until now, because without understanding what we
+could hope to gain from testing, we would miss critical aspects of how
+to test, and as a result not gain the benefits listed above. For
+instance, we might test only the final system, which gains nearly
+nothing; such tests come too late in the process to give most of the
+benefits listed above. Even if unit tests were written, it would be
+difficult to run them all at once, so nobody ever would. (It's
+absolutely critical that the tests be easy to fire off, which is easy
+to miss.)
+
+Working out the ideal unit testing framework is still an ongoing
+process, and nobody knows for certain what the final result will look
+like. But right now the best-of-breed testers all derive from a Java
+testing framework called JUnit. The closest Perl implementation is
+Test::Unit.
+
+Unit tests consist of separate Test objects that the framework then
+inspects to determine what to do. Each test has a setup and a teardown
+method which it can use to set up the environment. The object also
+has a number of test_* methods which the framework will execute,
+looking for errors as it goes. It collects the errors and reports them
+at the end of the run.
+
+It's supposed to be easy to run a specific file, or to run all tests.
+
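+As a quick sketch of what such a test looks like (this assumes the
+CPAN Test::Unit module; the ExampleTest class and its contents are
+hypothetical, purely for illustration):
+
+    package ExampleTest;
+    use base qw(Test::Unit::TestCase);
+
+    sub set_up {
+        # Called before each test_* method; note that a fresh
+        # ExampleTest object is created for every test.
+        my $self = shift;
+        $self->{counter} = 0;
+    }
+
+    sub tear_down {
+        # Called after each test_* method; undo set_up's work here.
+        my $self = shift;
+    }
+
+    sub test_increment {
+        my $self = shift;
+        $self->{counter}++;
+        $self->assert($self->{counter} == 1, "counter should be 1");
+    }
+
+    1;
+
+A class like this can then be handed to Test::Unit::TestRunner, which
+runs every test_* method and reports any failures at the end.
+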
+How to use the framework will be covered by example, in the
+ApacheRequest.pm and ApacheRequestTest.pm files. ApacheRequest is an
+object we have to write in order to test LON-CAPA, since to test a web
+system we need to simulate the requests as faithfully as possible
+without actually going through the server (so we have direct access to
+as much data as possible).
+
+=head2 How To Use Unit Tests
+
+Ideally, all tests should be run before any given commit. This is not
+always practical, but at the very least, all related tests should be
+executed before any commit. This helps ensure the system does not grow
+new bugs on a given commit.
+
+In addition, if we can start using these things, we should set up Data
+to run the tests every night, and automatically mail the dev list if
+something blows up.
+
+For each test_* sub, an entirely separate test object is created, so
+be aware that the setup and teardown routines will be run that many
+times. Module-scoped vars can be used as persistent globals if the
+need arises.
+
 =head1 Testing Utilities for LON-CAPA
 
 Certain functionality is necessary for correctly testing LON-CAPA with 
@@ -39,6 +242,27 @@
 =cut
 
 package Utils;
+
+=head1 Utility Functions
+
+Setting up tests requires much the same steps, over and over. These
+functions help with setting up tests easily.
+
+=over 4
+
+=item * B<setupCourse>($topLevelMap): Sets up a course using the test
+    resources in the test resource directory. $topLevelMap identifies
+    a map in the test resource directory to use as the top-level map
+    by filename ("all.problems.sequence"). 
+
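+For example (hypothetical usage; what setupCourse returns depends on
+the implementation):
+
+    my $course = Utils::setupCourse("all.problems.sequence");
+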
+=cut
+
+
+=pod
+
+=back
+
+=cut
 
 1;
 

--bowersj21059408440--