Archive for January, 2014

Thinking about the fraud against Target

Wednesday, January 22nd, 2014

I read an interesting article in the Wall Street Journal today.

Basically, the theory presented in the article is that there are these wonderful credit/debit cards with embedded chips that are much more secure than the current system. If only Target (and other retailers) had adopted these, we would have less fraud. Apparently, the fault lies with Target.

I imagine that the expected response to this article is “What were they thinking?” as the reader realizes that more-effective technology was at hand at what might have been a reasonable price.

I got to watch some of this play out in the late 1990’s. At the time, I was working as a technology-focused lawyer and one of the areas I worked on was electronic payment systems. I published a few papers on this. One, available from my website, appeared in 1998 in the Journal of Electronic Commerce, called “SPLAT! Requirements bugs on the information superhighway”.

The issues I wrote about in this (and related papers) involved the use of public-key encryption systems to guarantee identity. The same commercial-liability issues were coming up for chip cards, with the same rationale.

These systems offered the potential of significantly reducing fraud in consumer transactions. Fraud was seen as a big problem. With these savings of billions of dollars of losses, some credit card company representatives spoke of being able to noticeably lower their fees and interest rates. Who wouldn’t want that?

Unfortunately, some financial-services firms (and some other folks) saw an opportunity to address two grievances at once:

  1. They hated paying money to criminals committing fraud
  2. They hated guaranteeing every credit card transaction in the event of fraud—they wanted to put this risk back on the consumer, but current legislation wouldn’t let them

The proposals to adopt encryption-based identification systems in commerce tied these together. The proposed laws would:

  1. authorize the use of encryption-based identification as equivalent to an ink signature
  2. treat the encryption-based identification as absolutely authoritative, so that if someone successfully impersonated you, you would bear all the loss. Current law sticks the financial-services firms with the risk of credit-card fraud losses because they design the system and decide how much security to build into it. The proposed new system would be an alternative to the consumer-protected credit-card system. It would flip the risk to the consumer.

I think legislation that provided incentives to adopt encryption-based identification would have passed easily. For example, the legislation could have created a “rebuttable presumption” — an instruction to a court to assume that a message encrypted with your key came from you; if you wanted to deny that, you would have to prove otherwise. This legislation would have reduced fraud, which would benefit everyone. (Well, everyone but the criminals…)
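To make the mechanism concrete, here is a toy sketch of encryption-based identification using textbook RSA. Everything in it (the tiny primes, the numeric message) is invented for illustration; real systems use keys hundreds of digits long with proper padding, and this toy construction offers no actual security:

```python
# Toy illustration of encryption-based identification ("digital signatures").
# Textbook RSA with tiny, made-up primes -- for illustration only.

p, q = 61, 53                 # toy primes (real keys use far larger primes)
n = p * q                     # public modulus
phi = (p - 1) * (q - 1)
e = 17                        # public exponent (published)
d = pow(e, -1, phi)           # private exponent (kept secret by the signer)

def sign(message: int, private_d: int) -> int:
    """The signer transforms the message with the private key."""
    return pow(message, private_d, n)

def verify(message: int, signature: int, public_e: int) -> bool:
    """Anyone can check the signature using only the public key."""
    return pow(signature, public_e, n) == message

m = 65                        # a (numeric) message
sig = sign(m, d)
assert verify(m, sig, e)      # a genuine signature verifies
assert not verify(m + 1, sig, e)  # an altered message fails verification
```

The point of the design is the asymmetry: only the holder of the private key can produce a signature that the public key verifies, which is why the proposals treated a verified signature as strong evidence of the signer’s identity.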

Unfortunately, the demand went further. Even if you could prove that you were the victim of identity theft that was in no way your fault, you would still be held accountable for the loss. 

The lawyers advocating for incentivizing encryption-based identification weren’t willing to separate the two proposals. The result of their inflexibility was opposition to encryption-based payment-related identification systems (including chip cards). One dimension of the opposition was technical: the security of the payment systems was almost certainly lower than the most enthusiastic proponents imagined, and therefore the risk of fraud created by the system (rather than by negligence of the consumer) was greater. Another dimension was irritation with what was perceived as greed and unwillingness to compromise.

Back then, I saw this play out because I was helping a committee write the Uniform Electronic Transactions Act (UETA). This eventually passed in most states and was then federalized under the name ESIGN. ESIGN now governs electronic payments in the United States. The multi-year drafting process that yielded UETA/ESIGN offered a unique opportunity to write incentives for stronger identification systems into the laws governing electronic payments. Instead, we chose to write legislation that accepted a status quo that involved too much fraud, with prospects of much worse fraud to come. I was one of the people who successfully encouraged the UETA drafting committee to take this less-secure route because there was no politically-feasible path to what seemed like the obvious compromise.

Our economy has benefited enormously from legislation that lets you buy something by clicking “I agree”, without having to sign a physical piece of paper with a physical ink-pen. We could have done this better. Instead, we accepted the predictable future outcome that the United States would continue to use insecure payment systems that would result in ongoing fraud, like the latest attacks on Target, Neiman Marcus, and (apparently, according to recent reports) at least six other national retailers.

On the design of advanced courses in software testing

Sunday, January 19th, 2014

This year’s Workshop on Teaching Software Testing (WTST 2014) is on teaching advanced courses in software testing. During the workshop, I expect we will compare notes on how we design/evaluate advanced courses in testing and how we recognize people who have completed advanced training.

This post is an overview of one of the two presentations I am planning for WTST.

This presentation will consider the design of the courses. The actual presentation will rely heavily on examples, mainly from BBST (Foundations, Bug Advocacy, Test Design), from our new Domain Testing course, and from some of my non-testing courses, especially statistics and metrics. The slides that go with these notes will appear at the WTST site in late January or early February.

In the education community, a discussion like this would come as part of a discussion of curriculum design. That discussion would look more broadly at the context of the curriculum decisions, often considering several historical, political, socioeconomic, and psychological issues. My discussion is more narrowly focused on the selection of materials, assessment methods and teaching-style tradeoffs in a specialized course in a technical field. The broader issues come into play, but I find it more personally useful to think along six narrower dimensions:

  • content
  • institutional considerations
  • skill development
  • instructional style
  • expectations of student performance
  • credentialing


In terms of the definition of “advanced”, I think the primary agreement in the instructional community is that there is no agreement about the substance of advanced courses. A course can be called advanced if it builds on other courses. Under this institutional definition, the ordering of topics and skills (introductory to advanced) determines what is advanced, but that ordering is often determined by preference or politics rather than by principle.

I am NOT just talking here about fields whose curricula involve a lot of controversy. Let me give an example. I am currently teaching Applied Statistics (Computer Science 2410). This is parallel in prerequisites and difficulty to the Math department’s course on Mathematical Statistics (MTH 2400). When I started teaching this, I made several assumptions about what my students would know, based on years of experience with the (1970’s to 1990’s) Canadian curriculum. I assumed incorrectly that students would learn very early about the axioms underlying algebra—this was often taught as Math 100 (1st course in the university curriculum). Here, it seems common to find that material in 3rd year. I also assumed incorrectly that my students would be very experienced in the basics of proving theorems. Again mistaken, and to my shock: many CS students will graduate, having taken several required math courses, with minimal skills in formal logic or theorem proving. I’m uncomfortable with these choices (in the “somebody moved my cheese” sense of the word “uncomfortable”)—it doesn’t feel right, but I am confident that these students studied other topics instead, topics that I would consider 3rd-year or 4th-year. Even in math, curriculum design is fluid, and topics that some of us consider foundational, others consider advanced.

In a field like ours (testing) that is far more encumbered with controversy, there is a strong argument for humility when discussing what is “foundational” and what is “advanced”.

Institutional Considerations

In my experience, one of the challenges in teaching advanced topics is that many students will sign up who lack basic knowledge and skills, or who expect to use this course as an opportunity to relitigate what they learned in their basic course(s). This is a problem in commercial and university courses, but in my experience, it is much easier to manage in a university because of the strength and visibility of the institutional support.

To make space for advanced courses, institutions that designate a course as advanced are likely to:

  • state and enforce prerequisites (courses that must be taken, or knowledge/skill that must be demonstrated before the student can enroll in the advanced course)
  • accept transfer credit (a course can be designated as equivalent to one of the institution’s courses and serve as a prerequisite for the advanced course)

The designation sets expectations. Typically, this gives instructors room to:

  1. limit class time spent rehashing foundational material
  2. address topics that go beyond the foundational material (whatever material this institution has designated as foundational)
  3. tell students who do not know the foundational material (or who cannot apply it to the content of the advanced course) that it is their responsibility to catch up to the rest of the class, not the course’s responsibility to slow down for them
  4. demand an increased level of individual performance from the students (not just work products on harder topics, but better work products that the student produces with less handholding from the instructor)

Note clearly that in an institution like a university, the decisions about what is foundational, what is advanced, and what prerequisites are required for a particular course are made by groups of instructors, not by the administrators of the institution. This is an idealized model–it is common for institutional administrators to push back, encouraging instructors to minimize the number of prerequisites they demand for any particular course and encouraging instructors to take a broader view of equivalence when evaluating transfer credits. But at its core, the administration adopts structures that support the four benefits that I listed above (and probably others). I think this is the essence of what we mean by “protecting the standards” of the institution.

Skill Development

I think of a skill as a type of knowledge that you can apply (you use it, rather than describe it), and your application (your performance) improves with deliberate practice.

Students don’t just learn content in courses. They learn how to learn, how to investigate and find/create new ideas or knowledge on their own, how to find and understand the technical material of their field, how to critically evaluate ideas and data, how to communicate what they know, how to work with other students, and so on. Every course teaches some of these to some degree. Some courses are focused on these learning skills.

Competent performance in a professional field involves skills that go beyond the learning skills. For example, skills we must often apply in software testing include:

  • many test design techniques (domain testing, specification-based testing, etc.). Testers get better with these through a combination of theoretical instruction, practice, and critical feedback
  • many operational tasks (setting up test systems, running tests, noting what happened)
  • many advanced communication skills (writing that combines technical, persuasive and financial considerations)

Taxonomies like Bloom’s make the distinction between memorizable knowledge and application (which I’m describing as skill here). Some courses, and some exams, are primarily about memorizable knowledge and some are primarily about application.

In general, in my own teaching, I think of courses that focus on memorizable knowledge as survey courses (broad and shallow). I think of survey courses as foundational rather than advanced.

Most survey courses involve some application. The student learns to apply some of the content. In many cases, the student can’t understand the content without learning to apply it at least to simple cases. (In our field, I think domain testing–boundary and equivalence class analysis–is like this.) It seems to me that courses run on a continuum: how much emphasis they place on learning things you can remember and describe versus learning ways to apply knowledge more effectively. I think of a course that is primarily a survey course as a survey course, even if it includes some application.
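As a concrete sketch of what “applying” domain testing looks like even in simple cases, consider a hypothetical input field that accepts integer ages from 18 through 65 (the field and its valid range are invented for illustration):

```python
# A minimal sketch of domain testing (boundary and equivalence class analysis)
# against a hypothetical age field whose valid range is 18..65 inclusive.

def accepts_age(age: int) -> bool:
    """The (hypothetical) function under test: valid iff 18 <= age <= 65."""
    return 18 <= age <= 65

# Three equivalence classes: below-range, in-range, above-range.
# Boundary analysis says to test at, and just outside, each boundary.
cases = {
    17: False,  # just below the lower boundary (invalid class)
    18: True,   # the lower boundary itself (valid class)
    65: True,   # the upper boundary itself (valid class)
    66: False,  # just above the upper boundary (invalid class)
    40: True,   # one interior representative of the valid class
}

for age, expected in cases.items():
    assert accepts_age(age) == expected, f"unexpected result for age={age}"
```

The equivalence classes justify picking one representative from each region, and the boundary pairs (17/18 and 65/66) target the off-by-one errors that boundary analysis is designed to catch.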

Instructional Style

Lecture courses are probably the easiest to design and the easiest to sell. Commercial and university students seem to prefer courses that involve a high proportion of live lecture.

Lectures are effective for introducing students to a field. They introduce vocabulary (not that students remember much of it–they forget most of what they learn in lecture). They convey attitudes and introduce students to the culture of the field. They can give students the sense that this material is approachable and worth studying. And they entertain.

Lectures are poor vehicles for application of the material (there’s little space for students to try things out, get feedback and try them again).

In my experience, they are usually also poor vehicles for critical thinking (evaluating the material). Some lecturers develop a style that demands critical thinking from the students (think of law schools) but I think this requires very strong cultural support. Students understand, in law school, that they will flunk out if they come to class unprepared and are unwilling or unable to present and defend ideas quickly, in response to questions that might come from a professor at any time. Lawyers view the ability to analyze, articulate and defend in real time as a core skill in their field, and so this approach to teaching is considered appropriate. In other fields that don’t prioritize oral argumentation so highly, a professor who relied on this teaching style and demanded high performance from every student would be treated as unusual and perhaps inappropriate.

As students progress from basic to advanced, the core experiences they need to support further progress also change, from lecture to activities that require them to do more–more applications to increasingly complex tasks, more critical evaluation of what they are doing, what others are doing, and what they are being told to do or to accept as correct or wise. Fewer things are correct. More are better-for-these-reasons or better-for-these-purposes.

Expectations of Student Performance

More advanced courses demand that students take more responsibility for the quality of their work:

  • The students expect, and tolerate, less specific instructions. If they don’t understand the instructions, the students understand that it is their responsibility to ask for clarification or to do other research to fill in the blanks.
  • The students don’t expect (or know they are not likely to get) worked examples that they can model their answers from or rubrics (step-by-step evaluation guides) that they can use to structure their answers. These are examples of scaffolding, instructional support structures to help junior students accomplish new things. They are like the training wheels on bicycles. Eventually, students have to learn to ride without them. Not just how to ride down this street for these three blocks, but how to ride anywhere without them. Losing the scaffolding is painful for many students and some students protest emphatically that it is unfair to take these away. I think the trend in many universities has been to provide more scaffolding for longer. This cuts back on student appeals and seems to please accreditors (university evaluators) but I think this delays students’ maturation in their field (and generally in their education).

One of the puzzles of commercial instruction is how to assess student performance. We often think of assessment in terms of passing or failing a course. However, assessment is more broadly important, for giving a student feedback on how well she knows the material or how well she does a task. There has been so much emphasis on high-stakes assessment (you pass or you fail) in academic instruction that many students don’t understand the concept of formative assessment (assessment primarily done to give the student feedback in order to help the student learn). This is a big issue in university instruction too, but my experience is that commercial students are more likely to be upset and offended when they are given tough tasks and told they didn’t perform well on them. My experience is that they will make more vehement demands for training wheels in the name of fairness, without being willing to accept the idea that they will learn more from harder and less-well-specified tasks.

Things are not so well specified at work. More advanced instruction prepares students more effectively for the uncertainties and demands of real life. I believe that preparation involves putting students into uncertain and demanding situations, helping them accept this as normal, and helping them learn to cope with situations like these more effectively.


Credentialing

Several groups offer credentials in our field. I wrote recently about credentialing in software testing. My thoughts on that will come in a separate note to WTST participants, and a separate presentation.