Quality

What Constitutes Quality in Web-Based Training?

Everyone involved in taking, producing, and delivering online learning would agree that quality is paramount; however, objectively measuring quality is difficult and infrequently undertaken. Quality is an unstated expectation, yet we rarely use a formal process for assessing the quality of training products. Poor design, project underfunding, overly optimistic schedules, and technical barriers are all enemies of quality. So is the apathy of buyers who just want some training, any training, and of developers who focus on the quantity of offerings and an impressive client list. A methodical approach to evaluation can help remove subjective biases and achieve a more authoritative analysis.

We tend to judge quality only from the perspective of our own domain. Consider the views of all the stakeholders: the training manager; the designer/developer; the system administrator/IT manager who will host the application; and, of course, the end users. In some cases a quality measure is of no concern to one stakeholder while of considerable importance to another. Learner-centered design would propose that you make all decisions exclusively for the learners' benefit; yet all stakeholders must be partners if success is to be achieved. Since development and delivery are a team effort, one must weigh all viewpoints on what constitutes quality.

 

Quality Measures

A checklist of value statements can be used to objectively measure quality. By weighting the measures, and thus ordering them by importance, we develop a process for objectively evaluating online training products.

Here are 22 value statements about web-based training quality. Consider these when planning to develop WBT or when buying off-the-shelf WBT.

WBT Quality Measures

1. The training . . . meets the objectives.

This quality measure is easy to state but difficult to quantify. Certainly, learning objectives defined by instructional designers must be reached for the training to be considered successful. These objectives anticipate cognitive, psychomotor, and/or affective changes, and they are measurable by using appropriate evaluation strategies.

Training managers prioritize differently: for them, the central objective is human performance improvement that increases organizational value.

2. The training . . . is learner-centered.

Quality training focuses on people, not content. We've all been in lecture halls where the lecturer may not even have seen us, much less addressed our individual learning needs. The best training takes the role of private tutor: presenting, coaching, evaluating, adapting and supporting. Many of the other quality measures detail methods of supporting the learner-centered design philosophy.

3. The training . . . provides high levels of interactivity.

Interactivity is marketed as the ultimate quality measure for web sites and training applications. Computer gaming would not exist without interactivity. Interactions force active participation. Yet, one must ask whether the interactions support learning or merely activate features or navigate an information space. Using the mouse to control a scalpel in a simulated frog dissection enhances learning. Clicking a forward arrow provides no contextual benefit though it does enable learner control. Interactions that support learning objectives are the most valuable.

4. The training . . . is engaging.

Interactivity is an essential strategy for engaging a learner, but engagement goes beyond this. Are the content and its presentation interesting? More important, will the learner recognize the relevance of the content to workplace tasks? Is the training realistic and hands-on? Reaching the highest levels of learning (articulation, analysis, and synthesis) can only be achieved by engaging the learner fully. Correct media selection and a presentation supportive of individual learning styles help maximize engagement.

5. The training . . . accommodates individual learning styles.

Users may fail to learn if the training design does not accommodate their learning styles. Individuals have hemispheric processing preferences and dominant learning styles. These styles may be related to sensory perception (seeing, hearing, moving, touching) or intellectual processing (reading, thinking, abstract reasoning). There are many formal models for describing the learning styles of children and adults. They all point to the fact that individuals learn differently, and training must recognize and accommodate these differences.

6. The training . . . uses media effectively.

Does the training demonstrate wise media selection throughout, or are media used for their own sake? Some developers use technologies like video and chat just because they know how, or because the boss wants lots of "bells and whistles." Can you justify the "talking head" video?

7. The training . . . helps users apply learning productively.

The relevance of the training plays into evaluation of quality. Businesses want to provide employees training that can be applied directly or indirectly to their business problems. University online learning applications should provide the foundation for more advanced coursework while stimulating independent creative thinking, problem-solving, and cross-domain application.

8. The training . . . adheres to the Instructional Systems Design (ISD) or similar model.

Instructional design models, such as ISD, formalize the process of defining, developing, and analyzing training. If the training adheres to a model, it most likely has gone through user analysis, training analysis, technical analysis, a rigorous design process, development, and a final post-implementation evaluation. These steps ensure that the training is meeting its goals. Which process is used is not as important as the fact that there is a comprehensive process.

9. The training . . . presents information in an organized, coherent manner while allowing user control of learning (cognitive usability).

User-centered design dictates that users be in control of the learning process, each choosing the approach most comfortable to him or her. Some students flourish in an environment where a non-linear design accommodates their pattern for learning. Absolute system control over presentation and patterns of learning must be avoided. Nonetheless, other students prefer guidance in the form of a preferred path through the information space with options to change course and assume user control at will. Multiple navigational controls benefit both kinds of learners and improve satisfaction.

10. The training . . . presents extended learning opportunities.

Surely, the technical ability to link to external resources presents extraordinary opportunities for the user to extend learning. Remember, however, that links most often lead to information, not training. As learning object technologies mature, we will see training applications defined only as the entry point to an infinite number of interconnected learning nodes. Don't mistake long lists of hyperlinks for learning objects. Such lists can be detrimental if they distract from the training focus. Creative designs present carefully selected external resource links that reinforce the learning objectives.

11. The training . . . has completed post-implementation evaluations and subsequent revision.

This measure focuses on the body of learners rather than the individual. A post-implementation evaluation determines if the training provides expected benefits to individual and organization. Evaluations can be simple, typified by a satisfaction questionnaire, or as complex as an organizational impact analysis. There is an inverse correlation between the quality of the planning and instructional design and the nature and quantity of revisions needed to perfect the training. If I were buying off-the-shelf web-based training, I would evaluate results of testing in environments similar to mine.

12. The training . . . demonstrates good usability through excellent user interface design.

There are many rules of good web page design, such as avoiding "dead-end" branches and nesting branches no more than four levels deep. Interface design goes further to consider iconic representation, control and information placement, visual and aural cueing, and spatial mapping of the information space. Learning can occur more rapidly when users are able to make mental order of the information space; this means knowing where they are and how far they have to go in the four-dimensional hypermedia world of web-based training. The training application's usability correlates directly with the ease of mental navigation in the information space and cognizance of one's surroundings.

13. The training . . . continually adapts to the user's knowledge or skills.

WBT applications that dynamically adapt to the user are, as we say on the farm, as scarce as hens' teeth. You may encounter so-called adaptive training that forces one to answer questions correctly before allowing one to proceed. In a truly supportive adaptive learning application, the courseware will monitor interactions and usage patterns, then customize content delivery to accommodate learning styles, pace, and perhaps prerequisite knowledge and skills. Adaptive courseware exhibits the human qualities of understanding, patience and persistence. Score high.

14. The training . . . validates learning at each curriculum event.

Quality design demands that learners are tested throughout the training cycle. There are a variety of testing strategies that might be used, so don't expect to see multiple-choice questions all the time. Each time new material is covered, an evaluation opportunity is presented. The testing strategy should be followed by feedback, remediation opportunities, or adaptive content delivery.

15. The training . . . uses group-enabling technologies (mail lists, chat, forums, multicasts) only where they are most effective.

If group interactions are important to learning, does this training effectively use group-enabling technologies? On the other hand, if the learning objectives are easily met by self-paced individual instruction, is there any value in adding group technologies? Will those technologies be supported and used?

16. The training . . . promotes a positive user experience with computer technology.

"I had doubts, but yes, I really learned that material."
"That's a fun way to learn and I know that I know."
"I was in control and that helped me learn my way. Fast, too."

These kinds of comments reveal a positive experience, one where learning is the center of attention, not technology. Training that can elicit such responses will motivate users to continue learning and to apply what they learn to their work. Identify and use technologies that are supportive of learning rather than restrictive or frustrating.

17. The training . . . records student data, such as login information, scores, usage statistics, prescriptions for learning, etc.

Record keeping, for whatever purpose, is not as easily implemented as web pages. That's why you see so many static tutorials on the Web. Yet record keeping can be used to retain information and control content delivery. Students appreciate something as simple as having their place marked during a break in learning. Managers love to have a comprehensive set of data upon which to evaluate training effectiveness and costs. Decide for yourself the added value of record keeping, but consider all perspectives.

18. The training . . . will not exceed practical bandwidth limitations of the network.

"Whoa! You're not going to run that on MY network!" That was a real response from an IT manager not consulted by the training manager before beginning a project.

WBT should be designed in consideration of real-world bandwidth limitations. A meticulous technical analysis of your network infrastructure will identify the bandwidth ceiling the designers must respect. If the course will be delivered over the public internet, use bandwidth-friendly technologies. If you are considering buying access to online training from a vendor's catalog, test performance exhaustively. Performance lapses break concentration and destroy engagement.

19. The training . . . is easy to access, easy to install.

As users of online learning, we want to access that content with a minimum of effort, from enrollment to completion. Barriers in the form of complex sign-up procedures, software installs, under-performing connections, and ineffective interfaces detract from the mission of learning. Highly creative and instructionally sound designs can often offset the need for complex technologies. Nonetheless, the initial pains of installation and learning to cope with technology can be rewarded handsomely with richly interactive training applications.

20. The training . . . ensures best value for training costs.

Buyers want the highest quality training at the lowest cost. Learners want to learn more, learn faster, and retain and apply what is learned. Managers want results leading to increased productivity, motivated and satisfied workers, and organizational competitiveness.

Sellers want to deliver customer value at a reasonable profit. Academic institutions, threatened with rising costs and diminishing enrollments, are scrutinizing distance education as a solution to both problems.

Quality training ensures value for buyers and sellers.

21. The training's content is accurate and timely.

We all assume that training developed by a subject matter expert is accurate. This may not be a useful assumption. In fact, the training may have been developed by persons with little in-depth knowledge of the content area. I have recently taken certification training in a highly technical area only to find the material somewhat outdated and littered with out-and-out false information. Questioning the content's accuracy is prudent.

22. The training . . . follows industry standards for interoperability.

Several initiatives are now formalizing standards for interoperability of online training components; among these organizations are ADL, IEEE, AICC, IMS, and several others. Based on knowledge object concepts, components are described by metadata (data about data) that enables information exchange across development languages, platforms, and training management systems. Since this work is ongoing and the standards are not mature, existing courses will not meet them. However, the impact of standards is profound: the ability of disparate knowledge objects to work together in unanticipated ways creates opportunities for unique courseware customized to specific learning needs. Expect this quality measure to rise in importance as standards are developed, accepted, and implemented.
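
To make "metadata" concrete, here is a sketch of a hypothetical learning-object metadata record. Every field name below is invented for illustration; the real schemas are defined by the standards bodies named above, not by this example.

```python
# Hypothetical learning-object metadata record. The field names are invented
# for illustration and do not follow any particular standard's schema.
learning_object = {
    "identifier": "lo-demo-001",
    "title": "Interviewing Without Bias",
    "format": "text/html",
    "duration_minutes": 20,
    "objectives": ["Recognize subtle discrimination in interview questions"],
    "prerequisites": ["lo-demo-000"],
}

# Any system that can read the metadata can decide whether this object fits a
# learner's needs, regardless of how the object itself was authored.
print(learning_object["title"])
```

The point of such a record is that the description travels with the component, so a training management system can assemble components it did not create.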

 

Weight Factors

The next step in evaluating the overall quality of a WBT application is to weigh the importance of each quality measure. As you look over these quality measures, some will seem very important and others less important. We could rank these quality measures and that would tell us what is most important and what is least. However, that would not tell us by how much. Establishing a weight factor answers how much and ensures a more objective outcome.

Imagine that we assemble ten instructional designers in a room to evaluate a WBT course on, let's say, diversity issues in hiring. The course includes a number of video clips showing subtle examples of discrimination. The instructional designers use a weighted scorecard that they develop to evaluate the training, and they issue a very positive report. Then we bring in ten network system administrators to evaluate the same course using their own weighted scorecard. Their report is far from enthusiastic. The system administrators consider bandwidth issues extremely important and the instructional design process quite unimportant; the instructional designers would likely hold the opposite view. By combining the weight factors for each quality measure as determined by members of each stakeholder group, we can come up with a composite that represents the interests of all stakeholders.

One could successfully argue that the groups themselves must be weighted. Learners should have the loudest voice in determining what is good and what is bad. As we develop a combined weight factor for each quality measure, it would help if most of the input came from learners.

The particular scale used for weight factors is unimportant; it only needs enough range to reflect importance relative to the other quality measures. Several measures can have the same weight factor, so you do not want to merely rank them by numbering from 1 to 22. We will use a scale of 1 (low importance) to 10 (high importance) to assign a weight factor to each measure.
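
The combination of group opinions into a composite weight factor can be sketched in a few lines. The group averages and "voice" proportions below are hypothetical numbers chosen for illustration, not results from the 1999 exercise; the only real constraint from the text is that learners get the loudest voice.

```python
# Sketch: combining per-group weight factors (scale 1-10) for one quality
# measure into a composite. All numbers are hypothetical illustrations.

# Average weight factor each stakeholder group assigned to the measure,
# e.g. "will not exceed practical bandwidth limitations of the network".
group_averages = {
    "learners": 6.0,
    "training_managers": 5.0,
    "designers": 4.0,
    "it_managers": 9.0,
}

# Relative voice of each group; learners get the loudest voice.
group_voice = {
    "learners": 0.4,
    "training_managers": 0.2,
    "designers": 0.2,
    "it_managers": 0.2,
}

# Weighted average of the group averages, still on the 1-10 scale.
composite = sum(group_averages[g] * group_voice[g] for g in group_averages)
print(round(composite, 1))
```

Note how the IT managers' strong opinion (9.0) pulls the composite up, but the learners' larger voice keeps it anchored near their own rating.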

In an exercise conducted in 1999, weight factors were assigned to each quality measure by a number of representatives of each stakeholder group. The resulting weight factor averages (with some adjustment to represent stakeholders in proportional numbers) form the basis of an objective quality measuring system.

 

Scorecard

Now that we have a list of value statements that express quality measures and corresponding weight factors for those measures, we can build a scorecard for evaluating online learning applications. The weight factors will be used to calculate an adjusted score and then a total score for the training application.

To make the scorecard truly effective in evaluating training in your situation, you might need to make further adjustments. For example, if you were comparing two products and they had different levels of record keeping, but record keeping was not an important issue, then you would eliminate that measure from the grading process. Remember that this tool is used to objectively measure quality, not suitability. It is possible that a course with a lower score could be better suited to your learner's needs than one scoring higher—the quality is lower but it is a better choice.

 

How To Use The Scorecard

Evaluate the courseware in each area, assign a score (0-4), and multiply that score by the weight factor to find the adjusted score: Score (S) multiplied by Weight Factor (WF) equals Adjusted Score (AS). Then total the adjusted scores for the course's overall score. You can change the scale for the score to something else (0-10, for example) as long as you are consistent across all quality measures.
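
The scorecard arithmetic above can be sketched as follows. The scores are hypothetical evaluations, and only the first three quality measures are shown; a real evaluation would include all 22.

```python
# Sketch of the scorecard arithmetic: Score (S) x Weight Factor (WF) =
# Adjusted Score (AS); the adjusted scores sum to the course's total score.
# The scores below are hypothetical, not real course data.

scorecard = [
    # (quality measure, score 0-4, weight factor)
    ("meets the objectives", 3, 5),
    ("is learner-centered", 4, 6),
    ("provides high levels of interactivity", 2, 6),
]

total = 0
for measure, score, weight in scorecard:
    adjusted = score * weight        # S x WF = AS
    total += adjusted
    print(f"{measure}: {adjusted}")

print("TOTAL SCORE:", total)         # 15 + 24 + 12 = 51
```

Because the total depends on which measures you include and which score scale you choose, totals are only comparable between courses graded with the same scorecard.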

The total score from one course can be used to compare it to the score from another. However, do not rely solely on quality scores to make purchase decisions, for there are other factors to consider: cost, compatibility with learning management systems, consistency with other courseware, reputation of vendor, and perhaps others. This scorecard can be one tool you use to objectively evaluate WBT quality.

 

Scorecard for Measuring WBT Quality

For each quality measure, enter a score (S) on a scale of 0 - 4, multiply it by the weight factor (WF), and record the adjusted score (S x WF = AS).

The training . . .

 1. . . . meets the objectives.
        S: ____   WF: 5   AS: ____
 2. . . . is learner-centered.
        S: ____   WF: 6   AS: ____
 3. . . . provides high levels of interactivity.
        S: ____   WF: 6   AS: ____
 4. . . . is engaging.
        S: ____   WF: 5   AS: ____
 5. . . . accommodates individual learning styles.
        S: ____   WF: 5   AS: ____
 6. . . . uses media effectively.
        S: ____   WF: 5   AS: ____
 7. . . . helps users apply learning productively.
        S: ____   WF: 6   AS: ____
 8. . . . adheres to the Instructional Systems Design (ISD) or similar model.
        S: ____   WF: 2   AS: ____
 9. . . . presents information in an organized, coherent manner while allowing user control of learning (cognitive usability).
        S: ____   WF: 5   AS: ____
10. . . . presents extended learning opportunities.
        S: ____   WF: 4   AS: ____
11. . . . has completed post-implementation evaluations and subsequent revision.
        S: ____   WF: 4   AS: ____
12. . . . demonstrates good usability through excellent user interface design.
        S: ____   WF: 4   AS: ____
13. . . . continually adapts to the user's knowledge or skills.
        S: ____   WF: 4   AS: ____
14. . . . validates learning at each curriculum event.
        S: ____   WF: 4   AS: ____
15. . . . uses group-enabling technologies (mail lists, chat, forums, multicasts) only where they are most effective.
        S: ____   WF: 4   AS: ____
16. . . . promotes a positive user experience with computer technology.
        S: ____   WF: 5   AS: ____
17. . . . records student data, such as login information, scores, usage statistics, prescriptions for learning, etc.
        S: ____   WF: 5   AS: ____
18. . . . will not exceed practical bandwidth limitations of the network.
        S: ____   WF: 5   AS: ____
19. . . . is easy to access, easy to install.
        S: ____   WF: 6   AS: ____
20. . . . ensures best value for training costs.
        S: ____   WF: 6   AS: ____
21. . . . content is accurate and timely.
        S: ____   WF: 5   AS: ____
22. . . . follows industry standards for interoperability.
        S: ____   WF: 3   AS: ____

TOTAL SCORE: ____


 

This scorecard is available for download as a Microsoft Word document or Microsoft Excel spreadsheet.

 

Copyright © 1994 - 2009 Tim Kilby. All rights reserved.