

Editor’s Note: This article shows how the choice of instructional design elements can positively or negatively affect cognitive learning.

A Learner’s Cognitive Levels of Thought

Eshaa M. Alkhalifa

Introduction

Learning has long been regarded as a cognitive activity by a large number of world-renowned researchers, including Piaget (1970) and Vygotsky (1962). As research advanced and human knowledge in this field grew exponentially, new fields of research emerged, one of which is Cognitive Science. However, due to the novelty of the field, only sporadic attempts have been made to use its findings to evaluate existing teaching approaches from the cognitive perspective of the learner. This article presents a taxonomy, based on Bloom’s taxonomy of learning objectives (1971), of the major cognitive levels of thought that can be elicited by existing computer-based educational system approaches.

Background

The process of learning, as currently regarded, is accomplished by exposing students to novel material, or to a novel approach to solving a problem, and then expecting learners to “recall” the essential parts of what was presented. For example, if what is presented is a definition of a new concept such as “computers”, then students are expected to be able to describe what computers are in their own words, based on what they “understand” and “remember”. Learning a process differs in that it may involve the sequence followed to solve a problem, as in mathematics, or the sequence plus physical motor activities, as in learning how to drive a car, how to operate on a patient, or how to repair a car engine.

Both major types of learning require learners to “recall” some of what was presented, so the cognitive activity of “memory” is necessary. Both also require learners to reason: once they have seen several mathematical examples, they should be able to extract the main rules to follow in solving a novel question. In short, it is extremely difficult to regard the learning process as distinct from cognitive processing, a view shared by well-known researchers including Jonassen (1991), van Jooligen (1999), Albacete and VanLehn (2000a, 2000b), and Alkhalifa (2005, in press a).

The persistent goal is to achieve a clearer understanding of how cognition influences learning, and to use those findings to make the learning process more efficient.

Jonassen (1991), for example, is one advocate of the constructivist approach to learning, in which students play an active role in the learning process. In this approach students are given tools that relieve them of repetitive computation or that externally display text they would otherwise have to recall (as when writing a paper), allowing them to focus on the learning task at hand. He adopts the assumption, originally proposed by Lajoie and Derry (1993; Lajoie, 1990), that computers act as cognitive extensions by performing tasks that support basic thinking requirements, such as calculating or holding text in memory, which led them to label computers “Cognitive Tools”. Jonassen’s (1991) central claim is that these tools lower the cognitive load imposed during the learning process and thereby facilitate learning by experimentation and discovery. However, no experimental evidence was presented to support the claim that students learn more, or learn differently, with these designs than in a classical classroom setting.

Wouter van Jooligen (1999) takes this concept a step further by proposing an environment that allows students to form hypotheses and pursue their consequences. Two systems were presented: the first supports the hypothesis-formation step by providing several windows that help students compose their hypotheses, while the second presents already-tested experiments, and their results, in a structured format. Intelligent support was added in the form of feedback that guides students’ hypothesis formation. Yet again, the work lacked a proper comparative evaluation.

Albacete and VanLehn (2000a, 2000b), by contrast, recognized the cognitive gap between naïve students’ ill-structured knowledge of conceptual physics and the highly structured knowledge of experts in the field. Consequently, the system they presented concentrates on teaching students how the various concepts relate to each other. Evaluation results exhibited no significant differences between the learning outcomes of the control group and the experimental group. Albacete and VanLehn (2000b) then used alternative means of analysis to highlight differences in learning between the groups: the first was measuring effect size as done by Bloom (1984); the second was comparing results to nationwide scores on a standardized test; the third was considering how much students with different pre-test scores learned relative to each other.

The lack of structure in this line of work led Alkhalifa (in press a, 2005) to offer several formalization possibilities, depending on the system designer’s goals. One example (in press a) is an alignment guide for multimedia system designers who wish to take the effects of cognitive processing characteristics into account; this formalization is supported by the positive effects on learning observed with a cognitively informed design of a multimedia educational system. Another example (2005) is a framework for the evaluation of multimedia systems in which cognitive factors and individual differences are taken into account; results there indicate that neglecting these factors may produce false-negative evaluation outcomes.

Although these prior findings highlight how to take advantage of system design and how educational systems can be better evaluated, they give no clear indication of how different designs may be compared to each other from the perspective of the cognitive system.

This necessitates the introduction of a clear taxonomy, with well-defined practical boundaries, of the relative load imposed on the cognitive system during learning. Bloom’s taxonomy of learning objectives therefore offers an ideal starting point for a new taxonomy of the cognitive levels of thought that can be elicited by the characteristics of any learning approach.

The taxonomy presents the levels using terms that are well defined in the field of cognitive science, with the goal of making it possible to elicit any particular level through the various possible approaches to learning. A case study that compared teaching students by eliciting two different cognitive levels showed that the cognitive level elicited interacts significantly with the complexity of the taught material, to the degree that a mismatch may retard learning (Alkhalifa, in press b).

Table 1
Taxonomy of Cognitive Levels of Thought
as Elicited by the Teaching Medium

Each elicited cognitive level is paired below with its counterpart in Bloom’s taxonomy.

1. Simple recall: Cognitive processing retrieves what was presented from memory without necessarily comprehending it.
   Bloom’s counterpart is Knowledge: students can remember what was presented to them word for word, as in recalling prices of goods.

2. Language comprehension (descriptive knowledge): Students are presented with the materials linguistically or through animation, which results in a mental representation of the learned concepts. This representation can then be evaluated by requesting a descriptive representation of the concept.
   Bloom’s counterpart is Comprehension: students can explain what they learned in their own words, as in interpreting instructions.

3. Reasoning and deduction (procedural knowledge): Students learn processes, or steps followed in a sequence, which requires a form of simple reasoning to take them from what is given to the deductions they can make.
   Bloom’s counterpart is Application: students can apply what they learned in a new situation, as in calculating an employee’s remaining vacation time.

4. Analogical reasoning:
   a. Learning from analogies: Students presented with analogies from completely different domains can compare the structure of one domain as it maps onto the other in order to make comparisons and analyses.
      Bloom’s counterpart is Analysis: students can break materials presented to them into their components, as in troubleshooting a piece of equipment using logical reasoning.
   b. Creating new analogies: This is a higher level of analogical reasoning, in which one searches memory for a comparable domain and selects an analogical situation similar to the one being presented. Students have the components in their prior knowledge and create the structure by describing the system they composed.
      Bloom’s counterpart is Synthesis: students build a structure or pattern from diverse elements, as in designing a machine to perform a task.

5. Meta-reasoning: A student regards work from an evaluator’s point of view and considers the reasoning followed by others to arrive at their conclusions.
   Bloom’s counterpart is Evaluation: students make judgments about the value of ideas or materials, as in selecting the most effective solution or hiring the most qualified candidate.

The first experiment gave students a number of mathematical series problems in order to identify the different types of errors they make while solving them; this resulted in the isolation of six main types of errors.

Additionally, the questions can be divided, with respect to the complexity of the materials, into two distinct levels, as shown below:

series 1: Students are expected to produce the form $\sum_{i=1}^{5} 3i$ when given 3 + 6 + 9 + 12 + 15.

series 2: Students are expected to produce the form $\sum_{i=1}^{5} 3^i$ when given 3 + 9 + 27 + 81 + 243.

These levels of processing are subject to the burden of interaction between elements, as defined by John Sweller (1994). Students are expected to dissect each number into its components so that they comprehend the relationship preserved between the terms. One possibility is as follows:

series 1: 3 x 1 + 3 x 2 + 3 x 3 + 3 x 4 + 3 x 5

series 2: 3 x 1 + 3 x 3 + 3 x 3 x 3 + 3 x 3 x 3 x 3 + 3 x 3 x 3 x 3 x 3

Applying such transformations to the first and second series is only an intermediate step towards identifying the summation notation. For the first, it is immediately clear that the terms are multiples of 3 and that the index runs from 1 to 5. For the second, the index has to be counted, because it is represented by the number of times the number 3 is multiplied by itself. This places series 2 at a higher level of complexity than series 1.
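The dissection step can be made concrete with a short sketch in Python; the function name and the plain-text summation strings below are illustrative choices, not part of the original system:

    # Sketch of the dissection step: factor each term against the base (3)
    # to decide whether the series is a sum of multiples or a sum of powers.
    def describe_series(terms, base=3):
        """Return a plain-text summation form for `terms`, or None."""
        n = len(terms)
        if all(t == base * (i + 1) for i, t in enumerate(terms)):
            # series 1 pattern: 3x1 + 3x2 + ... -> sum over i of base*i
            return f"sum_(i=1)^{n} {base}*i"
        if all(t == base ** (i + 1) for i, t in enumerate(terms)):
            # series 2 pattern: 3 + 3x3 + 3x3x3 + ... -> sum over i of base^i
            return f"sum_(i=1)^{n} {base}^i"
        return None  # neither pattern fits

    print(describe_series([3, 6, 9, 12, 15]))    # sum_(i=1)^5 3*i
    print(describe_series([3, 9, 27, 81, 243]))  # sum_(i=1)^5 3^i

Note that the extra complexity of series 2 appears here as the need to count repeated multiplications to recover the exponent, exactly the step described above.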

The system compares these two levels through two modules used to teach students how to solve this type of problem. The first module is interactive: it allows students to insert different values and computes the resulting series live, so it elicits the “Analogical Reasoning” level of cognitive processing. It displays the different outcomes that emerge from conditions set by the learner, and the learner is expected to generalize from the specific cases tested.
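A minimal sketch of what such an interactive core might look like follows; generate_series and its parameters are hypothetical names chosen for illustration, since the module’s actual interface is not described here:

    # Hypothetical core of the interactive module: the learner chooses a
    # base, an operation, and a length; the system builds the concrete
    # series and its sum "live", leaving the generalization to the learner.
    def generate_series(base, length, operation="multiple"):
        if operation == "multiple":        # 3, 6, 9, ...   (base * i)
            terms = [base * i for i in range(1, length + 1)]
        elif operation == "power":         # 3, 9, 27, ...  (base ** i)
            terms = [base ** i for i in range(1, length + 1)]
        else:
            raise ValueError(f"unknown operation: {operation}")
        return terms, sum(terms)

    terms, total = generate_series(3, 5, "power")
    print(" + ".join(map(str, terms)), "=", total)  # 3 + 9 + 27 + 81 + 243 = 363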

The second module cannot exist in isolation from the first, because it studies student responses to a test in order to infer the common errors made by that particular student, and then reproduces, through new examples, that student’s behavior in front of them. Part of the screen displays the ideal solution, allowing students to regard their own behavior from an instructor’s point of view. Consequently, this module elicits the “Meta Reasoning” level of cognitive thought, in which a student analyzes his or her solution procedure in comparison with an ideal procedure.
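The sketch below illustrates the mirror-modeling idea under the same caveat: the buggy rule shown (starting the series index at 0) is an invented example, whereas the actual module infers each student’s error types from test responses:

    # Hypothetical mirror-modeler replay: a recurring error is stored as a
    # "buggy rule" and re-applied to a new problem beside the ideal solution.
    def ideal_sum(base, n):
        return sum(base * i for i in range(1, n + 1))

    def index_from_zero(base, n):
        # invented example of a buggy rule: the index starts at 0, not 1
        return sum(base * i for i in range(0, n))

    BUGGY_RULES = {"index_from_zero": index_from_zero}

    def mirror(rule_name, base, n):
        student = BUGGY_RULES[rule_name](base, n)
        print(f"Your approach gives {student}; the ideal solution is {ideal_sum(base, n)}.")

    mirror("index_from_zero", 3, 5)  # Your approach gives 30; the ideal solution is 45.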

For the first part of the evaluation, 21 students took a pre-test, used the interactive module, and then took a post-test. For the second part of the evaluation, 12 students took the pre-test, used the interactive module followed by the mirror modeler, which showed them how they would solve sample problems compared to the ideal approach, and then took the post-test. All the tests were composed of three question types for comparative purposes: division, multiplication, and power operations.

Analysis of Results

The initial experiment isolated six error types for each question type, for each student. The results of the first part of the evaluation, in which students were exposed to the interactive system alone, are shown in Table 2.

If the number of errors and the number of correct question parts in each column are compared for the pre-test, no significant differences emerge. This implies that the three types of questions do not differ in difficulty. Running the same test on the post-test data gives a chi-square value of 5.914 with p < 0.05, so students learn each operation differently from the others.

Table 2
Number of errors in the three operations in the pre- and post-tests,
in addition to the percentage improvement made by students

                              Division     Multiplication   Power
Pre-test errors               56           70               54
Post-test errors              14           25               28
Improvement (% of total)      33.3%        35.7%            20.6%
Chi-square significance       p < 0.0000   p < 0.0000       p < 0.0007

More detailed testing between the division operation and the power operation yields a Yates-corrected chi-square value of 7.299 with p < 0.007. A large difference also exists between the multiplication operation and the power operation, but it is not a significant one.
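Tests of this kind can be outlined with a standard chi-square routine, as in the following sketch. The total of 126 question parts per operation is inferred from the reported improvement percentages (for example, (56 - 14) / 126 = 33.3%) rather than stated in the text, so both the total and the resulting statistic are assumptions and need not match the reported value of 7.299:

    # Chi-square comparison of post-test errors, division vs. power.
    # TOTAL = 126 question parts per operation is an inferred assumption.
    from scipy.stats import chi2_contingency

    TOTAL = 126
    post_errors = {"division": 14, "power": 28}

    table = [
        [post_errors["division"], TOTAL - post_errors["division"]],  # errors, correct
        [post_errors["power"], TOTAL - post_errors["power"]],
    ]
    chi2, p, dof, expected = chi2_contingency(table, correction=True)  # Yates correction
    print(f"chi-square = {chi2:.3f}, p = {p:.4f}")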

The results in Table 2 show that no significant differences in difficulty exist as students start the learning process, but differences do exist when the amount of learning achieved for each operation with the same interactive instructional system is compared. So although learning occurs for all three operations while using the interactive module, the total gain and the nature of this learning differ from one operation to the next, in a way that is consistent with the implications of cognitive load theory, because the operations differ in the complexity of the learned materials.

Additionally, the results of the second part of the evaluation, in which students were exposed to the interactive system followed by the mirror modeler, are shown in Table 3.

Table 3
Number of errors in the three operations in the pre- and post-tests

                              Division    Multiplication   Power
Pre-test errors               6           21               10
Post-test errors              0           1                17
Improvement (% of total)      8.3%        27.8%            -9.7%
Chi-square significance       p < 0.037   p < 0.000        p < 0.200


Analysis of student responses showed that, overall, 37 errors were made in the pre-test and 17 in the post-test, with a probability of p < .001 that this happened by chance. Table 3 breaks these errors down by question type.

The results of using the interactive tutoring module followed by the mirror modeler show a clear difference between the division, multiplication, and power operations. The division and multiplication operations both recorded significant improvements in student performance, while the power operation was not significantly affected by the modules presented. This is further evidence for the assumption that the difference in cognitive load between the multiplication and division operations on the one hand and the power operation on the other caused a serious difference in the amount of learning achieved as students utilized these two modes of learning.

If the two evaluations are compared, we find that students learned from the interactive hypermedia system in all operations, but learning was greater in the division and multiplication operations, which impose a lower cognitive load than the power operation. This implies that the results obtained in the second part of the evaluation for the power operation can only have arisen if the mirror modeler hindered learning for that operation.

Elicited Cognitive Levels of Thought

A taxonomy of the learner’s cognitive levels of thought is presented here to guide educational system designers in determining their cognitive objectives and achieving them. These levels describe how different existing approaches to learning can result in different levels of cognitive load. A case study is presented to show that eliciting a higher level of thought during learning is not always desirable when students are exposed to more complex materials, although it does encourage learning for simpler materials. This highlights the need to determine the cognitive level elicited by the teaching medium that is most appropriate for maximizing the amount of learning that occurs. The taxonomy therefore offers itself as a meter against which comparative measurement can take place.

Future Trends

Since learning is a cognitive activity, it is logical for learning to be affected by the characteristics of the cognitive system. The presented taxonomy offers a series of benchmarks classified by distinct areas of research in Cognitive Science. The levels are therefore practical in that they can be elicited by existing approaches, which implies that they are measurable and subject to evaluation. Further work in this direction is also likely to inform cognitive scientists about the application side of their theoretical work.

Conclusion

A taxonomy of the learner’s cognitive levels of thought is presented as a meter of comparison for educational system design. No similar meter exists to estimate the cognitive load imposed on the learner by different educational system settings. None of the levels can be described as better or worse than the others, as each has a purpose and each interacts differently with the learner’s cognitive state. Yet they are extremely important, because ignoring the effects of cognitive load may result in situations where learning is retarded simply by a mismatch between the learned materials and the method of presentation.

References

Albacete, P. L. & VanLehn, K. (2000a). The Conceptual Helper: An intelligent tutoring system for teaching fundamental physics concepts. Intelligent Tutoring Systems: 5th International Conference, Montreal, Canada. Gauthier, Frasson, VanLehn (eds), Berlin: Springer (Lecture Notes in Computer Science, 1839), 564-573.

Albacete, P.L & VanLehn, K. (2000b). Evaluating the effectiveness of a cognitive tutor for fundamental physics concepts. In L. R. Gleitman & A. K. Joshi (Eds) Proceedings of the 22nd Annual Meeting of the Cognitive Science Society. Mahwah, NJ: Erlbaum. 25-30.

Alkhalifa, E. M. (in press a). The Alignment of Multimedia Interfaces with Learner Cognitive Characteristics. The Encyclopedia of Human Computer Interaction. Idea Group Inc.

Alkhalifa, E. M. (in press b). Effects of the cognitive level of thought on learning complex material. Journal of International Forum of Educational Technology and Society, IEEE.

Alkhalifa, E. M. (2005). Multimedia Evaluations Based on Cognitive Science Findings. Encyclopedia of Information Science and Technology, Vol. I-III. Idea Group Inc.

Bloom, B.S. (1984). The 2 sigma problem: The search for methods of group instruction as effective as one-to-one tutoring. Educational Researcher, 13, 4-16.

Bloom, B.S., Hastings, J.T. & Madaus, G.F. (1971). Handbook on formative and summative evaluation of learning. New York: McGraw-Hill.

Jonassen, D.H. (1991). Objectivism vs. Constructivism: Do we need a new philosophical paradigm shift? Educational Technology: Research & Development, 39(3).

Lajoie, S. & Derry, S. (1993). Computers as Cognitive Tools. Hillsdale, NJ: Erlbaum.

Lajoie, S.P. (1990) Computer environments as cognitive tools for enhancing mental models. Paper presented at the annual meeting of the American Educational Research Association, Boston, MA, April 16-20.

Piaget, J. (1970). The Science of Education and the Psychology of the Child. NY: Grossman.

Sweller, J. (1994). Cognitive load theory, learning difficulty and instructional design. Learning and Instruction, 4, 295-312.

Van Jooligen, W. (1999). Cognitive Tools for Discovery Learning. International Journal of Artificial Intelligence in Education, 10, 385-397.

Vygotsky, L.S. (1962). Thought and Language. Hanfmann, E. & Vakar, G. (Eds.). Cambridge, MA: MIT Press.

Terms and Definitions

Cognition: The psychological result of perception, learning and reasoning.

Cognitive Load: The amount of cognitive processing required to accomplish a specific task.

Cognitive Science: The field of science concerned with cognition; it includes parts of cognitive psychology, linguistics, computer science, cognitive neuroscience, and philosophy of mind.

Cognitive Tool: A tool that reduces the cognitive load required by a specific task.

Interactive System: Any computer-delivered electronic system that allows users to enter information and reacts to users’ choices in a preprogrammed fashion.

Taxonomy: Division of materials into categories or ordered groups.

About the Author

Dr. Eshaa M. Alkhalifa is Director of Information, Data Analysis & Statistics, Deanship of Admissions and Registration, University of Bahrain, Kingdom of Bahrain.

Contact: eshaa@silvertair.com
 
