Participants were given incomplete descriptions of interactions between variables, with an accompanying set of bar graphs representing the interactions. They were then required to complete the descriptions so that they correctly described the graphs.
“At the level of the four-way interactions, participants made comments such as ‘Everything fell apart and I had to go back’,” Professor Halford said.
“Only chance levels of performance were obtained for five-way interactions.”
The results have implications for the design of high-stress work environments such as the coordination of fire-fighting operations.
“If the number of variables to be considered exceeds human processing capacity then the worker will drop his or her mental bundle and become unable to proceed,” Professor Halford said.
“More seriously, the worker may revert to a simplified version of the task that does not take all aspects into account and therefore may make the wrong decision.
“This type of problem is particularly acute in tasks that have to be performed under time pressure or where unusual combinations of circumstances are likely to arise.
“Modern high-technology industries produce many situations of this kind because of the number of variables that have to be taken into account in decision making.”
Professor Halford's team included Dr Rosemary Baker and Dr Julie McCredden from UQ's School of Psychology and Professor John D Bain from Griffith University.
Their results showed that as the complexity of the interaction increased, performance and confidence levels dropped significantly.
“While all levels of complexity are logically possible, the evidence suggests that they are not cognitively manageable,” Professor Halford said.
Professor Halford said complex ideas were conceptual structures built in the temporary work area of the mind called working memory. His findings are the outcome of a decade of research investigating tasks that push cognitive processing to its limits.
“Four-way interactions require humans to represent relations between relations between relations between pairs of bars, which can be reframed mathematically as a four-dimensional task,” he said.
“We found that four dimensions are the most that humans can conceive of.
“Therefore, if the world were five-dimensional rather than three-dimensional, we would not be able to understand it.”
Conference at West Point focuses on the challenges of IA
Information Assurance (IA) is a technique used by large organizations, such as the military, to manage large volumes of information. Its goal is to make sure the information used is transmitted and computed in a non-corrupted state. Developers can use some of these techniques in their own work to keep information states pure. In this two-part article on the IEEE Systems, Man, and Cybernetics Information Assurance Workshop, Larry Loeb looks at the evolution of IA and what it means from a security standpoint. Here in Part 1, he explores Dr. Eugene Spafford's keynote address and takes a detailed look at SITAR, an architecture that was presented at the conference.
When an organization is so large that information becomes another fungible commodity for it to use, it wants and needs assurance that the information it feeds on is accurate and untainted. Recent consolidations in the technical industry (especially in the aerospace sector) have created even larger organizations than were the norm just a few years ago. This consolidation parallels the rise of "information assurance" (IA) as an IEEE special interest group in the last year. Just as the actual quality of a product is only one part of "quality assurance," so security is only one part -- albeit central -- of the overall information assurance effort. What is this IA stuff?
By itself, security is usually implemented in large organizations as a threat-reactive process. It deals with the abnormal, which is measured relative to what is agreed to be the normal configuration of something -- as in, someone hacks your network and you respond to the hack. IA is more than this: It includes security, sure -- but as a metric. In IA situations, the outcome of security processes must be measured, and the results of those outcomes reported so that they can be effectively acted on. This closes the loop on an organization's information flow. Any organizational researcher will tell you how useful it is to provide feedback to a group effort, and that's what IA should be doing on a macro scale.
Figure 1: Information assurance model
Figure 1 delineates three of the four dimensions of IA (the fourth being time). Over time (and change) there can be several of these discrete "McCumber"-like models (so named from John McCumber's 1991 paper on INFOSEC) along the timeline of an organization. Each of the models might not link to other ones, but they still reflect the concerns that IA deals with.
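As a rough illustration, the three model dimensions plus time can be sketched in code. The axis values below follow McCumber's 1991 INFOSEC model (information states, security goals, countermeasures); the class and field names are my own illustrative choices, not anything presented at the conference.

```python
from dataclasses import dataclass, field
from enum import Enum
import time

class State(Enum):           # where the information is
    TRANSMISSION = "transmission"
    STORAGE = "storage"
    PROCESSING = "processing"

class Goal(Enum):            # what must be preserved
    CONFIDENTIALITY = "confidentiality"
    INTEGRITY = "integrity"
    AVAILABILITY = "availability"

class Countermeasure(Enum):  # how it is protected
    TECHNOLOGY = "technology"
    POLICY = "policy and practice"
    EDUCATION = "education and training"

@dataclass
class ModelSnapshot:
    """One discrete McCumber-like model along the organization's timeline."""
    state: State
    goal: Goal
    countermeasure: Countermeasure
    ts: float = field(default_factory=time.time)  # the fourth dimension: time

# One cell of one snapshot: protecting integrity in transit with technology.
snap = ModelSnapshot(State.TRANSMISSION, Goal.INTEGRITY, Countermeasure.TECHNOLOGY)
```

An organization's IA timeline would then be a sequence of such snapshots, each reflecting the concerns in force at that moment.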
The large organizations that are trying the IA approach hope to automate the underpinnings of information collection while at the same time implementing any security services that might be needed. At this high level of data flow, automation of some processes is both desirable and necessary. Otherwise, decision makers drown in data -- much like reading raw console traffic. Instead of an Intrusion Detection System (IDS) ringing the system administrator's pager and stopping there, IA would have the IDS post the event to a feedback file (not just a console log) for later review. Whatever the system administrator does in response should also be picked up and put into the same feedback file. Ideally, all the security efforts in a system are integrated into the IA review. (cont.)
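The feedback file idea can be sketched minimally: every IDS event, and the administrator's response to it, is appended as a structured record to one shared file for later IA review. This is a hypothetical sketch; the file name, record fields, and event details are all assumptions for illustration.

```python
import json
import time

FEEDBACK_FILE = "ia_feedback.jsonl"  # assumed name; one JSON record per line

def post_event(source, kind, detail, path=FEEDBACK_FILE):
    """Append one structured record to the shared feedback file."""
    record = {
        "ts": time.time(),   # when it happened
        "source": source,    # e.g. "ids" or "sysadmin"
        "kind": kind,        # e.g. "alert" or "response"
        "detail": detail,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

def load_feedback(path=FEEDBACK_FILE):
    """Read the whole feedback file back for an IA review pass."""
    with open(path) as f:
        return [json.loads(line) for line in f]

# The IDS posts an alert; the admin's action lands in the same file,
# so the review sees both the event and the response together.
post_event("ids", "alert", "port scan from 203.0.113.7")
post_event("sysadmin", "response", "blocked 203.0.113.7 at the firewall")
```

A later IA review pass would read the file back and measure outcomes -- for example, how many alerts drew a recorded response -- closing the loop the article describes.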