Module 9 — Human Factors
9.1 — General
Human Factors is the study of how humans interact with their working environment — the equipment, procedures, other people, and the physical surroundings. In aviation maintenance, understanding human factors is not optional; it is a regulatory requirement and a matter of life and death. Research consistently shows that approximately 80% of maintenance errors involve human factors. This module equips maintenance engineers with the knowledge to recognise, understand, and mitigate the human contributions to error.
The Need to Take Human Factors into Account
Aircraft maintenance is performed by people — and people are fallible. No matter how skilled, experienced, or conscientious an engineer is, they will make errors. The goal of human factors training is not to eliminate human error (which is impossible) but to:
- Understand why errors occur — the underlying psychological, physiological, and organisational causes.
- Recognise the conditions that make errors more likely — fatigue, time pressure, poor communication, inadequate resources.
- Design systems that are error-tolerant — procedures, checklists, inspections, and organisational structures that catch errors before they cause harm.
- Promote a safety culture where reporting errors and hazards is encouraged, not punished.
The SHELL Model
The SHELL model (developed by Edwards, 1972, modified by Hawkins, 1987) illustrates the relationships between humans and their working environment. The model places the individual (Liveware) at the centre, surrounded by four elements they interact with. The edges between the centre and each surrounding block are irregular — representing the need to match and adapt the interfaces.
SHELL Interfaces Explained
| Interface | Description | Example of Mismatch |
|---|---|---|
| L–H (Liveware–Hardware) | How the person interacts with tools, equipment, controls, and displays | Controls placed out of reach; displays too small to read; tools that don't fit the user's hand size |
| L–S (Liveware–Software) | How the person interacts with procedures, manuals, checklists, and regulations | Procedures that are ambiguous, too complex, or contradict each other; poorly written manuals |
| L–E (Liveware–Environment) | How the person is affected by the physical work environment | Excessive noise preventing communication; poor lighting during inspections; extreme temperatures |
| L–L (Liveware–Liveware) | How people interact with each other — communication, teamwork, supervision | Poor shift handover; language barriers; lack of assertiveness; personality conflicts |
Incidents Attributable to Human Factors/Human Error
The aviation industry has learned many of its human factors lessons through tragedy. Key maintenance-related accidents that shaped human factors awareness include:
| Event | Human Factors Issues | Outcome / Lesson |
|---|---|---|
| Aloha Airlines 243 (1988) | Inadequate inspection of ageing structure; complacency; organisational failures | 18 feet of fuselage roof separated; one flight attendant was lost. Led to ageing aircraft safety programmes. |
| British Airways BAC 1-11 (1990) | Wrong bolts used in windscreen replacement (84 of the 90 bolts were too small in diameter); normalisation of deviance; time pressure; poor stores procedures | Windscreen blew out at about 17,300 ft in the climb. Captain partially sucked out; he survived. |
| Continental Express 2574 (1991) | Shift handover failure: 47 screws left out of the horizontal stabiliser leading edge after de-icing boot removal. The incomplete task was not communicated. | In-flight breakup. 14 fatalities. Led to improved handover procedures. |
| China Airlines 611 (2002) | Inadequate repair of tail strike damage 22 years earlier; repair did not meet SRM requirements; fatigue crack grew undetected | In-flight breakup. 225 fatalities. Highlighted importance of repair quality. |
| Chalk's Ocean Airways 101 (2005) | Corrosion and fatigue in wing spar not detected during inspections; inadequate training; normalisation of deviance | Wing separated in flight. 20 fatalities. Emphasised inspection quality. |
Murphy's Law
"Anything that can go wrong, will go wrong."
Attributed to Captain Edward A. Murphy Jr., a US Air Force engineer, in 1949 during rocket-sled deceleration tests at Edwards Air Force Base. A technician had wired all 16 strain gauges backwards. Murphy's observation was about designing systems to prevent human error — not about pessimism.
Application in Aviation Maintenance
- Design for error prevention — if a connector can be plugged in backwards, someone eventually will plug it in backwards. Use keyed connectors, poka-yoke (mistake-proofing), and asymmetric designs (see the sketch after this list).
- Assume errors will happen — design systems with multiple layers of defence (checklists, independent inspections, built-in test equipment (BITE)).
- Plan for the worst case — if a failure mode exists, it will eventually occur. Identify failure modes and mitigate them proactively.
- Don't rely on human perfection — procedures, training, and good intentions alone do not prevent errors. System design must account for human fallibility.
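To make the mistake-proofing idea concrete, here is a minimal sketch in Python, purely illustrative (the line types and function name are hypothetical, not taken from any real aircraft system). Because the two connections have distinct types, swapping them is caught before anything runs, just as a keyed connector physically refuses to mate the wrong way round:

```python
# Poka-yoke (mistake-proofing) expressed in software terms: make the wrong
# connection impossible to express, rather than relying on care.
from dataclasses import dataclass

@dataclass(frozen=True)
class PitotLine:
    total_pressure_kpa: float

@dataclass(frozen=True)
class StaticLine:
    static_pressure_kpa: float

def connect_airspeed_indicator(pitot: PitotLine, static: StaticLine) -> float:
    """Return dynamic pressure (total minus static), the quantity an ASI senses."""
    return pitot.total_pressure_kpa - static.static_pressure_kpa

# Correct hook-up works as intended:
print(connect_airspeed_indicator(PitotLine(103.5), StaticLine(101.3)))  # ~2.2 kPa

# A swapped hook-up is rejected by a static type checker (e.g. mypy)
# before it ever "flies":
# connect_airspeed_indicator(StaticLine(101.3), PitotLine(103.5))
```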
If there are n ways to perform a task, and one of them leads to a catastrophic outcome, then someone, at some point, will perform it that way. The question is not if but when. The correct response is to make the catastrophic way impossible, not simply to tell people to be more careful.
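This inevitability can be made quantitative. As an illustrative assumption, suppose each performance of a task carries a small probability p of being done the catastrophic way; over N independent performances, the probability that it happens at least once is

P = 1 − (1 − p)^N

With p = 1/1,000 and N = 10,000 (of the order of a fleet performing the task daily for a few years), P ≈ 1 − e^(−10) ≈ 0.99995, a near-certainty. Even very rare errors become practically inevitable at fleet scale, which is why the robust defence is to remove the catastrophic option from the design rather than to exhort greater care.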