Exceptional Release Presents:
Leveraging Integrity and Trust:
A Call for Values-Based Compliance in U.S. Air Force Aircraft Maintenance
By: Capt Dave Loska
27 February 2020 (Published in Winter Exceptional Release)
“Unless the progressive elements that enter into our makeup are availed of, we will fall behind in the world’s development.”
− Brigadier General William L. “Billy” Mitchell, Winged Defense: The Development and Possibilities of Modern Air Power, Economic and Military, 1925
A forward-looking, learning, and fair-minded aviation culture is an ideal. You could also say that in a country that was first in flight, first to fly around the globe, first to fly to the moon, and is still pioneering in altitude, duration, and effectiveness, it is an American ideal. And yet, some ideologies contend against progress: punitive management, the prosecutorial imperative in the face of accidents, mishaps, and error, and fear itself. In a broad sense, we all recognize it when we see it. We empathize with scenes from films such as First Man, where Neil Armstrong, played by Ryan Gosling, declares after a near-fatal accident, “We need to fail! We need to fail down here, so we don’t fail up there!” We relate to these monumental learning moments because they are ingrained in our cultural psyche, and we know that their resulting successes transformed our world. Yet we struggle to see the same prospect and importance of failure at the tactical level, on the flight line, where our Airmen regularly pioneer into the unknown.
To maintain a balanced compliance culture at the tactical level, do we treat failure and error with consistency and fairness, pursuing the root cause instead of attributing blame? Do we promote accountability and communication with a learning-system approach, in keeping with a values-based compliance culture and a forward-looking vision? Or do we offer imposing, constricting, subjective policies and even reflexively punitive managerial behaviors: barriers? What effect does this have on performance and innovation? I believe we do a bit of both, and that is a big part of the problem. We are consistently inconsistent, and effectively, it breaks the circuit of integrity within our US Air Force maintenance organizations, leaving us operationally fragmented. Additionally, although we are still our nation’s engine of air superiority, I believe we are not firing on all cylinders.
By taking an honest look at our organization, by engaging in “critical self-examination and innovative thinking to identify solutions to organize, train, equip, and ultimately enable tomorrow’s Joint Force,”(1) by asking whether a “…culture of compliance and innovation (are) mutually exclusive?”(2) and by communicating and cooperating, we can build a culture where they genuinely coexist. By using values-based frameworks, such as Just Culture, we can increase our strength and thaw the “frozen middle,”(3) overcoming the mindsets, practices, and policies that act as barriers to our growth. Troubleshooting a culture is like troubleshooting an aircraft: both require abstract diagnostics and resolute fault isolation. The ultimate reward comes from discovering the break in the circuit and restoring the livewire.
The intent of this paper is to start a conversation, or rather to provoke the resurgence of a forum in which a much older conversation can come to the forefront, offering honesty and transparency to generate ideas and reveal solutions for us: from the scrape of the wire brush against the seized component to the scratch of the policy pen, and for those Airmen, NCOs, SNCOs, and Officers who support our policies and accomplish our mission. It examines AF MX compliance policy, practice, and culture as they intertwine, presenting an opinion mainly from a practitioner’s perspective. It concludes with three recommendations to the USAF on doctrine, policy, and practice. I believe we must discuss these matters, and reconcile these disagreements, if USAF MX desires to consider itself transformational.
Motivation
You might think that a focus on compliance culture reform as the single most significant opportunity for improvement in AF MX is a bit narrow. There are so many areas in which to progress. You might not even see a problem yet. My motivation for this view is probably rooted in my upbringing. As a kid from Chicagoland without much prior knowledge of tools-and-their-uses, joining the Navy to fix attack aircraft on the flight deck of an aircraft carrier was an uphill learning experience; you could even say a listing deck. I made a lot of mistakes, nearly all that could be made. For example, there was an ill-fated day when, as a “seasoned” Plane Captain, I suspended flight deck ops, causing unfathomable bolters and go-arounds. This came after teaching my flight deck trainees the convenient workaround of “deep-sixing” unaccountable shop rags off the side of the ship, in a task-centric effort to save the time of hand-carrying them below decks to later be tossed from the ship’s stern. The pile of rags caught a very unfortunate gust of wind, and the rest, as they say, is a two-beer story, but it ended with my first (and only) unpleasant encounter with the flight deck “Dog,” whom I met with my tail between my legs. After learning through many failures, and kicking some butt atop the pointy end of a 100K-ton spear, I unexpectedly fell in love with aviation, taking flying lessons and working as an A&P after my enlistment. I also learned the power of the trust of a few good mentors, some of whom were my Quality Assurance Representatives (QARs), who saw potential in me even in my worst moments. This trust instilled in me a sense of purpose, which over time cultivated an understanding and ownership of broader organizational objectives, leading to greater adaptability as a technician and leader. As an Aircraft Maintenance Officer in the USAF, I have worked alongside excellent and inspirational leaders at every grade. I have found a culture that sacrifices everything of itself for service. Over time, however, I have become sensitive to some very confusing elements in the USAF MX culture, aspects that seem contradictory to a maintenance culture of paramount integrity.
“Troubleshooting a culture is like troubleshooting an aircraft, both require abstract diagnostics and resolute fault isolation.”
Collaboration
Collaboration in an aviation culture is a Holy Grail-type objective. The USAF has the potential to outperform every other facet of global aviation because of our shared competitive incentive to Fly, Fight, and Win, an advantage we do not fully apprehend. We often relate to our organizational structure as a network of networks and refer to barriers to communication as silos and stovepipes. Collaboration, i.e., the breaking of barriers and the interconnection of silos, is good; in contrast, rewarding group-think and building barriers is bad. This Macro/Micro organizational model frames our minds to the importance of information sharing and divergent thinking. Aviation regulators such as the FAA also promote information sharing and collaboration in building and sustaining a strong safety and compliance culture: an informed, flexible, reporting, learning, and fair-minded or just culture. Historically, however, aviation regulators have met opposition toward information sharing from air carriers, who felt that regulators might use the information to punitively beat them over the head, or incidentally share some of their trade secrets with their competitors.
Since the early 2000s, the FAA has made improvements to the US safety regulatory environment through enlightened and trust-building approaches to collaboration, such as the Aviation Safety Information Analysis and Sharing (ASIAS) System and the enactment of Public Law 111-216, which requires each air carrier operating under 14 CFR Part 121 of the Federal Aviation Regulations (FARs) to develop and implement a Safety Management System (SMS) to improve the safety of its aviation-related activities, relying on the safety management of the airline. This act may, however, reinforce an incongruity between the incentives of the airline and its mechanics, incentives to maintain profits versus integrity, putting regulators in an awkward push-pull position, responsible for both promoting and regulating the same airline. Airlines question mechanics over finding any maintenance discrepancy outside of their specific maintenance tasks, reinforcing the temptation to cut corners, relax standards, or look the other way, and leading to the fear of retribution. (4) Would an airline even be willing to share safety data on its production processes for fear of losing trade secrets to its competitors, or increasing liability for just being honest? In July 2019, during a hearing focused on aviation safety, inspired by the trend of Boeing 737 Max accidents, Congressman John Katko of New York questioned representatives from the NTSB and aviation trade unions on what it would mean to “…change the aviation regulatory culture from punitive to collaborative?” The slow pace of adapting safety tenets and mindsets into aviation maintenance is perplexing. The available literature on human factors shows less consideration for maintenance as compared to flight crews or air traffic controllers. Is it possible that a complexity of factors, both profit and production, brought about by the competing incentives within commercial aviation, draws attention and resources elsewhere? I would offer, then, that the USAF as an enterprise is better postured than industry to lead in compliance culture and in safety thought and ideas, because of our shared competitive incentive to Fly, Fight, and Win. Because of this mutual incentive, we are more agile and adaptable, and therefore, we have an obligation to lead.
“Over time, however, I have become sensitive to some very confusing elements in the USAF MX culture, aspects that seem contradictory to a maintenance culture of paramount integrity.”
Culture
The USAF aircraft maintenance compliance culture belongs to the AF’s more than 70K enlisted and 1.5K officer maintenance personnel responsible for more than 5K military aircraft. It is established by the AF’s history, training, programs, policies, and, most importantly, people. At the unit level, it is primarily defined by policies found in Air Force Instruction (AFI) 21-101, Aircraft and Equipment Maintenance Management, specifically Chapter 6, which delegates technical compliance responsibilities from the Maintenance Group Commander, establishing the Quality Assurance function to serve on the Group’s staff as the “primary technical advisory agency in the maintenance organization.”(5) Aside from the flight line, the QA office is an excellent place to pick up the pulse of the MX compliance culture, as QA is primarily responsible for running the Maintenance Standardization and Evaluation Program (MSEP). According to AFI 21-101, “The purpose of the MSEP is designed to provide unit’s [sic] with a method of evaluating technical compliance and measure how well they comply with established standards.”(6) The MSEP tends to be the thread of the day-to-day conversation on unit compliance. It manages all program inspections and provides a routine inspection plan called the Routine Inspection List (RIL), the product of a coordinated effort between maintenance leaders covering both over-the-shoulder checks of management and technical processes and after-action inspections. The resulting inspection findings are categorized by type and assigned a heuristic score based on either type or severity, especially those findings related to safety violations, deviations from technical data, or those found otherwise unsatisfactory. The factors of this sample are then compiled, and the average score becomes the squadron’s compliance grade, intended to represent the quality of maintenance.
According to survey data from the AF Safety Center, compiled from maintenance units across the AF over the past ten years, 46% of the 87.6K Airmen surveyed would not say that “QA/QAE is well respected in their Squadron,” answering either neutral or in disagreement. (7) 18% disagreed or strongly disagreed. Even more interesting, 24% of the 89.5K surveyed would not say that “Individuals in my squadron are willing to report safety violations, unsafe behaviors, or hazardous conditions.” On this matter, 7% disagreed or strongly disagreed. (8) Initially, this may not be overly concerning to you. But consider: isn’t it odd that nearly half the Air Force’s maintainers will not say they respect the one organization especially chartered to maintain quality, compliance, and integrity? That organization is, after all, just a representation of our leadership. Or that roughly 1 out of 5, truly thousands of Airmen, are reluctant to self-report? What are they seeing go unaddressed? Contrary to the survey results, there can be no neutrality when it comes to integrity and trust. Why then the lack of trust? It starts in the heart, but I also think it may begin in part within our compliance model and the implementation of our processes: our leadership.
“I would offer then, that the USAF as an enterprise is better postured than industry to lead in compliance culture, safety thought and ideas, because of our shared competitive incentive to Fly, Fight, and Win.”
Quality History
Historically, quality management was between artisans and apprentices. Practices were entrusted individually, and products were sold within communities where the artisan and apprentice held a shared incentive to maintain quality and their own reputation. (9) This was often represented by the maker’s mark of the practice. Very early evidence for this can be seen on bronze weapons developed in China during the Zhou Dynasty in the third century B.C. (Figure 1). Weapons were inscribed with the names of the craftsmen and the managers responsible for their quality, (10) a powerful aid in quality and accountability from the longest-ruling Chinese dynasty in history. The dawn of the industrial revolution, 20th-century production, and post-WWII globalization brought advances in quality science, enabling vast improvements in product output but further distancing quality management from the craft.
In the late 1950s, the USAF faced a management dilemma, a veritable crisis. The confluence of aggressive advancement in aviation technology following WWII (supersonic aircraft, digital electronic computers, over-the-horizon radar) and a rapidly draining pool of specialized technicians (54% to 76% of first-term airmen departing between 1959 and 1961) created an exodus of 18,000 “highly technical” mechanics, gutting the force and, secondarily, creating a flood of quality escapes. (11) The increasing maintenance-per-flight-hour rate led to steadily rising operational costs, resulting in a scenario where “…about three-fourths of its equipment required some kind of repair, and 13% had broken down entirely.” (12) Not to mention the threat of nuclear war! In 1965 the AF looked to the Strategic Air Command’s (SAC) promising Standardization and Evaluation program (Stan Eval), previously called the Standardization Board (Stanboard), (13) which was showing some success in reducing flight crew error rates through administering check rides and tests. (14) The comical logo for this program was a vulture with the slogan “Harsh, but Unfair!” (15) The maintenance program offshoot, the maintenance standardization evaluation program, was designed to “improve maintenance quality through standardization.” (16) You are not alone if you are glad they did not call it the M-Stan-Board!
Figure 1: “During the third century B.C., the names of the craftsmen and officials responsible for making bronze weapons were inscribed directly on the arms.” Taken from Qiupeng, Meidong, & Whenzhao, 1995.
Around the same time, management theorists were designing methodologies, such as multicriteria decision analysis, used to evaluate multiple factors for decision-makers in environments of uncertainty, within the newly designated discipline of decision analysis. (17) SAC was the first to adopt this MSEP process in 1965, and an unpublished Air War College study only a year later recommended its adoption to the Air Staff. (18) Its subjective Sat/Unsat pass-fail scoring had its origins in the IG inspection teams from days of yore and was later adopted by HQ Maintenance Standardization Evaluation Teams (MSETs). (19) In 1970 the MSEP was first enacted into the language of AFM 66-1 (predecessor to AFI 21-101), and by the following revision in 1972 the AF’s quality inspection policy had grown from 3 bullet points in 1965 to 68 pages, when the MSEP program was formally instituted AF-wide (though not uniformly implemented and therefore, ironically, not standardized). (20) This signified the growing focus and importance of quality and compliance in the AF. The process was very similar to the one we still use today, but it had some distinct differences.
MSEP Methodology…Then & Now
Though not free of dry jargon and complicated terms, a survey of some of the critical similarities and differences between the MSEP of the late 1960s and that of today, 50+ years later, is necessary to set a foundation from which we can evaluate the “how” and “why” of our current state. For instance, why do we score Personal Evaluations, which are more a measurement of training and capabilities of an individual (or team) in the same equation as the condition of our equipment? Does that simplify our process, or complicate it? Do we measure the right things? Are we evaluating technicians on challenging tasks, or “generally simple tasks that would not reveal actual technician competence?” (21) What effect does that have on performance? Does this encourage mediocrity? Mask problem areas? How long have we been doing this!?!
As is the current practice, audit and discrepancy findings were categorized. Today’s two categories of Evaluation Criteria, compared against their MAJCOM-established, weapon-system-specific Acceptable Quality Level (AQL), are Category I (CAT I), “A required inspection/TO/AFI procedural item missed or improperly completed,” and Category II (CAT II), “An obvious defect…” on a simplified, other-than-CAT I required item. Initially, these categories (without assessment against an AQL) were: Category I: Improperly Completed Inspections; Category II: Maintenance Malpractice; and Category III: Obvious Item–Common Maintenance Item. (22) This explains the historical origin of the colloquialism “maintenance malpractice” still used to describe error in AF maintenance organizations today.
The technical inspections of the former program construct, i.e., Quality Verification Inspections (QVIs), Detected Safety Violation (DSV) observations, and Personnel Evaluations (PEs), either over-the-shoulder or after-the-fact, were graded on a 100-point scale using a scoring distribution similar to today’s. Dissimilar to today’s program, however, 60% of the grade was accounted for by PEs, and the other 40% was attributed to technical inspections. (23) If no equipment within a category was inspected in a particular month, the points were evenly distributed between the categories that were. All of this was designed to “show successful maintenance management in terms of technical proficiency and improvement in equipment condition.” (24) The most controversial aspect of this past construct was the “floating baseline average” methodology used to determine the monthly score. This baseline was a locally computed standard based on a six-month average of past aircraft discrepancies, the figure of which was never allowed to be less than one. According to a study published by the Air War College by Lt Col David Crippen in 1986, (25) this created a state where “Last month’s excellent rating becomes this month’s satisfactory rating,” quoting one airman as saying, “If you bust your tail, you’ll eventually fail.” A similar critique of the program in 1976 showed that the statistical confidence of the inspection program was no higher than 25%, thereby invalidating it. (26) Crippen, ten years later, elaborated: “A Deputy Commander of Maintenance (DCM) basing his decisions on those results has a three in four chance of being wrong”…“The few evaluations of the various populations create statistically meaningless results”…“The MSEP cannot be made valid”…and was “fatally flawed from its inception.” (27)
Under our current MSEP design, technical inspections and PEs are of equal weight on a pass-fail basis, and “Ratings are calculated by dividing the total number of inspections passed by total completed.” Along with DSVs, the later-added observation finding types of Technical Data Violations (TDVs) and Unsatisfactory Condition Reports (UCRs) each deduct a heuristic 0.5 percentage points from the overall percentage grade per occurrence. One significant similarity between the then and now MSEP is that the program was, and still is, designed with the intent to give maintenance leaders (then the DCM, and now the MXG/CC) the flexibility to tailor the MSEP to meet needs and address quality and compliance trends within their unit by concentrating resources. However, I would offer that the current backwards-looking MSEP construct acts as a hindrance to that pursuit.
Moreover, organizations that are proactively pursuing trends and designing interventions could be doing so despite the limitations of the MSEP; the program itself is then left generating reactive heat, not proactive and predictive light.
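To make the arithmetic above concrete, here is a minimal sketch, in Python, of a monthly grade computed as this section describes it: a pass rate minus a half-point per DSV, TDV, or UCR. The function names, the sample numbers, and the Wilson interval used to visualize the small-sample critique are my own illustrative assumptions, not the actual maintenance information system implementation or any MAJCOM supplement.

```python
import math

def msep_grade(passed: int, completed: int, dsv: int = 0, tdv: int = 0, ucr: int = 0) -> float:
    """Monthly rating as described in the text: pass rate minus a
    0.5-point deduction per DSV, TDV, or UCR observation."""
    if completed == 0:
        raise ValueError("no inspections completed this month")
    pass_rate = 100.0 * passed / completed
    deductions = 0.5 * (dsv + tdv + ucr)
    return max(0.0, pass_rate - deductions)

def pass_rate_interval(passed: int, completed: int, z: float = 1.96) -> tuple:
    """95% Wilson confidence interval for the underlying pass rate,
    to show how little a small monthly sample actually tells us."""
    n, p = completed, passed / completed
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (100 * (center - half), 100 * (center + half))

if __name__ == "__main__":
    # Hypothetical month: 38 of 40 inspections passed, one DSV and two TDVs written up.
    print(round(msep_grade(38, 40, dsv=1, tdv=2), 1))   # 93.5
    lo, hi = pass_rate_interval(38, 40)
    print(round(lo, 1), round(hi, 1))                   # about 83.5 to 98.6
```

Even before any deductions, a single month of 40 inspections leaves the underlying pass rate anywhere in roughly a fifteen-point band, which is one way to visualize the small-sample critique cited above.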
Safety Management Systems
Quality and compliance management are both elements of a Safety Management System. The International Civil Aviation Organization (ICAO) categorizes safety management methods into those that are reactive and event-based, proactive and process-oriented, and predictive and analysis-driven (Figure 2). A mature safety management system is proactive and predictive. Immature systems are reactive and event-based, locked into fly-crash-fix-fly daily operations. Hazards within the system are then avoided through the use of controls, of which there are two types: closed-loop systems (those with a feedback mechanism) and open-loop systems (those that operate without feedback). In his book Whack-a-Mole: The Price We Pay for Expecting Perfection, author David Marx discusses one type of cultural closed-loop system, what he calls a “learning system,” (28) in which a group of people familiar with an operating environment convenes to understand errors and work together to design interventions that reduce the likelihood of future mistakes. Designing interventions requires a complete understanding of the system and full cooperation, from the handle of the bullwhip to its cracking end. This requires accurate root cause analysis of error. There is nothing inherently wrong at face value with our MSEP design. It sits on the shoulders of the giants who created it, the “past generations who made harsher sacrifices so that we might enjoy our way of life today.” (29) But I wonder whether we measure the right things, and whether our scoring of inspections, which dates back to the pass-fail days of yore, is holding short of the bigger picture and impeding our performance. Auditing consultant and author Dennis Arter addresses the shortcomings of compliance audits in favor of performance audits by stating,
“A different perspective on audits is needed. Instead of examining past conformance to requirements in minute detail, you can use current performance to project future actions. It is better to avoid dwelling on mistakes of the past. They can never be changed. A backward-looking view cannot achieve the goal of improved performance within the organization being examined. It will only lead to antagonism and fighting. This is because people are powerless to change the past. They become frustrated and strike back, usually at you. Instead use past practices to predict future performance, which can be changed.” (30)
Watershed research supports that process implementation, not just barriers, can prohibit innovations. (31) Moreover, the inspection of a process rather than a product is known to reduce defects. (32) And yet, how much of our RIL and time is dedicated to the latter? How many process-based Management Inspections (MIs) do we conduct in comparison to equipment-based inspections? And does the assignment of subjective grading to event-based audit results, lumping them all together into one super metric, introduce competing incentives within our organization to maintain a grade rather than compliance and performance? Does it cause us to run at full speed with our hands over our eyes, and preclude root cause realization? That is a question only those of us who are practitioners under the MSEP can honestly answer. Instead of grading inspection findings in similar categories, could we use them to highlight focus areas in which we spend the majority of our effort on root cause process analysis? If not, is it because we are untrained to adequately determine the root cause? There is a way to fix that. Is it because our QA doesn’t have the time, saddled as it is with production-related administrative tasks and programs that could otherwise be delegated to “revitalize the squadron?” There is a way to address that, as well.
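As a purely illustrative sketch of that last idea, findings could be tallied by the process that produced them rather than blended into a single grade. The record fields and process names below are hypothetical, invented for the example, and not drawn from any real maintenance information system.

```python
from collections import Counter

# Hypothetical finding records: (process or task, finding type).
findings = [
    ("tire and wheel build-up", "TDV"),
    ("tire and wheel build-up", "QVI fail"),
    ("egress inspection documentation", "TDV"),
    ("tire and wheel build-up", "UCR"),
    ("tool accountability", "DSV"),
    ("egress inspection documentation", "TDV"),
]

def focus_areas(records, top_n=3):
    """Rank processes by finding count (a simple Pareto view) rather than
    blending every finding into a single percentage grade."""
    counts = Counter(process for process, _ in records)
    return counts.most_common(top_n)

for process, count in focus_areas(findings):
    print(f"{count} findings: {process}")
```

Nothing here is graded; the output simply points leaders toward the processes generating the most findings, where root cause analysis effort would pay off first.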
Objectivity
To accurately determine the root cause of error, the priorities of the entire maintenance organization must be aligned, and inspections must be objective. Objectivity is a hard thing to maintain, as is trust.
Drury & Dempsey capture this difficulty by stating:
“Customers rely on the judgment of professional inspectors, checkers, and auditors to make informed decisions. This reliance raises questions of honesty, trust, competence, and human-machine system design. From a sociotechnical systems perspective, inspectors must be seen as independent of producers of goods and services, or their findings will not be accepted. For example, in civil aviation, the US Federal Aviation Administration (FAA) decrees that airline inspectors charged with checking airworthiness are kept organizationally independent of aviation maintenance technicians, who perform repairs, adjustments, and replacements. This independence can lead to role ambiguity and conflict in their job” (33)
Simply because we have “objective ratings” (34) does not mean that we have eliminated subjectivity in our process; we cannot ever eliminate it, because we are human. There are also advantages to subjectivity in audits by technical experts. However, in our subjective process, we can handicap our inspectors and technicians instead of empowering them. This, I will agree, is genuinely conjecture from a practitioner’s perspective. I offer that our MSEP process introduces competing incentives in a loose chain linked in part to our event-based compliance scoring and MSEP grade.
In scoring events and immediately assigning a scored category, instead of purely documenting, we have obliquely assigned a root cause and, in some cases, blame. This comes at an opportunity cost to a system-focused, proactive, and performance-based compliance model, and we sacrifice any chance of ever being predictive; we are, in effect, choking out the data. Research shows that audit findings can “provide proactive predictors of future safety performance in the aviation maintenance field.” (35) We need to rethink what we are doing.
Problem-Solving
An oft-cited example of problem-solving and out-of-the-box thinking is that of the Hungarian-born Jewish statistician Abraham Wald. During WWII, in an effort to increase the survivability of its combat aircraft against sustained battle damage, officers from the U.S. Navy enlisted the support of the Statistical Research Group located at Columbia University, whose aim was dealing with problems of military importance. (36) To those military officers, the solution was clear: increase aircraft armor while limiting the deterioration to maneuverability and fuel efficiency from the increased weight. The officers supplied the bullet hole data of returning combat aircraft to Wald and requested that he help determine the location of the greatest need for armor based on where the planes were getting hit the most. Wald’s observations were quite different, however. He asked: if the bullet holes were distributed across the aircraft, why were there fewer bullet holes in the engines? Where were the missing bullet holes?
Figure 3: Abraham Wald and the Missing Bullet Holes
Wald concluded that the missing bullet holes were on the downed aircraft and that the aircraft being shot in the engines were not returning home. His recommendation was, therefore, to increase the armor around the engines, and his methodology would benefit US aircraft survivability in combat for decades to come. Wald’s problem was one of missing data, requiring contrapositive reasoning and the contradiction of convention. Reinforcing those airframe areas that were initially proposed would have done nothing to increase the integrity of the aircraft. By acknowledging the missing information and applying a different perspective, Wald’s recommendations saved lives and vital resources. I believe a comparison could be made, and a similar approach could be taken, to correct our course and increase the integrity of our organization, eliminating competing incentives and recovering the missing data.
Competing Incentives
The competing incentives of the MSEP, simplified and ceteris paribus, are as follows:
First, the Maintenance Colonel, a mission-first, people-always leader. Her incentives are clear and straightforward: the safety and reliability of the Group’s aircraft and personnel. She delegates responsibility for safety and reliability to her Maintenance Manager and her Quality Inspector to ensure compliance. All three now share mutual incentives.
The Quality Inspector, an NCO who always puts integrity first, sets out to conduct an inspection and later observes a finding. The Quality Inspector must then record and categorize that finding into the generalized buckets of the MSEP. Each is graded, with those regarding safety, technical manual deviation, or generally unsatisfactory conditions assigned a weightier score deduction from the final average. By immediately assigning a scored category, a general root cause may then be obliquely attributed, and root cause realization suspended. In some cases, this attributes a scored, graded, and premature root cause to a squadron, unit, section, or individual, inadvertently attributing blame.
The Maintenance Manager, who eats work and spits excellence, is now incentivized not only to produce safety and reliability but also to maintain an excellent score on behalf of his unit. Understanding that the immediate assignment of a grade is a misrepresentation of his unit’s compliance, he knows that achieving a grade is more the result of the inspector’s findings than it is a representation of his operational environment. He may have, on some occasions, even considered highlighting areas of personal concern for inspection, and would have found the results helpful. But, understanding that more inspections will likely result in a lower grade, he is disincentivized to share. Also, if the findings are in some way unsatisfactory to the Manager, he will challenge the inspector. The Manager, often well senior in rank to the Quality Inspector, will challenge the inspector to have those findings discarded. He will challenge when the inspection was conducted, and whether the inspector followed procedural rules of informing the technician before, or debriefing his supervisor after, the inspection. He will challenge the inspector’s intent and whether the finding was associated with the type of inspection initially intended or was discovered obliquely. He will challenge the categorization of the finding and whether it was poorly articulated or referenced during documentation. He will test the accuracy of the procedural reference. These will be contended and relitigated until an agreement is made, or not.
The Quality Inspector is now incentivized not only to ensure safety, reliability, and compliance but also to maintain the integrity of the MSEP and its weighted average score, this amongst the many other production programs which policy dictates he must keep; he is a busy man. He knows that the Maintenance Colonel does not tolerate the relitigations, as they do not best represent safety or reliability, and she will either impose time constraints on them, delegating ultimate decision authority to her staff, or squash them entirely. However, this relitigating process has by then driven a very surprising informal dynamic. The bullying to justify places a further burden on QA inspectors to prove intent and thereby creates an informal warranting procedure. QA inspectors are now bound and restricted in what they can report and therefore inspect, and inspections are reduced from being conducted during a constant presence at the point of execution to golf-cart drive-bys of mostly after-the-fact items. Furthermore, the data, which was possibly imperfect but nonetheless useful for intervention design, never makes the cut. The result is an incomplete narrative of the unit’s compliance and whitewashed reporting. Missing data; missing bullet holes. This shapes our compliance culture even further. The relitigation and warranting distort the self-perception of the QA inspectors, who now see themselves as a type of police and no longer the brothers-in-arms in quality they once were.
This policing style of quality management can be identified in some of the widely accepted, inherently punitive symbols representing our compliance today: a policeman’s badge with its implications of criminality, and the QA crest.
Symbols, Language & Culture
The QA crest is an interesting symbol, as it is truly only an apt representation of a punitive compliance culture: a vulture, dating back to the days of the SAC Stanboard, (37) perched atop its carrion, a snake in the grass (our technicians and their guileful deception), all over a shield depicting a downturned pitchfork descending into the rising flames of perdition. This is truly the saddest portrayal of our compliance culture. It breaks my heart. Imagine that you are a patriotic hometown hero joining the AF to serve your country and protect your loved ones, entering into the proud tradition of American aerospace, and you are implicitly told that you are a snake, day one. Airmen should not view leadership as vultures, nor should leaders represent subordinates as snakes. Bad form, not proud heritage. If you are not convinced that simple symbols can shape culture and transcend time and space, click here. (38)
Figure 4: QA Crest
This psychology can also be tied, in a loose chain, to our use of language: etymology from an era when (though categorically incorrect) it was acceptable to neglect the time “to distinguish between the unfortunate and the incompetent.” Take, for example, the term “maintenance malpractice,” with its unwarranted implications of criminality. Addressing error should not be contingent on the personality or style of leaders but on the academic and proven principles of aviation leadership, resulting in a common doctrine and lexicon. If you are not convinced that simple language can shape culture and transcend time and space, click here (mindbender at 9:00). (39)
This internal compulsion is an effort to embolden a QA team that might feel pressured to back down. This “grab ‘em by the balls and their hearts and minds will follow” approach toward compliance, as some have called it, unintentionally handicaps the QA inspectors even further through a lack of trust. Technicians, instead of receiving guidance from the best-of-the-best technicians in their field through a constant presence at the point of execution, behave like POWs composing bird noises to alert their teammates to the arrival of the guard at the Stalag Luft, picking up their toolboxes and shutting down productivity when QA arrives. This reactive and event-based compliance process results in an incomplete compliance narrative that drives a whitewashed operational scope. Its weighted average score then becomes useful only as an artificial means to enhance an end-of-year personal performance report, as it most likely shakes out to a >90% annual average, encouraging only mediocrity in the end. This decreases the quality and design of interventions. It breaks the feedback loop. It fractures teamwork; it is a waste. It’s in the grain of our organizations, where the stress begins to splinter. And it may also start to explain why some say that “maintenance eats their own.”
There is an inherent friction between compliance and performance that can sometimes lead to adversarial relationships. However, embracing and fortifying the cultural adversarial roles of that natural push-pull relationship, instead of addressing and correcting them, pulls an organization in a third direction, ultimately suspending momentum.
If left unchecked, our litigative approach to compliance can translate into an unbalanced, black-and-white leadership approach and a policy-paralyzed culture through a lack of teamwork and a culture of fear. For there to be compliance, there must first be a requirement. However, an over-reliance on bureaucratic controls can dangerously impede development, production, and innovation.
Technical Orders
Technical Orders (TOs) provide guidance on the maintenance of weapon systems. Additionally, TOs implement the policy of AFIs such as the 21-101. These sources are authoritatively written by order of the Secretary of the Air Force, an oath-taking officer with powers delegated by the Executive Branch, established by the Constitution of the United States, the highest law of the land. TO deviation is also a leading contributor to maintenance mishaps annually. (40) (41) And yet, we as leaders often stop our root cause analysis at whether a TO was deviated from or not, instead of taking a broader, system-focused view. Thought leaders within industry conclude that managers often do not pursue the root cause of TO deviation, stating, “For many incident/accident reports, Failure to Follow Procedures (FFP) is listed as a causal factor and not analyzed further to determine why the procedure user took the unusual and unauthorized step of failing to follow the given procedure. There is a need to understand the reasons for not following procedures if we are to help reduce FFP incidents.” (42)
TO adherence, while predominant in maintenance and safety, is only one factor in weapon system diagnostics. A policy-fixated leadership vision is therefore myopic. For instance, in addition to serving as policy, TOs have unignorable economic and systemic aspects.
Economically speaking, TOs, or technical data, are Intellectual Property (IP). From a program management perspective, IP is much like a trade secret that translates into the maintainability and sustainability of a weapon system, which transacts into dollars. Program managers purchase IP and maintainability from the manufacturer during the procurement of a weapon system. This costly and complicated bargaining chip dictates the future maintainability and operations and sustainment costs of an aircraft. These bargaining chips, or “trade-offs,” come under pressure from factors within the Defense Acquisition System, i.e., cost, schedule, and performance. This largely accounts for why some TOs are “better” than others and why some aircraft are better sustained. Technicians who have worked multiple aircraft can intuitively tell the difference in the quality of TOs and maintainability but are seldom aware of the underlying cause. Further, maintenance leaders will likely not be aware of these upstream decisions, making it challenging to recognize their downstream consequences.
Technicians and maintenance leaders alike, exposed to the backpressure of operational demands, might be unconscionably tempted to overextend their assumption of risk through the bottleneck of poor maintainability and supportability. When confronted with a defect, maintenance leaders with a limited scope of their true operating environment and system, coupled with inadequate training in Human Factors, will mistakenly design sub-optimizing interventions, thinking themselves nearest to the handle of the bullwhip when they are actually closer to its ever-oscillating cracking end.
Systemically speaking, TOs are considered an interface. A common model used to understand aviation systems is the SHEL model. (43) SHEL is an acronym for Software, Hardware, Environment, and Liveware, where “L,” the Liveware or human, is the hub and central component to which all other interfaces are matched.
This model provides a useful framework for understanding Human Factors. Technicians operate in a system of many simultaneous interfaces. TO interface breakdown may be less tangible than other interfaces and “consequently more difficult to detect and resolve.” (44) Automated TO interfaces such as E-Tools can cut down on “liveware to software interface breakdown” but can also contribute to it. Thumbing through various manuals via multiple screen tabs and swiping around complex schematics on a limited screen size can quickly contribute to system breakdown. Compound that by adding other demanding interfaces: a flashlight-lit, hot, and painfully contorted workplace, or the distraction of a night-shift production superintendent demanding updates over an out-of-reach radio. Mishap prevention in maintenance requires hyper-developed situational awareness, brilliant vigilance, and an unyielding backbone.
Disciplined TO adherence is the bedrock of quality and safety. TO deviation is a leading contributor to maintenance mishaps annually, (45) (46) and therefore adherence is a predominant value of our organization. There are too many case studies to list where lives could have been saved or mishaps prevented by adhering to a TO’s blood-inscribed warnings, cautions, and notes. But just as technicians must think beyond the black-and-white of a TO in order to troubleshoot, so too must maintenance leaders consider the entire operating environment when designing interventions or administering discipline. This is where we sometimes go wrong: halting our root cause analysis at adherence to instructions means holding short of an understanding of the actual operating picture. This myopia is what drives failures of creativity and short-sighted, inherently blame-attributing behavior, such as reading technicians their Miranda Rights before discussing error, channeling operational feedback via AF Form 1168, “Statement of Suspect/Witness/Complainant,” or a hyper-reliance on ever-accumulating military progressive administrative discipline, tools that are wholly inadequate for the task. While these methods are possibly helpful in preparation for a future court-martial scenario, they take a significant withdrawal from the unit’s piggy bank of trust and likely create a deficit in open communications. In order to determine a real operating picture, we need to begin with our people and how they (we) think.
The Mind of The Maintainer
When Orville and Wilbur Wright required an engine for the first powered flight, the brothers knew they needed one that produced eight to nine brake horsepower, weighed no more than 180 pounds, and was free from vibration. With crude drawings and nothing but a drill press, shop lathe, and hand tools, Charles Taylor built the first aviation engine. It produced 12 horsepower at full RPM, allowing 150 additional pounds of strengthening on the airframe. (47) Rooted in the same vein as Charles Taylor, our technicians use limited resources with black-and-white, binary, and basic requirements to accomplish the infinite. Often pioneering into the unknown, they must both firmly hold that which is “concrete” and reach out to obtain the “abstract.” As leaders, our job is to help them hold fast to both, and we fail if we pry their grip from either. One fundamental question seems to strain this endeavor.
Upon deviation and error, we repeatedly ask ourselves, “why don’t people just do what they are supposed to?” This, in essence, is the human factor of our business.
For example, fault isolation and decision-making “trouble trees,” sometimes mistaken for a policy to which one must strictly adhere, are a beneficial aid in narrowing the possibilities of fault diagnosis and increasing the probability of accurate troubleshooting, which leads to increased weapon system availability and reduced costs. But these tools will only put technicians out on a limb without abstraction. This is why technicians will sometimes anthropomorphize an aircraft during troubleshooting, attributing animate characteristics to the inanimate and asking questions such as, “What is the aircraft thinking during this phase of flight?” They are processing in abstraction, thinking beyond the concrete. Nevertheless, the repair of complex systems in complex environments introduces increased potential for error, and when presented with error and deviation, even well-meaning maintenance managers can be tempted to invent simplistic and binary doctrine in an effort to distill the operational scope into a handier paradigm. An error then exists as either this or that: a “misdemeanor” or a “felony,” for example, or as the result of either training or attitude, ignorance or ineptitude, or as committed by the “unfortunate or the incompetent,” etc. Root cause analysis of error, then, instead of becoming distilled, becomes diluted. And the technician can quite possibly be browbeaten back into concrete thinking. That is why simplistic and experiential approaches just won’t do.
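As a minimal illustration of what a “trouble tree” is doing under the hood, the sketch below walks a small branching diagnosis for a hypothetical generator fault. The symptoms, checks, and outcomes are invented for the example and are not drawn from any actual TO.

```python
# A hypothetical trouble tree: each node is either a yes/no question with two
# branches, or a terminal corrective action. Not taken from any actual TO.
TROUBLE_TREE = {
    "question": "Generator light on with engine at operating RPM?",
    "yes": {
        "question": "Voltage present at the generator output terminal?",
        "yes": {"action": "Suspect the voltage regulator; continue fault isolation there."},
        "no": {"action": "Suspect the generator or its drive; inspect per applicable procedure."},
    },
    "no": {"action": "Fault not duplicated; document and monitor."},
}

def walk(node, answers):
    """Follow recorded yes/no answers down the tree to a corrective action."""
    while "question" in node:
        answer = answers[node["question"]]  # "yes" or "no"
        node = node[answer]
    return node["action"]

observed = {
    "Generator light on with engine at operating RPM?": "yes",
    "Voltage present at the generator output terminal?": "no",
}
print(walk(TROUBLE_TREE, observed))
```

The tree narrows possibilities quickly, but notice what it cannot do: it has no branch for the intermittent fault, the chafed harness outside its scope, or the question a technician only thinks to ask in abstraction.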
Compliance Doctrine
Currently, there is a need for a common doctrine and lexicon in our maintenance compliance culture.
Aviation safety research widely concludes that 80% of aviation maintenance accidents are influenced by human factors. (48) It is therefore critical that leaders have a firm grasp on recognizing the signs and types of human error. This is foundational to maintenance management and should be incorporated into every technical training course for technicians and leaders alike. The Dirty Dozen of Aircraft Maintenance lists twelve human factors common to maintenance technicians and is widely adopted as an industry standard for a straightforward discussion of human factors in maintenance. Additionally, our culture needs to be one in which error, and our people, are treated fairly, with consistency and transparency.
Values-Based Compliance
In a progressive effort to improve its safety culture, in 2015, the FAA published its compliance philosophy to establish a “just safety culture.” To the FAA, this meant fostering “an open and transparent exchange of safety information,” and obtaining “a higher level of safety and compliance with regulatory standards.”
The International Civil Aviation Organization (ICAO) defines a Just Culture as:
“…One in which all employees are encouraged to provide, and feel comfortable providing, safety-related information. It is an environment in which employees understand they will be treated justly and fairly on the basis of their actions rather than the outcome of those actions, in the case of positive, as well as negative safety events. A Just Culture recognizes that systemic factors (not just individual actions) must be considered in the evaluation of safety performance and interpretation of human behavior. A strong Just Culture in each aviation organization is perceived as the basis for a successful safety culture.” (49)
Eurocontrol and the European Civil Aviation Conference define Just Culture as:
“A culture in which front line operators or others are not punished for actions, omissions or decisions taken by them that are commensurate with their experience and training, but where gross negligence, willful violations and destructive acts are not tolerated” (50)
In a 2015 memorandum titled “Proactive Aviation Safety Programs and Just Culture,” General Carlton Everhart II, the Commander of Air Mobility Command, stated that “In a Just Culture, airmen are encouraged to report safety-related information, knowing their leadership recognizes the difference between acceptable mistakes and unacceptable behavior.” (51)
Dr. James Reason, a world-renowned expert on human error and creator of the Reason Model, or “Swiss Cheese Model,” defines a Just Culture as:
“…One in which everyone knows where the line must be drawn between acceptable and unacceptable actions. When this is done, the evidence suggests that only around 10% of unsafe acts fall into the unacceptable category. This means that around 90% of unsafe acts are largely blameless and could be reported without fear of punishment.” (52)
Outcome Engenuity, a human error management consultancy firm, in a well-thought-out model synchronizing legal, human resource, psychological, and safety aspects to address mishaps, distinguishes attributable causality into three categories:
Human Error: an inadvertent action; inadvertently doing other than what should have been done; slip, lapse, a mistake.
At-Risk Behavior: a behavioral choice that increases risk where risk is not recognized or is mistakenly believed to be justified.
Reckless Behavior: a behavioral choice to consciously disregard a substantial and unjustifiable risk.
Further, Outcome Engenuity provides a proprietary decision-making tool called the “Just Culture Algorithm,” which contributes a consistent and transparent model for intervention design, adaptable to the values of the organization, through which culpability is assessed through “Duties.” This is on the premise that “Duty precedes error” and that “To make a mistake, there had to be a ‘right’ thing to do in the first place.” These duties include the Duty to Avoid Causing Unjustifiable Risk or Harm, the Duty to Follow a Procedural Rule, and the Duty to Produce an Outcome. Additionally, the algorithm addresses repetitive Human Errors and At-Risk Behaviors. It ultimately concludes that intervention in a Just Culture consoles human error, coaches at-risk behavior, and punishes reckless behavior, independent of the outcome.
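The console/coach/punish distinction summarized above can be captured in a few lines. The sketch below encodes only that summary; it is not a substitute for, or a reproduction of, Outcome Engenuity’s proprietary algorithm, and the enum values and response wording are my own.

```python
from enum import Enum

class Behavior(Enum):
    HUMAN_ERROR = "inadvertent slip, lapse, or mistake"
    AT_RISK = "risk not recognized or mistakenly believed justified"
    RECKLESS = "conscious disregard of a substantial, unjustifiable risk"

def just_culture_response(behavior: Behavior) -> str:
    """Intervention keyed to the behavior, independent of the outcome's severity."""
    return {
        Behavior.HUMAN_ERROR: "console the individual; fix the system that set up the error",
        Behavior.AT_RISK: "coach the individual; remove the incentives that made the risk seem justified",
        Behavior.RECKLESS: "apply disciplinary action",
    }[behavior]

# Same response whether the damaged part cost $100 or $100K:
print(just_culture_response(Behavior.HUMAN_ERROR))
```

The point of the design is that the outcome’s severity never appears as an input; only the behavior does.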
Outcome Bias
In the 1951 film titled Flying Leathernecks, John Wayne stars as Maj. Daniel Xavier Kirby in an allegorical clash between the military mission and its people. Kirby’s F4F Wildcat fighter unit performs relentless missions over Guadalcanal with an undisciplined crew and significant resource constraints. Kirby takes a hard-line disciplinary leadership approach. Kirby’s antagonist in the film is not the enemy but his second-in-command, executive officer Capt Carl ‘Griff’ Griffin.
Repeatedly passed over for command for lack of endorsement, Capt Griffin disapproves of Kirby’s hard-line approach. Sympathetic to the needs of the squadron pilots, Griffin often removes men from combat missions. Kirby considers Griffin soft: “you just can’t bring yourself to point your finger at a guy and say, go get killed!” Eventually, Kirby wins him over, and Griffin makes the hard call, earning respect from Kirby and his endorsement for Griffin’s eventual command. In this story, however, there is another character, inconsequential to the plot but performing a role that is meaningful to maintenance in its own right: the unit’s Line Chief, MSgt Clancy. With no given first name, Clancy is responsible for the readiness of the squadron’s aircraft and just about everything else. He weaves in and out of scenes, often illicitly commandeering needed equipment, everything from reengineering a mess hall washing machine motor into a fuel pump to stealing a rocking chair for the base’s Colonel, all with the adoring approval of Kirby. With resources stretched thin, the officers admire Clancy’s improvisation. Toward the end of the story, however, Clancy is busted in rank to Private First Class when the outcome of that same resourcefulness finally catches up with him. This is oddly laughable to both Kirby and Clancy in a way only appropriate for Hollywood. The relationship culminates with Kirby’s sentiment, “If I have to go where there’s another war, I hope Clancy will be there.” This is a very cliché and admittedly cheesy example, but its subtle implications are considerable. Are we, like the silver screen portrayal of Flying Leathernecks’ Kirby and Griffin, at war with ourselves? Do we over-extend our people into double standards in complex operational environments, only to “grab ‘em by the balls” when the outcome is inopportune or even…embarrassing?
Figure 5: Flying Leathernecks credit RKO Radio Pictures
How often do we as leaders collectively endorse, through omission or commission, the behavior we ultimately punish? How often is our administration of discipline driven more by outcome than by fair treatment? With complex systems in high-risk industries, even small errors can have a damaging impact. How many of those inspection findings categorized as minor on your unit’s RIL could prove catastrophic? How many categorized as major could go unnoticed and unaddressed? Does our current compliance system provide for fair and consistent treatment of similar errors despite the severity of the outcome? Is our standard of discipline applied evenly irrespective of circumstances, in garrison or deployed? During a mass generation or an IG inspection? During the Wing Commander’s sortie or a standard flight around the flagpole? How much is contingent on the personality or style of leaders? With the high rate of turnover within maintenance units, how far does the impact of those inconsistencies propagate, and how many Airmen carry the baggage of a duplicitous compliance culture from assignment to assignment until they are eventually the ones reinforcing those same inconsistencies? How often are we lulled into the tepid dinner table conversation of the MSEP while the pot is boiling over in the kitchen, leaving our Airmen to manage the mess? How many of our Airmen just decide to vote with their feet and…leave?
Outcome Engenuity defines Severity or Outcome Bias as: “Punishing or disciplining a person who made a human error or engaged in an at-risk behavioral choice, simply because there was a severe outcome.” Conversely, “Not addressing the behavior at all when no adverse outcome results, even though harm could have occurred in a similar circumstance.” A failure to provide a consistent discipline framework causes a break in the feedback loop, preventing open communication about risk and system issues. As Outcome Engenuity states, “The severity bias causes us to “label” people, events, into categories that don’t help us define performance issues.”
Take the cost of an aircraft part, for example. Determining a more severe intervention based on the price to the government of damage to a $100K part as compared to a $100 part might seem entirely rational: the severity or outcome of the former is obviously more significant than the latter, and therefore the greater severity of the intervention seems in good keeping with government stewardship. But challenge yourself: does the technician really have any visibility at all into the procurement cost of supply? Or the ergonomics of design and maintainability of each part? Technicians are trained and should know intuitively to exercise the most exceptional care, but error is introduced with complex systems in complex environments. This should beg the question, “what else might be going on here?” It should also raise the question of whether all damage to expensive parts is truly reckless, conscious disregard. Taken to its logical extent, this would clearly be ridiculous: an error occurring on a more expensive aircraft treated with more severity than one on a less expensive weapon system, the severity of discipline in one unit eclipsing that of another. And yet ask yourself, how many times have you seen, or maybe even written, the price tag of a part damaged in error on the punitive paperwork of a technician? Why was that important to mention? Did it alone influence the intervention decision? If not, how much so? Moreover, management by exception of audit and inspection outcomes through full reliance on progressive discipline is akin to jumping between gears while pedaling uphill and snapping the chain of communication and trust. Nevertheless, how many times have you seen levels of progressive discipline or a hierarchy of management confrontation assigned to categories of inspection findings? TDVs, DSVs, and UCRs go straight to the top!? In these cases, does education and learning happen at the point of execution or at the point of punishment?
I would offer that sort of learning is as stiff as a starched uniform at the position of attention on a Saturday morning. All of this begs the question: might we be our own worst enemy?
I firmly believe that our AF is entrusted with good leaders of high standing, among whose ranks I often feel unworthy but am grateful to be counted. Bad ideologies creep in, taking many forms, and we need to train our minds and fortify ourselves to keep them out. I think it is in this area that we are weakest: we cave, and we often dismiss values-based compliance leaders as "Mr. Nice Guys" and unaccountable. Truly, laissez-faire leaders can be even more destructive in our high-octane environment, making the "just an honest mistake" answers all the more unsatisfying and potentially hazardous. I believe the rub and confusion are due in part to our lack of a common lexicon and doctrine with which to combat negative compliance ideologies. We train our maintenance leaders in the classic fundamentals of military leadership but forgo training them in these academic and proven principles, leaving aviation leadership contingent on personality and swagger, hedging our ignorance of the aviation sciences through administrative preoccupation; the type of leader General Mitchell wrote an "Airman cannot be." (53) And as we as an AF MX community of leaders discuss matters such as what belongs in our CFETP and whether maintenance and logistics leadership is a matter of technical or behavioral competencies, we must remember that our compliance culture is a matter of both. Toxic ideologies present themselves in many forms. In a not-too-distant, "Theory X" period of our history, DoD thought leaders produced terms such as the "Vacant Middle," (54) used by higher-level managers to describe middle managers who resisted top-down efficiency measures, perceiving them as a population too sympathetic to the causes of employees and, therefore, as good as absent. This is quite possibly why we no longer use this marginalizing term. Although it is difficult to assess, I wonder whether, in bridging one gulf, these mindsets widened another divide and contributed to problems within our current state. Have our outdated "Vacant Middle" mindsets widened the divide, contributing to our "Frozen Middle" current state? Surely we've outgrown these mindsets. Haven't we?
When designing intervention after being confronted with error, many AF maintenance leaders will decide between "black-hat" approaches, distantly objective and even punitive, and "white-hat" informal coaching methods. The terminology arises from the black baseball caps worn by Cold War-era MSET inspectors sent from higher headquarters to ensure top-down compliance, and it conjures images of monochromatic western saloon shootouts between the good, the bad, and the ugly. In the common vernacular of the AF, these "hats" have come to represent a good-cop-bad-cop approach to compliance, each having its "appropriate" time and place. Leaders who operate within this paradoxical paradigm choose either to turn the compliance spigot left toward "white" during times of plenty or hard right toward "black" during times of famine. We have been at this battle of mindsets for some time. When establishing the office of the Air Force Inspector General in 1949, Maj Gen Hugh J. Knerr wrote,
“Experiences with Inspector-Generals had convinced me the established concept of the office was wrong… aimed at finding fault. I decided to turn the whole system upside down and view it from the bottom, rather than from the top. My objective would be to give local commanders the tools with which to ensure their efficiency.” (55)
Revitalizing the Squadron, before it was cool! If we play the part of the Air Force anthropologist for a moment, we can ask: what effect does this duplicity have on the trust of our technicians, and what behavior does it promote? What if the "hats" were taken off and just…simply disappeared? Would our approach to compliance be more consistent, less polarized? What would we do if there were absolutely no hats on the flight line?
One notion errantly perpetuated within our ranks is that of obtaining organizational balance within this compliance model, pulling tools and tricks from the white hat or the black hat depending on the timing. There is a reason attempting to fix your unit through this approach is confusing and unsatisfying: balance in a black-and-white model is a myth. Because the deck is continually rocking and the sands of time are ever shifting, any balance is only as momentary as the instant in which it was struck; its pursuit is fleeting and, in many ways, blinding. Managers pursuing balance through this paradigm often frame their decisions like a swinging pendulum, borrowing methodology from each end of a black-and-white spectrum as if every aspect of the organization will pick up the tempo of this metronome. But a mixture of flawed methods does not a pure method make, and more "black hat" will not fix your problems. What truly exists in this fundamentally flawed compliance model is a path of multiple, discordant, personality-driven pendulums, belonging to its many maintenance managers, upon which our technicians must perfectly time their steps to avoid the trap. It is imperative that our compliance systems be consistent and not contingent on the personality or impulse of our diverse team of AF leaders.
Imagine that you could reshape our compliance culture. What would it look like? Would you ask whether our culture and subjective processes promote competing incentives? Is a good grade more critical than quality data? Do competing incentives begin the loose chain of distrust and fractured teamwork? Does that lead to the missing bullet holes of missing data and a myopic, policy-fixated vision? Does the way we treat information from our Airmen cauterize the source and restrict its flow? Does it demoralize our people? What behavior does it promote? Would a proactive change lead to better information upon which to design intervention, improving safety and quality? Are we using all the tools available to complete the circuit and create a learning system that educates minds with meaning and purpose at the point of execution? What would you change in the AF, or more importantly, in your unit? Do our symbols of compliance evoke pride and a sense of deepened empowerment? Is there foreign debris in the throttle quadrant of progress, preventing the advancement required to retake our lead in the formation? As Brig Gen William "Billy" L. Mitchell wrote, "Transportation is the essence of civilization." (56) I humbly submit that it is vital we have this conversation now, as AF MX leaders.
Recommendations and Final Thoughts
This paper makes three recommendations to the USAF and strives to emphasize their need by offering a holistic discussion of culture and relevance, mainly from a practitioner's perspective.
- Develop and publish compliance doctrine in keeping with industry, regulatory, and academic best practices and research. This must be a collective effort. As far as I have read, the best content is being developed by Outcome Engenuity. In 2015 the FAA published its Compliance Philosophy and Airman Rights, a six-page, forward-looking document establishing compliance doctrine and vision, but limited in its practicality. An approach could begin with a combination of the two. Developing a framework consistent with our organizational values and building a common lexicon is key to ensuring our performance and combating negative ideologies.
- Overhaul the outmoded MSEP, beginning with abandoning the complicated and unnecessary scoring criteria in our multi-criteria-analysis MSEP methodology. Decoupling inspections and results from the short-sighted grade will allow more adaptability within inspection areas, better precision in personnel evaluations, and ultimately better data. It should also encourage a more process-oriented, performance-based model from which to launch into root cause analysis and increase future performance. Further research should be conducted into the appropriateness of this methodology, considering the sensitivity to, and the mutual independence of, the input factors and the output, as described in the competing-incentives discussion above. There is a great degree of flexibility in audit design. There are many available ways to analyze and display audit data, e.g., balanced scorecard, dashboard, heatmap, control chart, attribute chart, Pareto chart, (57) and many ways to design audit systems. (58) (59) (60) The key is simplicity and the ability to lead to what is most important: achieving organizational goals and promoting change. (61) A brief illustrative sketch appears at the end of this recommendation.
Personnel evaluations should, at the very least, be disaggregated from this metric. Our policy should reflect our organizational identity and, within that reality, strive to develop a "learning culture" in a "learning system." Consider that the average age of an industry Aircraft Maintenance Technician is 51 years old, (62) with an average workforce-entry age of 25 after a minimum 24-month trade school awarding the A&P, a certification that industry considers a "license to learn." Compare that to the two to three months of technical school our new Airmen receive, and you realize that a very heavy training burden is levied on the tactical units. Our OJT-focused pipeline dates back to the WWII build-up. Watered-down PEs on simple tasks to shore up a grade cannot complete that feedback loop. Stovepiping "lessons learned" at the tactical level, or even privileged accident data at the AF level, won't do it either. The American Society for Quality (ASQ) recommends against using audit results for performance appraisals, stating that audits provide a "penalty-free period of time to correct problems and improve performance" and that "if audit results become part of performance appraisals, the audit program will suffer." (63) What then occurs if your auditing process is your performance assessment? Grading outcomes versus evaluating processes, masking data to shore up a grade…data in and of itself is neither inherently good nor bad, outstanding nor unsatisfactory; it's just data. We need it to complete the circuit and restore the livewire.
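To make the decoupling argument concrete, here is a minimal, hypothetical sketch in Python. The inspection areas, finding counts, weights, and pass threshold below are illustrative assumptions, not the actual MSEP criteria or scoring rules; the point is only to show how a weighted composite grade can look healthy while masking a dominant failure mode that a simple Pareto view of the same raw findings exposes immediately.

```python
from collections import Counter

# Hypothetical inspection results for one unit: (area, findings observed, weight
# in the composite grade). Names, counts, and weights are illustrative only.
results = [
    ("Documentation",      1, 0.20),
    ("Tool Control",       1, 0.15),
    ("Safety Wire",        0, 0.15),
    ("Torque Application", 9, 0.10),  # the dominant problem area, lightly weighted
    ("FOD Prevention",     1, 0.40),
]

# Composite grade: weighted share of areas that "pass" (here, fewer than 2 findings).
passing_weight = sum(w for _, findings, w in results if findings < 2)
total_weight = sum(w for _, _, w in results)
print(f"Composite grade: {passing_weight / total_weight:.0%}")  # 90% -- looks healthy

# Pareto view of the same raw data: rank areas by finding count and show the
# cumulative share, which points straight at the dominant failure mode.
counts = Counter({area: findings for area, findings, _ in results})
total_findings = sum(counts.values())
cumulative = 0
for area, count in counts.most_common():
    cumulative += count
    print(f"{area:<20} {count:>2} findings  ({cumulative / total_findings:.0%} cumulative)")
```

In this contrived example the composite reads 90 percent even though three quarters of all findings sit in a single, lightly weighted area; the raw data, not the grade, is what points toward root cause.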
- Eliminate the use of cultural symbols that promote a punitive and attributable compliance culture and stand in contrast to values-based compliance principles. Our compliance culture and policy framework should do everything to support the essence of our mission and should do nothing to detract from it — help mechanics perform maintenance and generate sorties, and then get out of their way.
For now, at least, our symbols of compliance are what we've made them. The virulent vulture, the punitive police badge, and the duplicitous black and white hats adorn our units and remind us of the current state they promote. A better symbol could be employed: an empowering symbol, intelligent and bold. An example could be taken from the weaponry craftsmen of the ancient Zhou Dynasty; imagine the weapon of the maintainer, the artisan warrior, bearing our maker's mark of integrity and trust, set in the black heart of a common enemy.
Our compliance culture is our history, and it is our people. It is the reverberating heartbeat of five thousand radial engines en route to the enemy. It is the icy klaxon call on a frigid coastal morning. It is the dry concussion of a desert air assault. It is the tense twilight of Technical Sergeant Thomas Mueller, trapped in his prison cell of guilt. It is the breathless silence on the cockpit voice recording of Technical Sergeant Joseph Gardner III that can't be rewound. It is the dog days of discrepancies, near misses, and saves. It is the progress that comes from armoring engines and completing the mission. It is our integrity behind the panel. It is the trust of a crew chief's salute. We need to fail down here, so we don't fail up there. We cannot accept less; we must aim high.
Captain Dave Loska
Figure 6: Proposed Compliance Symbol
(U.S. Air Force photo by Senior Master Sgt. Ralph Branson)
ABOUT THE AUTHOR
Captain Loska is a U.S. Air Force Aircraft Maintenance Officer, currently serving as a Logistics Career Broadening Officer assigned to the Oklahoma City Air Logistics Complex, Tinker AFB, Oklahoma.
References
- USAF HAF/A4. (2017). A4 Enterprise Flight Plan.
- George, C. (2017, October). LOGTALK Maj Gen George. Retrieved from Logistics Officer Association: http://www.logisticsymposium.org/Media/2017-Video-Gallery
- Wright, K. (2018, August). Thawing The Middle – Chief Wright encourages all Airmen to build a culture of innovation. Retrieved from AIRMAN Magazine: https://airman.dodlive.mil/2018/08/06/thawing-the-middle/
- CBS News. (2019, February 5). FAA investigators assessing mechanics' complaints may be interpreting regulations differently, CBS News investigation finds. Retrieved from https://www.cbsnews.com/news/airline-mechanics-pressured-faa-investigators-interpreting-regulations-differently-cbs-news-investigation/
- U.S. Air Force. (2015). AIR FORCE INSTRUCTION 21-101 Aircraft and Equipment Maintenance Management. U.S. Air Force.
- Ibid
- Air Force Combined Mishap Reduction System. (2019). Retrieved from AFCMRS: https://www.afcmrs.org
- Ibid
- Juran, J. M. (1995). A History of Managing for Quality: The Evolution, Trends, and Future Directions of Managing for Quality. ASQC Quality Press.
- Qiupeng, Meidong, C., & Whenzhao, L. (1995). Ancient China's History of Managing for Quality. In J. M. Juran, A History of Managing for Quality: The Evolution, Trends and Future Directions of Managing for Quality. ASQC Quality Press.
- Karafantis, L., & Volmar. (2017). Idiot-Proofing the Air Force: Human Factors Engineering and the Crisis of Systems Maintenance. Presented at "Maintainers II: Labor, Technology, and Social Orders," Stevens Institute of Technology, Hoboken, NJ.
- Ibid
- Hoctor, C. J. (2013). Voices from an Old Warrior. Galleon’s Lap
- Crippen, M. (1986). Aircraft Maintenance Quality Program: Time for a Change. Air University, Air War College.
- Hoctor, C. J. (2013). Voices from an Old Warrior. Galleon’s Lap
- Daschbach, T. (1977). The Aircraft Maintenance Standardization and Evaluation Program–A Viable Program? Air Command and Staff College, Air University.
- Department for Communities and Local Government. (2009) Multi-criteria analysis: a manual
- Mitchell, B. (1976). Quality Control: A Unit Level Analysis of the Maintenance Standardization and Evaluation Program. Air Command and Staff College.
- Crippen, M. (1986). Aircraft Maintenance Quality Program: Time for a Change. Air University, Air War College.
- Daschbach, T. (1977). The Aircraft Maintenance Standardization and Evaluation Program–A Viable Program? Air Command and Staff College, Air University.
- Crippen, M. (1986). Aircraft Maintenance Quality Program: Time for a Change. Air University, Air War College.
- Daschbach, T. (1977). The Aircraft Maintenance Standardization and Evaluation Program–A Viable Program? Air Command and Staff College, Air University.
- Ibid
- Daschbach, T. (1977). The Aircraft Maintenance Standardization and Evaluation Program–A Viable Program? Air Command and Staff College, Air University.
- Crippen, M. (1986). Aircraft Maintenance Quality Program: Time for a Change. Air University, Air War College.
- Mitchell, B. (1976). Quality Control: A Unit Level Analysis of the Maintenance Standardization and Evaluation Program. Air Command and Staff College
- Crippen, M. (1986). Aircraft Maintenance Quality Program: Time for a Change. Air University, Air War College.
- Marx, D. (2009). Whack-a-Mole: The Price We Pay for Expecting Perfection. By Your Side Studios.
- Department of Defense. (2018). U.S. National Defense Strategy.
- Arter, D. (2003). Quality Audits for Improved Performance. ASQ Quality Press.
- Gross, N., Giacquinta, J. B., & Bernstein, M. (1971). Implementing Organizational Innovations: A Sociological Analysis of Planned Educational Change. New York: Basic Books.
- Drury, C. G., & Dempsey. (2012). Human Factors and Ergonomics Audits. In G. Salvendy, Handbook of Human Factors and Ergonomics. Hoboken, New Jersey: John Wiley & Sons.
- Ibid
- U.S. Air Force. (2015). AIR FORCE INSTRUCTION 21-101 Aircraft and Equipment Maintenance Management. U.S. Air Force.
- Hsiao, -L., Drury, C., Wu, C., & Paquet, V. (2013). Predictive models of safety based on audit findings: Part 2: Measurement of model validity. Applied Ergonomics.
- Mangel, M., & Samaniego, J. (1984). Abraham Wald’s Work on Aircraft Survivability. Journal of the American Statistical Association, 79(386), 259-267.
- Hoctor, C. J. (2013). Voices from an Old Warrior. Galleon’s Lap
- Petzinger, G. (2015). Why are these 32 symbols found in ancient caves all over Europe? Retrieved from TED Ideas worth spreading: https://www.ted.com/talks/genevieve_von_petzinger_why_are_these_32_symbols_found_in_ancient_caves_all_over_europe?language=en#t-8298
- Boroditsky, L. (2017). How language shapes the way we think. Retrieved from TED Ideas worth spreading: https://www.ted.com/talks/lera_boroditsky_how_language_shapes_the_way_we_think?language=en
- Air Force Safety Center. (2016). Aircraft Maintenance Mishaps FY09-Present. HQ AFSC.
- Johnson, B., & Avers, K. (2014). The Operator's Manual for Human Factors in Maintenance and Ground Operations. FAA.
- Drury, C. G., Drury Barnes, & Bryant, M. R. (2017). Failure to Follow Written Procedures. Federal Aviation Administration.
- ICAO. (1989). Human Factors Digest No. 1: Fundamental Human Factors Concepts, Circular 216-AN/131. Canada.
- CAA. (2002). CAP 718: Human Factors in Aircraft Maintenance and Inspection. Civil Aviation Authority.
- Air Force Safety Center. (2016). Aircraft Maintenance Mishaps FY09-Present. HQ AFSC.
- Johnson, B., & Avers, K. (2014). The Operator's Manual for Human Factors in Maintenance and Ground Operations. FAA.
- Taylor. (n.d.). Charles E. Taylor: The Man Aviation History Almost Forgot. Retrieved from FAA: https://www.faa.gov/about/office_org/field_offices/fsdo/phl/local_more/media/CT%20Hist.pdf
- (2018). Human Factors. In AMT Handbook.
- (2013). Safety Management Manual (SMM). International Civil Aviation Organization (ICAO).
- Van Dam, R. (2007). The Just Culture Initiative (ICAO / McGill Conference Presentation).
- Everhart, C. (2015, November 9). Proactive Aviation Safety Programs and Just Culture (Memorandum).
- Reason, J. (1998). Achieving a safe culture: theory and practice. Work and Stress, 12(3), 293-306.
- Mitchell, W. (1925). Winged Defense: The Development and Possibilities of Modern Air Power. Dover Publications, Inc.
- (1968). Zero Defects: The Quest for Quality.
- Knerr, H. The Vital Era: 1887-1950, In which America nurtured leaders and tempered arms. Unpublished manuscript. The Papers of Maj Gen Hugh J. Knerr, Clark Special Collections Branch, McDermott Library, U.S. Air Force Academy.
- Mitchell, W. (1925). Winged Defense: The Development and Possibilities of Modern Air Power. Dover Publications, Inc.
- Stolzer, Halford, & Goglia. (2008). Safety Management Systems in Aviation. Ashgate Publishing Co.
- Flight Standards Information Management System. Retrieved from FAA: http://fsims.faa.gov/PICResults. https://bit.ly/2O0DZ4a
- SAE International. (2016) Aerospace Standard
- International Business Aviation Council (IBAC). (2014) IS-BAO Internal Audit Manual
- Arter, D. (2003). Quality Audits for Improved Performance. ASQ Quality Press.
- Aviation Technician Education Council. (2018) Pipeline Report
- ASQ Quality Audit Division. (2005) The ASQ Auditing Handbook: Principles, Implementation, and Use. ASQ Quality Press