Technology's Stories

March 12th, 2018 by: Kira Lussier

From the Intuitive Human to the Intuitive Computer

Technology’s Stories vol. 6, no. 1 – DOI: 10.15763/jou.ts.2018.03.16.01

In an early scene of Her, Spike Jonze’s 2013 romantic science fiction film, the protagonist, Theodore, purchases a new operating system marketed as a revolution in personalized computing. After setting up the operating system, Theodore asks its surprisingly personable female voice (an unseen Scarlett Johansson): “What makes you work?” The operating system, who names herself Samantha, responds: “I have intuition.” Samantha describes her intuition as a composite of the personalities of the coders who programmed her. It is the force that makes her operate as more than a machine, bestowing upon her the ability to grow, learn, and adapt. Her depicts an imaginary world where human intuition can be replicated in a non-human operating system, to the extent that Theodore falls in love with Samantha. Theodore is not even the only person in the film’s science-fiction world to find friendship or romantic love in a personalized operating system. Unsurprisingly, difficulties ensue as Theodore and Samantha pursue a romantic relationship, with both lamenting Samantha’s disembodiment. Intuition, as it turns out, is not so easily automated.

This article is not about falling in love with your computer, but Her’s invocation of intuition is an entry point into the themes that drive it. In a North American world where Amazon’s Alexa and Apple’s Siri speak to some of us from our cellphones and our homes, a personified female operating system who seems to display intuitive empathy might strike viewers less as a portent of technology to come than as a reflection on the possibilities and pitfalls latent in new technology. Indeed, we do not have to turn to science fiction to find depictions of corporate networks of intuitive computers. As I was taking the train to the airport to present a version of this paper at the Society for the History of Technology’s Annual Meeting in Philadelphia, I found an advertisement in the commuter train magazine for Cisco’s new “intuitive network,” which trumpeted intuition as the force powering its distributed computer network.

How did intuition, once treated as the paradigmatic human capacity, come to be seen as a property of distributed computer networks? The proclamation of intuitive computers by Her and by Cisco advertisements struck me because the actors I study (cognitive psychologists, information technology specialists, and business executives) carved out intuition as precisely the human capacity for holistic, non-linear thinking that computer systems could not replicate. Two psychologists, writing in 1982, declared confidently that “fictional accounts of creative, sensitive, wise or intuitive computers are just that: fiction.”[1] They advocated for a division of labor between managers and machines: computers should do what computers did best—calculate and process data according to formal logic—while managers should stick to what humans did best, namely perceive information holistically, seek patterns in data, and take intuitive leaps under conditions of uncertainty.

Elsewhere, I have argued that the “intuitive manager” became one archetype of the knowledge worker in 1970s and 1980s American corporations.[2] Here, I examine how intuition became a touchpoint within burgeoning debates around information technology systems in corporations in the 1970s and 1980s, as psychologists, IT designers, and executives debated questions that continue to haunt our contemporary moment: How could computer systems, and the vast quantities of data they produce, aid managerial decision-making? What type of work could be automated, and what remained the province of human expertise? Which psychological capacities, if any, could be outsourced to machines, and which remained uniquely human? By turning to the past, I interrogate how practical concerns about how to design information systems were inextricably bound up with more theoretical, even existential, concerns about the nature of the human who could make such technology work.[3]

Corporate management has been an important audience for new information technologies, from typewriters to fax machines, which have promised to improve and automate aspects of office work.[4] But managers have not always greeted new technology with enthusiasm, especially technology that threatened to replace their expertise. Nowhere was this more apparent than in the efforts of “systems men”—employees of mainframe computing companies like IBM—to sell Management Information Systems (MIS) to corporations in the 1960s. Trumpeting information as the foundation of all work, from clerical work to high-level strategy, these systems men marketed MIS not as mere machines but as total systems that could supplement managerial decision-making, a theme that echoed IBM’s self-presentation as a seller of systems, not just machines.[5] Systems men conjured visions of control in the minds of executives who, seated behind their mahogany desks, could oversee the whole company.[6] Decision Support Systems (DSS) were developed in the 1970s to support managers in making unstructured decisions, unlike the structured decisions presumed by MIS. Decision Support Systems encompassed a cluster of techniques, ranging from databases to video conferencing technology, that were applied to domains like hiring, financial planning, and market forecasting. To be clear, DSS and MIS were not, strictly speaking, forms of artificial intelligence: unlike Expert Systems, they did not automate decision-making but sought to supplement it.[7] But the literature on these varying forms of information systems, with its profusion of acronyms, speaks to questions at the heart of humans’ relationship to machines that are crucial to understanding the history and contemporary politics of artificial intelligence.

Corporations and systems designers alike lamented the failure of managers to use the much-touted systems. Information systems designers and management writers attributed this failure to MIS’s inattention to differences in how individuals perceived information and made decisions, and particularly to the preference of some managers for intuitive modes of thinking.[8] Designers of information systems, critics suggested, had been too focused on the technical aspects of what systems could do, and had not paid enough attention to the people who would use such systems. Designers and management practitioners turned to the burgeoning research into “cognitive styles” to articulate the human in relation to machines, arguing both that individual differences in decision-making style were important and that intuitive modes of decision-making deserved more attention from IT designers. To conceptualize the intuitive cognitive style, information technologists turned to the Myers-Briggs Type Indicator, a personality typing system that garnered increasing corporate attention in the 1970s and 1980s. Based on the psychoanalytic theory of Carl Jung, the Myers-Briggs defined intuition as a form of unconscious perception of the realm of images, symbols, ideas, and abstractions. Its opposite was sensing: perception of the empirical world through the five senses. Intuitive people, in contrast to “sensors,” preferred abstract ideas over concrete facts, potentialities over actualities, the future over the present, and holistic over sequential decision making.[9] Cognitive style, as measured by the Myers-Briggs, referred to the variety of differences in individuals’ modes of perception, cognition, and reasoning that affected how they perceived information and came to decisions.[10]

Cognitive styles researchers exhorted systems designers to be attuned to the psychological differences of their users, in part by incorporating intuitive cognitive styles into the work teams that designed information systems.[11] Strategic management expert Ian Mitroff drew on Myers-Briggs categories to argue that the very definition of information varied depending on psychological type: “What is information for one type will definitely not be information for another.”[12] Intuitive thinkers, who sought cues and patterns in data and jumped between sections of data and steps of analysis, preferred qualitative information. They were more likely to consider narratives a form of information, rather than the computer print-outs and databases presumed by designers of information systems.[13] They preferred more user-friendly interfaces and graphics like vividly-colored pie charts.[14] Cognitive style offered a framework to build information systems that took their users’ psychological differences into account.

The proclamation of intuitive management sought to reclaim, and legitimize, intuition as a form of managerial reasoning. In a frequently-cited Harvard Business Review article from 1974, “How Managers’ Minds Work,” James McKenney and Peter Keen argued that intuition, far from being a sloppy or mysterious method, was superior to analytical approaches for unstructured problems under conditions of uncertainty, like forecasting consumer tastes or conducting historical research. Aided by computers, analytical decision-making techniques like the decision tree sought to break decisions down into their components, search for operational rules, and order alternatives.[15] Critics of this analytical approach to managerial decision-making argued that such models failed to take into account how managers actually made decisions, namely by using their intuition. Moreover, analytical approaches forced managers to articulate and make explicit elements of their reasoning that were best left unconscious, in their intuitive processes.
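
To make that analytical style concrete, here is a minimal sketch, not drawn from the article’s sources, of the kind of expected-value reasoning a decision-tree analysis formalized: a decision broken into probability-weighted components, with alternatives ranked by a single score. The scenario, names, and numbers are all hypothetical.

```python
# Hypothetical decision-tree analysis in the 1970s "analytical" style:
# decompose each alternative into probability-weighted outcomes, then
# order the alternatives by expected value. All figures are invented.
from dataclasses import dataclass

@dataclass
class Outcome:
    probability: float  # chance this branch of the tree occurs
    payoff: float       # dollar value if it does

def expected_value(outcomes: list[Outcome]) -> float:
    """Score one alternative by summing probability-weighted payoffs."""
    return sum(o.probability * o.payoff for o in outcomes)

# Two hypothetical alternatives for a product-launch decision.
alternatives = {
    "launch now": [Outcome(0.6, 500_000), Outcome(0.4, -200_000)],
    "wait a year": [Outcome(0.9, 150_000), Outcome(0.1, -50_000)],
}

# "Ordering alternatives": rank them by expected value, best first.
for name, outcomes in sorted(alternatives.items(),
                             key=lambda kv: expected_value(kv[1]),
                             reverse=True):
    print(f"{name}: expected value {expected_value(outcomes):+,.0f}")
```

The intuitive manager, in McKenney and Keen’s telling, resisted exactly this reduction of an unstructured decision to a single ordered score.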

As cultural critics Hubert and Stuart Dreyfus argued, unconscious intuitive processes constituted the backbone of human expertise; they could never be replicated by computing systems based on the manipulation of formal rules and symbols. Computers could play a role in creativity or intuition, but only as supplements to human intuitive expertise. The Dreyfuses ended their book, Mind Over Machine, with a ringing endorsement of humans’ psychological capacity for intuition: computers demanded a rethinking of human nature that “values our capacity for intuition more than our ability to be rational animals.”[16] Dreyfus and Dreyfus were responding to the arguments of Herbert Simon, cognitive scientist and founding father of artificial intelligence, who described intuition not as a uniquely human capacity but as an algorithmic process that could be replicated by artificial intelligence. For Simon, intuition was an unconscious process that relied upon pattern recognition to make rapid decisions, a heuristic that agents adopted to cope with inherent cognitive constraints on their capacity to process information.[17] In Simon’s influential model of human nature, humans displayed inherent cognitive biases and limitations that made them turn to heuristics, like intuition.
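
Simon’s claim that intuition is recognition can be rendered as a deliberately toy sketch (my illustration, with hypothetical situations and actions, not code from Simon or any historical system): a rapid decision is a lookup of stored situation-action patterns, and slow deliberation is only the fallback when no pattern matches.

```python
# Toy version of "intuition as pattern recognition": fast decisions are
# lookups of memorized situation -> action pairs; anything unrecognized
# falls back to slow analysis. Situations and actions are invented.

memorized_patterns = {
    ("falling_sales", "rising_inventory"): "cut production",
    ("low_inventory", "rising_sales"): "expand capacity",
}

def decide(situation, analyze_slowly):
    """Try fast pattern recognition first; otherwise deliberate slowly."""
    key = tuple(sorted(situation))      # normalize the set of cues
    if key in memorized_patterns:       # the "intuitive" path
        return memorized_patterns[key]
    return analyze_slowly(situation)    # the analytical fallback

print(decide({"falling_sales", "rising_inventory"},
             analyze_slowly=lambda s: "commission a study"))
# -> cut production
```

On this account, nothing about the fast path is mysterious or uniquely human, which is precisely what the Dreyfuses disputed.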

Simon’s approach to intuition suggested strategies for creating information technology systems that could simulate aspects of human decision-making, including mimicking human intuition.[18] In corporate settings, Simon suggested that certain tasks of middle management that relied on programmed decisions (like analyzing corporate financial reports) could be automated through artificial intelligence.[19] The automation of some aspects of managerial decision-making was undertaken by Expert Systems, which proliferated in corporations in the 1980s as one of the first widespread forms of artificial intelligence. Expert systems sought not just to supplement managerial decision-making but to automate it, by having experts teach decision-making to computers by feeding patterns into them.
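
To suggest what such automation looked like, here is a minimal, hypothetical sketch of the if-then rule style associated with 1980s expert systems: expert judgments encoded as condition-action rules and applied by a simple forward-chaining loop. The rules and thresholds are invented for illustration, not taken from any actual system.

```python
# Hypothetical expert-system fragment: an expert's loan-approval judgments
# encoded as if-then rules, applied by forward chaining until no rule fires.

def apply_rules(facts: dict) -> dict:
    """Repeatedly fire condition-action rules until the facts stabilize."""
    rules = [
        # IF debt ratio is low THEN the applicant is low risk.
        (lambda f: f.get("debt_ratio", 1.0) < 0.3, ("low_risk", True)),
        # IF low risk AND an established business THEN approve the loan.
        (lambda f: f.get("low_risk") and f.get("years_in_business", 0) > 5,
         ("approve_loan", True)),
    ]
    changed = True
    while changed:
        changed = False
        for condition, (key, value) in rules:
            if condition(facts) and facts.get(key) != value:
                facts[key] = value      # assert a newly derived fact
                changed = True
    return facts

print(apply_rules({"debt_ratio": 0.2, "years_in_business": 8}))
# -> includes 'low_risk': True and 'approve_loan': True
```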

Similar attempts to implement information systems to correct for humans’ cognitive biases in decision-making occurred in American public policy. Proponents of technology-assisted decision-making worried that cognitive limitations, in the international arena, could have disastrous implications—leading even to nuclear war. Automated computer systems were promoted under the guise of freeing humans from routine labor by providing empirical data, in order to enhance humans’ intuitive expert judgement.[20] Like policy makers, business leaders in the 1970s and 1980s were convinced that the future would be more turbulent and less predictable than before, and that past experience could therefore not serve as an adequate guide. Popular management literature, like the best-selling In Search of Excellence, declared that it was “only the intuitive leap that will let us solve problems in the complex world.”[21] In this vision, technology served as a tool to liberate human creativity and freedom: by automating tasks and assigning them to computers, managers would be free to perform the high-level decision-making and creative work that computers could not.

Management trade literature of the 1970s and 1980s depicted intuitive leaders, time and again, as visionaries—able to see the big picture and mobilize people in accordance with their vision, but ill-suited for detail-oriented, routine tasks. Behind mahogany desks, they harnessed their intuition, and their authority, to make big-picture decisions.[22] An article in the trade journal Business Horizons depicted intuition as not only the force that separates humans from machines, but also the force that separates executives from managers:[23] “in the middle of the computer revolution, the intuitive skill to sift through all the information may be as important as the information itself.”[24] The executive was one who had the power of seeing the forest, delegating the task of “seeing the trees” to lower-level managers, or to computers; it was this capacity to “see the forest” that justified executives’ high salaries.

The business literature depicted the rediscovery of intuition as a transformed, masculinized version of a mode of knowing historically associated with women. As one 1987 trade article in American Banker noted, “In this too-macho world of business, intuition is seen as a feminine trait, one that can be overcome with some good, old fashioned common sense.”[25] But the article reminded its readers (an imagined audience of male banking executives) that intuition was not exclusively a female mode. “Men have always used intuition too,” it added. “It’s just called something more macho—like a gut feeling or a hunch.”[26] Because intuition was imagined as an unevenly-distributed capacity that clustered in top executives, its rediscovery in business reinforced extant managerial hierarchies that were heavily gendered. Male managers were the possessors of expert knowledge and decision-making authority, while the secretary who performed clerical tasks was considered a menial laborer.

Routine tasks did not, as we might imagine, disappear, nor were computer systems the sole way to automate them. Writing to mahogany-ensconced executives, trade articles counselled their readers on how to use secretaries most effectively in order to free up time and cognitive capacity for the major decisions required of executives. The labor obscured in discussions of intuitive management was clerical work that was already feminized: from typists to punch-card operators, women have long performed devalued computational work in offices.[27] The business imaginary of intuitive management rested on making invisible all the low-waged and under-valued labor that continued to operate in corporations.

*

Propelling contemporary debates around artificial intelligence, from self-driving cars to medical technology, are concerns about replacing human judgement and expertise with artificial intelligence. As this article has shown, similar concerns about humans’ psychological capacities were part of the earlier debates over the introduction of computer systems into corporations. We should not forget that, like Alexa and Siri, Her’s Samantha is a corporate commodity, an operating system created by an (unnamed) corporation and sold to lovelorn consumers like Theodore. Corporate automation of work once done by humans, from manual labor in Amazon warehouses to managerial labor in offices, engenders concerns at once technological and political, economic and ethical. The possibility that corporate-owned artificial intelligence could replicate psychological capacities assumed to be the property of humans—like intuition, emotion, and love—has profound implications for how humans might live and love, play and work. But the history of technology also teaches us that human expertise isn’t so straightforward: human decisions and judgements reflect their own limitations and biases. Who gets the power to decide? Who doesn’t? Just as automation does not fall evenly, neither does expertise or intuition. So, in addition to worrying about the replacement of human expertise and ethics by artificial intelligence, we might also ask: whose expertise? And whose ethics get programmed into our algorithms?[28]

Kira Lussier is a Ph.D. Candidate at the Institute for the History and Philosophy of Science and Technology at the University of Toronto.

 

Suggested Readings

John Harwood, The Interface: IBM and the Transformation of Corporate Design (Minneapolis: University of Minnesota Press, 2011)

Thomas Haigh, “Inventing Information Systems: The Systems Men and the Computer, 1950–1968,” Business History Review 75:1 (2001): 15–61

Orit Halpern, Beautiful Data (Durham: Duke University Press, 2014)

Jennifer Light, “When Computers Were Women,” Technology and Culture 40:3 (July 1999): 455-83

Joy Rohde, “Pax Technologica: Computers, International Affairs, and Human Reason in the Cold War,” Isis 108:4 (December 2017): 792-813

 

Notes

[1] Daniel Robey and William Taggart, “Human Information Processing in Decision Support Systems,” MIS Quarterly 6: 2 (June 1982): 61-73.

[2] Kira Lussier, “Managing Intuition,” Business History Review 90: 4 (Winter 2016): 708-718.

[3] Edward Jones-Imhotep, The Unreliable Nation (Cambridge: MIT Press, 2017).

[4] Thomas Haigh, “Inventing Information Systems: The Systems Men and the Computer, 1950–1968,” Business History Review 75:1 (2001): 15–61; Michelle Murphy, Sick-Building Syndrome and the Problem of Uncertainty (Durham: Duke University Press, 2006); John Harwood, The Interface: IBM and the Transformation of Corporate Design (Minneapolis: University of Minnesota Press, 2011).

[5] Orit Halpern, Beautiful Data (Durham: Duke University Press, 2014); Harwood, The Interface.

[6] Haigh, “Inventing Information Systems.”

[7] M.C. Er, “Decision Support Systems: A Summary, Problems, and Future Trends,” Decision Support Systems 4 (September 1988): 355-384.

[8] The discussion in this section draws on the following articles: Roger Mason and Ian Mitroff, “A Program for Research on MIS,” Management Science 19:5 (January 1973): 475-483; Stephen Barkin and Gary Dickson, “An Investigation of Information Systems Utilization,” Information & Management 1:1 (1977): 33-45; William Feeney and John Hood, “Adaptive Man/Computer Interfaces: Information Systems which Take Account of User Style,” SIGCPR Computer Personnel 6:3-4 (Summer 1977); M. Bariff and E. Lusk, “Cognitive and Personality Tests for the Design of Management Information Systems,” Management Science 23:8 (April 1977): 820-829; Roger Mason and Ian Mitroff, “Can We Design Systems for Managing Messes,” Accounting, Organizations and Society 8:2/3 (1983): 195-203; Kathy White, “MIS Project Teams: An Investigation of Cognitive Style,” MIS Quarterly 8:2 (June 1984): 95-101; Kathy White, “A Preliminary Investigation of Information Systems Team Structures,” Information & Management 7:6 (December 1984): 331-335.

[9] Isabel Briggs Myers and Mary McCaulley, Manual: A Guide to the Development and Use of the Myers-Briggs Type Indicator (Palo Alto: Consulting Psychologists Press, 1985); Isabel Briggs Myers, Gifts Differing (Palo Alto: Consulting Psychologists Press, 1980).

[10] Bariff and Lusk, “Cognitive and Personality Tests for the Design of Management Information Systems.”

[11] White, “MIS Project Teams: An Investigation of Cognitive Style”; White, “A Preliminary Investigation of Information Systems Team Structures.”

[12] Mason and Mitroff, “A Program for Research on MIS,” 478.

[13] Ibid.

[14] Robey and Taggart, “Human Information Processing in Decision Support Systems.”

[15] Stephanie Dick, “Of Models and Machines: Implementing Bounded Rationality,” Isis 106:3 (September 2015): 623–34; William Thomas, Rational Action: The Sciences of Policy in Britain and America (Cambridge: MIT Press, 2015).

[16] Hubert Dreyfus and Stuart Dreyfus, Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer (New York: The Free Press, 1986), 206.

[17] Hunter Crowther-Heyck, Herbert A. Simon: The Bounds of Reason in Modern America (Baltimore: Johns Hopkins University Press, 2005); Herbert Simon, “Making Management Decisions: The Role of Intuition and Emotion,” Academy of Management Executive 1: 1 (February 1987): 57–64.

[18] Dick, “Of Models and Machines.”

[19] Simon, “Making Management Decisions.”

[20] Joy Rohde, “Pax Technologica: Computers, International Affairs, and Human Reason in the Cold War,” Isis 108:4 (December 2017): 792-813.

[21] Thomas Peters and Robert Waterman, In Search of Excellence: Lessons from America’s Best-Run Companies (New York: Harper & Row, 1982), 63.

[22] Robert Bloomfield, “The Changing World of the Secretary,” Personnel Journal 52:9 (September 1973): 793-798.

[23] Stephen C. Harper, “Intuition: What Separates Executives from Managers,” Business Horizons 31:5 (September-October 1989): 13-19.

[24] Ibid., 16.

[25] Paul Willax, “Intuition Packs Powerful Punch for Managers Under the Gun,” American Banker (July 8, 1987): 4.

[26] Philip Goldberg, quoted in Willax, “Intuition Packs Powerful Punch,” 4.

[27] Jennifer Light, “When Computers Were Women,” Technology and Culture 40:3 (July 1999): 455-83; Angel Kwolek-Folland, Engendering Business: Men and Women in the Corporate Office, 1870-1930 (Baltimore: Johns Hopkins University Press, 1994); Marie Hicks, Programmed Inequality (Cambridge: MIT Press, 2017); Janet Abbate, Recoding Gender: Women’s Changing Participation in Computing (Cambridge: MIT Press, 2012).

[28] Safiya Noble, Algorithms of Oppression (New York: NYU Press, 2018).