AI and the ESA: Opening a Conversation
by Thomas Collins and Susan Leddick
“The world now stands on the cusp of a technological revolution in artificial intelligence and robotics that may prove as transformative for economic growth and human potential as were electrification, mass production, and electronic telecommunications in their eras.”
MIT Work of the Future Task Force (Autor, 2019)
“Executives need to put on their ‘paranoia hat’ and envision where AI has the potential to disrupt their business or even their entire industry. Now is the time to have this discussion. In three to five years it may be too late.”
Paul Sallomi, Deloitte (Sallomi, 2019)
“Competitive strategy aims to establish a profitable and sustainable position against the forces that determine industry competition.”
Michael E. Porter, Harvard Business School (Porter, What is Strategy?, 2011)
Why Open a Conversation on the Strategic Implications of Artificial Intelligence for ESAs?
Artificial intelligence (AI) has arrived, is accelerating, and is not going away. Its strategic implications for ESAs are profound and far-reaching. AI and its variants require ESAs to re-evaluate the competitive forces they face, the competitive strategies they employ, and the ways they develop and deliver services. Here is a sampling of the issues that will be critically important to ESA leaders in both the short and long term.
- AI will hyper-personalize education in districts and schools, altering traditional roles, structures, and processes. Those districts and schools will look to their local ESA for guidance.
- New competition for ESAs will emerge in the form of cloud-based commercial smart software, much of it well financed and tightly integrated across multiple functions, that will seek to come between the agencies and the districts and schools they serve. New competition for schools will also emerge from a growing set of alternatives based on hyper-personalized learning. Homeschooling and online education providers are but the tip of the iceberg.
- ESA service offerings and pricing structures will evolve to meet new customer realities and expectations. That is, both the work of the ESA and the jobs of the people who do it will evolve rapidly and dramatically.
- The jobs of teaching and of professional development will increasingly be mediated by personalized information about learners and their context, and by optimized instructional strategies driven by smart programs with research-based solutions embedded in their code.
- ESA and client workforces will change: new roles will be created, many roles will be modified, and some roles will be eliminated. ESAs will hire for new skill sets such as flexibility, and repeated staff retraining will be essential to survival.
- ESAs’ geographic proximity to the districts and schools in their regions has traditionally been a competitive advantage over private competitors. Proximity will decrease in importance for some services that can be delivered remotely. As geography matters less, interagency cooperation and corporate partnerships will upend regionalization and historic patterns of allegiance.
- Beyond its effects on outward-facing ESA work and ESA workers’ jobs, AI will also offer ESAs opportunities to strengthen internal operations and processes.
These are but a few of the strategic issues AI poses for ESA leaders. If we believe MIT’s assertion quoted above, and add the fact that MIT has committed $1 billion to a new college of computing centered on AI, ESA leaders have no time to lose in preparing for the sweeping changes that AI is bringing, and will continue to bring, to every facet of individual and collective experience.
This sobering realization has led us to start the conversation about AI in ESAs now, before more time is lost. We will publish two articles to encourage the dialogue: the first defines AI and some of its implications for ESAs in practical terms, and the second dives more deeply into potential AI applications and strategic leadership considerations. Any attempt to address definition and application will fall short in comprehensiveness and foresight; instead, we intend to present enough information to frame important questions. We will write in first person and address our remarks to you, just as we would in a conversation. Finally, we should explain that although AI’s implications for ESAs are inseparable in the long run from its implications for education systems as a whole, our focus in these articles will be on the ESA.
AI: All Around Right Now
Just today, you have encountered AI: Google, Alexa, Siri, Facebook content based on your browsing history, Amazon product recommendations based on your purchasing history, today’s weather forecast, the GPS in your car, and much, much more.
You may be less familiar with some of the following examples:
- Facial recognition software, often used by law enforcement authorities, is now being coupled with emotion-detecting algorithms and used in classrooms to help determine levels of student engagement in the learning process. How might this affect the work ESA consultants do with classroom teachers? What are the potential administrative implications, e.g., for teacher evaluation and union contracts?
- Robotic exoskeletons powered by AI algorithms to mimic human movement are being used with patients recovering from spinal cord injuries and strokes, and are being introduced in the workplace to augment human capacity, boost productivity, and reduce bodily stress and injury. Similar products are being developed for children. The progress being made in robotics is astounding. How might this change the work or service offerings of ESA physical and occupational therapists?
- Commercial-grade AI-enhanced robots are performing maintenance tasks like cleaning floors, cutting grass, and washing dishes. In Japan, robots are being used to fill staffing shortages in nursing homes. Can AI help ESAs meet staffing shortages, create efficiencies, save money, or allow for repurposing of current staff?
- “Listening” to a child speak, AI can detect depression and anxiety, which can be difficult for humans to detect, especially in children too young to articulate these conditions themselves. Similarly, potential speech and language developmental disorders can be diagnosed in a few brief minutes of “listening” by AI-enhanced tools, helping to speed screenings and propose remediation as well as to identify potential disorders in children who might otherwise go undiagnosed (Hardesty, 2016). “Trained listening” is a specialty of nearly all ESA personnel. Will AI-enhanced “listening” augment or replace some of the work of ESAs?
- AI can transform financial and other forms of data into “plain English stories” so they can be more easily understood by a wider range of people. A related product can transform data into “natural language” reports. How might an ESA use such tools to tell its story to clients, or to assist clients in telling their stories to taxpayers, legislators, and others?
- The movement toward personalized learning at scale will accelerate, with AI acting as the key accelerator. “Learning at scale” involves large-scale, technology-mediated learning environments with many learners and few experts to guide them. Large-scale learning environments include massive open online courses, intelligent tutoring systems, open learning courseware, learning games, and more (Learning @ Scale, 2019). The implications for traditional school structures and processes are significant. How does an ESA support client teachers and administrators in this emerging environment?
AI is not a distant reality. It is a present one that has crept quietly into how we go about our daily lives and how work is done. It is a reality with enormous implications for the future of ESAs.
Toward a Definition
Artificial intelligence is a term with many meanings. Settling on precise definitions can be a challenge but, at a minimum, we’ll want to distinguish AI and its variants from automation and other more generalized technological advancements.
Artificial intelligence deals with technologies, systems or processes that competently mimic how human beings react to new information, speak, hear, understand language, and make predictions—all of which are critical components of human intelligence and decision-making (Rouse). And just as human intelligence emerges from human learning, artificial intelligence emerges from machine learning. It is useful, therefore, to define some of the key processes and terms that are associated with how artificial intelligence gets to be “smart.”
Machine learning enables systems and processes to learn from data, identify patterns, and recommend decisions without human involvement. Examples include image recognition, speech recognition, medical diagnosis, prediction, association, and extracting information from unstructured data such as student achievement files. Machine learning can find patterns within large data sets that outstrip the capacity of human minds to discern.

Deep learning is a subset of machine learning that occurs when artificial neural networks, algorithms loosely modeled on the neural structure of the human brain, “learn” from data. Much as human beings learn from day-to-day events over time, a deep learning algorithm executes functions repeatedly, continuously learning and adjusting itself to improve accuracy. These algorithms are called “deep” because the neural networks have many layers, which enable them to learn complex patterns in large amounts of data. Examples include news aggregation based on user sentiment analysis, robots that learn just by observing a human completing a task, classifying objects in photographs, automatic game playing, autonomous vehicles, and improving customer experience on online self-service platforms.
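To make the idea concrete, here is a minimal sketch of machine learning in miniature. The student data, the feature names, and the model are all invented for illustration; the point is simply that nobody writes the rule down. The program infers a pattern from labeled examples and then applies it to a student it has never seen.

```python
# A minimal machine-learning sketch (illustrative only; the data are invented).
# No one writes the rule "low attendance plus low benchmark score means risk";
# the model infers that pattern from labeled examples.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [attendance_rate, benchmark_score]
X_train = [
    [0.95, 82], [0.88, 74], [0.97, 90], [0.72, 58],
    [0.65, 49], [0.80, 61], [0.99, 95], [0.70, 52],
]
# 1 = the student later needed a reading intervention, 0 = did not
y_train = [0, 0, 0, 1, 1, 1, 0, 1]

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)              # the "learning" step

# Apply the learned pattern to a student the model has never seen
new_student = [[0.78, 60]]
risk = model.predict_proba(new_student)[0][1]
print(f"Estimated probability of needing intervention: {risk:.2f}")
```

A deep learning version would swap the simple model above for a many-layered neural network, but the logic of learning from examples rather than from hand-written rules is the same.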
Automation is the creation and application of technology to control and monitor the production and delivery of various goods and services (Techopedia – Definitions). It performs tasks that were previously performed by humans and is used in areas such as manufacturing, transport, utilities, defense, facilities, operations, and information technology.

An intermediate unit (IU) in Pennsylvania, for instance, automated its contracting process. Previously fraught with errors, end runs for last-minute approvals, and general aggravation to the staff who used it regularly, contracting yielded to automation and digitized documents. Five contract types were identified, each with unique attachments and approval sequences. Automation made it easy for users to identify which type they were about to process, and the system automatically prompted for the proper attachments. Once complete, the document was routed, again automatically, through the approval chain. Similarly, the business office of a Texas ESC automated its financial reporting with a routine that produced daily financial reports for program managers. As they logged onto their computers each morning, they could see district account status for their programs, current through the previous day, along with a forecast of the current day.

Automation is not considered AI because it follows manually configured, pre-programmed rules, typically to let machines perform repetitive, monotonous tasks and so produce goods and services more efficiently and cost-effectively. AI, by contrast, gives computers the ability to “learn” and “understand.” Increasingly, AI applications are overlaying automated processes, blurring the lines between the two.
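For contrast, here is an equally minimal sketch of rules-based automation in the spirit of the contract-routing example. The contract types, attachments, and approval chain are hypothetical, not the IU’s actual system. Every rule is spelled out in advance, and nothing in the program learns or adapts; changing the process means changing the code.

```python
# A minimal rules-based automation sketch (hypothetical contract types,
# attachments, and approvers). Every rule is pre-programmed; nothing is learned.

REQUIRED_ATTACHMENTS = {
    "personnel services": ["scope of work", "rate sheet"],
    "equipment purchase": ["vendor quote", "board approval form"],
    "facility rental":    ["insurance certificate"],
}

APPROVAL_CHAIN = ["program manager", "business office", "executive director"]

def route_contract(contract_type, attachments):
    """Check attachments against fixed rules, then return the fixed routing."""
    required = REQUIRED_ATTACHMENTS.get(contract_type)
    if required is None:
        return f"Unknown contract type: {contract_type}"
    missing = [item for item in required if item not in attachments]
    if missing:
        return "Cannot route; missing attachments: " + ", ".join(missing)
    return " -> ".join(APPROVAL_CHAIN)

print(route_contract("equipment purchase", ["vendor quote"]))
print(route_contract("equipment purchase", ["vendor quote", "board approval form"]))
```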
Augmented reality (AR) transforms volumes of data and analytics into images or animations that are overlaid on the real world (Porter & Heppelmann, 2019); Star Walk and head-up windshield navigation displays are examples.
Virtual reality (VR) is a complementary but distinct technology that replaces reality with a computer-generated environment (Porter & Heppelmann, 2019); a VR field trip and an HVAC training simulation are examples.
New applications for AI, machine learning, deep learning, automation, augmented reality and related technologies are being developed daily. Initial development of many of these technologies is in the commercial sector where the early money is. As with most technologies, they eventually find their way into the educational sector.
Seven Patterns of AI
There is a variety of AI “types.” Cognilytica offers seven “patterns” that can be found singly or in combination in most AI applications (The Seven Patterns of AI, 2019). Implications for ESAs and schools are evident.
- Hyperpersonalization – using machine learning to develop a unique profile of each individual and having that profile learn and adapt over time for a wide variety of purposes, including displaying relevant content, recommending relevant products, and providing personalized guidance, information, advice, and feedback. Siri, Amazon, and many online learning platforms are examples.
- Autonomous Systems – systems that are able to accomplish a task, achieve a goal, or interact with their surroundings with minimal to no human involvement. This pattern applies to physical, hardware-based autonomous systems as well as to software or virtual autonomous “bots.” The primary objective of this pattern is to minimize human labor. Classroom robots and GPS-enabled floor scrubbers and mowers are examples.
- Predictive Analytics and Decision Support – using machine learning and other cognitive approaches to understand how learned patterns can help predict future outcomes or help humans make decisions about future outcomes using insight learned from behavior, interactions, and data. The objective of this pattern is helping humans make better decisions. SLP diagnosis, student progress monitoring, and self-harm alerts are examples.
- Conversational / Human Interaction – machines interacting with humans through natural conversation and interaction, including voice, text, images, and written forms. Also within this pattern is the creation of content meant for human consumption, such as generated text, images, video, and audio. The primary objective of this pattern is to enable machines to interact with humans, and to mediate communication between humans, the way people interact with each other. Alexa, customer service bots, and voice-enabled GPS are examples.
- Pattern and Anomaly Detection – using machine learning and other cognitive approaches to identify patterns in data and learn higher-order connections that reveal whether a given piece of data fits an existing pattern or is an outlier. The primary objective of this pattern is to determine which items are like the others and which are not (a brief sketch follows this list). Example use cases include fraud detection and risk analysis, discovering patterns among data and surfacing insights, automatic error detection or correction, intelligent monitoring, finding hidden groups of data, finding best matches to given data, predictive text, and similar applications.
- Recognition – using machine learning and other cognitive approaches to identify and determine objects or other desired things to be identified within some form of unstructured content. This content could be images, video, audio, text, or other primarily unstructured data that needs to have some aspect within it identified, recognized, segmented, or otherwise separated out into something that can be labeled and tagged. The primary objective of this pattern is to have machines identify and understand things within otherwise unstructured content. Example use cases include image and object recognition including facial recognition, sound and audio recognition, item detection, handwriting and text recognition, gesture detection, and identifying what is happening within an object or field of interest.
- Goal-Driven Systems – using machine learning and other cognitive approaches to give AI agents the ability to learn through trial and error. The primary objective of this pattern is to find the optimal solution to a problem. Examples of this pattern include scenario simulation, game playing, resource optimization, iterative problem solving, bidding, and real time auctions.
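To make the pattern-and-anomaly idea concrete, the sketch promised above flags unusual daily attendance counts with a simple statistical rule. The counts and the threshold are invented, and a production system would learn far richer patterns, but the underlying question, which of these things is not like the others, is the same.

```python
# A minimal anomaly-detection sketch (invented attendance counts).
# Flags days whose attendance deviates sharply from the typical pattern.
from statistics import mean, stdev

daily_attendance = [412, 418, 409, 415, 420, 302, 417, 411, 416, 254]

average = mean(daily_attendance)
spread = stdev(daily_attendance)

for day, count in enumerate(daily_attendance, start=1):
    z_score = (count - average) / spread
    if abs(z_score) > 1.0:               # threshold chosen for illustration
        print(f"Day {day}: attendance of {count} looks anomalous (z = {z_score:.1f})")
```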
The seven patterns illustrate the breadth of what is called “artificial intelligence” and help to explain why a simple definition is so difficult.
PREDICTION: A Key to Understanding AI
The authors of the book Prediction Machines: The Simple Economics of Artificial Intelligence (Agrawal, Gans, & Goldfarb, 2018) posit that, at this point, artificial intelligence doesn’t actually provide “intelligence” but instead provides a critical aspect of intelligence: prediction. “Prediction is the process of filling in missing information. Prediction takes information you have, often called data, and uses it to generate information you don’t have.” (p. 24) Prediction is essential to decision-making, and AI will drive down the “cost” of prediction: predictions will be made more quickly, more accurately, more cheaply, and at scale. Two examples illustrate the point. AI can diagnose certain types of diseases with greater speed and accuracy than trained medical professionals. AI can review contracts with greater speed and accuracy than attorneys.
And to the list of professionals affected by AI we can add teachers. What is teaching? Many things, but “prediction” is a major component. How much does a student know? Is this student engaged? Which instructional strategies will work best? What types of intervention may be necessary? If I “teach to the middle,” will my high- and low-achieving students learn enough? Is this student in danger of dropping out? Teaching, like many other aspects of education, can be viewed as a series of prediction problems (Agrawal, Gans, & Goldfarb, 2018). Given enough data from which to construct models with very large numbers of variables, AI is poised to excel at solving prediction problems. Importantly, iterative feedback cycles enable AI systems to learn from experience and improve prediction accuracy over time.
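To connect prediction with the feedback cycle just described, consider the deliberately simple sketch below. The figures are invented and the “model” is only a running estimate, a stand-in for the far richer models real systems use, but it shows the essential loop: predict from the information you have, observe the actual outcome, and refine the estimate.

```python
# A minimal sketch of prediction refined by feedback (all figures invented).
# The "model" is just a running estimate of dropout risk for students whose
# attendance is below 80%, updated each time a real outcome is observed.

dropouts, students = 3, 10                     # assumed starting evidence
estimate = dropouts / students

observed_outcomes = [1, 0, 1, 1, 0, 0, 1, 1]   # 1 = dropped out, 0 = stayed

print(f"Initial estimated risk: {estimate:.2f}")
for i, outcome in enumerate(observed_outcomes, start=1):
    dropouts += outcome
    students += 1
    estimate = dropouts / students             # feedback updates the prediction
    print(f"After {i} observed outcomes, estimated risk: {estimate:.2f}")
```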
The mere fact that prediction is such a pervasive mental activity in everyday life, not just in work life, begins to explain why AI will continue to make inroads into many daily routines. Consider, for instance, how simple the choice of your route to work seems on the surface. Then think more deeply about how much prediction goes into that choice. Will there be a traffic jam at the key intersection? Will snow and ice make a hilly route more difficult than usual? Should I leave earlier or later? Then think of how you use that little red line on your navigation system to stay out of heavy traffic. How many times have you relied on that smart system on your dashboard or on your phone to help you with just such prediction problems? We could easily cite many other examples to emphasize how important the goal of prediction is to AI and to what extent the ability to predict is a defining characteristic.
Will AI Replace ESA Jobs?
In the movies and in popular culture, it is often suggested that AI will “replace people” and that certain occupations will disappear. Will that prediction come true? A more helpful way to think about this is that AI has high potential to disrupt and redefine the nature of work rather than to replace people. There are at least two ways to think about this: functional disruption and industry disruption. That is, AI technologies will bring changes at the task and job level as well as at the broad industrywide level.
We can expect substantial functional disruption in ESAs. On the first front, recent research (Acemoglu & Restrepo, 2018) finds that specific tasks and functions within jobs, rather than entire occupations, will be replaced by automation, with some jobs more heavily affected than others. Obviously, the more a job consists of easily automated tasks, the more vulnerable that job, and the person performing it, becomes.
Which types of tasks can AI-enhanced computers and machines do better than humans?
- Repetitive/predictive tasks
- Tasks that hinge on computational power
- Classifying huge amounts of data and inputs
- Making decisions based on concrete rules
Which types of tasks can humans do better than AI-enhanced computers and machines?
- Experiencing authentic emotion and building relationships
- Formulating questions and explanations across sources
- Making products and results usable for humans and communicating about them
- Making decisions according to abstract values
- Making judgments
Many jobs will increasingly become hybrid jobs: a combination of functions or tasks that machines can do better or more cheaply than humans, coupled with functions or tasks that humans can do better. Some occupations will be affected more than others.
What might this mean for an ESA? How might typical ESA and educational jobs be affected by AI-enhanced technology? The management consulting firm McKinsey analyzed the detailed work activities of over 750 U.S. occupations to estimate the percentage of time that could be automated using existing technology and the hourly value of that time (McKinsey Global Institute, 2017). As an example, McKinsey estimates that 45% of an audiologist’s job, valued at $38/hr., could be automated. That example and the others below may not be exact ESA fits, but they are illustrative:
- Billing and Posting Clerks: 88% @ $15/hr.
- HR Managers: 12% @ $50/hr. and HR Specialists: 25% @ $25/hr.
- Interpreters and Translators: 18% @ $21/hr. (We believe that to be low!)
- Librarians: 45% @ $22/hr. and Library Assistants: 80% @ $10/hr.
- Network/Computer Systems Administrators: 65% @ $35/hr.
- Occupational Therapists: 25% @ $38/hr. and OT Aides: 45% @ $11/hr.
- Physical Therapists: 35% @ $38/hr. and PT Assistants: 38% @ $22/hr.
- Registered Nurses: 30% @ $35/hr.
- Special Education Teachers – Secondary: 12% @ $25/hr.
- Speech Language Pathologists: 42% @ $35/hr.
The implications of this list go beyond the financial. With 45% of the work week freed up by automation, what new services could the audiologist develop and deliver? It is exciting to think of the energy that can be redirected to fulfilling the ESA service mission when automation removes drudgery and professionals can actually apply their knowledge and craft.
To be sure, some reductions in force are likely within lower-skilled jobs. A few years ago one of the authors (Leddick) compared the number of clerical staff employed to support special education field staff in two large, similar-sized ESA special education divisions in different states. In one agency, 75 clerical staff completed reports, typed field notes and IEPs, managed calendars, and filed documents for the special education service providers. The second agency employed only four. Automated systems, digital document management, templates, and repeatable processes enabled the professional staff in the second agency to accomplish what an army of additional people was doing in the first. This example shows how some ESA jobs may actually be replaced as others are hybridized.
Wrapping Up and Next Steps
We have attempted to start a conversation about the implications of artificial intelligence (AI) in the world of ESAs. We have given examples of what is and what may be. We have defined terms and expanded concepts. Yet we have just scratched the surface of this important topic. We have yet to drill down to more specific ESA leadership implications and AI applications. We have not considered the practicalities of preparing ESAs for the judicious acquisition and use of the myriad offerings that will emerge on the market in the near future. Nor have we opened the discussion of corporate and interagency partnerships that will likely be required for ESAs to make the most of AI. Those are matters for the second article. For now, we officially issue an invitation to a conversation that can continue at AESA conferences, in your leadership team meetings, and with ESAs in your region and state. Let the dialogue begin!
References
Agrawal, A., Gans, J., & Goldfarb, A. (2018). Prediction Machines: The Simple Economics of Artificial Intelligence. Boston, MA: Harvard Business Review Press.
Cerulo, K. A. (2006). Never Saw It Coming: Cultural Challenges to Envisioning the Worst. Chicago and London: The University of Chicago Press.
Autor, D. (2019). The Work of the Future: Shaping Technology and Institutions. Cambridge, MA: MIT Work of the Future Task Force.
Schatsky, D., & Muraskin, C. (2015, January 27). Cognitive Technologies: The Real Opportunities for Business. Deloitte Review, Issue 16.
Hardesty, L. (2016, September 22). MIT News. Retrieved from http://news.mit.edu/2016/automated-screening-childhood-communication-disorders-0922
Porter, M. E., & Heppelmann, J. E. (2019). Why Every Organization Needs an Augmented Reality Strategy. In HBR’s 10 Must Reads: On Analytics and the New Machine Age (pp. 53-76). Boston: Harvard Business Review Press.
Learning @ Scale. (2019). Retrieved from Learning @ Scale: https://learningatscale.acm.org/las2019/
McKinsey Global Institute. (2017). Harnessing Automation for a Future That Works. McKinsey & Company.
Porter, M. E. (2011). What is Strategy? In HBR’s 10 Must Reads: On Strategy (pp. 1-38). Boston: Harvard Business Review Press.
Acemoglu, D., & Restrepo, P. (2018). Artificial Intelligence, Automation and Work (NBER Working Paper No. 24196). Cambridge, MA: National Bureau of Economic Research.
Rouse, M. (n.d.). A Machine Learning and AI Guide for Enterprises in the Cloud. Retrieved from TechTarget: https://searchenterpriseai.techtarget.com/definition/AI-Artificial-Intelligence
Sallomi, P. (2019). Artificial Intelligence Goes Mainstream. Retrieved from Deloitte: https://www2.deloitte.com/us/en/pages/technology-media-and-telecommunications/articles/artificial-intelligence-disruption.html
Techopedia – Definitions. (n.d.). Retrieved from Techopedia: https://www.techopedia.com/definition/32099/automation
The Seven Patterns of AI. (2019, April 4). Retrieved from Cognilytica: https://www.cognilytica.com/2019/04/04/the-seven-patterns-of-ai/
Virtual Reality vs Augmented Reality. (n.d.). Retrieved from Augment: https://www.augment.com/blog/virtual-reality-vs-augmented-reality/
Special thanks to Dr. Daniel Hanrahan, CEO of Cooperative Educational Service Agency 2 in Whitewater, WI, and Nick Brown, Deputy Executive Director of Region 12 ESC in Waco, TX, who served as critical readers and provided valuable review, comments, and insights.
Dr. Thomas Collins was most recently Executive Director at HCC, a legislatively created information technology center that provides IT and related support services to school districts, private schools, local governments, higher education, and others in and around Cincinnati and southwest Ohio. Prior to HCC he was a consultant with the Hamilton County ESC in Cincinnati, OH. He can be reached at (513) 967-5966 and collins2444@gmail.com.
Dr. Susan Leddick is a consultant in organization design and continuous improvement and President of Profound Knowledge Resources, Inc., a consulting and training firm. She has extensive experience working with educational service agencies, state departments of education, and other organizations. She can be reached at (406) 994-0303 and susan@pkrnet.com.