
Disruptive AI: Hype or Reality? Where are we on the Artificial Intelligence Timeline?

Written by Markus Welsch on 19/01/18

The Hype

Artificial Intelligence (AI) and disruptive AI trends are generating an immense amount of hype and are even driving market dynamics, as illustrated by some sample facts and observations:

  • Almost everybody talks—or writes 😊—about it.
  • For years, leading analysts and major consulting firms (e.g., Accenture, Deloitte, Gartner, McKinsey, PwC) have consistently ranked AI and related subjects (e.g., intelligent apps, intelligent things) among the top strategic trends.
  • Venture capitalists are funding AI start-ups at a rapid pace.
  • Digital giants such as Microsoft, Google, Facebook, Apple, Amazon, Oracle and IBM are competing in the race to lead the market—they are acquiring the most innovative and promising AI businesses [1] and investing enormous amounts in AI research and AI-driven solutions.
  • The media regularly draw our attention to AI success stories, breakthroughs and innovations [2][3].
  • “AI Inside” is currently the top marketing claim and sales pitch touted by almost every solution provider (I can’t prove this statement by figures—it reflects my personal perception 😊).

AI is often associated with some sort of science fiction—robots that behave like humans, become self-aware, reject human authority and attempt to destroy mankind. A good example of this is the movie 2001: A Space Odyssey, in which the artificially intelligent on-board computer HAL 9000 malfunctions on a space mission and kills the entire crew except the spaceship's commander, who manages to deactivate it. Another good example is the movie I, Robot, an apocalyptic story of robots that are fully integrated into human society and become a danger to mankind.

This sci-fi perception of AI might stem from Artificial General Intelligence (also called General AI, Strong AI or Full AI), where the intelligence of a machine would:

  • successfully perform any intellectual task that a human could perform and
  • learn dynamically, much as humans do.

General AI still belongs to the world of science fiction—it may happen in decades, centuries or never. AI experts disagree—and we simply don’t know.


The Reality: AI in a Nutshell

AI is not a new concept. It was founded as an academic discipline in 1956 and has since experienced waves of optimism, followed by disappointment and the interim loss of funding, followed by new approaches, success and renewed funding.

[Image: Artificial Intelligence timeline]

The overall research goal of AI is to create technology that allows computers and machines to function in an intelligent manner. This general problem of simulating (or creating) intelligence has been broken down into sub-problems, covering capabilities such as reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects:

  • Reasoning enables drawing inferences appropriate to the situation.
  • Problem Solving is the ability to perform a systematic search through a range of possible actions to reach a predefined goal or solution.
  • Knowledge Representation is dedicated to representing information about the world in a form a computer system can use to solve complex tasks.
  • Planning concerns the realization of strategies or action sequences, typically for execution by intelligent agents, autonomous robots and unmanned vehicles.
  • Learning (Machine Learning) gives computers the ability to learn without being explicitly programmed (often requiring large amounts of training data).
  • Natural Language Processing enables interaction between computers and human (natural) languages and is concerned with programming computers to effectively process large amounts of natural language data.
  • Perception is the ability to use input from sensors (such as cameras and microphones) to deduce aspects of the world (e.g., speech recognition, facial recognition, object recognition).
  • Motion and Manipulation (Robotics) covers the intelligence required for robots to handle tasks such as object manipulation and navigation, with sub-problems such as localization, mapping and motion planning.
  • Social Intelligence (Affective Computing) enables machines to recognize, interpret, process and simulate human affects (e.g., simulating empathy: the machine interprets the emotional state of humans and adapts its behavior, giving an appropriate response to those emotions).

General Intelligence—as stated earlier—is among the field's long-term goals.
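The Problem Solving capability listed above—a systematic search through possible actions toward a goal—can be sketched in a few lines of Python. The room graph and goal below are invented purely for illustration; real planners search vastly larger state spaces with more sophisticated strategies:

```python
from collections import deque

def solve(start, goal, actions):
    """Breadth-first search: systematically explore actions until the goal is reached."""
    frontier = deque([[start]])          # paths waiting to be extended
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path                  # first complete path found is the shortest
        for nxt in actions.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                          # goal unreachable

# Toy state space: rooms a robot can move between.
rooms = {"hall": ["kitchen", "office"], "kitchen": ["pantry"],
         "office": ["lab"], "lab": ["pantry"]}
print(solve("hall", "pantry", rooms))    # → ['hall', 'kitchen', 'pantry']
```

The same exhaustive-search skeleton underlies route planning and many classic game-playing programs; what differs in practice is the size of the state space and the heuristics used to prune it.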

Old Wine in New Bottles? Why Now? What’s Changed?

Today’s focus is on narrow AI (also called weak AI or applied AI), which, in contrast to General AI, does not attempt to perform the full range of human cognitive abilities. Narrow AI usually consists of highly scoped machine-learning solutions that target a specific task (e.g., sentiment analysis, virtual customer assistant, machine translation, etc.). The models and algorithms chosen are optimized for that task. All real-world examples of AI in use or under development are examples of narrow AI, which hereafter in this article will be called AI.
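To make “highly scoped” concrete, here is a deliberately naive Python sketch of the sentiment-analysis task mentioned above. The word lists are hand-written and invented for illustration; production systems learn such weights from large amounts of labelled data:

```python
# A tiny hand-written lexicon -- real systems learn these weights from data.
POSITIVE = {"great", "love", "excellent", "happy", "good"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "poor"}

def sentiment(text):
    """Classify a text as 'positive', 'negative' or 'neutral' by counting lexicon hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))    # → positive
print(sentiment("terrible support, awful app"))  # → negative
```

Note how narrow this is: the function knows nothing about language in general, only about one task over one fixed vocabulary—which is exactly the point of the narrow-AI distinction.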

AI applied to routine tasks tends to result in automation, whereas AI applied to semi-routine and non-routine cognitive tasks tends to make existing workers more effective by highlighting interesting connections in data and providing recommendations.

[Image: AI key enablers]
Increased theoretical understanding, advanced statistical techniques (e.g., deep learning), access to large amounts of data (i.e., Big Data) and faster computers have enabled significant advances in AI, including (machine) learning, perception (e.g., speech recognition, facial recognition, object recognition) and natural language processing.
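As a toy illustration of the “learning” enabler: the snippet below trains a single artificial neuron on the logical AND function using the classic perceptron update rule. The data, learning rate and epoch count are chosen purely for demonstration—modern deep learning stacks millions of such units:

```python
# Training data for logical AND: inputs and target outputs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # weights -- learned from examples rather than programmed
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron rule: nudge weights toward the correction after each mistake.
for _ in range(20):
    for x, target in data:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # → [0, 0, 0, 1]
```

No rule for AND was ever written down; the behavior emerges from the examples, which is the essence of machine learning.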

We all probably remember some of the most striking success stories of the last few years:

  • In 2008, speech recognition reached mainstream smartphones when Google’s voice search app launched on the Apple iPhone.
  • In early 2011, IBM's Watson defeated the two greatest Jeopardy! champions, Ken Jennings and Brad Rutter, by a significant margin in an exhibition match.
  • In March 2016, AlphaGo won four out of five games of Go in a match against champion Lee Sedol, becoming the first computer Go program to beat a 9-dan professional player without handicaps.
  • At the 2017 Future of Go Summit, AlphaGo won a three-game match against Ke Jie, who had held the world’s number-one ranking continuously for two years. This marked a significant milestone in the development of AI, as Go is an extremely complex game, far more so than chess.

AI Anywhere

AI is omnipresent today—we all interact with it at some level every day in our private and professional life:

  • Our email providers use AI to detect and separate spam.
  • Amazon and other eCommerce platforms use AI to provide us with personalized product recommendations.
  • Traffic apps (e.g., Google Maps, Waze, etc.) use AI to find the shortest routes and avoid traffic.
  • Personal virtual assistants (e.g., Amazon’s Alexa, Google’s Assistant, Apple’s Siri, Microsoft’s Cortana) use AI to understand our intent and to act or respond accordingly.
  • Financial institutions have long used AI systems to detect charges or claims outside of the norm and flag them for human investigation.
  • Banks use AI systems today to organize operations, maintain bookkeeping, invest in stocks, manage properties and more.
  • The healthcare industry uses AI algorithms to read medical images, such as radiology results, spot anomalies and assist in diagnoses. AI also helps find patients for clinical trials in minutes instead of weeks or months.
  • Skype uses AI to provide real-time translation of conversations.
  • Advancements in AI have contributed to the creation and evolution of self-driving vehicles that integrate various intelligent sub-systems (e.g., braking, lane changing, collision prevention, navigation and mapping).

These are just a few of today’s myriad AI-driven use cases.
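Spam detection, the first example above, is classically approached with a Naive Bayes classifier. The sketch below shows the idea on a four-message toy corpus; the messages are invented for illustration, and a real filter trains on millions of labelled emails with far more careful tokenization:

```python
import math
from collections import Counter

class NaiveBayesSpamFilter:
    """Minimal multinomial Naive Bayes with add-one (Laplace) smoothing."""

    def fit(self, messages, labels):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.totals = Counter(labels)                 # messages per class
        for text, label in zip(messages, labels):
            self.counts[label].update(text.lower().split())
        self.vocab = set(self.counts["spam"]) | set(self.counts["ham"])

    def predict(self, text):
        scores = {}
        for label in ("spam", "ham"):
            # log prior + sum of smoothed log likelihoods
            score = math.log(self.totals[label] / sum(self.totals.values()))
            denom = sum(self.counts[label].values()) + len(self.vocab)
            for word in text.lower().split():
                score += math.log((self.counts[label][word] + 1) / denom)
            scores[label] = score
        return max(scores, key=scores.get)

# Hypothetical training data -- real filters learn from huge labelled corpora.
train = ["win free money now", "free prize claim now",
         "meeting agenda attached", "lunch tomorrow?"]
labels = ["spam", "spam", "ham", "ham"]
flt = NaiveBayesSpamFilter()
flt.fit(train, labels)
print(flt.predict("claim your free money"))  # → spam
```

Despite its simplicity, this word-frequency approach is essentially how early statistical spam filters worked, and it illustrates why training data matters as much as the algorithm.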

The Other Side of the Coin

Despite all the enthusiasm, excitement, expectations and progress around AI—which some may perceive as another Gold Rush fueled by endless opportunities and benefits—we should pause for a moment and remember that disruptive AI advances also raise various questions around ethics, legality, privacy, risks and threats (e.g., unintended consequences), on which opinions are currently divided.

What about air-traffic control systems that direct airplanes carrying thousands of passengers, or medical diagnosis systems that might assist physicians in life-or-death situations? What happens if these systems give wrong advice? Who should be held accountable? And if these systems can think autonomously and be aware of their own existence, could the incorrect advice be intentional? Should the legal system be extended to deal with machines in the future, as Isaac Asimov imagined with his fictional Three Laws of Robotics?

There is still some work to be done in this context, particularly when AI directly impacts “critical” decision making.

Disruptive AI – A Game-changer


For more than 250 years, the fundamental drivers of economic growth have been technological innovations. The most important of these are what economists call general-purpose technologies, which can affect an entire economy on a national or global level. Examples include the steam engine, railroad, electricity, the computer and the Internet. Each one catalysed waves of complementary innovations and opportunities. 

The most important general-purpose technology of our era is AI. As it advances and moves out of research labs into mainstream consumer and business applications, it is leading to massive changes in every industry and business domain with significant economic impact. A major part of this impact will be driven by:

  • Productivity gains from businesses automating processes (including use of robots and autonomous vehicles).
  • Productivity gains from businesses augmenting their existing labor force with AI technologies (assisted and augmented intelligence).
  • Increased consumer demand resulting from the availability of personalized and/or higher-quality, AI-enhanced products and services.

However, the ultimate commercial potential of disruptive AI capabilities is doing things that have never been done before, rather than simply automating or accelerating existing processes and aptitudes. New business models based on brand-new capabilities have the potential to disrupt markets and industries and to leapfrog established organizations.

As outlined in its recent Global Artificial Intelligence Study [4], PwC estimates the global GDP (Gross Domestic Product) will be up to 14 percent higher in 2030 because of the accelerating development and take-up of AI—the equivalent of an additional $15.7 trillion.

I am not citing these estimates for the bare figures—you will find different estimates from different analysts, using different assumptions, models, etc. It is the order of magnitude that underlines the game-changing nature of AI.

One of the key risks we face today in our businesses is losing our competitiveness because of not anticipating the opportunities and not embracing the new capabilities that AI offers.

AI is a rich and diverse field. We must understand the multitude of related concepts and technologies and integrate them into full solutions that generate business value and impact—for us and our customers.

Blog Outlook

In this post, we have laid the groundwork for understanding AI by highlighting its general facets and potential impact. Our next post will be dedicated to Corporate Dark Data, with a focus on unstructured, multilingual data and content. We will outline the key challenges and risks organizations face today with Corporate Dark Data, and we will see how AI can support the transformation of content- and information-centric operations, enabling highly customized content delivery tailored to a content consumer’s specific persona and lifestyle—ultimately driving customer experience.

Subscribe to our blog to receive all our posts, including the next instalment in our AI discussion. Have more questions about AI and what it could mean for your business? Send an email!

 

References
[1] Apple confirms Shazam acquisition; Snap and Spotify also expressed interest
[2] Google's AI Tech Helps NASA Spot 2 New Planets
[3] AI Will Turn Regular Fitness Trackers Into Potentially Life-Saving Medical Devices
[4] Sizing the prize - PwC’s Global Artificial Intelligence Study: Exploiting the AI Revolution

 










Written by Markus Welsch

Markus Welsch is Vice President Content Intelligence and Chief Solution Architect at AMPLEXOR, based in Luxembourg. During his more than 20 years within the AMPLEXOR group, he has contributed in different roles and positions to the design, architecture, implementation and operation of numerous challenging, multilingual Content and Information Management solutions for customers across industries. Drawing on his Computer Science background and a corresponding mindset, his passion and special attention has always been, and continues to be, the smart automation of content- and information-centric business processes and related cognitive activities. In his current position, Markus is responsible for managing a comprehensive portfolio of smart solutions that bring together the best of both worlds, combining the speed, scale and power of machines with a human-like approach to take advantage of information on a scale that would otherwise be impossible for people. Within their area of application, these solutions can understand language, recognize valuable patterns and relationships, learn from data and information, and answer questions that would have seemed unimaginable only a few years ago.
