Artificial Intelligence is becoming ubiquitous, and more people are starting to notice. Even though most people do not fully grasp its complexity and implications, conversations about AI have both deepened and spread.
At the same time, many AI and Singularity promoters tend to exaggerate the spectacular side of this groundbreaking technology, advancing radical claims mainly to gain public attention. The consequence is that the very technological and philosophical assumptions fueling the hype also prevent us from making sensible judgments about Artificial Intelligence.
This alone is reason enough to watch Artificial Intelligence from Hype to Reality – a brand new Future Horizons show that investigates, unpacks and debunks the AI hype in the company of Professor Dan Cautis of Georgetown University.
During the four episodes of this series, QUALITANCE Chief Innovation Officer Mike Parsons will be interviewing Professor Cautis, who will explain the technological and philosophical assumptions at the heart of the AI hype, while weighing the pros and cons of each.
[Episode #1] A brief history of AI
The first episode, “A Brief History of AI”, will take you back to the early days of Artificial Intelligence while highlighting the following topics:
- Why we should question the hype;
- AI applications: healthcare, finance, space engineering, defense, robotics, and more;
- The difference between Narrow AI and Artificial General Intelligence;
- John von Neumann & Alan Turing’s digital computer and the birth of AI;
- Computationalism and the logic component of calculations;
- The leap from number crunching to AI;
- The Golden Years of AI: McCarthy’s contribution, the ungrounded optimism of Marvin Minsky & Herbert Simon, the classical AI paradigm;
- The early signs of the AI winter;
- How Machine Learning revived and renewed AI;
- Moore’s Law and the rise of neural networks as a new field of interest and focus;
- The paradigm shift towards machine learning;
- Branches of machine learning: supervised learning (classification, linear regression, prediction), unsupervised learning (clustering), and reinforcement learning;
- Machine learning algorithms and concepts.
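To make the supervised/unsupervised distinction from the list above concrete before you watch, here is a minimal, illustrative sketch (not material from the episode itself): a tiny least-squares linear regression learns from labeled pairs, while a toy one-dimensional k-means groups unlabeled points.

```python
# Supervised learning: fit a line y = a*x + b from labeled pairs (x, y)
# using ordinary least squares.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# Unsupervised learning: group unlabeled 1-D points around k centers
# (a toy k-means: assign each point to its nearest center, then move
# each center to the mean of its cluster, and repeat).
def kmeans_1d(points, centers, iterations=10):
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

a, b = fit_line([1, 2, 3, 4], [2, 4, 6, 8])   # recovers y = 2x + 0
centers = kmeans_1d([1.0, 1.2, 9.8, 10.0], [0.0, 5.0])  # two clear groups
```

In the supervised case the "right answers" (the y values) are part of the training data; in the unsupervised case the algorithm has to discover structure in the x values alone, which is exactly the split Professor Cautis draws in the episode.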
P.S. You might also like A Short History of Artificial Intelligence & The Biggest AI Breakthroughs over the Past 10 Years.
Recommended readings on AI
For a better understanding of the above-mentioned talking points, you might also enjoy the following reading list recommended by Professor Cautis:
- How Artificial Intelligence is transforming the world
- 27 incredible examples of AI and machine learning in practice
- Military applications of Artificial Intelligence
- Artificial Intelligence and the Future of Warfare
- Jerry Kaplan / Artificial Intelligence – What Everyone Needs to Know
- Michael Negnevitsky / Artificial Intelligence – A Guide to Intelligent Systems
- Nils J. Nilsson / Introduction to Machine Learning
- Richard S. Sutton & Andrew G. Barto / Reinforcement Learning: An Introduction
- Sebastian Raschka / Python Machine Learning
- R. Duda, P. Hart & D. Stork / Pattern Classification
- 10 companies using Machine Learning in cool ways
- 9 applications of Machine Learning from day-to-day life
- Artificial Intelligence (AI) vs. Machine Learning vs. Deep Learning
Stay tuned: in February we’ll be back with the second episode, “AI, Transhumanism and Singularity”. If you feel like sharing your own book recommendations on AI, you’re welcome to leave them in a comment below.