8 April 2019

7 Indicators Of The State Of Artificial Intelligence (AI), March 2019


Turing Award winners (from left to right) Yoshua Bengio, Yann LeCun, and Geoffrey Hinton at the ReWork Deep Learning Summit, Montreal, October 2017. GIL PRESS

AI “Sputnik moment” (say it in Chinese*) is at hand

China is overtaking the US not just in the sheer volume of AI research papers submitted and published, but also in the production of high-impact papers as measured by the top 50%, top 10%, and top 1% most-cited papers. “By projecting current trends, we see that China is likely to have more top-10% papers by 2020 and more top-1% papers by 2025” (Allen Institute for Artificial Intelligence).
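
(For readers curious about the mechanics: a projection like this is essentially a linear extrapolation of each country’s share of top-cited papers. Below is a minimal sketch in Python; the share figures are illustrative placeholders, not the study’s data.)

    import numpy as np

    # Illustrative placeholder shares of top-10% most-cited AI papers
    # (NOT the Allen Institute's figures), by publication year.
    years = np.array([2014, 2015, 2016, 2017, 2018])
    china_share = np.array([0.20, 0.22, 0.24, 0.26, 0.28])  # hypothetical
    us_share = np.array([0.34, 0.33, 0.32, 0.31, 0.30])     # hypothetical

    # Fit a straight line (share = slope * year + intercept) to each trend.
    c_slope, c_int = np.polyfit(years, china_share, 1)
    u_slope, u_int = np.polyfit(years, us_share, 1)

    # The projected "overtaking" year is where the two lines cross.
    crossover = (u_int - c_int) / (c_slope - u_slope)
    print(f"Projected crossover year: {crossover:.1f}")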

AI continues to be popular among business executives, despite complications, concerns and confusion

73% of senior executives see AI/machine learning and automation as areas in which they want to maintain or increase investment, but only 33% plan to invest more in gaining better visibility into their processes, overlooking the fact that understanding their current processes first could help them work out which technologies would be most beneficial to their business (Celonis).


71% of U.S. enterprises plan to leverage more AI/ML tools for security this year, but only 49% of IT professionals feel extremely comfortable using these tools, 76% don’t care whether their companies leverage them, and 56% still aren’t sure what AI and ML really mean (Webroot).

82% of IT and business decision makers agree that company-wide strategies for investing in AI-driven technologies would offer significant competitive advantages, yet only 29% say their companies have such strategies in place (DXC).

66% of security experts said they would rely on AI, down from 74% in 2018. Cisco attributes the decline to respondents’ increased confidence that “migrating to the cloud will improve protection efforts, while apparently decreasing reliance on less proven technologies such as artificial intelligence” (Cisco).

83% of IT professionals are confident their organization has everything it needs to defend against advanced AI- and ML-based cyberattacks, yet 36% reported their organization has suffered a damaging cyberattack within the last 12 months despite using AI/ML security tools (Webroot).

Nearly 90% of IT leaders see their use of AI/ML increasing in the future, and 41% say AI-powered technology is a top factor in their purchasing decisions. Their top concerns: data security (47%), implementing AI/ML (40%), and driving innovation while implementing new technology (40%) (Adobe).

37% of service leaders are either piloting or using artificial intelligence (AI) bots and virtual customer assistants (VCAs), and 67% of those leaders believe they are high-value tools in the contact center; 68% of service leaders believe AI bots and VCAs will be of significant importance for them and their organizations in the next two years (Gartner2).

The race against the machine is on. Still, some humans trust AI more than their governments.

56% of Europeans express wariness about a world where machines perform most of the tasks currently done by humans (IE University).

74% of Europeans think that businesses should only be allowed to automate jobs that are dangerous or unhealthy, and 72% think that governments should set limits on the number of jobs businesses can replace with machines (IE University).

61% of internet users worldwide are worried about AI affecting the availability of work, 58% believe governments need to regulate AI to protect jobs, 32% express unease about the potential ethical issues associated with AI, and 31% are concerned about a lack of transparency in AI-based decision making (BCG).

25% of Europeans are somewhat or totally in favor of letting an artificial intelligence make important decisions about the running of their country; 43% in the Netherlands, 31% in the UK and Germany (IE University).

45% of retail customer service issues/inquiries have been fully automated without affecting customer satisfaction (CSAT) scores; bots have been used to assist agents by collecting upfront information in 25% of issues; agents and bots combined have been able to handle twice the number of tickets within a given time period (Helpshift).

Inmates at two prisons in Finland are classifying data to train artificial intelligence algorithms for a startup, Vainu, which sees the partnership as a kind of prison reform that teaches valuable skills. But “other experts say it plays into the exploitative economics of prisoners being required to work for very low wages,” according to The Verge.

“AI” is the new “Big Data” and the new “New Economy.” Tech bubbles are marked by poorly defined terms and a proliferation of billion- and trillion-dollar forecasts, feeding investors’ irrational exuberance

MMC Ventures reviewed the activities, focus and funding of 2,830 purported AI startups in the 13 EU countries most active in AI and found that in approximately 40% of the cases there was no evidence of AI being “material to the company’s value proposition” (MMC Ventures).

MMC Ventures: “‘AI’ is a general term that refers to hardware or software that exhibit behaviour which appears intelligent.”**

Worldwide spending on artificial intelligence (AI) systems is forecast to reach $35.8 billion in 2019, an increase of 44% over 2018, and to more than double to $79.2 billion in 2022 (IDC).

IDC defines cognitive/Artificial Intelligence (AI) systems as “a set of technologies that use deep natural language processing and understanding to answer questions and provide recommendations and direction.”
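
(As a quick back-of-the-envelope check, using my arithmetic rather than IDC’s methodology: the two quoted figures imply 2018 spending of roughly $24.9 billion and a compound annual growth rate in the low-to-mid thirties.)

    # Implied 2018 spend and compound annual growth rate (CAGR),
    # derived only from the figures quoted above.
    spend_2019, growth_2019, spend_2022 = 35.8, 0.44, 79.2
    spend_2018 = spend_2019 / (1 + growth_2019)        # ~$24.9 billion
    cagr = (spend_2022 / spend_2018) ** (1 / 4) - 1    # 2018 -> 2022, 4 years
    print(f"2018 spend: ${spend_2018:.1f}B, implied CAGR: {cagr:.1%}")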

Business value derived from AI reached $1.2 trillion worldwide in 2018 and will grow to $3.9 trillion by 2022 (Gartner).

Gartner: “Artificial intelligence (AI) applies advanced analysis and logic-based techniques, including machine learning, to interpret events, support and automate decisions, and take actions.”

“We are investing $100 billion in just one thing, AI”—SoftBank CEO Masayoshi Son (telling CNBC that all 70 or so of his Vision Fund’s investments, including Uber, DoorDash, WeWork, and Slack, have been focused on AI).

After years in the (mostly Canadian) wilderness followed by (almost) seven years of plenty, Deep Learning is officially recognized as the dominant AI paradigm

“ACM named Yoshua Bengio, Geoffrey Hinton, and Yann LeCun recipients of the 2018 ACM A.M. Turing Award for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing… their ideas recently resulted in major technological advances, and their methodology is now the dominant paradigm in the field”—Fathers of the Deep Learning Revolution Receive ACM Turing Award

"For a long time, people thought what the three of us were doing was nonsense. They thought we were very misguided and what we were doing was a very surprising thing for apparently intelligent people to waste their time on. My message to young researchers is, don't be put off if everyone tells you what you are doing is silly"—Geoffrey Hinton

“What we have seen is nothing short of a paradigm shift in the science. History turned their way, and I am in awe”—Oren Etzioni, Allen Institute for Artificial Intelligence

[In October 2012, when a convolutional neural network achieved an error rate of only 16% in the ImageNet Large Scale Visual Recognition Challenge, a significant improvement over the 25% error rate achieved by the best entry the year before,] “The difference there was so great that a lot of people, you could see a big switch in their head going ‘clunk.’ Now they were convinced”—Yann LeCun

“…science worked the way it's meant to work… until we could produce results that were clearly better than the current state of the art, people were very skeptical”—Geoffrey Hinton

“This is not just a Turing Award for these particular people. It’s recognition that machine learning has become a central field in computer science”—Pedro Domingos, University of Washington

[Anyone hoping to make the next Turing-winning breakthrough in AI] “should not follow the trend—which right now is deep learning”—Yoshua Bengio

“Whether we’ll be able to use new methods to create human-level intelligence, well, there’s probably another 50 mountains to climb, including ones we can’t even see yet. We’ve only climbed the first mountain. Maybe the second”—Yann LeCun

AI is not perfect and never will be. The same goes for the humans using it.

68% of Facebook, Google (YouTube), Reddit and Twitter employees think their companies have not done enough to stop the spread of violent content online (Blind’s Work Talk Blog).

“Many people have asked why artificial intelligence (AI) didn’t detect the video from last week’s attack automatically. AI has made massive progress over the years and in many areas, which has enabled us to proactively detect the vast majority of the content we remove. But it’s not perfect… this particular video did not trigger our automatic detection systems. To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare”—Guy Rosen, VP, Product Management, Facebook
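
(Rosen is describing the classic class-imbalance problem: a classifier trained on data in which the target class is extremely rare tends to miss it. A minimal sketch of the effect in Python with scikit-learn, using synthetic, purely illustrative data:)

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import recall_score
    from sklearn.model_selection import train_test_split

    # Synthetic, illustrative data: the "event" class is ~1% of examples,
    # mimicking content that is rare in the real world.
    X, y = make_classification(n_samples=20_000, n_features=20,
                               weights=[0.99, 0.01], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    # Recall on the rare class: the share of true events the model catches.
    # With so few positive examples it is typically well below 1.0; the
    # standard remedy is exactly what Rosen describes: more labeled data.
    print("rare-class recall:", recall_score(y_te, clf.predict(X_te)))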

AI is not perfect. Smart and well-endowed people hope it can be improved by establishing research and education centers focusing on “multidisciplinary collaboration and diversity of thought”

MIT President Rafael Reif said the MIT Schwarzman College of Computing will train students in an interdisciplinary approach to AI. It will also train them to take a step back and weigh potential downsides of AI, which is poised to disrupt “every sector of our society.”

Stanford University is launching a new institute committed to studying, guiding and developing human-centered artificial intelligence technologies and applications. The Stanford Institute for Human-Centered Artificial Intelligence (HAI) is building on a tradition of leadership in artificial intelligence at the university, as well as a focus on multidisciplinary collaboration and diversity of thought. The mission of the institute is to advance artificial intelligence (AI) research, education, policy and practice to improve the human condition.

“There’s no reason why computers can’t think like we [do] and can’t be ethical and moral like we aspire to be”—Patrick H. Winston, the Ford Professor of Engineering at MIT

MIT’s Marvin Minsky (1969) and Stanford’s John McCarthy (1971) received the Turing award for their pioneering AI work. Both saw the development of computers that think like we do as the primary goal of AI. Have they failed only because they have gone about achieving this goal with the wrong approach (encoding logic instead of improving machine learning, as this year’s Turing laureates have done) or because the goal itself led them astray? Does “diversity of thought” mean including researchers and practitioners that do not agree with the assumption that computers could be made to think like us (Minsky: “The human brain is just a computer that happens to be made out of meat”)? Will abandoning this goal and questioning this assumption help improve and advance “AI”?


**According to this definition, the very first digital computers of the late 1940s could be classified as “AI”—they certainly exhibited “behaviour which appears intelligent,” calculating much faster than humans. Indeed, contemporaries called them “giant brains.”
