Technology committee workshop on AI

The ECCT's Technology committee hosted a workshop on the subject of Artificial Intelligence (AI). The event featured presentations by three speakers, followed by a Q&A session moderated by Giuseppe Izzo, Co-chair of the committee and concurrently ECCT Vice Chairman. The speakers gave an overview of the most significant recent developments in AI and how AI is being used in a variety of applications, with a focus on voice and augmented reality. The three speakers were Alexandre Mengeaud, Technical Marketing Manager at STMicroelectronics; Charles Kuai, President, Greater China Region at Nuance Communications; and Eric Chang, Chairman of Peer Giant System Inc., Taiwan. The workshop was followed by a lunch arranged by the ECCT's Automotive committee, featuring a presentation by Marc Hamilton, Vice President of Solution Architecture and Engineering at Nvidia.

In his introductory presentation, Giuseppe Izzo noted that the concept of AI is not new. Its origins can be traced back more than 70 years to Alan Turing, whose theoretical Turing machine laid the groundwork for modern computing and whose electromechanical Bombe used algorithms and computation to speed up the breaking of German Enigma ciphers in World War II. He expressed the view that AI should be seen as an opportunity rather than a threat, given the dramatic improvements in efficiency it will enable in multiple industries.

Nevertheless, the advent of AI will present challenges. The disruption to a number of industries will require requalification of the workforce to include coding skills. AI will also have to be regulated politically and must comply with ethical principles. Such principles can be traced back to the laws of robotics formulated by Isaac Asimov in 1942, which state that: 1) A robot may not injure a human or, through inaction, allow a human to come to harm; 2) A robot must obey orders given by human beings, unless such orders conflict with the first law; 3) A robot must protect its own existence, as long as such protection does not conflict with the first and second laws. These concepts were more recently refined by Stuart Russell into three principles for creating safer AI: 1) The robot's only objective is to maximise the realisation of human values; 2) The robot is initially uncertain about what these values are; 3) Human behaviour provides information about human values. With enough data, robots will learn to predict the life each human would prefer.

In his presentation, Alexandre Mengeaud gave an overview of his company's solutions for AI. He noted that it has been 21 years since IBM's Deep Blue supercomputer beat then reigning world chess champion Garry Kasparov at his own game in 1997, and two years since a computer defeated the best human player at Go.

Mengeaud showed how different types of neural networks focus on specific functions and improve as more data is added. For example, systems designed to sense and understand their environment can learn to identify different types of fruit, or to distinguish human activities such as walking, running, cycling and driving.
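By way of illustration, a small neural network classifier of the kind described might map a window of motion-sensor readings to an activity label. The following sketch is not from the presentation; the model shape, sampling assumptions and input data are all hypothetical:

# A minimal sketch of an activity classifier: a small neural network
# mapping motion-sensor readings to activities such as walking,
# running, cycling or driving. Shapes and data are illustrative only.
import torch
import torch.nn as nn

ACTIVITIES = ["walking", "running", "cycling", "driving"]

# Hypothetical input: a 2-second window of 3-axis accelerometer
# readings sampled at 50 Hz, flattened to a 300-value feature vector.
model = nn.Sequential(
    nn.Linear(300, 64),
    nn.ReLU(),
    nn.Linear(64, len(ACTIVITIES)),
)

window = torch.randn(1, 300)                    # stand-in for real sensor data
scores = model(window)                          # one score per activity
predicted = ACTIVITIES[scores.argmax(dim=1).item()]
print(predicted)                                # untrained, so effectively random

In practice such a model would be trained on labelled sensor recordings, and, as Mengeaud noted, its accuracy improves as more data is added.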

He went on to demonstrate a voice recognition system, whereby instructions given verbally were carried out to turn on, adjust and turn off lights. He also demonstrated a facial recognition system able to recognise seven types of facial expression.
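The control logic behind such a demo can be quite simple once speech has been transcribed to text. The sketch below is a hypothetical illustration, not the demonstrated system; the command phrases and the set_lights() helper are assumptions:

# A minimal sketch of voice-controlled lighting, assuming speech has
# already been converted to text by a recogniser. Hypothetical only.
from typing import Optional

def set_lights(on: Optional[bool] = None, brightness: Optional[int] = None) -> None:
    # Stand-in for a real lighting controller.
    if on is not None:
        print("lights on" if on else "lights off")
    if brightness is not None:
        print(f"brightness set to {brightness}%")

def handle_command(transcript: str) -> None:
    text = transcript.lower()
    if "turn on" in text:
        set_lights(on=True)
    elif "turn off" in text:
        set_lights(on=False)
    elif "dim" in text:
        set_lights(brightness=30)

handle_command("Please turn on the lights")   # -> lights on
handle_command("Dim the lights")              # -> brightness set to 30%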

In his presentation, Charles Kuai noted that augmented reality has been used by the US air force since the 1970s in its pilots' helmets. He expressed the view that deep learning is just another way of referring to the process of pattern recognition. On the question of how to make AI useful and valuable, he cited Steve Jobs, who said that you have to start with the customer experience and work backwards from there.

Kuai argued that voice recognition programmes are much more secure than passwords for protecting sensitive data, although iris recognition is even better. He made the point that there are a lot of useful applications available today that perform various functions. However, they are not integrated with one another. What is needed is a cognitive arbitrator to link multiple applications.
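One way to picture such an arbitrator is as a single front end that routes each recognised intent to whichever application can handle it. The following sketch is a hypothetical illustration of that architecture, not Nuance's implementation; the intents and application handlers are assumptions:

# A minimal sketch of a "cognitive arbitrator": one dispatcher routing
# recognised intents to separate applications. Names are hypothetical.
from typing import Callable, Dict

class Arbitrator:
    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[str], str]] = {}

    def register(self, intent: str, handler: Callable[[str], str]) -> None:
        # Each application registers the intents it can serve.
        self._handlers[intent] = handler

    def dispatch(self, intent: str, utterance: str) -> str:
        handler = self._handlers.get(intent)
        if handler is None:
            return "Sorry, no application can handle that."
        return handler(utterance)

arbitrator = Arbitrator()
arbitrator.register("weather", lambda u: "Today: sunny, 24 degrees")
arbitrator.register("navigation", lambda u: "Route to office calculated")

print(arbitrator.dispatch("weather", "What's the weather like?"))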

He went on to show a video of his company's voice-operated system for cars, which allows drivers to ask for directions, make phone calls, get weather reports, shop online and retrieve various types of information, using only voice instructions. Kuai predicted that by 2020 we will be talking more to machines than to people.

In his presentation, Eric Chang cited an Oxford University study which predicts that within 10-20 years, 47% of jobs in the United States, particularly service, sales and administrative jobs, will be at high risk of being replaced by AI. Chang said that he believes smart glasses will start replacing TVs, PCs, smartphones and tablets in the near future, for several reasons: 1) the proximity of the lenses to the eye gives a viewing experience equivalent to an 80-inch screen; 2) the arrival of 5G telecoms will improve streaming quality and eliminate latency issues; 3) glasses make 3D viewing possible; 4) they allow voice, image and gesture recognition as well as eye-tracking; 5) they are hands-free, freeing the user for many tasks and activities; 6) they allow remote interaction and teamwork.

Smart glasses allow virtual objects to be superimposed on the real world and thereby help the user to better understand it. User interfaces and technology are improving, including voice and gesture control, although there are still issues to overcome, including appearance, size and battery life. Chang also distinguished "mixed reality" from augmented reality: whereas augmented reality simply layers digital content on top of the real world, mixed reality comprises digital content or virtual objects that can be controlled and can interact with the real world.

Chang cited several use cases for smart glasses. They have been used in the maintenance of power stations to reduce downtime, avoid excessive preventative maintenance and prevent accidents. They can be used by police to recognise the licence plates of suspects' vehicles. Emergency health workers connected via their glasses to doctors in hospitals can stream live video and vital-sign information and receive real-time instructions from doctors, which could help to save lives.

They could also be used by surgeons performing operations, allowing them to see the patient, view the feed from a camera inside the patient, consult patient information and monitor vital signs, all on their glasses rather than having to look away at other screens.