On April 16, H.E. Dirk Wouters, Ambassador of Belgium and IFE diplomatic steward, together with Coach Kathy Kemper, Founder and CEO of the Institute for Education, hosted an event on Artificial Intelligence (AI). Dr. Sophie Vandebroek, Vice President of Emerging Technology Partnerships at IBM, presented the current trends in AI, the drivers of change, and the medium- and long-term implications of AI applications, including how AI will transform industries and how to create trustworthy AI. Dr. R. David Edelman, Director of the Project on Technology, Economy & National Security at the Center for International Studies, Massachusetts Institute of Technology, moderated the Q&A.
Here is a summary of Dr. Vandebroek’s presentation. Although AI became a hot topic only recently, its founding fathers can be traced back to the beginning of the twentieth century. In the early 1900s, Nobel laureate Santiago Ramón y Cajal was already studying the human brain. He was among the first to map the microstructure of the brain and to grasp the full complexity of its network, thereby anticipating AI science. Fifty years later, AI was brought back to the fore by people such as Nathaniel Rochester (IBM) and Marvin Minsky (Harvard, then MIT), who organized many discussions and workshops about the seemingly unlimited possibilities of AI. They were the first to name artificial intelligence as a scientific field in its own right. However, the hype around AI largely faded during the 1960s and 1970s. Some algorithms existed, but two core components were missing for AI science to expand further: sufficient computing power and enough data to train those algorithms. Today we possess both, along with the structures needed to feed, train and improve AI algorithms. AI will become as important a part of our future as electricity is today, and to some extent it already plays a part in our everyday lives.
Technology is evolving fast, and the common tools supporting our habits and lives will change just as fast. The exponential power driving technological change was first described by Moore’s Law. At the time, a quarter-inch semiconductor held at most about a thousand transistors; Gordon Moore predicted this number would double every two years. Today, as many as 10 billion transistors fit on that same surface. This tremendous improvement in computer hardware multiplied computation capacity to an unprecedented extent. Alongside this evolution, networks brought the ability to collect data to a new level. Our connected devices, social media, sensors and human beings all produce data and enable those data to be stored and processed on a large scale. Together, these developments changed the rules of the game and allowed AI science to develop further. We are today at the start of a new exponential curve of technological development, just as we were when Gordon Moore formulated his law in 1965. This time, big data, mined using AI, is available to leverage our knowledge even further and unleash a new wave of technological innovation, accelerated by the combination of more efficient hardware, large-scale data production and sophisticated algorithms.
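The arithmetic behind that doubling is worth seeing. A minimal sketch, using a hypothetical thousand-transistor chip from 1971 as the baseline, shows how a two-year doubling period reaches the billions within a few decades:

```python
# Illustrative arithmetic only: project transistor counts under Moore's Law
# (doubling every two years), starting from a hypothetical 1,000-transistor
# chip in 1971. The baseline year and count are assumptions for the sketch.
def transistors(year, base_year=1971, base_count=1_000):
    """Projected transistor count, assuming one doubling every two years."""
    doublings = (year - base_year) // 2
    return base_count * 2 ** doublings

print(transistors(1971))  # 1,000
print(transistors(2017))  # 1,000 * 2**23 = 8,388,608,000 (~10 billion)
```

Twenty-three doublings over 46 years turn a thousand transistors into roughly 8.4 billion, which is the scale the presentation cites.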
Scientists recently took another symbolic step forward. We have seen computers play, and eventually win, chess games against humans, such as IBM’s Deep Blue defeating Garry Kasparov in the late 1990s, followed by the Watson AI system winning Jeopardy! in 2011. IBM Research has now demonstrated an AI system able to debate societal issues with humans and form an opinion on them, and Dr. Vandebroek showed a video of Project Debater. Just like the human beings facing it, the system gives an opening statement, reacts to its interlocutor’s arguments and offers its own conclusions. AI can now be used to enrich discussion, inform decision-making processes or train someone to persuade others. So what’s next in AI?
As technology evolves, AI applications become more and more ubiquitous. They are woven into our daily lives, our food, our cars, our hospitals and our homes. They can monitor energy consumption, improve privacy and drive cars. New tools will predict what people need and want and will enable remote control of our houses. AI can also serve the healthcare domain. For example, systems will be able to monitor and understand how elderly people with disabling diseases manage their days and the extent to which they are still autonomous, find the most suitable medication, and automatically offer assistance to those who need it. Scientists have also recently expanded AI’s ability to interpret human language, which opens up numerous applications, one of which is cars: you would get into your car and talk with it, and it would detect whether you are tired, drunk, under the influence of drugs, or doing fine. Researchers at IBM are even developing AI’s abilities to taste and smell the world around it, and AI can already detect, recognize and analyze gazes. These new abilities will allow new AI applications to emerge, most of which we cannot yet imagine. AI already plays a key role in monitoring the environment, for example. IBM works with the Beijing authorities to analyze air quality and monitor the particles in the air, allowing them to react in a timely manner when needed, detect patterns and improve public health. This air quality system is now in use in hundreds of cities in China and is expanding into other emerging-world locations.
Supply chains will also be transformed by AI, because it is very good at keeping track of transactions and operations. AI combined with blockchain platforms will help reinforce supply chains’ reliability and transparency. For example, IBM already works with a diamond-mining company in Australia to ensure its diamonds are ethically mined and comply with the Kimberley Process. AI applications on top of a blockchain also allow retail companies to track food from the farm to the shelf. If a food-safety problem arises, they no longer need to shut down the entire chain for weeks to find its origin; they can identify the farm or value-chain node involved in a few minutes.
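The traceability idea rests on a simple data structure. A minimal sketch (not IBM’s actual platform, just the underlying principle) shows how chaining each supply-chain record to the hash of the previous one makes tampering anywhere in the history detectable:

```python
# Minimal hash-chain sketch of blockchain-style traceability: each record
# embeds the hash of the previous record, so altering any entry breaks
# every hash that follows it. Record contents are made up for illustration.
import hashlib

def add_record(chain, data):
    """Append a record whose hash covers both its data and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    digest = hashlib.sha256((prev_hash + data).encode()).hexdigest()
    chain.append({"data": data, "prev": prev_hash, "hash": digest})

def verify(chain):
    """Recompute every hash in order; any altered record is detected."""
    prev = "0" * 64
    for rec in chain:
        expected = hashlib.sha256((prev + rec["data"]).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

chain = []
add_record(chain, "farm: harvested lot 42")
add_record(chain, "warehouse: received lot 42")
print(verify(chain))                         # True
chain[0]["data"] = "farm: harvested lot 99"  # tamper with history
print(verify(chain))                         # False
```

In a real deployment the chain is replicated across independent parties, which is what makes the tamper-evidence trustworthy rather than merely self-reported.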
AI’s problem-solving skills will probably also be amplified by the emerging field of quantum computing and its tremendous computing abilities. While the first commercial quantum computers have a limited number of qubits, research is underway to make them more powerful, and large ecosystems are already being created: over 100,000 users have run more than 6 million quantum experiments on the IBM Q systems. In the future, once machines have many more qubits, a quantum computer is expected to be able to break much of the encryption that today’s security systems rely on.
This step change in technological capacity calls for checks and balances as well as ethical oversight. Successfully entering the AI age will require that consumers and the broader society trust AI. Dr. Vandebroek pointed out four characteristics AI designers must adhere to in order to build that trust (in addition, of course, to making sure the AI gives correct answers): fairness, explainability, robustness and transparency.
First, AI applications must be unbiased and able to explain how they reached a decision, as well as being secure and transparent in their development. Tools have been developed to meet these requirements. For trusted and unbiased AI, researchers have developed the open-source toolkit AI Fairness 360, which can detect a range of biases often found in data sets and help mitigate them.
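To make the idea of a bias check concrete, here is a plain-Python illustration of one metric of the kind AI Fairness 360 reports: the disparate-impact ratio, the favorable-outcome rate of an unprivileged group divided by that of a privileged group. This is not the AIF360 API, and the toy data is invented; it only shows the underlying arithmetic.

```python
# Disparate-impact ratio: favorable-outcome rate of the unprivileged group
# over that of the privileged group. A value far below 1.0 flags possible
# bias in the data set (a common rule of thumb flags values under 0.8).
def disparate_impact(outcomes, groups, unprivileged, privileged):
    """outcomes: 1 = favorable, 0 = unfavorable; groups: label per outcome."""
    def favorable_rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected)
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Toy hiring data: group B is hired 1 time in 4, group A 3 times in 4.
outcomes = [1, 0, 0, 0, 1, 1, 1, 0]
groups   = ["B", "B", "B", "B", "A", "A", "A", "A"]
print(disparate_impact(outcomes, groups, "B", "A"))  # 0.25 / 0.75 ≈ 0.333
```

A ratio of roughly 0.33 here would be a strong signal that the data set treats group B unfavorably and needs mitigation before training.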
Second, AI must be able to explain its statements and show why it reached a given conclusion. The European General Data Protection Regulation (GDPR) already requires this kind of explainability. One way to achieve it is the contrastive explanations approach (Amit Dhurandhar et al., IBM), which explains a decision partly through the features that are absent.
Third, AI must be secure so that hackers cannot exploit it for malicious purposes. With this in mind, IBM researchers created the Adversarial Robustness Toolbox (ART), now an open-source software library, to tackle adversarial attacks. Its goal is to help AI researchers build defenses that detect adversarial noise, imperceptible to humans, which would otherwise cause inputs to be misclassified and prevent the algorithm from functioning well. More broadly, AI applications deal with sensitive data that must be protected from theft and misuse.
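To see what an adversarial attack looks like in miniature, here is a sketch of the fast gradient sign method (FGSM), one of the classic attacks that libraries like ART defend against, applied to a toy logistic classifier. The model weights and inputs are made up for illustration; this is not ART’s API.

```python
# FGSM sketch: nudge each input feature by a small amount in the direction
# that increases the classifier's loss, flipping a confident prediction.
# The two-feature logistic model below is a made-up toy, not a real system.
import math

w, b = [2.0, -1.0], 0.5  # fixed toy model weights and bias

def predict(x):
    """Probability of class 1 under the toy logistic model."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

def fgsm(x, y, eps):
    """Perturb x by eps in the sign of the loss gradient w.r.t. the input.

    For logistic regression with cross-entropy loss, that gradient
    is (p - y) * w, so only its sign per feature is needed.
    """
    p = predict(x)
    return [xi + eps * math.copysign(1.0, (p - y) * wi)
            for xi, wi in zip(x, w)]

x = [1.0, 0.2]                 # clean input, true label 1
print(predict(x))              # ~0.91: confidently class 1
x_adv = fgsm(x, 1.0, eps=1.5)  # adversarially perturbed input
print(predict(x_adv))          # ~0.10: prediction pushed toward class 0
```

Real attacks use far smaller perturbations on high-dimensional inputs such as images, which is why the noise is invisible to humans; the mechanism, following the gradient of the loss, is the same.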
Fourth, AI must be transparent to earn people’s trust in a service. For that reason, firms are encouraged to label their AI with FactSheets, much like nutrition labels for food, which disclose an AI system’s features and help increase consumers’ trust. The European Commission’s High-Level Expert Group on AI recently published the EU Ethics Guidelines for Trustworthy AI, which state that trustworthy AI should be lawful, ethical and robust. By creating an adequate legal framework, governments give firms an incentive to comply with these requirements and pave the way for a better and fairer AI market.
“Just like a physical structure, trust can’t be built on one pillar alone (fairness, robustness, reliability and lineage). If an AI system is fair but can’t resist attack, it won’t be trusted. If it’s secure but we can’t understand its output, it won’t be trusted. To build AI systems that are truly trusted, we need to strengthen all the pillars together.” (Aleksandra Mojsilovic, IBM Fellow)
This fascinating presentation by Dr. Sophie Vandebroek was followed by sharp comments from Dr. R. David Edelman, who also brilliantly conducted the Q&A.
Dr. Edelman’s remarks
First, R. David Edelman underlined that AI can be a very useful tool in education: teachers supported by tablets and other technological assistance are able to teach more efficiently. The school where one such experiment was conducted reported that in six months, the AI-supported teacher covered two years’ worth of curriculum. AI is changing our approach to teaching.
Second, Edelman touched on the trustworthiness issue raised in public opinion a few months earlier, when a pedestrian was killed by an autonomous car. Some scientists feared this accident would set the industry back by ten years. Yet legislators understood that AI has the potential to save many lives and did not overreact to the tragedy. The woman hit by the car was one of seven pedestrians killed by cars in America that day; no one heard about the other six. 41,000 people died on American roads in 2016 and 39,000 in 2017. AI, if improved enough, has the power to change that.
Dr. Edelman also pointed out the power AI could have in the medical domain. Researchers have already shown that, using AI, it is possible to detect cancer and other diseases much earlier. And while human diagnosis achieves reasonable accuracy, AI can provide up to 99% reliability.
Ongoing research is investigating the impact AI can have on suicidal individuals. Suicide kills more people than car accidents in the US; could AI be part of the solution? When loved ones can no longer reach the suicidal person and loneliness has overgrown everything else, machines can build confidence and trust, lend an ear, and ultimately influence the person not to take their own life. However, AI is not a long-term solution. The reasons a person wants to commit suicide depend heavily on his or her feelings and background, so a wide range of causes exists. If the person truly intends to commit suicide, the machine can be effective in talking him or her out of it; but if the person is mainly seeking more attention from loved ones, the machine won’t solve the underlying problem and could even aggravate it. In the mental-healthcare domain, machines are already used to accompany people and offer daily conversations. The goal is not to make these machines sentimental, but attentive when people feel lonely or unhappy.
Another point identified as needing special attention is the regulation of personal data. Today, the GDPR exists in Europe, a California law is trying to set the tone in the United States, and Congress is looking into possible federal legislation. But there is not yet a comprehensive regulatory framework for personal data across the whole field of AI. The audience also discussed the existing checks and balances on using personal data to feed AI and to advance science.
Beyond this, AI is proving highly effective in security terms: through predictive calculation, it can react to dangers much more quickly and appropriately than humans. During the conference, Edelman was asked for his views on the trolley problem. When an autonomous car faces a situation in which it must either sacrifice its passenger’s life or kill five people who chose to cross the street at an inappropriate spot, what should it do? Who should decide which life gets sacrificed: the passenger, the AI engineer, the legislator? Edelman responded that the best way to choose might be to weigh the pros and cons and analyze each potential victim’s history. In reality, however, when such a situation arises a human driver does not even have time to ask these questions before the people are already hit; an AI, by contrast, could be regulated beforehand so that it knows how to react. At some point, we will have to choose an answer to the trolley problem, and with it comes the problem of AI’s legal accountability in these situations.
AI is proving highly effective in several areas and can provide assistance in ways humans cannot. It creates endless opportunities and can make our lives better. However, it is necessary to remain vigilant and call for more regulation to answer the questions AI raises. The legal and ethical framework is still insufficient for AI to develop in a mature and responsible way; it is up to legislators and companies to establish the rules and standards that will let AI flourish even further.
To conclude the evening, a smaller group of guests discussed further AI questions, such as:
- How do we run a democracy, a society, or electoral processes in this new AI environment?
- Who gives permission and who authorizes new AI developments and AI innovation? For example, how did we get into all these experiments with driverless cars? Who authorized this direction?
- How will warfare develop, and how successful has mankind been in the past at interdicting the development of new weapons generally considered “no-go”?
- How much will Small and Medium Enterprises be affected by AI, compared to the big companies who use big data?
- How should we handle the complexity of AI-related technology and the coexistence of different systems?
- Which is the best regional or global forum for developing a policy, or an ethical and legal framework, for AI?