Artificial Intelligence (AI) – Ethical use and Regulation
Reflecting on our use of technology in early 2019, what we have become familiar with (and possibly blasé about) would seem impossible if explained to someone from just a few generations ago.
We can wake up and ask Alexa to start brewing our coffee before asking for Siri’s help in deciding what to wear based on the weather. When we leave the house, we can lock the front door from our phone, and while we are at work, we can trust our refrigerator to do the grocery shopping for us and have it delivered to our door.
With computer processing power doubling roughly every two years, camera and other sensor technologies becoming more affordable, and faster connectivity due to be rolled out in the near future (by way of 5G networks), technology continues to make our lives easier, and devices that can “learn” or “think” for themselves will continue to be produced. Indeed, it is anticipated that by 2020 there will be 50 billion devices in the world connected to the internet (the so-called “Internet of Things”). The integration and development of Artificial Intelligence (AI) will inevitably continue to transform our society and our daily lives in ways we cannot yet predict. Already, AI is transforming elements of industries such as legal services, banking, cybersecurity, transportation and healthcare.
With technology charging ahead, laws must catch up, and developing the necessary legislation requires consideration of the ethical implications of artificial intelligence. Given that early indicators of certain illnesses can be detected from how we speak, how comfortable would we be with virtual assistants (more often used for playing music) having access to that information? Should we permit AI to make decisions about military targets?
The European Commission has now released its first draft of Ethics Guidelines for the development and use of AI, and interested parties are invited to submit comments on that draft to shape what the report calls “Trustworthy AI…since human beings will only be able to confidently and fully reap the benefits of AI if they can trust the technology”.
The UK had already begun legislating on AI as of July 2018, when it passed legislation setting out the rules on responsibility for a road accident caused by an automated (driverless) vehicle. This legislation provided welcome clarity, and future developments in how other laws approach the regulation of AI (for example, to deal with an algorithm that learns and evolves of its own accord, which the UK government considers a foreseeable possibility) will not only be interesting, but will impact us all.
While great care has been taken in the preparation of the content of this article, it does not purport to be a comprehensive statement of the relevant law and full professional advice should be taken before any action is taken in reliance on any item covered.