The development of regulations specific to artificial intelligence (“AI”) is still at an early stage. In Canada, there is currently no law that specifically regulates AI, but this does not mean that developers and users of AI systems are free of legal obligations. They remain subject to existing legal regimes (tort, criminal law, privacy, consumer protection, etc.). Organizations that take a proactive stance in developing and implementing responsible and ethical AI principles will be better prepared for upcoming regulatory changes, will reduce the risks associated with AI, and will position themselves as leaders in responsible corporate conduct.
Setting out and detailing ethical principles is an important first step. However, organizations should go further and develop concrete internal governance processes that operationalize those principles and assign clear roles and responsibilities within the organization. Several frameworks have already been published, as discussed in our previous publications, many of which focus on ethical principles (see also our discussion of the AI Policy Framework released by the International Technology Law Association and the OECD).
The Singapore Model AI Governance Framework – From Principles to Governance (Second Edition) (the “Singapore Framework”) is particularly interesting because it does not simply state ethical principles; it also links them to precise, concrete measures that organizations can implement. The main principles underlying the Singapore Framework are: (1) decision-making processes should be explainable, transparent and fair; and (2) AI solutions should be human-centric (i.e. used to amplify human capabilities, including well-being and safety). These principles constitute the underlying pillars of the Singapore Framework and are further developed in four key areas.