The Global Race to Responsibly Regulate AI

The campaign for the responsible use of Artificial Intelligence (AI) has spread like wildfire, and the magnitude of the problem is growing faster than authorities can keep pace. Agencies around the world are working to make sense of it all and to provide practical solutions. For global business leaders, this means staying informed about compliance, ethical standards, and innovation in the responsible use of AI. Here is the current state of AI regulation and legislation around the globe: 

While the following is not a comprehensive list, it shows how far the world still has to go to regulate AI adequately. 

United States

In the U.S., progress toward regulating AI is underway. The Federal Trade Commission (FTC) has joined the campaign, starting by applying existing laws to the responsible use of AI. The burden of change rests on business leaders to hold themselves accountable for mitigating bias. In April 2020, the FTC published a blog post on U.S. AI regulation to warn and guide businesses about the misuse of AI.  

“The use of AI tools should be transparent, explainable, fair and empirically sound,” stated Andrew Smith, Director of the FTC’s Bureau of Consumer Protection. In the post, Smith highlighted several points for businesses using AI to keep in mind: 

  • Transparency in the collection and use of data 
  • Explainable decision-making for consumers 
  • Fair decision-making 
  • Robust, empirically sound data and modeling 
  • Accountability for compliance, ethics, fairness, and nondiscrimination 

Thus far, the FTC has addressed the equitable use of AI under the following laws: 

The Fair Credit Reporting Act (FCRA): Prohibits the use of biased algorithms in housing, employment, insurance, and credit decisions. 

The FTC Act (FTCA): Prohibits the commercial use of racially biased algorithms. 

The Equal Credit Opportunity Act (ECOA): Prohibits discrimination in credit decisions based on race, color, religion, national origin, sex, marital status, age, or receipt of public assistance. AI that discriminates against these “protected classes” is banned. 

In 2022, the Equal Employment Opportunity Commission (EEOC) released technical assistance guidance on algorithmic bias in employment decisions under the Americans with Disabilities Act (ADA). Charlotte Burrows, Chair of the EEOC, reported that more than 80% of all employers and more than 90% of Fortune 500 companies use such technology. Although no federal laws specifically target the use of AI, these existing laws and guidelines serve as the foundation for future legislation and regulation.  

Europe 

Europe has been regulating the commercial use of technology since 2018, when the General Data Protection Regulation (GDPR) took effect; the GDPR remains a key resource for achieving and maintaining compliance with Europe’s rules on the responsible use of data and AI. There has been much debate among executives and regulators over the European Union’s AI Act, a comprehensive set of rules for governing artificial intelligence. Executives argue that the rules will make it difficult to contend with international competitors. 

“Europe is the first regional bloc to significantly attempt to regulate AI, which is a huge challenge considering the wide range of systems that the broad term ‘AI’ can cover,” said Sarah Chander, senior policy adviser at digital rights group EDRi. 

China 

In 2017, China’s State Council released the Next Generation Artificial Intelligence Development Plan, a set of guidelines for the development and use of AI. The plan laid the groundwork for more specific rules, including the now-active provisions on the management of algorithmic recommendations in Internet information services and the provisions on the management of deep synthesis in Internet information services.  

In May 2023, the Cyberspace Administration of China (CAC) drafted the Administrative Measures for Generative Artificial Intelligence Services. The measures require companies to pass a “safety assessment” before bringing new AI products to market, and they call for training data to be truthful and accurate and for algorithms to be free of discrimination. The focus is on prevention as the first step toward responsible AI. 

Brazil 

In December 2022, Brazilian senators released a report containing studies and a draft regulation on responsible AI governance, intended to inform future legislation planned by Brazil’s Senate. The draft regulation rests on three central pillars: 

  • Guaranteeing the rights of people affected by AI 
  • Classifying AI systems by risk level 
  • Providing for governance measures 

Japan 

In March 2019, Japan’s Integrated Innovation Strategy Promotion Council created the Social Principles of Human-Centric AI. The two-part provision is meant to address the social issues that accompany AI innovation. One part established seven social principles to govern the public and private use of AI: 

  • Human-centricity 
  • Education/literacy 
  • Privacy protection 
  • Ensuring security 
  • Fair competition 
  • Fairness, accountability, and transparency 
  • Innovation 

The other part, which expounds on the 2019 principles, targets AI developers and the companies that employ them: the AI Utilisation Guidelines are meant to serve as an instruction manual for developers and companies to build their own governance strategies. A 2021 provision, the Governance Guidelines for Implementation of AI Principles, adds hypothetical examples of AI applications for them to review. While none of these guidelines are legally binding, they are Japan’s first steps toward regulating AI. 

Canada 

In June 2022, Canada’s federal government introduced the Digital Charter Implementation Act, the country’s first piece of legislation aimed at strengthening efforts to mitigate bias. The act includes the Artificial Intelligence and Data Act (AIDA), which would regulate international and interprovincial trade in AI systems and require developers to assess and mitigate risk and bias. Public disclosure requirements and prohibitions on harmful uses are also included. The act is a preliminary step toward officially enacted AI legislation in Canada. 

India 

Currently, there are no official regulatory requirements in India for the responsible use of AI. NITI Aayog, the Indian government’s public policy think tank, has released a series of working research papers to begin addressing the issue. The first installment, Towards Responsible #AIforAll, discusses the potential of AI for society at large and offers recommendations for AI adoption in the public and private sectors. The next part, an approach document for India, established principles for responsible AI, the economic potential of AI, support for large-scale adoption, and ways to establish and instill public trust. The final paper, Adopting the Framework: A Use Case Approach on Facial Recognition Technology, is meant to serve as a “benchmark for future AI design, development, and deployment in India.” 

Switzerland 

Switzerland currently has no specific regulations governing the responsible use of AI. Instead, existing laws are applied as cases arise: the General Equal Treatment Act, for example, along with product liability and general civil law, addresses the prevention of bias in the public and private sectors. 

The Future of a Global Approach 

To limit, or completely eradicate, AI bias, there must be a communal effort and a commitment to accuracy, trust, and compliance. Business leaders and developers should pursue preventive and corrective measures for transparency, accuracy, and accountability when employing AI. Regulators must also do their due diligence, providing comprehensive, appropriate, and timely legislation that applies today and remains relevant in the future.  

Author: Saquondria Burris
