The Global Race to Regulate Bias in AI

The campaign for responsible use of Artificial Intelligence (AI) has grown like a massive wildfire, and the magnitude of the problem is growing faster than authorities can keep up with. Agencies around the world are working to make sense of it all and provide practical solutions for change. For global business leaders, this means staying informed about compliance, ethical standards, and innovation surrounding the ethical use of AI. To date, here’s the state of AI regulation and legislation around the globe: 

While the following is not a comprehensive list, it shows the distance that needs to be traveled to adequately regulate AI.

United States

In the U.S., progress toward regulating AI is well underway. The Federal Trade Commission (FTC) has joined the campaign, starting by applying existing laws to the responsible use of AI. The burden of change rests on business leaders to hold themselves accountable for mitigating bias. In April 2020, the FTC published a blog post on U.S. AI regulation to warn and guide businesses about the misuse of AI.

“The use of AI tools should be transparent, explainable, fair and empirically sound,” stated Andrew Smith, then Director of the FTC’s Bureau of Consumer Protection. In the post, Smith highlighted some important points for businesses using AI to remember:

  • Transparency in the collection and use of data
  • Explainable decision-making for consumers
  • Fair decision-making
  • Robust, empirically sound data and modeling
  • Accountability for compliance, ethics, fairness, and nondiscrimination

Thus far, the FTC has addressed the equitable use of AI under:

The Fair Credit Reporting Act (FCRA): Applies when biased algorithms are used to deny people housing, employment, insurance, or credit.

The FTC Act (FTCA): Prohibits unfair or deceptive practices, which can include the commercial use of racially biased algorithms.

The Equal Credit Opportunity Act (ECOA): Prohibits discrimination in credit decision-making based on race, color, religion, national origin, sex, marital status, age, or receipt of public assistance. Discriminatory AI targeting these “protected classes” is banned.

In 2022, the Equal Employment Opportunity Commission (EEOC) released technical assistance guidance on algorithmic bias in employment decisions, based on provisions of the Americans with Disabilities Act (ADA). Charlotte Burrows, Chair of the EEOC, reported that more than 80% of all employers and more than 90% of Fortune 500 companies are using such technology. Although no federal laws specifically target the use of AI, these existing laws serve as the foundation for future legislation and regulations.

Europe 

Europe has been working on regulating the commercial use of technology since 2018, when the General Data Protection Regulation (GDPR) took effect; the GDPR remains a key resource for achieving and maintaining compliance with Europe’s rules on the responsible use of data in AI. There has been much debate among executives and regulators over the European Union’s proposed comprehensive set of rules for governing artificial intelligence, with executives arguing that the rules will make it difficult to contend with international competitors.

“Europe is the first regional bloc to significantly attempt to regulate AI, which is a huge challenge considering the wide range of systems that the broad term ‘AI’ can cover,” said Sarah Chander, senior policy adviser at digital rights group EDRi. 

China 

In 2017, the Chinese State Council released the Next Generation Artificial Intelligence Development Plan, a set of guidelines for the development and use of AI. The plan paved the way for binding rules on specific applications, including the now-active provisions on the management of algorithmic recommendations of Internet information services and the provisions on the management of deep synthesis of Internet information services.

In May 2023, China’s Cyberspace Administration (CAC) drafted the Administrative Measures for Generative Artificial Intelligence Services. The draft requires a “safety assessment” before companies can bring new AI products to market, calls for the use of truthful, accurate data free of discriminatory algorithms, and treats prevention as the major first step toward responsible AI.

Brazil 

In December 2022, Brazilian senators released a report containing studies and a draft regulation on responsible AI governance, intended to inform future legislation planned by Brazil’s Senate. The draft regulation centers on three pillars:

  • Guaranteeing the rights of people affected by AI
  • Classifying AI systems by risk level
  • Providing for governance measures

Japan 

In March 2019, Japan’s Integrated Innovation Strategy Promotion Council created the Social Principles of Human-Centric AI. The two-part provision is meant to address the myriad social issues that have come with AI innovation. One part established seven social principles to govern the public and private use of AI:

  • Human-centricity
  • Education/literacy
  • Data protection
  • Ensuring safety
  • Fair competition
  • Fairness, accountability, and transparency
  • Innovation

The other part, which expands on the 2019 principles, targets AI developers and the companies that employ them: the AI Utilisation Guidelines are meant as an instruction manual for developers and companies building their own governance strategies. A 2021 follow-up, the Governance Guidelines for Implementation of AI Principles, adds hypothetical examples of AI applications to review. While none of these guidelines are legally binding, they are Japan’s first steps toward regulating AI.

Canada 

In June 2022, Canada’s federal government introduced the Digital Charter Implementation Act, which contains the country’s first legislation aimed at strengthening efforts to mitigate bias. It includes the Artificial Intelligence and Data Act, which regulates international and interprovincial trade in AI and requires developers to assess and mitigate risks of harm and bias. Public disclosure requirements and prohibitions on harmful uses are also included. The act is a preliminary step toward officially enacting AI legislation in Canada.

India 

Currently, there are no official regulatory requirements in India for the responsible use of AI. NITI Aayog, the Indian government’s policy commission, has released a series of working research papers to begin addressing the issue. The first installment, Towards Responsible #AIforAll, discusses the potential of AI for society at large and offers recommendations for AI adoption in the public and private sectors. The next part, an approach document for India, establishes principles for responsible AI, covering its economic potential, support for large-scale adoption, and ways to establish and instill public trust. The final paper, Adopting the Framework: A Use Case Approach on Facial Recognition Technology, is meant to be a “benchmark for future AI design, development, and deployment in India.”

Switzerland 

Switzerland currently has no specific regulations governing the responsible use of AI; existing laws are applied to cases as they arise. For example, equal-treatment, product liability, and general civil laws address the prevention of bias in the public and private sectors.

The Future of a Global Approach 

To limit or completely eradicate AI bias, there needs to be a communal effort and commitment to accuracy, trust, and compliance. Business leaders and developers should pursue preventive and corrective measures for transparency, accuracy, and accountability when employing AI. Regulators must also do their due diligence, providing comprehensive, appropriate, and timely legislation that applies to the present and remains relevant in the future.

The post The Global Race to Regulate Bias in AI appeared first on Actian.


Author: Saquondria Burris

Algorithmic Bias: The Dark Side of Artificial Intelligence

The growth of social media and the advancement of mobile technology have created exponentially more ways to create and share information. Advanced data tools, such as AI and data science, are increasingly employed to process and analyze this data. Artificial Intelligence (AI) combines computer science with robust datasets and models to facilitate automated problem-solving. Machine Learning (ML), a subfield of AI that uses statistical techniques to enable computers to learn without explicit programming, trains models on data inputs to produce actions and responses for users. This data is being leveraged to make critical decisions surrounding governmental strategy, public assistance eligibility, medical care, employment, insurance, and credit scoring.

Even one of the largest technology companies in the world is not immune. Amazon relies heavily on AI and ML for storing, processing, and analyzing data, yet in 2015, despite its size and technical sophistication, it discovered bias in its hiring algorithm. The algorithm favored men because the dataset it referenced was drawn from applicants over the previous 10 years, a sample containing far more men than women.

Bias was also found in COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), an algorithm used by US court systems to predict offender recidivism. Analysis of the data used, the chosen model, and the algorithm overall showed that it produced false positives for almost half (45%) of African American offenders, compared with 23% of Caucasian American offenders.
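Disparities like these are typically surfaced by computing error rates per group. A minimal sketch of a false-positive-rate comparison, using made-up illustration data rather than the actual COMPAS records:

```python
from collections import defaultdict

def false_positive_rates(records):
    """Compute the false positive rate per group.

    Each record is (group, predicted_high_risk, reoffended).
    A false positive is a "high risk" prediction for someone
    who did not actually reoffend.
    """
    fp = defaultdict(int)         # false positives per group
    negatives = defaultdict(int)  # actual non-reoffenders per group
    for group, predicted, actual in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives}

# Hypothetical data shaped to mirror the disparity described above:
data = (
    [("A", True, False)] * 45 + [("A", False, False)] * 55 +
    [("B", True, False)] * 23 + [("B", False, False)] * 77
)
rates = false_positive_rates(data)
print(rates)  # → {'A': 0.45, 'B': 0.23}
```

A gap this wide between groups among people who did not reoffend is exactly the kind of signal auditors look for before a model reaches production.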

Without protocols and regulations to enforce checks and balances for the responsible use of AI and ML, society will be on a slippery slope of bias issues related to socioeconomic class, gender, race, and even access to technology. Without clean data, algorithms can intrinsically create bias simply through the use of inaccurate, incomplete, or poorly structured datasets. Avoiding bias starts with accurately assessing the quality of the dataset, which should be:

  • Accurate 
  • Clean and consistent 
  • Representative of a balanced data sample 
  • Clearly structured and defined by fair governance rules and enforcement 
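Parts of this checklist can be automated. A minimal sketch of completeness and balance checks using only the Python standard library (the field names, sample rows, and the 0.8 dominance threshold are illustrative assumptions, not a standard):

```python
from collections import Counter

def audit_dataset(rows, label_key, balance_threshold=0.8):
    """Run basic quality checks on a list-of-dicts dataset.

    Reports completeness (rows with missing values) and label
    balance (whether any class dominates beyond the threshold).
    """
    report = {}
    # Completeness: flag rows containing missing (None/empty) fields.
    incomplete = [r for r in rows if any(v in (None, "") for v in r.values())]
    report["incomplete_rows"] = len(incomplete)
    # Balance: check the label distribution for a dominant class.
    counts = Counter(r[label_key] for r in rows)
    dominant_share = max(counts.values()) / len(rows)
    report["label_counts"] = dict(counts)
    report["balanced"] = dominant_share <= balance_threshold
    return report

# Hypothetical hiring-style rows for illustration:
rows = [
    {"label": "hired", "years_experience": 5},
    {"label": "rejected", "years_experience": None},  # missing value
    {"label": "rejected", "years_experience": 2},
    {"label": "rejected", "years_experience": 7},
]
print(audit_dataset(rows, "label"))
```

Checks like these are only a starting point; representativeness relative to the population the model will serve still requires human judgment.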

Defining AI Data Bias 

The problem with applying Artificial Intelligence to major decisions is the presence of, and opportunity for, bias that causes significant disparities for vulnerable groups and underserved communities. Part of the problem is the volume and processing methods of Big Data, but there is also the potential for data to be used intentionally to perpetuate discrimination, bias, and unfair outcomes.

“What starts as a human bias turns into an algorithmic bias,” states Gartner. In 2019, algorithmic bias was defined by Harvard researchers as the application of an algorithm that compounds existing inequities in socioeconomic status, race, ethnic background, religion, gender, disability, or sexual orientation and amplifies inequities in health systems. Gartner also describes four types of algorithmic bias:

  • Amplified Bias: Systemic or unintentional bias in the data used to train machine learning algorithms.
  • Algorithm Opacity: End-user data “black boxes,” whether intrinsic or intentional, raise concerns about the integrity of decision-making.
  • Dehumanized Processes: Views on replacing human intelligence with ML and AI are highly polarized, especially when these tools make critical, life-changing decisions.
  • Decision Accountability: Organizations using data science often lack sufficient reporting and accountability in their strategies to mitigate bias and discrimination.

A study by Pew Research found that, “at a broad level,” 58% of Americans feel that computer programs will always reflect some level of human bias, although 40% think these programs can be designed to be bias-free. That may be true for data about shipments in a supply chain or maintenance records predicting when your car needs an oil change, but human demographics, behaviors, and preferences are fluid and can shift based on data points not reflected in the datasets being analyzed.

Chief data and analytics officers and decision-makers must challenge themselves by ingraining bias prevention throughout their data processing algorithms. This can be easier said than done, considering the volume of data that many organizations process to achieve business goals. 

The Big Cost of Bias  

The discovery of data disparities and algorithmic manipulation that favor certain groups and reject others has severe consequences. Because of the severity of the impact of bias in Big Data, more organizations are prioritizing bias mitigation in their operations. InformationWeek surveyed companies affected by biased algorithms about the impact of AI bias. It found bias related to gender, age, race, sexual orientation, and religion. Damages to the businesses themselves included:

  • Lost Revenue (62%) 
  • Lost Customers (61%) 
  • Lost Employees (43%) 
  • Paying legal fees due to lawsuits and legal actions against them (35%) 
  • Damage to their brand reputation and media backlash (6%) 

Solving Bias in Big Data 

Regulation of bias and other issues created by using AI or poor-quality data is in different stages of development, depending on where you are in the world. In the EU, for example, an Artificial Intelligence Act is in the works that will identify, analyze, and regulate AI bias.

However, true change starts with business leaders who are willing to do the legwork of keeping diversity, responsible usage, and governance at the forefront of their data policies. “Data and analytics leaders must understand responsible AI and the measurable elements of that hierarchy — bias detection and mitigation, explainability, and interpretability,” Gartner states. Attention to these elements supports a well-rounded approach to finding, solving, and preventing issues surrounding bias in data analytics.

Lack of attention to building public trust and confidence can be highly detrimental to data-dependent organizations. Implement these strategies across your organization as a foundation for the responsible use of Data Science tools: 

  • Educate stakeholders, employees, and customers on the ethical use of data including limitations, opportunities, and responsible AI.  
  • Establish a process of continuous bias auditing using interdisciplinary review teams that discover potential biases and ethical issues with the algorithmic model. 
  • Mandate human interventions along the decision-making path in processing critical data. 
  • Encourage collaboration with governmental, private, and public entities, thought leaders and associations related to current and future regulatory compliance and planning and furthering education around areas where bias is frequently present. 
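The continuous-auditing step above can start very simply: periodically compare favorable-outcome rates across groups and flag large gaps for the interdisciplinary review team. A hedged sketch, where the 0.2 gap threshold and the group labels are assumptions for illustration rather than any regulatory standard:

```python
def parity_gap(decisions, threshold=0.2):
    """Flag groups whose favorable-outcome rate trails the best group.

    decisions: list of (group, favorable: bool) tuples.
    Returns (rates, flagged) where flagged groups fall more than
    `threshold` below the highest-rate group.
    """
    totals, favorable = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        favorable[group] = favorable.get(group, 0) + (1 if ok else 0)
    rates = {g: favorable[g] / totals[g] for g in totals}
    best = max(rates.values())
    flagged = [g for g, r in rates.items() if best - r > threshold]
    return rates, flagged

# Hypothetical audit run over recent decisions:
decisions = ([("men", True)] * 70 + [("men", False)] * 30 +
             [("women", True)] * 40 + [("women", False)] * 60)
rates, flagged = parity_gap(decisions)
print(rates, flagged)  # women's 0.4 rate trails men's 0.7 by more than 0.2
```

A flagged group is a prompt for human review, not proof of discrimination: the review team still has to examine the model, the data, and legitimate explanatory factors.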

Minimizing bias in big data requires stepping back to discover how it happens and which preventive measures and strategies are effective and scalable. The solution may need to be as big as Big Data itself to surmount the shortcomings present today, which will certainly grow in the future. These strategies are an effective way to stay informed, measure success, and connect with the right resources for current and future bias mitigation in algorithms and analytics.

The post Algorithmic Bias: The Dark Side of Artificial Intelligence appeared first on Actian.


Author: Saquondria Burris
