AI Regulations Are Coming – Here Are 3 Ways Business Leaders Can Help Formulate the New Rules

Solutions Review’s Contributed Content Series is a collection of contributed articles written by thought leaders in enterprise technology. In this feature, Cohesity’s Greg Statton offers commentary on new and upcoming AI regulations and how enterprises can help write the rules.

The calls for regulating artificial intelligence (AI) are getting stronger. Developers, business leaders, academics and politicians on both sides of the aisle are calling for new rules aimed at making AI more accountable. This summer, Senate Majority Leader Chuck Schumer unveiled a legislative framework for regulating the new technology and announced a series of “AI Insight Forums” with top AI experts to explore how these regulations would work. Even self-described libertarian business leader Elon Musk has called for regulations around AI.

As questions surrounding AI regulation move from ‘if’ to ‘when’ and ‘how’, business leaders need to be proactive in helping the federal government formulate and roll out these new rules. A failure to participate in the process could threaten businesses’ ability to use AI to innovate, create new products and compete with foreign entities. AI is poised to radically transform the world around us, and the time to act is now. Business leaders simply can’t watch the situation play out from the sidelines.

Restoring Trust by Building a Culture of Responsible AI

As AI continues to evolve, the federal government is concerned about safety, security and trust – including potential risks from misuse of the technology, such as threats to biosecurity and cybersecurity, as well as broader societal effects like protecting Americans’ rights and physical safety. Data plays a crucial role in these concerns (AI is only as good as the data you feed into it, of course), and the quality and management of that data directly influence the outcomes and potential risks that come with AI.

Business leaders can partner with the government to address these concerns by committing to common-sense, responsible AI development and deployment. This includes ensuring products are safe before launching them into the marketplace, building security and privacy safeguards directly into AI engines and earning the public’s trust by building a culture of transparency and accountability throughout their organizations. In parallel, businesses should prioritize research on the societal risks posed by AI systems – collaborating with governments, civil society and academia to share information and best practices for managing AI risks.

Here are three ways businesses can proactively address these concerns and create a culture of responsible AI in their organizations:

Develop & Implement Internal AI Ethics Guidelines

The business community can offer the government best practices and guidelines by developing its own internal rules and regulations. This requires establishing a clear set of ethical principles that focus on transparency, fairness, privacy, security and accountability. Most companies today already adhere to clear policies around risk and bias, so it wouldn’t be much of a stretch to extend these policies across all data in the organization.

As policies change over time, it’s important to look back and curate older data sets that are still relevant to make sure new, evolving principles are being applied. As models learn, even small, imperceptible biases can create huge problems down the road – just ask Apple. Building and applying these guidelines internally helps ensure that government regulations have already been field-tested in practice by the time they’re rolled out on a grander scale.
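To make this concrete, here is a minimal sketch (in Python) of the kind of disparity check a team might fold into its data-curation reviews. The group labels, records and review threshold are all hypothetical; a real program would use a vetted fairness toolkit and metrics chosen to match the organization’s own ethics guidelines.

    from collections import defaultdict

    def selection_rates(records):
        """Rate of favorable outcomes per group, so reviewers can spot
        disparities as older data sets are re-curated."""
        totals, favorable = defaultdict(int), defaultdict(int)
        for group, outcome in records:  # outcome: 1 = favorable, 0 = not
            totals[group] += 1
            favorable[group] += outcome
        return {g: favorable[g] / totals[g] for g in totals}

    # Hypothetical records of (group label, outcome) drawn from a data set.
    records = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    rates = selection_rates(records)
    gap = max(rates.values()) - min(rates.values())
    print(rates, f"gap: {gap:.2f}")  # flag for human review if the gap exceeds policy

The point is not the specific metric but the habit: every re-curation pass produces a number a reviewer can compare against the organization’s stated principles.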

Encourage Collaboration & Knowledge Sharing

As AI democratizes, organizations should foster a culture of collaboration and knowledge sharing within the organization and with key stakeholders – employees, partners, customers and the public as a whole. AI is advancing at a pace the world has never seen, but this agility has expanded threat surfaces beyond traditional perimeters and threatens to create information silos within organizations. Knowledge workers across disciplines have ideas and use cases for AI that engineers and developers haven’t considered. Openly encouraging cross-functional collaboration makes it easier to monitor and control AI development and use throughout the organization while breaking down silos.

Provide AI Ethics Training and Education

AI has the power to make employees’ lives easier, but it comes with great responsibility. Businesses need to make sure their workers understand the risks associated with using public and proprietary AI tools – especially the risks surrounding adversarial attacks, privacy preservation and bias mitigation. Clear guidelines around the kind of data employees can input into AI engines help protect personally identifiable information (PII), intellectual property (IP) and other trade secrets. Consent is also important – making sure customers, employees and other stakeholders are comfortable with their data being used to train AI models. The last thing businesses want is for employees to go rogue and use AI on their own, outside the guidelines set out by the organization. The lack of visibility and control would be a recipe for disaster and set a dangerous precedent at a time when federal regulations are being formed.
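As one illustration of what such guidelines can look like in practice, here is a minimal sketch (in Python) of a redaction step applied to prompts before they leave the organization for an external AI engine. The patterns and placeholder tokens are hypothetical; a production system would rely on a vetted PII-detection library and policies tuned to the organization’s own data.

    import re

    # Hypothetical example patterns; real policies would cover far more cases.
    PII_PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def redact(prompt: str) -> str:
        """Replace likely PII with placeholders before a prompt is sent
        to a public AI tool."""
        for label, pattern in PII_PATTERNS.items():
            prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
        return prompt

    print(redact("Follow up with jane.doe@example.com, SSN 123-45-6789."))
    # -> Follow up with [EMAIL REDACTED], SSN [SSN REDACTED].

A lightweight guardrail like this gives the organization visibility into what leaves its perimeter, which is exactly the control that goes missing when employees use AI tools on their own.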

Get Proactive to Get Ahead

Public trust is eroding across the board, according to the Edelman Trust Barometer, and governments are looking to increase regulations to get back on track. AI is firmly in their crosshairs, and business leaders need to get out in front of the process to provide clear ethical and procedural guidelines around the new technology’s development and deployment. This includes developing their own responsible AI principles internally, encouraging collaboration across the organization and ensuring all stakeholders are up to date on clear guidelines for protecting PII, IP and other critical data. AI is going to change the world, and businesses have an opportunity to show regulators that they are serious about taking responsibility for using the technology in a safe, ethical manner.
