Generative AI: Legislation or Self-Regulation?
Generative AI holds both promise and risk for global brands, to say nothing of the college student looking for homework help. Simply defined, generative AI creates different types of content, such as text, images, and audio, using machine learning models trained on large amounts of data. The resulting content can appear to have been created by humans.
Generative AI has become more accessible thanks to new tools like ChatGPT, and the latest iterations promise to upend many business sectors. Meanwhile, the specter of regulation hangs over the widespread adoption of AI as governments look toward an “accountability mechanism” for the technology.
Dr. Mukesh Dalal, an AI and machine learning expert who builds business applications using generative AI, joined us on a teleconference, hosted by GLG’s Abhishek Tyagi, to discuss business and regulatory implications of generative AI and his predictions for the future. An edited summary of the discussion follows.
Can you share insights on some of the emerging AI business models?
AI business models are evolving rapidly. More data is available to AI while computation costs decrease, and that is leading many businesses to take notice. Even brands and vendors that aren’t typical AI players are enhancing their offerings with AI capabilities. Hardware brands like NVIDIA and Qualcomm are creating hardware for AI use cases. Public cloud brands like Amazon and Microsoft are adding AI to current services and creating new AI-based services. Database brands like Microsoft and Snowflake are enhancing their data capabilities to support AI use cases.
New brands are also creating AI-specific offerings. Databricks is creating data services for AI, while Snorkel is allowing people to label AI training sets. Companies like OpenAI are creating general-purpose and pretrained models like GPT, and Siemens is creating industry-specific models.
What are the risks of intellectual property (IP) and plagiarism in generative AI technology?
One of the biggest risks is that generated music, video, or text can be so similar to copyrighted material that it clearly violates copyright law. Another area of concern is students completing homework assignments using generative AI systems.
Are there emerging solutions to resolve this, and what are the challenges with watermark AI?
Watermarking can detect whether content was generated by AI, but it’s a difficult problem, and most people believe it’s unsolvable: it isn’t possible to securely watermark all AI-generated content. Detection technologies will evolve, and the people circumventing them will evolve too. It’s a cat-and-mouse game without a perfect solution.
Another solution is disclosure: when you use generative AI content, you disclose it. If disclosure becomes required, it could work, but there will always be people who violate the requirement and remain undetected.
Can you talk about some of the current U.S. AI regulations?
The federal government is reluctant to impose regulations that might slow AI innovation or put the U.S. at a disadvantage relative to countries like China. AI regulation in the U.S. is therefore led by the states, with California at the forefront on generative AI. There are also regulations on specific capabilities like facial recognition, which some U.S. jurisdictions have banned.
Another example is ChatGPT, banned in New York City schools, though it’s not clear how long that will continue, because some educators claim ChatGPT can be a powerful tool. Several colleges are actually encouraging the use of ChatGPT.
How do you evaluate the EU Artificial Intelligence Act?
The European Union is creating regulations to apply across EU countries. The act was adopted in December 2022, but countries like Germany want improvements. It isn’t clear how soon this will get implemented. Two European standards organizations are defining the AI Act clearly so it can be applied.
Certain applications will be highly regulated. Say a company is using AI to screen job applicants: the AI may introduce bias, putting certain applicants at a disadvantage.
Do you see the AI Act addressing challenges emerging across all the generative AI segments, such as images, text, and artworks?
Generative AI development has outpaced the AI Act’s design; the act didn’t anticipate ChatGPT. Some risks will not be covered by the AI Act itself but might be incorporated by the standards bodies as they draft the details. The act will need to evolve to handle a range of risks.
Comparing different sectors and applications, what kind of regulation is needed?
This is a controversial topic, and experts are divided. Several existing regulations, around product safety and defamation, for example, could be applied to AI. Applying existing regulation to AI first is the ideal approach.
There are also cases where current regulations fail to address new harms. Section 230 of the U.S. Communications Decency Act, for example, protects internet platforms from being treated as publishers. If I post something defamatory about someone on Facebook, the legal burden is on me, because Facebook is not the publisher. That needs updating.
How do you see the impact of AI technologies on the auditing market?
AI is clearly both an enabler and a risk for corporations. As companies adopt AI, risk increases, so it will be important to understand how much a company is exposed and how it mitigates that exposure. This suggests growth in the AI audit industry, and not just for AI but also for data: the audit scope will include the data used to drive AI applications.
Financial auditors, already experts in auditing, will add AI to their practice, and it may become one of the fastest-growing sectors of their business. Deloitte and PricewaterhouseCoopers, already advanced in data use and AI because of their consulting businesses, may become leaders. Other financial auditors like EY and KPMG will evolve their businesses to focus on AI and data audits.
What is your opinion on the shift in the media and the entertainment industry due to generative AI?
Generative AI will cause tremendous upheaval. Media and entertainment companies rely on generating new content, such as movies, scripts, and songs. Professionals have traditionally created that content, and they have been paid handsomely for it. In the past, nonprofessionals also created content, but the quality was low: my home video posted on YouTube is nothing like a Spielberg movie.
With generative AI, quality is rapidly improving, and it’s sometimes difficult to distinguish AI-generated content from professional work. AI tool costs will decrease, and the differentiator might become marketing and promotion. There could be other consequences: fewer professionals and artists may be employed, but those who are would be paid more. Expect mass upheaval in the next few years.
What are some effective solutions to the threat this technology poses to artists?
Artists will need to find ways to preserve their value and their jobs. They might focus on different types of content, advertise that content, and increase their output. Even if the price per unit goes down, they can make it up in volume: an artist creating 10 pieces of content today could generate thousands of content assets per year.
What is your outlook on the emerging players?
I expect cloud providers to keep adding revenue-generating AI services and building AI into existing services. Companies like Amazon, Google, and Microsoft will benefit from AI adoption; they will deploy it on public clouds and have short-term advantages. Other organizations with short-term advantages are companies like OpenAI that provide the tools to create and deploy AI capabilities.
Data companies may combine AI with the data cloud and compete with the current public cloud vendors. There’s uncertainty in these models; they’ll evolve, but it is unclear who will win.
It’s going to be a wild ride. Regulations will grow, and risks will grow. We won’t know how to deal with risk in the short term. When we look back in five years, we’ll marvel at the growth that has occurred because of AI.
About Dr. Mukesh Dalal
Dr. Mukesh Dalal has expertise in AI and machine learning and is currently Founder-CEO at AIDAA, focused on building business applications using GPT-3 and DALL-E-2. Previously, he was Chief AI Officer, leading analytics and automation strategy, at Stanley Black & Decker Inc. He was Global Head of AI & Data and Chief Analytics Officer at Bose Corporation. Before that, he was Principal Scientist at Charles River Analytics Inc., and he led R&D in data science and machine learning as Chief Scientist at BAE Systems. Mukesh was also Chief Executive Officer at Big Data, and Chief Architect and Scientist at Webtrends. Mukesh has a PhD in artificial intelligence.
This technology industry article is adapted from the GLG Teleconference “Regulating Generative AI: Legislative Action or Industry Self-Regulation?” If you would like access to the transcript for this event or would like to speak with technology industry experts like Mukesh Dalal or any of our approximately 1 million industry experts, contact us.
Questions Asked during the Teleconference:
- Can you share insights on some of the emerging AI business models?
- How do you see the medium- to long-term sustainability of these business models?
- What are the risks of IP and plagiarism? Do you see this as a threat to the overall generative AI technology?
- Are there emerging solutions that can help resolve such issues? What are the challenges with watermark AI solutions?
- Could you talk about some of the current regulations that you see for AI in the U.S.?
- How do you evaluate the EU Artificial Intelligence Act?
- Can the EU AI Act address challenges emerging across all generative AI segments (image, text, artworks, etc.)?
- What are the pros and cons of the different approaches pursued by different government entities?
- Comparing different sectors and applications, what kind of regulation is needed — legislative action or industry self-regulation?
- What is the impact of artificial intelligence technologies on the AI auditing market?
- Looking at the landscape and major players in the AI auditing market, what could be a competitive advantage?
- What is your opinion on the shift in the media and entertainment industry that you observe with generative AI?
- What are some of the effective solutions to the threat this technology poses to artists, given the rise in generative images, synthesis, and voice technologies?
- Are businesses prepared for a successful implementation of generative AI with the backdrop of associated risks?
- What is your short- to medium-term outlook on emerging players and the sustainability of business models considering the industry challenges?