Today, we are enthralled by the possibilities of AI. As awareness of AI grows, so does its use, along with an increasing need for regulation and ethical discernment from the companies that develop, deploy and use it.
While AI policies are emerging, development continues to outpace regulation. With its rapid expansion, the AI industry faces growing scrutiny over ethical concerns, including data usage and transparency, especially around training data and potential copyright infringement.
AI technology continues to permeate nearly every imaginable data source, and the ethical challenges that exist today may look completely different in weeks, months or years. Now more than ever, internal accountability structures, including transparency in data usage and in the AI decision-making process, are paramount.
Growing Concern for Legal and Ethical Risks
The gap between AI’s fast-paced evolution and regulatory delays leaves companies vulnerable to lawsuits and public backlash. As AI applications expand and the race to the top intensifies, so does the number of potential legal challenges. Already, the rise of lawsuits over copyright infringement is concerning. In 2023, more than 10,000 trade groups, authors, companies and others submitted comments to the U.S. Copyright Office about the use of creative works by AI models.
Transparency into how data is collected, used and updated, especially as it relates to copyright, is essential, yet traceability for ethical accountability remains harder to come by than it should be. Companies must implement mechanisms that clarify AI’s decision-making processes. Data gathered to train AI models should be accurate, unbiased and, perhaps most importantly, ethically sourced, especially in content creation and other creative fields. As the market matures, data sellers will need to break down the contents of their datasets in a clear, standardized way. Think of this approach as a nutrition label for data. Companies that invest in cleaning and organizing their data will be better positioned to grow their market share.
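The "nutrition label for data" idea can be sketched as a simple structured record. There is no industry standard for such labels yet, so the field names below are purely illustrative assumptions, not an established schema:

```python
# A hypothetical "nutrition label" for a training dataset.
# All field names here are illustrative, not a standard.
from dataclasses import dataclass, field

@dataclass
class DataNutritionLabel:
    name: str                 # dataset identifier
    sources: list             # where the records came from
    license: str              # usage rights for the data
    collection_period: str    # when the data was gathered
    contains_pii: bool        # whether personal data is present
    known_gaps: list = field(default_factory=list)  # documented blind spots

label = DataNutritionLabel(
    name="example-image-captions-v1",
    sources=["licensed stock photography", "opt-in user uploads"],
    license="commercial use permitted; attribution required",
    collection_period="2022-2024",
    contains_pii=False,
    known_gaps=["underrepresents non-English captions"],
)
```

A buyer could inspect such a label before purchase, much as a consumer reads ingredients, which is exactly the standardized disclosure the market will need as it matures.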
A particularly important word here is unbiased. AI image-generation tools made headlines in the past year for offensive racial and ethnic depictions of people, even when trained on different datasets. The same issues are likely to appear in generative AI video. How a company removes bias from and incorporates diversity into its datasets will be increasingly important. That includes a clear input strategy, diverse decision-makers at the table, red-teaming practices to test for blind spots and a defined QA process to check regularly for bias.
Algorithmic bias is real and one of the biggest threats of AI. Generative outputs are only as good as the sum of the inputs. And in a world where the media has so inaccurately reflected the majority of the population for years, these biases will creep into generative models, perpetuating stereotypes and systemic prejudices. Organizations should commit to the responsible and ethical training of foundational models.
Deloitte’s 2024 State of Ethics and Trust in Technology Report surveyed over 1,800 business and technical professionals globally and found that 54% perceived generative AI as posing the highest ethical risks among emerging technologies. Only 27% reported having distinct ethical standards for generative AI within their organizations, further highlighting the gap between development and regulation.
Another pillar of ethical AI development is fairness: companies should implement audits and reviews to prevent discriminatory algorithms and outputs. As AI use cases continue to evolve, new ethical challenges, such as synthetic content, are emerging that require agile, company-driven solutions.
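One common starting point for the audits described above is a demographic parity check: compare the rate of favorable model outcomes across groups and flag large gaps. The sketch below is a minimal illustration with made-up data, not a complete fairness methodology; group names and the outcome structure are assumptions for the example:

```python
# Minimal bias-audit sketch: compare positive-outcome rates across groups
# (demographic parity). Data and group names are illustrative only.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, approved) pairs -> approval rate per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in outcomes:
        totals[group] += 1
        positives[group] += int(approved)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in approval rate between any two groups."""
    vals = list(rates.values())
    return max(vals) - min(vals)

# Hypothetical audit log of (group, favorable outcome) pairs.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = selection_rates(outcomes)   # group_a: 0.75, group_b: 0.25
gap = parity_gap(rates)             # 0.5 -- a gap this wide warrants review
```

In practice a QA process would run checks like this on a schedule, compare the gap against an agreed threshold and route failures to the diverse review team the article describes.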
The Chance at Leadership in AI Regulation
Beyond compliance, companies have the potential to influence and shape industry-wide standards and best practices by establishing their own governance. The opportunity to set the bar for what ethical AI looks like is largely up for grabs.
As regulatory actions expand in 2025 and beyond, AI companies can either react to the rules or actively shape them. To mitigate risk and instill integrity, companies should prioritize a bottom-up approach tailored to their specific technologies and data sourcing.
Ethical AI should not just be on the regulatory agenda; it should be a business imperative. Companies must make it a priority to build trust and accountability from the inside out.

