
Transparency in AI: A Myth or a Fact?


In artificial intelligence, where innovation propels us forward at an unprecedented pace, the quest for transparency has emerged as a pivotal, non-negotiable discourse. As businesses race to scale and deploy AI solutions powered by trusted data, an essential question looms large: Is transparency in AI a myth or an attainable reality?

Through a nuanced exploration of recent endeavors, we aim to shed light on the evolving AI landscape, offering insights that resonate with businesses, policymakers, and technology enthusiasts alike.

The Data & Trust Alliance’s Provenance Standards

In a groundbreaking move, the Data & Trust Alliance (D&TA) unveiled a set of eight Data Provenance Standards on November 30, 2023. Crafted collaboratively by experts from 19 organizations, these standards mark a significant stride toward enhancing transparency in AI.


Lineage

The standards include metadata identifiers or pointers to the source data that constitutes the current dataset, providing a comprehensive lineage view.


Source

The initiative emphasizes identifying the origin of the dataset, be it an individual, organization, system, or device, offering unparalleled clarity on its genesis.

Legal Rights

Legal and regulatory frameworks applicable to the dataset, coupled with required data attributions, copyright or trademark details, and localization and processing requirements, are meticulously covered.

Privacy and Protection

Recognition of any sensitive data associated with the dataset and the application of privacy-enhancing techniques form a critical aspect of these standards.

Generation Date

A timestamp marking the creation of the dataset is included, offering a temporal perspective crucial for understanding data relevance.

Data Type

Identification of the data type within the current set provides insights into organization, potential use cases, and challenges associated with handling and utilization.

Generation Method

The initiative focuses on identifying how the data was produced, whether through data mining, machine-generated processes, IoT sensors, or other methods.

Intended Use and Restrictions

The standards meticulously outline the intended use of the data and specify which downstream audiences should not be granted access to the current dataset, ensuring compliance and responsible usage.

These comprehensive standards collectively illuminate the origins of datasets, empowering companies to verify data trustworthiness with unprecedented granularity. 
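To make the eight standards concrete, a provenance record can be imagined as a simple structured object with one field per standard. The sketch below is purely illustrative: the field names, types, and sample values are hypothetical and do not reproduce the official D&TA schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch only: field names and values are hypothetical,
# not the official D&TA Data Provenance Standards schema.
@dataclass
class ProvenanceRecord:
    lineage: list[str]              # identifiers/pointers to upstream source data
    source: str                     # originating individual, org, system, or device
    legal_rights: list[str]         # applicable frameworks, attributions, licenses
    privacy_protection: list[str]   # sensitive-data flags, privacy-enhancing techniques
    generation_date: datetime       # timestamp of dataset creation
    data_type: str                  # e.g. "tabular", "text", "image"
    generation_method: str          # e.g. "data-mining", "machine-generated", "iot-sensor"
    intended_use: str               # permitted downstream use of the data
    restrictions: list[str] = field(default_factory=list)  # audiences denied access

record = ProvenanceRecord(
    lineage=["dataset:customer-events-v3"],
    source="org:example-analytics-team",
    legal_rights=["GDPR", "CC-BY-4.0 attribution"],
    privacy_protection=["PII pseudonymized"],
    generation_date=datetime(2023, 11, 30, tzinfo=timezone.utc),
    data_type="tabular",
    generation_method="machine-generated",
    intended_use="model training",
    restrictions=["third-party advertisers"],
)
print(record.source)  # org:example-analytics-team
```

A record like this travels with the dataset, letting a downstream consumer check origin, rights, and restrictions before use.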

The AI Act and European Union Regulation

The European Union’s AI Act stands as a landmark legislative initiative and a pivotal step toward shaping the future of artificial intelligence within the region. This flagship endeavor is designed to establish a framework that ensures the development and deployment of AI systems are both safe and trustworthy. At its core, the AI Act adopts a risk-based approach, emphasizing a nuanced evaluation of the potential harm AI systems may pose to society.

Key Elements of the AI Act:

Rules on High-Impact AI Models

The AI Act introduces specific rules targeting high-impact AI models that have the potential to cause systemic risk. By focusing on these models, the legislation aims to mitigate risks associated with their deployment, fostering a more secure and accountable AI landscape.

Governance Framework at the EU Level

An essential facet of the AI Act is the establishment of a comprehensive governance framework, operating at the European Union level. This centralized governance structure is designed to provide oversight, guidance, and enforcement capabilities to ensure the responsible development and usage of AI technologies.

Prohibitions and Fundamental Rights Impact Assessments

The provisional agreement includes a set of prohibitions that delineate certain AI practices deemed unacceptable due to their potential harm. Additionally, the AI Act mandates fundamental rights impact assessments for deployers of high-risk AI systems. This assessment is a proactive measure to evaluate the potential impact of AI systems on fundamental rights before they are put into use.

Classification of AI Systems as High-Risk

The AI Act classifies AI systems based on their risk level, creating a tiered structure. High-risk AI systems are subject to more stringent requirements and obligations to gain access to the EU market. This classification framework aims to strike a balance between ensuring safety and preventing unnecessary regulatory burden on low-risk AI systems.
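The tiered structure can be sketched as a simple mapping from risk level to obligations. The sketch below is purely illustrative: the tier names follow the Act's broad categories, but the obligation summaries are loose paraphrases for illustration, not legal text.

```python
from enum import Enum

# Purely illustrative sketch of the AI Act's risk-based tiers.
# Obligation strings are paraphrases, not the Act's actual wording.
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited from the EU market"
    HIGH = "stringent requirements, including a fundamental rights impact assessment"
    LIMITED = "lighter transparency obligations"
    MINIMAL = "no additional obligations"

def market_obligations(tier: RiskTier) -> str:
    """Return the (paraphrased) obligations attached to a risk tier."""
    return tier.value

print(market_obligations(RiskTier.HIGH))
```

The point of the tiering is that regulatory burden scales with potential harm: most systems fall into the lower tiers and face little or no added obligation.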

Implications and Global Influence

The AI Act is not just a regional regulation; it has the potential to set a global standard for AI regulation. Similar to the General Data Protection Regulation (GDPR), the AI Act positions the European Union as a key influencer in tech regulation on the world stage. By adopting a risk-based approach and addressing high-impact AI models, the EU aims to lead the way in shaping responsible AI practices globally.

As the first legislative proposal of its kind, the AI Act reflects the EU’s commitment to supporting the development and uptake of safe and trustworthy AI across both private and public sectors. Its provisions underscore the significance of aligning AI innovation with ethical considerations and societal values, thus laying the groundwork for a responsible and transparent AI ecosystem within the European Union and potentially beyond.

Challenges in AI Transparency

  1. Diminishing Transparency among Major AI Model Companies

Despite the increasing emphasis on transparency, there’s a discernible trend indicating a reduction in transparency practices among major AI model companies. This poses a significant challenge as transparency is fundamental to building trust and understanding the inner workings of AI systems.

  2. Foundation Model Transparency Index Findings

The Foundation Model Transparency Index (FMTI), developed by Stanford HAI, serves as a comprehensive evaluation tool. Unfortunately, its findings are less than promising: scores ranging from 12 to 54 out of 100 suggest that even the highest-scoring companies achieve only a middling level of transparency.
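The FMTI's scoring can be pictured as a simple tally: each of the 100 indicators is judged satisfied or not, and the score is the count of satisfied indicators. This is a minimal sketch of that idea; the indicator names here are invented for illustration.

```python
# Minimal sketch of an FMTI-style aggregate score: each of the 100
# indicators is judged satisfied (True) or not (False), and the final
# score is the count of satisfied indicators out of 100.
# Indicator names are invented for illustration.
def transparency_score(indicators: dict[str, bool]) -> int:
    if len(indicators) != 100:
        raise ValueError("FMTI evaluates exactly 100 indicators")
    return sum(indicators.values())

# Hypothetical evaluation: 54 of the 100 indicators satisfied.
evaluation = {f"indicator_{i}": i < 54 for i in range(100)}
print(transparency_score(evaluation))  # 54
```

Under this reading, even the 2023 top score of 54 means nearly half the evaluated disclosure practices were absent.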

  3. Lack of Consistency Across Indicators

The evaluation is based on 100 indicators, reflecting a broad spectrum of transparency considerations. However, a notable challenge is the lack of consistency across these indicators, making it difficult to establish a unified standard for transparency in AI.

  4. Need for Improved Disclosure Practices

The scores underscore a pressing need for major AI model companies to enhance their disclosure practices. Transparency is not only about revealing the underlying algorithms but also providing comprehensive insights into the data sources, model training processes, and potential biases inherent in the AI systems.

  5. Complexity in Communication

Communicating transparency effectively to diverse stakeholders can be complex. Striking a balance between providing detailed information for technical audiences and presenting accessible summaries for the general public poses a communication challenge for companies.

  6. Addressing Bias and Fairness

Transparency in AI should extend beyond technical details to encompass the mitigation of biases and ensuring fairness in algorithms. Achieving this requires concerted efforts from companies to disclose their strategies for bias detection, prevention, and overall fairness in AI applications.

  7. Navigating the Evolving Regulatory Landscape

The evolving nature of AI regulations globally adds an additional layer of complexity. Companies must navigate a dynamic regulatory landscape, ensuring compliance with emerging standards while maintaining transparency that aligns with evolving legal requirements.

  8. Balancing Trade Secrets and Transparency

AI model companies often grapple with the challenge of balancing the need for transparency with the protection of intellectual property and trade secrets. Disclosing certain aspects of AI systems may conflict with preserving proprietary information, creating a delicate balance to be maintained.

  9. Continuous Improvement and Accountability

Achieving transparency is an ongoing process that demands continuous improvement. Companies must not only disclose information transparently but also remain accountable for addressing emerging challenges, evolving technologies, and the dynamic expectations of stakeholders.

  10. Educating Stakeholders

Educating stakeholders, including the general public, about the nuances of AI systems and the significance of transparency poses a persistent challenge. Bridging the gap between technical intricacies and layperson understanding is essential for fostering trust and informed discussions.

The Ongoing Dialogue: Myth or Fact?

As we navigate the intricate landscape of AI transparency, it becomes evident that the journey is ongoing. The revelations from the Data & Trust Alliance’s standards, the transformative impact of the European Union’s AI Act, and the stark challenges illuminated by the Foundation Model Transparency Index underscore the intricacies of this journey. As we grapple with the delicate balance between innovation and responsibility, it is clear that transparency in AI is not a static destination but a continuous dialogue. The ongoing discourse, fueled by collaborative efforts and a commitment to ethical principles, shapes the narrative of AI development. In this landscape, the quest for transparency transcends rhetoric, becoming an imperative for the responsible evolution of AI technology on a global scale.

Use responsible and transparent AI with KiwiTech. Our expertise aligns with the evolving landscape of AI development, ensuring innovation with integrity. Contact us today!
