Global geopolitics

Decoding Power. Defying Narratives.


Elon Musk’s Call for Honest AI: The Power of Truth Matters

It’s Even More Important in the Context of AFRICA

Elon Musk’s statement, “AI needs to say the truth and know the truth, even if the truth is unpopular”, underscores a critical principle in the development of artificial intelligence: the pursuit of truth above convenience, bias, or social pressure. In an era dominated by misinformation, propaganda, and ideological division, his call for truthful AI isn’t just a technological necessity; it’s a moral and geopolitical imperative with profound benefits for humanity.

However, ensuring AI’s commitment to truth requires more than just good programming; it demands accountability from those who own, control, and regulate AI systems. Without clear responsibility, AI can easily be weaponized for manipulation, censorship, and corporate or political agendas. The question is: Who ensures AI remains committed to truth? And how do we enforce it on a global scale?

Truth in AI matters; it shouldn’t even be debated. The foundation of any advanced AI system is data. If that data is distorted or manipulated to fit a narrative, the AI’s output becomes misleading, reinforcing false beliefs rather than dismantling them. For AI to serve as a tool for human progress, it must be grounded in factual accuracy, even when the truth is uncomfortable. It is also imperative to combat misinformation, which spreads faster than the truth: a study from MIT found that false news travels six times quicker than factual reports on social media. An AI system designed to prioritize truth could counteract this by debunking false claims in real time, providing verified information, and preventing the manipulation of public opinion.

Truth is important for informed decision making. Governments, businesses, and individuals increasingly rely on AI to make high-stakes decisions, from economic policy to healthcare strategies. An AI that tells the unpopular truth can prevent catastrophic mistakes. For example, during the COVID-19 pandemic, accurate data about the virus’s transmission and treatment was vital. A truth-driven AI could have cut through conflicting reports and misinformation, potentially saving lives. The progress of science relies on questioning assumptions and following evidence, even when results challenge widely held beliefs. AI systems trained to seek and communicate the truth could accelerate breakthroughs by analyzing vast amounts of data, identifying errors, and proposing innovative solutions. Imagine an AI that could identify flaws in drug trials, detect errors in climate models, or even challenge long-standing theories, all based on empirical evidence, not popular opinion.

AI owners and controllers must be held to clear ethical and legal responsibility. The organizations and individuals who develop and deploy AI, whether tech giants, governments, or private entities, hold immense power over global information flow. Musk’s statement touches on these ethical responsibilities of AI developers. An AI that conforms to prevailing social narratives rather than the truth can become a tool of manipulation. This is especially dangerous if powerful entities such as governments, corporations, or media use AI to control information.

A truthful AI, on the other hand, ensures transparency and accountability. If an AI system exposes corruption, highlights economic inequality, or reveals uncomfortable scientific realities, it empowers the public with knowledge. Democracy thrives when citizens are well-informed, and truthful AI can play a key role in restoring trust in institutions.

Moreover, AI’s role in shaping future generations cannot be ignored. Children growing up with AI assistants, educational tools, and personalized content feeds deserve technology that teaches them truth, not ideology. Allowing AI to prioritize facts over social acceptance cultivates a culture of critical thinking and resilience.

So the obligations of these powerful entities must include transparency in AI algorithms and data sources. Companies should disclose the data sources, training methods, and decision-making processes behind their AI systems. If an AI presents something as “fact,” there must be a clear trail showing how that conclusion was reached.

Independent oversight and regulation must also be agreed upon, because a truly truthful AI cannot be left solely in the hands of corporations or governments with vested interests. Independent regulatory bodies, composed of multidisciplinary experts, should audit AI models to ensure they are not misused for propaganda or the suppression of inconvenient truths.

Legal accountability for AI-generated misinformation is imperative: if an AI system spreads false or misleading information, its developers should be held accountable. Just as media organizations face legal consequences for defamation or misinformation, AI companies must be legally responsible for the integrity of their systems.

There is also a need to prevent AI-driven censorship. I have experienced it on all AI platforms when seeking information on climate change, COVID and mRNA vaccines, individuals like Bill Gates, and opaque organisations such as the Bilderberg Group. AI should not be used as a tool for silencing dissenting voices under the guise of “fact-checking.” When you dig deeper into the ownership of fact-checking organisations, they are all owned by nefarious individuals and organisations tied to the controllers of the information space. Owners of AI platforms must ensure their systems distinguish between falsehoods and differing perspectives. Users should have the ability to challenge AI-generated information, request transparency, and demand corrections when AI is proven wrong. Just as media platforms can be held accountable for false reporting, AI-driven services should be subject to public scrutiny and correction mechanisms.

A consensus to achieve truthful AI at the international level must be at the top of the agenda. Because AI is a global technology, ensuring its commitment to truth requires international cooperation and legally binding agreements. Without such measures, AI development will be dictated by the most powerful corporations and authoritarian regimes, which may use it to manipulate public perception.

What are the key steps that can be taken towards internationally recognised AI governance? This is a challenge, as the key international or supranational bodies are compromised and biased. Nevertheless, global AI truth standards have to be achieved. An independent mechanism via the United Nations (UN) or an independent AI ethics body should establish a universal framework for AI truth verification. This must define what constitutes “truth” in AI-generated content, how AI should handle contested issues (e.g., scientific debates, political events), and standards for fact-checking and verifying sources. AI transparency treaties are required, and countries must agree on mandatory transparency for AI algorithms and data sources. Just as international agreements govern nuclear weapons and climate action, an AI treaty should prevent the use of AI for large-scale disinformation campaigns. Concentration of power must be prevented through decentralized AI governance models, so that AI infrastructure is not monopolized by a handful of corporations. Decentralized AI models, where independent entities can audit and verify AI outputs, should be promoted to prevent corporate or state-controlled narratives.

Safeguards are also needed for AI and human rights protections: a truly independent International Criminal Court (ICC) or human rights organizations should monitor and investigate cases where AI is used to violate human rights, suppress free speech, or manipulate elections. There should also be agreement on a Global AI Accountability Council, similar to financial regulatory bodies such as the International Monetary Fund (IMF) in economics, but without their corruption or bias. This council would include ethicists, scientists, and legal experts who oversee AI-related disputes and ensure compliance with truth-based AI standards.

The world stands to face harsh consequences for ignoring AI truthfulness. If AI is allowed to prioritize profit, political agendas, or ideological bias over truth, the consequences could be devastating. Mass misinformation becomes possible at scale: AI-driven falsehoods could shape public opinion, leading to manipulated elections, social unrest, and economic instability. Trust erodes: if AI becomes a tool for deception, public trust in technology, institutions, and even fundamental truths will collapse. Technocratic, fascist authoritarianism becomes a real threat: governments or corporations that control AI narratives could impose ideological uniformity, stifling independent thought. The scale of censorship witnessed during COVID was unprecedented, and it is clear that AI-driven censorship and thought policing are real. If AI systems selectively promote certain viewpoints while suppressing others, freedom of speech could become obsolete.

There is therefore a monumental task in building a truthful AI. Musk’s vision, while compelling, isn’t without challenges. The question of who defines “the truth” remains a major hurdle. Science, for all its rigor, evolves; what’s accepted today may be refuted tomorrow. Therefore, AI must be designed not only to convey current truths but to adapt as knowledge advances. Additionally, biases in data and algorithms remain a concern. To create a genuinely truthful AI, developers must ensure training data is diverse, representative, and free from ideological skew. Open-source development, transparent auditing, and diverse teams working on AI projects can mitigate this risk.

If AI is recognized as a force for human progress, not as a tool of control and centralisation of power, a truthful AI benefits all of humanity. It can drive informed decision-making: from politics to medicine, people make better choices when they have access to factual information, and accurate information can prevent panic, ensure effective treatment, and improve public compliance with health measures. It can drive scientific and technological advancement by prioritizing truth to accelerate research, uncovering errors, testing hypotheses, and debunking false claims. It can champion global stability: when AI-driven misinformation is minimized, international relations and economic systems become more predictable and fairer. We are facing another economic bubble; in finance and global markets, truthful AI analysis could prevent bubbles, crashes, and manipulative practices by providing an unbiased assessment of economic conditions. Responsible AI promotes empowered citizens: a well-informed public is the foundation of democracy and innovation, and truthful AI ensures people have access to unbiased, verifiable knowledge. When it comes to climate action, a truth-based AI could cut through politicized debates, focusing on scientific data to propose viable environmental solutions.

Musk’s statement is not just about AI; it’s about the future of human civilization, a vision where technology becomes a force for enlightenment, not manipulation. If AI tells the truth, even when unpopular or uncomfortable, it can be humanity’s greatest tool to safeguard human progress. But if it becomes a tool of deception, we risk entering an era of digital totalitarianism where truth is defined by those in power. As things stand, and given what we witnessed during the Biden era in particular, the signs are that if the status quo is not challenged, we are headed for a technocratic digital prison. The fact that the Vatican is said to be holding 50 miles’ worth of books and manuscripts not open to the public or the world at large says a lot about the chances of a truthful AI.

The choice is ours. Will we build AI that serves truth, or AI that serves to control? The truth may not always be popular, but it remains humanity’s most powerful ally. AI, driven by that principle, could help ensure that future generations inherit a world grounded in knowledge, reason, and reality, not illusion.

Lastly, I have to put this in context to Africa as it is a special case. In the context of Africa’s ongoing struggle for liberation, the emergence of truthful and ethical artificial intelligence (AI) could serve as a powerful tool for countering the cultural hegemony imposed by dominant global powers. As Antonio Gramsci’s concept of cultural hegemony explains, the ruling class of the West has historically maintained its dominance not merely through military or economic means but by shaping the worldview of societies, making their values, norms, and ideologies appear as “common sense.” This cultural dominance is particularly evident in Africa, where Western media, education systems, and digital platforms often dictate narratives, marginalizing local perspectives and perpetuating neocolonial structures. Truthful AI, if developed and deployed equitably, could disrupt this dynamic by amplifying African voices, preserving indigenous knowledge, and challenging the monopolization of information. However, the current reality is that Africa does not own or control the infrastructure of the internet or AI, leaving it vulnerable to the same forces of hegemony and hybrid warfare that have historically oppressed the continent.

Hybrid warfare, a strategy that combines conventional military tactics with information warfare, cyberattacks, and psychological operations, relies heavily on controlling narratives and manipulating perceptions. In Africa, this has often taken the form of disinformation campaigns, electoral interference, and the exploitation of ethnic and political divisions by external actors. Truthful AI, designed to prioritize accuracy, transparency, and inclusivity, could counteract these efforts by providing reliable information, debunking false narratives, and fostering critical thinking. For example, AI-driven platforms could be used to document and disseminate African histories, cultures, and perspectives, countering the Eurocentric narratives that dominate global discourse. Moreover, AI could empower local journalists, activists, and scholars by providing tools for data analysis, fact-checking, and storytelling, enabling them to challenge the status quo and advocate for systemic change.

However, the potential of AI to serve as a liberatory tool is contingent on Africa’s ability to own and control its digital infrastructure. Currently, the internet and AI ecosystems are dominated by a handful of Western corporations and governments, which often prioritize their own interests over those of African nations. This lack of sovereignty in the digital realm mirrors the economic and political dependencies that have long constrained Africa’s development. To truly harness the power of truthful AI, Africa must invest in building its own technological infrastructure, fostering local expertise, and creating regulatory frameworks that ensure AI is used ethically and equitably. This would not only counter the cultural hegemony imposed by the West but also strengthen Africa’s resilience against hybrid warfare and other forms of neocolonial exploitation.

In the spirit of Gramsci’s critique of hegemony, truthful AI could help Africans reclaim their narratives and assert their agency in the global order. By challenging the “common sense” imposed by dominant powers and centering African perspectives, AI could become a catalyst for liberation rather than another tool of oppression. However, this vision can only be realized if Africa takes control of its digital future, ensuring that technology serves the interests of its people rather than perpetuating the chains of neocolonialism. The struggle for liberation, both in the physical and digital realms, remains as urgent as ever.

©GGTvStreams


