OPINION | Why we don’t trust AI, and why we should

Artificial Intelligence (AI) isn’t something forced on people.

It’s a choice.

But that choice is often clouded by fear, misinformation, and cultural resistance. I’ve seen this firsthand. AI is not a monster lurking in the shadows, nor is it a saviour sent from the heavens. It’s a tool: powerful, evolving, and deeply misunderstood. And like any tool, its impact depends on how we choose to use it, who builds it, and whether we’re willing to understand it before we judge it.

In August this year, I had the opportunity to present at the Falling Walls Lab Aotearoa on a project titled Breaking the Wall of Legal Inaccessibility: An AI-Powered Legal Companion for the Marginalized. My presentation wasn’t just a pitch; it was a reflection of my belief in the potential of AI to solve real problems.

I proposed a system that could help people who struggle to access legal services, especially in communities where lawyers are expensive, legal language is complex, and justice feels out of reach. The AI tool I envisioned wasn’t meant to replace lawyers or judges. It was designed to work alongside experts, offering support to those who need it most.

When I returned to Fiji, I knew I would face resistance. And I did. Many people disagreed with the idea, even after I explained how the system would be built with human oversight, cultural sensitivity, and expert collaboration. Their scepticism didn’t surprise me. It wasn’t ignorance; it was caution, shaped by experience and cultural values. And that’s something I deeply respect.

The truth is, there are valid reasons why people don’t trust AI. Around the world, AI has been misused in ways that have caused harm. A case in Hong Kong shows just how sophisticated and dangerous AI-powered deception can be.

A finance worker at a multinational firm was tricked into transferring $25 million to fraudsters who used deepfake technology to impersonate the company’s chief financial officer during a video conference call. The worker believed he was speaking to real colleagues, but every person on the call was a digitally generated fake. The scam was so convincing that he dismissed his initial doubts and authorised the transaction.

This incident highlights the growing threat of deepfakes: AI-generated videos or images that mimic real people with alarming accuracy. When technology can imitate voices, faces, and gestures so well that even trusted employees are fooled, it’s no wonder people feel uneasy. These aren’t just technical tricks; they’re tools of manipulation, and they erode public trust.

Bias is another major concern. AI systems learn from data, and if that data is biased, the AI will be biased too. There have been cases where AI tools used in hiring processes rejected candidates based on race or gender. Facial recognition systems have misidentified people, especially those from minority groups. These errors aren’t just technical; they have real consequences, affecting people’s lives, opportunities, and safety.

Job displacement is also a fear that many people share. The World Economic Forum predicted that automation could displace up to 85 million jobs by 2025. For individuals working in industries that are vulnerable to automation, AI doesn’t feel like a helpful tool; it feels like a threat to their livelihood. This fear is especially strong in regions where job opportunities are already limited and economic stability is fragile.

Cultural resistance plays a significant role too. In many Pacific communities, technology is viewed with suspicion. It often feels foreign, disconnected from local values and traditions. AI systems are usually designed in Western contexts, using Western languages and norms. They rarely reflect indigenous knowledge or cultural practices. When people feel that a tool doesn’t understand them or their way of life, they’re less likely to trust it. Trust requires familiarity, and AI often feels unfamiliar.

There’s another issue that’s becoming more visible in education: the use of AI detection tools in student assessments. It’s a sad reality that students who don’t use AI at all can still be flagged by these systems as having done so.

I’ve seen this happen firsthand. As a mentor, I reviewed a student’s work: an essay with a few grammatical errors but strong, original points. He had clearly put in the effort. Yet he returned disappointed, having received an 80 per cent AI detection score and a failing grade. It’s hard to know what counts as “real” anymore. A student with excellent writing skills might be penalised simply because their work is “too perfect” and the system assumes no human could have written it.

This raises serious questions. Are we discouraging genuine effort? Are we punishing students for being articulate? AI detection tools are meant to protect academic integrity, but when they misfire, they undermine it. We need better systems, ones that understand nuance, context, and the diversity of human expression.

At the same time, AI has opened doors for people who have long been excluded, not out of pity, but through practical empowerment. For individuals with disabilities, AI has helped level the playing field. Voice-to-text tools allow those with limited mobility to write and communicate. Visual recognition software helps people with low vision navigate their environments. AI-powered captioning and translation tools make classrooms, workplaces, and public spaces more inclusive. These aren’t charity tools; they’re dignity tools. They allow people to participate fully, contribute meaningfully, and live independently.

Despite all the concerns, AI has also brought about meaningful change. In healthcare, AI has helped doctors detect diseases like cancer earlier and more accurately. In disaster response, AI models have predicted floods and earthquakes, allowing communities to prepare and save lives. In education, AI tutors have supported students in remote areas, helping them learn without access to formal classrooms. These are not just theoretical benefits; they are real-world examples of AI making a positive impact.

In the legal field, AI has helped individuals understand their rights and navigate complex systems. My own project aimed to do just that. It wasn’t about replacing legal professionals; it was about empowering people who couldn’t afford them. It was about giving someone the confidence to ask questions, understand legal documents, and make informed decisions. That kind of support can make a real difference in someone’s life.

So how do we build trust in AI? First, we need transparency. People should know how AI works, what data it uses, and how decisions are made. If a system makes a mistake, there should be a way to correct it. No one should feel powerless in front of a machine.

Second, we need collaboration. AI should be developed with input from experts in law, health, education, and culture. Engineers alone cannot design systems that serve everyone. We need lawyers to guide legal tools, teachers to shape educational platforms, and community leaders to ensure cultural relevance.

Third, AI must be culturally sensitive. It should speak local languages, respect traditions, and reflect community values. In the Pacific, that might mean working with village elders, using storytelling methods, or incorporating indigenous knowledge. AI should feel like it belongs, not like it’s being imposed.

Fourth, human oversight is essential. AI should assist, not decide. Final decisions, especially in areas like law and healthcare, must remain with humans. Machines can offer suggestions, but people must have the final say.

Finally, we need education. People need to learn what AI is, how it works, and what it can and cannot do. That education should happen in schools, community centres, and homes. It is happening, and that is great! When people understand AI, they can make informed choices. They can decide whether to use it, how to use it, and when to say no.

AI is not perfect. It has flaws, risks, and limitations. But it also has potential. It can help us solve problems that have been ignored for too long. It can support people who have been left out. It can make systems fairer, faster, and more accessible.

People don’t have to use AI. And that’s the beauty of it: it’s a tool, not a rule. But let’s make sure that when people choose, they’re choosing based on facts, not fear. Disagreement is valid. Scepticism is healthy. But fear should not freeze progress.

AI won’t take over the world. But it might help us take better care of it.