Australia really needs to step up when it comes to its future with artificial intelligence. The launch of DeepSeek serves as a wake-up call: if we don’t invest in our own AI solutions, we’ll be stuck relying on foreign technology that may not align with our values and that often carries the baggage of where it comes from. This dependency means our user data, and the economic benefits that come with it, flow overseas, leaving us subject to foreign laws and corporate agendas.

Data Security and Influence in AI Chatbot Interactions

When you use AI chatbots like ChatGPT, Gemini, or DeepSeek—whether on websites, mobile apps, or through APIs—you’re not just interacting with a bot; you’re handing over your data and getting AI-generated replies in return. The fact that DeepSeek stores its data in China and adjusts its responses to fit the Chinese Communist Party’s narratives raises two big issues: the risk of our data being used for foreign interests, and the potential for AI-generated answers to influence public opinion. AI platforms from outside Australia must follow the laws of their home countries. For DeepSeek, that means complying with China’s national intelligence laws, which can require companies to hand over data to the government on request. This covers everything from text and audio to user information such as registration details and IP addresses. The ongoing flow of Australian data into China’s system is a major risk we can’t afford to overlook.

Individually, bits of data might not seem like much, but collectively, they can provide insights that could be misused against Australia’s interests. A report from ASPI in 2024 shows that the CCP is keen on harvesting user data from popular Chinese apps and platforms to monitor public opinion and societal trends for their propaganda goals.

Chatbots, in particular, can gather data that helps understand how people feel in different countries and can be wielded as tools for influence. The way AI models are built depends heavily on the priorities of their creators, the data they’re trained on, and how they’re fine-tuned. This means AI doesn’t just dish out information; it can be programmed to support specific narratives while ignoring others. Most chatbots come with some level of moderation to filter out harmful content, but DeepSeek goes further by embracing political censorship. It avoids discussing sensitive topics like the Tiananmen Square protests and sticks to the CCP’s official stance on issues like Taiwan and the South China Sea. This kind of AI-generated content can sway public perception, which is risky for democracy and social unity, especially as these tools become more common in search engines, education, and customer service.

Empowering Australia’s AI Future: Balancing Innovation and Safeguards

Australia needs to take proactive steps to set up safeguards against known risks. It’s crucial to ensure that AI systems used in the country reflect our values, security needs, and regulatory standards. This won’t happen by accident—it requires Australia to actively engage in AI development and create regulatory frameworks that protect us from harm while encouraging innovation at home. What DeepSeek has shown us is that you don’t need to be a massive tech giant to create competitive AI models. With a team of about 300 people, DeepSeek developed its model for a reported training cost of less than US$6 million—a fraction of what giants like OpenAI have spent on theirs. Some experts note that this figure may not reflect all costs, particularly the cost of access to advanced processors acquired before export controls took effect, but the takeaway is clear: significant AI strides are achievable without huge financial backing. DeepSeek’s success illustrates that talent matters even more than financial muscle, which opens the door for Australia to make a real mark in AI development.

To make the most of this potential, Australia needs to create a supportive environment that encourages local talent and innovation. The recent AU$32 million investment into health tech firm Harrison.ai by the National Reconstruction Fund is a great start, but we need to think bigger. Australia should ramp up investment in education and research, bolster existing developer communities—especially those focused on open-source solutions—and support commercialisation efforts. We also need to share success stories to keep the momentum going. A robust AI sector would let us tap into the benefits of AI without trying to outspend global tech behemoths. We should prioritise creating a nurturing space for AI talent and ethical practices so that Australia can enjoy both economic and social gains.

If we don’t step up our investments in local AI capabilities, we run the risk of giving away control over tech that will shape our economy, security, and society in the future. This isn’t just a tech issue; it’s a strategic one. Without taking decisive action, we’ll remain passive consumers of AI shaped by foreign priorities and interests, which could undermine our democratic integrity, economic security, and public trust in AI systems.

Tackling this challenge needs more than just regulatory measures; it calls for ongoing support for a vibrant domestic tech ecosystem.
