Artificial intelligence (AI) is not a new pursuit; it traces back to Aristotle and his attempts to formalise rational thought through logical reasoning. While AI is broader than just ChatGPT, ChatGPT’s recent rise in popularity has started a cycle of mass-market intrigue and, consequently, R&D investment, suggesting that AI may be past its ‘early adoption’ phase.
Not wanting to be ‘laggards’ on the innovation curve, the Australian Government has started to put some thought into how it may interact with AI. Two notable publications are the inaugural ‘Long-term Insights Briefing’ and ‘Australia’s AI Ethics Principles’. Through these early steps, it is clear that the use of AI in Government is intended to be transparent, accountable, and fair.
As with most good intentions, proper execution is often the challenge. Reflecting on this, a few thoughts came to mind:
Duality of prompting: Artificial intelligence, whether a machine learning classification model or a more complex generative large language model (e.g., ChatGPT), needs to be prompted by a human. Think of these prompts as tasks given to the AI model – “Is this a spam email?” or “What is the meaning of life?”. For technology enthusiasts, trusting the model’s words at face value comes easily – much like trusting a friend. Here lies the challenge: thinking is complicated, and our differing perspectives create biases and limitations – the same is true of an AI model, except its lived experience is traded for the training data and algorithms that went into developing it. So while the knee-jerk reaction is to trust the machine (much as we trust a calculator), respecting how sophisticated these AI models are means knowing when to step back and critically analyse what the machine is telling us. Just as we prompt the machine, we should treat what the machine tells us as prompts in turn.
Timing: When asked why most start-ups fail, most entrepreneurs point to bad timing. Picture a tennis player perfectly timing a forehand; much of the success comes from the preparation for the swing. Before any model can be implemented, we should think about the foundation it sits on – Is there a use case? What about training data? How about tools and infrastructure? Have we thought about privacy and governance? … the list goes on. Adopting technology will still take a leap of courage, but taking time to think about the run-up can help you better time that jump.
Change management exercise: Following the same line of thought, part of implementing AI will come down to convincing people that this change does not have to be so scary. While the change management process is well embedded, what have yet to be worked out are the peripherals we discussed in preparing our swing (policy, governance, data, infrastructure, etc.). Building this understanding will be an iterative process of trying the technology, testing its limitations, and learning from it. Fortunately, with ChatGPT so accessible as an example of AI, it is now easier to spread the art of the possible. Through this will come innovations, lessons learned, and guidance pushing the Community of Practice forward. It is exciting to imagine a world where AI is as ubiquitous as air (courtesy of ChatGPT).