Can Lawyers Use ChatGPT in Australia?
Australian lawyers are increasingly curious about using ChatGPT and other AI tools in their practice. But the question is not simply whether they can; it is whether they should, and under what conditions. The use of public AI platforms like ChatGPT raises critical concerns around client legal privilege, confidentiality, data sovereignty, and professional obligations that every Australian legal practitioner must understand.
Understanding ChatGPT and Public AI Platforms
ChatGPT is a large language model developed by OpenAI, a US-based company. When you use ChatGPT, whether through the free version or ChatGPT Plus, you are sending your prompts and data to servers controlled by OpenAI. This data may be used to train future models, stored on overseas servers, and subject to US data access laws including the CLOUD Act.
For Australian lawyers, this creates immediate tension with fundamental professional obligations. Client privilege and confidentiality are not optional considerations; they are cornerstone principles of legal practice that cannot be compromised for convenience or efficiency.
Client Privilege and Confidentiality Concerns
Legal professional privilege protects communications between lawyers and clients from disclosure. It is one of the most fundamental protections in the Australian legal system, recognised in common law and codified in various evidence acts across jurisdictions.
When a lawyer enters client information, case details, or privileged communications into ChatGPT, several risks emerge. First, the information leaves the secure environment of the law firm and enters a third-party system. Second, that information may be retained, processed, and potentially used for purposes beyond the lawyer's control. Third, the lawyer may have no practical way to retrieve, delete, or verify the destruction of that information.
The Law Council of Australia and various state law societies have issued guidance making clear that lawyers must maintain confidentiality regardless of the tools they use. Using public AI platforms with client data, without explicit informed consent and appropriate safeguards, likely breaches these obligations.
Data Sovereignty and Australian Legal Practice
Data sovereignty refers to the principle that data is subject to the laws and governance of the nation where it is stored. For Australian lawyers, data sovereignty is not merely a technical concern; it is a legal and ethical imperative.
When client data is sent to ChatGPT, it travels to servers outside Australia, typically in the United States. This means Australian client information becomes subject to US laws, including the CLOUD Act, which allows US authorities to access data held by US companies regardless of where it is stored.
For sensitive matters, including those involving government clients, regulated industries, or commercial confidentiality, this loss of data sovereignty is unacceptable. Australian lawyers have a duty to protect their clients' interests, and that includes ensuring their data remains within Australian jurisdiction and subject to Australian privacy laws.
The Privacy Act 1988 (Cth) requires organisations to take reasonable steps to protect personal information and to consider the privacy implications of sending personal information offshore. For lawyers, who hold personal information of a particularly sensitive nature, these obligations are heightened.
Australian Law Society Guidance
Law societies across Australia have responded to the rise of AI tools with guidance that balances innovation with professional responsibility.
The Law Society of New South Wales has emphasised that while AI tools offer potential benefits, lawyers must ensure they comply with their professional and ethical obligations. This includes maintaining client confidentiality, ensuring competence in the use of technology, and obtaining informed consent where client data will be processed by third parties.
The Law Institute of Victoria has similarly highlighted that lawyers remain personally responsible for all work product, regardless of whether AI was used in its creation. Relying on AI-generated content without verification can lead to errors, including hallucinated case law or incorrect legal principles.
The Queensland Law Society has issued guidance noting that lawyers must understand how AI tools process and store data, and must assess whether the use of such tools complies with privacy laws and professional obligations.
Across all jurisdictions, the message is consistent: AI tools are not prohibited, but their use must be carefully managed to ensure compliance with existing legal and ethical frameworks.
The Professional Obligations Framework
Australian lawyers are bound by professional conduct rules that govern their use of technology. These obligations do not change because the technology is new or exciting.
Lawyers must maintain competence in their practice, including understanding the tools they use. A lawyer who uses ChatGPT without understanding how it processes data, where that data is stored, and what happens to it afterwards is not exercising the competence expected of the profession.
Lawyers must also act in their clients' best interests. Using a tool that exposes client data to offshore access, potential training datasets, or security vulnerabilities is difficult to reconcile with this duty, particularly where safer alternatives exist.
Finally, lawyers must maintain public confidence in the legal profession. High-profile cases of lawyers submitting AI-generated case law that did not exist have damaged that confidence. Australian lawyers must learn from these cautionary tales and adopt AI responsibly.
Practical Risks of Using Public AI in Legal Practice
Beyond the ethical and regulatory concerns, there are practical risks that Australian lawyers must consider.
Hallucinations are a well-documented problem with large language models. ChatGPT can generate plausible-sounding legal analysis that is entirely incorrect, cite cases that do not exist, or misstate legal principles. For lawyers, who are responsible for the accuracy of their work, this creates significant liability risk.
Security vulnerabilities also pose concerns. Public AI platforms have been subject to data breaches, prompt injection attacks, and other security incidents. Client data entered into such systems may be exposed through these vulnerabilities.
Lack of transparency is another challenge. ChatGPT operates as an opaque system: lawyers cannot audit how their queries are processed, what data is retained, or how the model reaches its conclusions. This opacity is fundamentally incompatible with the rigour and accountability expected in legal work.
When Might ChatGPT Use Be Appropriate?
There are limited scenarios where Australian lawyers might use public AI tools like ChatGPT without breaching their professional obligations.
Lawyers might use ChatGPT for general research on non-confidential matters, such as exploring unfamiliar areas of law in preliminary research, provided no client-specific information is included. Even then, any information generated must be independently verified.
Lawyers might use ChatGPT for drafting assistance on templates or precedents that contain no client information. However, the final work product must be reviewed and approved by the lawyer, who remains fully responsible for its accuracy and appropriateness.
In all cases, lawyers should consider whether there are better alternatives that do not carry the same risks.
The Privacy Wedge: Why Private AI Matters
The solution to the ChatGPT dilemma is not to abandon AI entirely, but to adopt AI that respects the unique requirements of legal practice.
Private AI platforms are designed specifically for sensitive professional environments. They operate within Australian jurisdiction, do not use client data for training, provide full data sovereignty, and offer the transparency and control that lawyers need.
This is the Privacy Wedge: the recognition that legal professionals cannot use the same AI tools as the general public because they operate under fundamentally different obligations. Lawyers need AI that protects privilege, maintains confidentiality, and keeps data within Australian control.
Block Box AI: Built for Australian Legal Practice
Block Box AI addresses the specific needs of Australian lawyers by providing enterprise-grade AI that never compromises on privacy or compliance.
Unlike ChatGPT, Block Box AI operates with full data sovereignty. Client data stays within Australia, subject to Australian law and protected from offshore access. There is no training on client data, no retention beyond operational necessity, and complete transparency about how data is processed.
Block Box AI is designed for legal workflows. It understands Australian legal practice, integrates with existing practice management systems, and provides the reliability and auditability that lawyers require. There are no hallucinated cases, no unexplained outputs, and no uncertainty about compliance.
For law firm partners and legal operations leaders, Block Box AI represents the responsible path to AI adoption. It delivers the efficiency and capability benefits of AI without the ethical compromises and regulatory risks of public platforms.
Risk Mitigation and Best Practices
Australian lawyers considering any AI adoption should follow a structured approach to risk management.
First, conduct a thorough privacy impact assessment. Understand what data will be processed, where it will be stored, and who will have access. Assess compliance with the Privacy Act and professional obligations.
Second, obtain informed consent where required. If client data will be processed by AI, clients should understand and consent to this, including understanding any offshore data transfers or retention policies.
Third, implement verification processes. Never rely on AI-generated content without independent verification by a qualified lawyer. AI is a tool to assist, not replace, professional judgment.
Fourth, choose appropriate tools. Prioritise AI platforms built for legal practice with appropriate privacy, security, and compliance features. The convenience of free public tools is not worth the professional risk.
The Regulatory Landscape Is Evolving
While current law society guidance provides principles, the regulatory framework around AI in legal practice continues to develop.
The Australian Government has released its interim response to the Automated Decision-Making and Artificial Intelligence consultation, signalling potential future regulation. The Australian Law Reform Commission may consider AI-specific guidance for the legal profession. Professional indemnity insurers are beginning to ask questions about AI use and may adjust coverage accordingly.
Australian lawyers should stay informed about regulatory developments and ensure their AI practices remain compliant as the framework evolves.
Moving Forward Responsibly
The question for Australian lawyers is not whether to use AI, but how to use it responsibly. ChatGPT and similar public platforms pose significant risks to client privilege, confidentiality, and data sovereignty that are difficult to mitigate within existing professional obligations.
However, AI offers genuine benefits for legal practice: improved efficiency, better research capabilities, and enhanced service delivery. The solution is to adopt AI that is built for the legal profession, with privacy and compliance at its core.
Block Box AI provides this solution, offering Australian lawyers the benefits of AI without compromising the principles that underpin legal practice. For law firms ready to embrace AI responsibly, the path is clear: choose tools that respect your obligations, protect your clients, and keep Australian legal data where it belongs: in Australia.
The future of legal practice will include AI. The question is whether that AI will serve the profession's values or undermine them. Australian lawyers have the opportunity to choose wisely and lead the way in responsible AI adoption.