Why Public AI Is Now Dangerous for Lawyers

Bottom Line Up Front:
Your firm has a data leak. Regulators in Australia and the US have moved from caution to sanction. Using public tools like ChatGPT for client work is now a professional conduct breach. It can cost you your license. You cannot simply ban AI because your staff will use it anyway. To survive the 2025 regulatory shift, you must replace Public AI with Private AI.
The "Ostrich" Problem: You Can't Ban What You Can't See
Partners often believe they have solved the AI risk by banning ChatGPT.
This is a dangerous illusion.
According to the Menlo Security 2025 Report¹:
- Generative AI use in the workplace surged 68%.
- 57% of users admit to pasting sensitive data into public tools.
- Most use personal accounts to bypass firm firewalls.
This is "Shadow AI." Your junior lawyers and paralegals face huge pressure to meet billing targets. They know AI can draft a document in seconds. If you do not provide a safe tool, they will use an unsafe one behind your back.
Ignorance is no longer a defense. If a junior lawyer pastes a refugee claim into ChatGPT, you are the one who loses your license.
The Regulatory Siege: It's Not Just a Warning
Regulators are no longer watching. They are acting.
1. The Australian Hammer: The Dayal Precedent
In late 2025, the Victorian Legal Services Board + Commissioner (VLSB+C) disciplined a principal lawyer, Mr Dayal, for tendering AI-generated false citations. The penalty was severe²:
- Stripped of rights to run a firm.
- Banned from handling trust money.
- Forced into supervised practice for two years.
The VLSB+C has drawn a red line: its guidance states that lawyers "cannot safely enter confidential, sensitive or privileged client information into public AI chatbots."³
2. The OMARA Trap
For Registered Migration Agents, the risk is even higher. Code of Conduct Section 35 forbids you from "allowing" disclosure to a third party.⁴
- The Breach: Pasting client data into free ChatGPT sends it to a US server. That is a disclosure.
- The Risk: OMARA has already suffered data breaches.⁵ They now apply heightened scrutiny to agents who leak data to foreign tech giants.
3. The US Warning Shot
In the US, courts are sanctioning lawyers over AI "hallucinations" (fake cases). In Mata v. Avianca, attorneys were fined for filing a brief built on non-existent citations; in Kohls v. Ellison, an expert's declaration was struck out for the same failure to check AI output.⁶⁷
This happens because of "plausibility bias": AI output sounds fluent and confident, so we trust it even when it is wrong.⁸
Public AI vs. Private AI: What is the Difference?
Most lawyers think "AI" is just one thing. It is not. You must know the difference between the App you use and the Model that powers it.
Public AI Apps (ChatGPT, Gemini, Claude)
These are consumer products. They connect you to a smart model, but the app records everything.
- Memory: The app saves your chat history to the cloud.
- Training: The provider uses your conversations to teach the AI.
- Human review: The provider's engineers can often view your chats to check for safety.
Private AI Workspaces (e.g. Grella, Phala Network, Red Pill)
These are secure workspaces. They connect to the exact same smart models, but they change the rules.
- Data Handling: Your data is readable only when the AI is working on it. It is locked and unreadable at all other times.
- No Training: Private AI apps sign contracts that ban model training.
- Ownership: You are the only one who can see your data. The provider is just a processor, not a storage unit.
The Solution: The "Private AI" Setup
You already use Clio, LEAP, Smokeball, or Actionstep to manage your practice because you trust their security. You need that same standard for your AI.
To fix the "Shadow AI" leak, you must give staff a sanctioned Private AI workspace.
The "Kill Switch": Zero Data Retention
To use AI compliantly in 2025, you need to change how the software handles your data.
- Public AI: Uses "Retain and Train." Your client's story is stored, used to train the next model, and becomes part of the internet's permanent memory.⁹
- Private AI (The Compliance Standard): Uses Zero Data Retention.
Think of Public AI like writing on a blackboard. The words stay there, and others can learn from them.
Private AI is different. It is like writing with disappearing ink. The words are there only for the split second the computer is reading them to do the work. The moment the job is finished, the ink vanishes. Nothing is saved, so there is nothing to steal.
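For your IT team, the difference shows up in how each request is sent. The sketch below is a minimal, hypothetical illustration in Python: the endpoint, field names, and the "store"/"train" flags are placeholders invented for the example, not any specific vendor's API, but they show the kind of request-level "do not keep, do not learn" instruction a Zero Data Retention workspace is built on.

```python
import requests  # third-party HTTP library (pip install requests)

# Hypothetical illustration only: the URL and every field name below are
# placeholders, not a real vendor's API.
PRIVATE_AI_ENDPOINT = "https://api.example-private-ai.invalid/v1/chat"

def ask_private_ai(prompt: str, api_key: str) -> str:
    """Send one prompt under a zero-data-retention policy and return the answer."""
    response = requests.post(
        PRIVATE_AI_ENDPOINT,
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "messages": [{"role": "user", "content": prompt}],
            "store": False,  # hypothetical flag: discard prompt and reply once answered
            "train": False,  # hypothetical flag: never use the content to train models
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["answer"]
```

The exact flags vary by provider. What matters is that retention and training are switched off at the request level and guaranteed in the contract, not left to consumer defaults.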
Top-tier Private AI processes your data inside a Trusted Execution Environment (TEE). This creates a secure shelter for both the app and the powerful chips (GPUs) running the open-source models.
Imagine a Black Box made of magic steel. You put your client's file inside, the box does the work, and the answer comes out. But the box is locked from the inside. Not even the people who made the box or the company that owns the computer chips can peek in to see what is happening. Your data stays invisible to everyone.¹⁰¹¹
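The black box is not just a metaphor. In confidential computing, the box has to prove what it is running before any data is released to it, a step called remote attestation. The sketch below is a simplified, hypothetical Python illustration of that check: the report fields and the expected hash are invented for the example and do not come from any specific TEE vendor's toolkit.

```python
# Simplified, hypothetical sketch of remote attestation: client data is only
# released after the enclave proves it is running the audited workload.
EXPECTED_CODE_HASH = "3f9a1c0d..."  # placeholder: hash of the audited AI workload

def attestation_is_valid(report: dict) -> bool:
    """Accept the enclave only if its measured code matches the audited build
    and the report's signature traces back to the chip maker's root key."""
    return (
        report.get("code_measurement") == EXPECTED_CODE_HASH
        and report.get("signature_verified") is True
    )

def release_client_file(report: dict, client_file: bytes) -> None:
    """Refuse to send anything if the 'black box' cannot prove what it is running."""
    if not attestation_is_valid(report):
        raise RuntimeError("Enclave failed attestation; client data not released.")
    # In a real system the file would now be encrypted to a key that exists only
    # inside the verified enclave, so neither the AI provider nor the cloud
    # operator ever sees the plaintext.
    print(f"Releasing {len(client_file)} bytes to the attested enclave.")
```

If the measurement does not match the audited build, the data never leaves your machine. That is what "locked from the inside" means in practice.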
Keep Client Data Private
Grella gives your team a safe AI tool. No data storage. No training. No risk to your license.
The Technical Reality: Where is your Data?
Do not be fooled by the "Sydney Server" myth.
- Data Residency: where the hard drive physically sits (for example, in Australia).
- Data Sovereignty: whose law actually controls the data.¹²
If you use a US-owned public AI (like standard ChatGPT), your data is subject to the US CLOUD Act. US authorities can compel the provider to hand over your client's data under US legal process, bypassing Australian courts entirely. The right Private AI can ensure legal control stays in your country.¹³
Your Action Plan
Do not wait for a breach. Fix your process today:
- Audit "Shadow AI": Survey your staff anonymously to find out what tools they use. Also, ask IT to scan for unauthorized email signups to see which AI sites employees have joined.
- Update Engagement Letters: Add a clause that tells clients you use "Secure, Private AI (Zero Data Retention)." Transparency is your best defense.
- Deploy the "Modern Stack": Pair your case management system (Clio/LEAP/Smokeball) with a Private AI workspace. Never use an AI tool unless they prove:
- Data Ownership (You own all inputs and outputs).
- Zero Data Retention (Contractual guarantee).
- No Training on client data.
- Going a step further: ISO 27001 & SOC 2 Type II certification.
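As flagged in the first step above, a simple mail-log scan can reveal which AI services staff have signed up to with their work addresses. The sketch below is a hypothetical Python example: the CSV file name, its "sender" column, and the domain list are assumptions about what your IT team can export, not a ready-made audit tool.

```python
import csv
from collections import Counter

# Hypothetical example: file name, CSV layout, and domain list are illustrative.
AI_PROVIDER_DOMAINS = {"openai.com", "anthropic.com", "perplexity.ai"}

def count_ai_signups(mail_log_csv: str) -> Counter:
    """Count inbound messages (e.g. signup confirmations) from known AI providers."""
    hits = Counter()
    with open(mail_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["sender"].split("@")[-1].lower()
            if domain in AI_PROVIDER_DOMAINS or any(
                domain.endswith("." + d) for d in AI_PROVIDER_DOMAINS
            ):
                hits[domain] += 1
    return hits

if __name__ == "__main__":
    for domain, count in count_ai_signups("inbound_mail_metadata.csv").most_common():
        print(f"{domain}: {count} messages")
```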
Technology speeds up your process. If your process is insecure, AI just speeds up your risk. Fix the setup first.
Industry Reports & Data
1. Menlo Security. (2025). The 2025 State of Shadow AI Report. Analysis of telemetry from global organizations found a 68% surge in shadow AI usage, with 57% of users admitting to inputting sensitive data into public tools.
Regulatory & Case Law
2. Victorian Legal Services Board + Commissioner (VLSB+C). (2025). Statement on the Mr Dayal matter. The regulator varied the practising certificate of a principal lawyer who tendered AI-generated false citations, stripping him of the right to practise as a principal or to handle trust money.
3. VLSB+C. (2024/2025). Statement on the use of AI in Australian legal practice. Explicit guidance warning that lawyers "cannot safely enter confidential, sensitive or privileged client information into public AI chatbots."
4. Office of the Migration Agents Registration Authority (OMARA). Code of Conduct for Registered Migration Agents, Section 35 (Confidentiality). Mandates that agents must not "allow to be disclosed" client information to third parties without consent. See also Section 53 regarding reasonable steps for electronic storage.
5. OMARA. (2025). Data Breach Notification — OMARA agents portal. July 2025 notification regarding an accidental data breach exposing registered migration agent details. This highlights the regulator's own challenges with data security.
6. Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023). Federal court sanctioned attorneys for "subjective bad faith" after they submitted a brief containing non-existent cases generated by ChatGPT.
7. Kohls v. Ellison, No. 24-cv-3754 (D. Minn. Jan. 10, 2025). The court struck the declaration of an expert witness after it was revealed he had used GPT-4o to draft it, producing hallucinated citations; the court ruled this error "shatters his credibility."
8. Victorian Legal Services Board + Commissioner (VLSB+C). (2024). Generative AI and Lawyers. Guidance note explaining "plausibility bias": the fluency of AI output induces a false sense of credibility, leading lawyers to bypass verification.
Technical Definitions
9. OpenAI. Data Usage Policies. Standard consumer terms (Free/Plus) allow data retention and training by default; "Temporary Chat" or Enterprise agreements are required to opt out of training effectively.
10. Lefebvre. (2025). Advanced Privacy in Legal AI: What Zero Data Retention Really Means. Defines ZDR as a state where "inputs are never stored, reused, or made accessible beyond the immediate session." Data is electrically wiped from memory after processing.
11. Phala Network & Red Pill AI. (2025). Confidential Computing Standards. Defines the use of Trusted Execution Environments (TEEs) to ensure code and data integrity during AI processing.
12. Swift Digital. (2025). Data Sovereignty vs Data Residency. Explains the distinction: "data sovereignty" refers to jurisdictional control (whose laws apply), while "data residency" refers only to physical location. Data stored in Australia can still be subject to US law (the CLOUD Act) if it is not sovereign.
13. Grella.ai. Privacy Policy & Security Documentation. Grella is an Australian entity (Sydney, NSW) subject to the Australian Privacy Principles (APPs). Data is encrypted with organization-specific keys. This ensures Grella cannot decrypt data regardless of storage jurisdiction.
Stop the Data Leak Today
Your staff is already using AI. Give them a safe tool instead of an unsafe one. Grella keeps your client data private. No training, no storage, no risk to your license.