Innovation & Intellectual Property
Managing AI and reducing risk
Andrew Nicholson, Partner - Mullins
Generative AI tools such as ChatGPT are already routinely used in most organisations. That use will only become more widespread following the Government's release of mandatory guidelines and voluntary safeguards in September 2024, which require each Department to adopt a policy for the efficient use of AI within the Department and to ensure all staff are trained in those matters by March 2025.
However, the majority of organisations have not yet implemented sufficient guidelines and protocols regarding the use of AI.
A recent Deloitte survey found:
Most respondents are now frequent users of AI tools in the workplace, with many stating that AI helps them to:
- work more efficiently (63%)
- be more creative (54%)
- improve the quality of their work (45%)
The broad use of Generative AI poses major risks: 61% of respondents stated that they did not have any internal guidelines on the use of AI. In many organisations, AI implementation and use is not controlled by management, and there are no clear rules for its use. Instead, AI is largely being implemented and used by employees of their own volition, and in 26% of cases this occurs without management being aware.
Most respondents also confirmed that they had disclosed sensitive information to AI tools in non-secure environments, such as on personal computers and mobile phones.
How should organisations address these risks?
Here are our top 6 tips:
- Compliance Framework: Organisations must remain aware of evolving regulations related to AI use within their industry and ensure that their use of AI applications aligns with current legal standards and compliance protocols. An example is the Supreme Court of NSW adopting a practice direction which largely bans the use of AI by legal practitioners in Court proceedings. The legal landscape surrounding AI is dynamic, and it is imperative to establish clear policies and make all employees aware of them.
- Data Protection and Privacy: Businesses should obtain clear and informed consent from users for the collection, processing and storage of data, particularly where confidential or sensitive personal information is involved. Updating privacy policies and collection notices, as well as implementing strong data privacy measures, is a must.
- Confidentiality: The risk of disclosure of confidential business information should be addressed by establishing a framework for what can be shared with AI tools, implementing robust security measures and setting clear policies.
- Cybersecurity: Protecting data means prioritising robust cybersecurity. Ensuring the security of AI systems and the data contained in them helps mitigate the risk of legal consequences in the event of a cyber-attack or breach.
- Transparency: Organisations should be open and clear about their use of AI so that stakeholders understand when AI is used to produce work, or even make decisions which could impact them. Essentially, this is a full disclosure and 'truth in advertising' type test.
- Intellectual Property Rights: The ownership and possible protection of AI-generated content is complex. Organisations should consider whether they can own work which is produced, or contributed to, by AI, and should clearly document (including in customer and supplier contracts) who is to obtain the rights in those proprietary assets and in any works they produce.
Key takeaways
- Organisations must assess how AI is being used by their staff and suppliers, particularly where they need to provide 'AI clean' services to customers.
- Policies should be developed and staff trained on guidelines for procurement and acceptable use of AI.
- Privacy policies should be reviewed and updated.
- Contracts with suppliers and customers should also be reviewed to ensure transparency and to address matters such as who owns IP or data produced by AI.

If you need any assistance to work through these matters or to implement policies or guidelines, please reach out.