Andrew Nicholson, Partner - Mullins Lawyers
Generative AI tools such as ChatGPT are already widely used throughout most organisations. However, the majority of businesses have not properly considered the risks involved or implemented protocols governing the use of AI.
Most respondents are now frequent users of AI programmes in the workplace, with many reporting that AI programmes assist them in their day-to-day work.
However, the broad use of Generative AI poses major risks for companies, with 61% of respondents stating that their company did not have any internal guidelines on the use of AI. In many of those businesses, AI implementation is not controlled by management, and there are no clear guidelines for its use. Instead, AI is largely being adopted by employees themselves, and in 26% of cases this occurs without management being aware of it.
Most respondents also confirmed that they had used AI tools for work-related purposes in non-secure environments, such as on personal computers and mobile phones.
Here are our top 6 tips:
1. Businesses must remain aware of evolving regulations governing AI use within their industry and ensure that the use of AI applications aligns with current legal standards and compliance protocols. The legal landscape surrounding AI is dynamic, and it is imperative to establish clear policies and make all employees aware of them.
2. Businesses should obtain clear and informed consent from users around the collection, processing and storage of data, particularly where sensitive personal information is involved. Updating privacy policies and collection notices, and implementing strong data privacy measures, is a must.
3. The risk of disclosure of confidential business information should be addressed by establishing a framework for what can be shared with AI, implementing robust security measures and setting clear policies.
4. Protecting data means prioritising robust cybersecurity. Ensuring the security of AI systems, and the data contained in them, helps mitigate the risk of legal consequences in the event of a cyber-attack or breach.
5. Businesses should be open and clear about their use of AI so that stakeholders understand how it might impact them, including where AI is used to produce work on their behalf, or even to make decisions which could affect them. Essentially, this is a full disclosure and 'truth in advertising' type test.
6. The ownership and possible protection of AI-generated content is complex. Businesses should consider how to control and own AI-produced work and ensure that they clearly document (including in contracts and policies) the protection of proprietary assets.
Navigating the legal minefield of AI in the workplace can be challenging, and we are here to chat about how you are addressing these points within your own organisation.
Should you require further information, please contact us: w: www.mullinslawyers.com.au | Ph: 3224 0222
Learn more about how Queensland Leaders can assist your business.
Phone +61 7 3392 1661
Email info@qldleaders.com.au
Website: www.qldleaders.com.au