The term AI broadly encompasses any technology that can perform tasks that usually require human intelligence, such as language comprehension, image recognition, decision-making, or data analysis.
Workplaces need to quickly get up to speed on these developments to capitalise on the benefits while proactively managing the potential security, legal, ethical, and reputational risks associated with the technology.
Whether you know it or not, your employees will already be experimenting with this technology, using their personal accounts on such platforms for work-related tasks, thereby exposing employers to a range of unanticipated challenges.
AI in Action: How Employees are Incorporating Artificial Intelligence into Their Daily Work Routines
At work, employees are utilising AI tools such as ChatGPT and Bing Chat in a multitude of ways to improve their productivity and efficiency.
Fact-Checking: Employees use ChatGPT to check facts in documents they are producing or reviewing, similar to using Google or Wikipedia.
Content Generation: Another common use of ChatGPT is generating content, such as drafts of speeches, memos, cover letters, articles, routine emails, logos, ads, images, and videos.
Editing: ChatGPT helps edit documents, as it was trained on millions of documents and is adept at fixing grammatical errors, providing clarity, and improving readability.
Idea Generation: Brainstorming and generating lists are other ways employees use ChatGPT and other AI.
Coding: ChatGPT can be used for coding: both for generating new code and checking existing code.
Enhancing Communication: ChatGPT is used to streamline and enhance employee and customer communication. For example, salespeople use ChatGPT to craft personalised emails or messages, and customer service agents use it to handle queries or complaints.
Improve Productivity: Employees are automating or streamlining their tasks and workflows, such as accountants using AI to process invoices or taxes, managers using AI to schedule meetings or assign tasks, and data entry people using it to generate Excel formulas.
Learning and Development: ChatGPT and other AI tools can help employees learn new skills or knowledge relevant to their jobs.
The Dark Side of ChatGPT: The Risks and Concerns Surrounding the Use of AI in the Workplace
As impressive as ChatGPT and other AI platforms are, they come with significant potential security, legal, ethical, and reputational risks.
Security Risks:
AI systems may be vulnerable to cyberattacks, such as hacking, malware, or data breaches, which could result in the loss of sensitive information or unauthorised access to company systems.
Data breaches: AI systems such as ChatGPT process large amounts of sensitive data, and if that data is not adequately secured, it may be vulnerable to theft or misuse. Data breaches could occur due to vulnerabilities in the AI system or poor security practices by employees or third-party vendors.
Malware attacks: AI systems may be susceptible to malware attacks that could compromise the system’s functionality or allow unauthorised access to sensitive data. Malware attacks could occur through the AI system’s internet connection or malicious code being introduced into the system through a software update or other means.
AI model poisoning: AI models are trained on large datasets, and if an attacker can manipulate the training data, they may be able to poison the model and cause it to make incorrect or biased decisions. This could have serious security implications if an AI system is used for critical decision-making.
Denial-of-service attacks: AI systems may be targeted by denial-of-service (DoS) attacks that could cause the system to become unavailable or unresponsive. DoS attacks could occur through the AI system’s internet connection or resource exhaustion due to excessive system use.
Malicious code injection: An AI system used for code validation may be vulnerable to exploits or attacks, allowing an attacker to compromise the system and inject malicious code into the codebase.
Insider threats: Employees with access to the AI system used for code validation may pose a security risk through accidental actions or malicious intent. For example, an employee may intentionally manipulate the input data to bypass the validation process, inject malicious code into the codebase or abuse their access privileges to steal sensitive data.
Over-reliance on AI: AI systems may not be able to detect or prevent all types of errors, bugs, vulnerabilities, or malicious code. They may also make mistakes or produce false positives or negatives. Over-reliance on an AI system for code validation may lead to a false sense of security, with developers trusting that the code has been properly validated without conducting their own checks.
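The antidote to over-reliance is independent verification. The sketch below illustrates the idea: a function standing in for a snippet an AI assistant might produce is run through the developer's own checks, which surface an edge case (a zero denominator) the AI version silently mishandles. The function and check names are illustrative, not part of any real tool.

```python
# A minimal sketch of independently testing AI-generated code rather than
# trusting it blindly. `ai_suggested_percentage` stands in for a snippet an
# AI assistant might produce; `review_ai_code` is the developer's own safeguard.

def ai_suggested_percentage(part: float, whole: float) -> float:
    """Hypothetical AI-generated helper: what percentage of `whole` is `part`."""
    return round(part / whole * 100, 2)

def review_ai_code() -> list:
    """Run the developer's own checks instead of assuming the AI output is correct."""
    failures = []
    # Expected behaviour on ordinary input.
    if ai_suggested_percentage(25, 200) != 12.5:
        failures.append("wrong result for ordinary input")
    # Edge case the AI version does not handle: a zero denominator raises
    # ZeroDivisionError instead of being rejected gracefully.
    try:
        ai_suggested_percentage(5, 0)
        failures.append("zero denominator not rejected")
    except ZeroDivisionError:
        failures.append("unhandled ZeroDivisionError on zero denominator")
    return failures

print(review_ai_code())
```

Even this toy review catches a defect the AI-generated code would have shipped with, which is precisely the check a false sense of security tempts developers to skip.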
Legal Risks:
ChatGPT and other AI may generate content that infringes on intellectual property rights, defames someone, or violates laws and regulations such as privacy and data protection laws.
All businesses should review the potential legal risks of using AI with their corporate lawyer.
Working through legal liability is made even more challenging when employees use their personal ChatGPT account to do the work rather than the business account.
Contractual Risks: Many business contracts restrict a business’s ability to share confidential information with a third party, and often also limit the purposes for which private information can be used.
ChatGPT and other AI platforms are third parties that, in their terms and conditions of use, grant themselves almost unlimited power to use all content provided to them to enhance the system’s functionality.
Sharing client information with ChatGPT or other AI platforms through, for example, having AI assist in drafting confidential reports, editing emails, or drafting quotes could result in a breach of contract.
Privacy Risks: ChatGPT and other AI platforms have the right to access all conversations for training purposes. This means that the AI organisation may view personal, private, commercially sensitive, or confidential information that is shared as part of the use of the system.
In addition, there are currently no consistent Australian laws governing deletion rights or requests to remove data from AI conversations.
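One practical mitigation for the privacy risks above is to strip obvious personal identifiers from text before it is submitted to an external AI service. The sketch below redacts email addresses and Australian-style phone numbers with regular expressions; it is a minimal illustration only, and a real deployment would need far broader coverage (names, addresses, account numbers) plus legal review.

```python
import re

# Illustrative redaction pass: remove obvious personal identifiers
# (email addresses and Australian-style phone numbers) from text
# before it is pasted into an external AI tool.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b(?:\+61|0)[2-478](?:[ -]?\d){8}\b")

def redact(text: str) -> str:
    text = EMAIL.sub("[EMAIL REDACTED]", text)
    text = PHONE.sub("[PHONE REDACTED]", text)
    return text

print(redact("Contact Jane on 0412 345 678 or jane@example.com for the quote."))
```

Note that the client's name still passes through untouched, which is exactly why pattern-based redaction alone cannot discharge a business's confidentiality obligations.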
Consumer Protection Risks: In Australia, businesses that use ChatGPT or other AI technology for customer interactions or content creation may face consumer protection risks if they do not clearly disclose to consumers that they are interacting with an AI system rather than a human representative. Failure to disclose this information could lead to claims of unfair or deceptive practices, which could result in legal action by consumer protection authorities or affected individuals.
In addition to the legal and regulatory risks, businesses may face reputational damage if consumers perceive their use of AI as deceptive or unethical. This could impact customer trust and loyalty and may ultimately harm the business’s bottom line.
Moreover, clients who pay for content generated by ChatGPT without being informed that an AI system created it may feel misled and demand a refund or compensation, leading to further financial and reputational damage.
Intellectual Property Risks: In Australia, businesses that use ChatGPT or other AI technology for software development or content creation may face intellectual property risks. The use of AI technology to generate software code or other content may raise questions about whether such content is protectable by copyright, particularly if a human being did not author it.
There is also a risk that ChatGPT and any content it produces may be deemed a derivative work of copyrighted materials used to train the model. This could result in claims of copyright infringement if the software code, marketing materials, or other content generated by ChatGPT appears substantially similar to the copyrighted training data.
Furthermore, if employees submit confidential code, financial data, or other trade secrets and confidential information into ChatGPT for analysis, there is a risk that other users of ChatGPT may be able to access and compromise this information, potentially resulting in a breach of confidentiality obligations.
If the software submitted to ChatGPT includes open source, businesses should consider whether such submission could trigger possible open source license obligations.
Vendor Risks: In Australia, companies that work with vendors may face vendor risks associated with ChatGPT. These risks are similar to those faced by companies using ChatGPT in-house.
Companies should review their contracts with vendors to ensure that the risks associated with ChatGPT use are addressed. Specifically, contracts should clarify whether information provided by the vendor to the company can be generated by ChatGPT without prior consent and whether the vendor can enter confidential company data into ChatGPT.
Bias and Discrimination Risks: ChatGPT and other AI rely on algorithms that may be biased towards certain groups or individuals. AI systems may inherit or amplify biases from their data sets, algorithms, or human inputs, which could result in unfair, inaccurate, or harmful outcomes for certain employees or customers. For example, if an AI system is used for recruitment or performance evaluation, it could result in discrimination based on gender, race, age, or other protected characteristics.
Ethical Risks:
The use of AI in the workplace raises ethical concerns, such as the potential for AI to replace human workers, the impact of AI on job security, and the ethical implications of AI decision-making.
There is also a risk of the AI system being programmed with biases or values that may not align with the organisation’s ethical principles.
Lack of Human Oversight: AI systems may operate autonomously or without sufficient human supervision, control or intervention. This could raise ethical, legal or moral issues around accountability, responsibility and liability for the outcomes of the AI systems.
Lack of Transparency and Explainability: AI systems may be unable to provide clear and understandable reasons for their decisions or actions, especially if they use complex or opaque algorithms. This could make it difficult to verify, audit, or debug the code that they check or validate or the outcomes that they generate.
Reputational Risks:
ChatGPT or other AI systems may damage the reputation of a business if they produce inappropriate or offensive content that is shared or published externally, causing harm or offence to employees or customers.
Quality Control: ChatGPT and other current forms of AI can deliver inaccurate results and have been known to make up facts or citations. This means that all AI outputs need to be carefully reviewed for reliability and accuracy before being used. Reviewing facts requires employees with a core level of knowledge to spot and fix errors. However, if an employee does not have this base knowledge, they may approve content or code for dissemination that is flawed or inaccurate, creating problems for a business’s reputation.
Strategies to Reduce the Risks of AI in the Workplace
The explosion of ChatGPT and other AI use in the workplace means that businesses need to address the potential risks.
Conducting a Comprehensive Risk Assessment: This involves assessing the risk level associated with using AI across all areas of the business.
Implementing Security Controls: Businesses should consider implementing various security controls such as encryption, authentication, authorisation, logging, and auditing for data and processes to ensure data protection.
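The "logging and auditing" control mentioned above can be sketched in a few lines: every prompt sent to an AI service is recorded with who sent it and when. In this illustration only a hash of the prompt is stored, so the audit trail does not itself become another copy of sensitive text; `send_to_ai` is a placeholder for whatever client library a business actually uses.

```python
import datetime
import hashlib

# Minimal audit-logging sketch for AI prompt submissions. The log records
# who sent a prompt and when, plus a SHA-256 hash of the prompt rather than
# the prompt itself, so the trail does not duplicate sensitive content.

AUDIT_LOG = []

def send_to_ai(prompt: str) -> str:
    # Placeholder for a real AI client call.
    return f"(response to {len(prompt)}-character prompt)"

def audited_send(user: str, prompt: str) -> str:
    AUDIT_LOG.append({
        "user": user,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    })
    return send_to_ai(prompt)

reply = audited_send("a.smith", "Draft a polite reminder email about invoice 1042.")
print(AUDIT_LOG[0]["user"], AUDIT_LOG[0]["prompt_sha256"][:12])
```

In a real deployment the log would be written to tamper-evident storage and paired with the encryption, authentication, and authorisation controls listed above.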
Policy and Procedure Development: To guide the use of AI and address issues such as accountability, transparency, quality, security and ethics, businesses should develop and implement relevant policies and procedures. (Our HR suite of policies includes a policy on The Use of AI in the Workplace).
Establishing Roles and Responsibilities: This involves defining the roles and responsibilities for human oversight and intervention in AI-generated or assisted work to ensure that the AI system operates within the desired parameters.
Training: It is essential to train managers and all employees on the safe, legal, and ethical use of AI in the workplace to ensure that everyone understands the benefits and limitations of AI and how to use it appropriately.
In conclusion, the AI revolution is rapidly changing the business landscape, and it’s becoming increasingly clear that AI-powered tools like ChatGPT are no longer just the preserve of large corporations.
As AI continues to evolve, it’s poised to become even more integral to how small businesses operate. Small businesses can stay competitive, enhance productivity, and streamline their operations by adopting and adapting to AI.
However, it’s important to approach AI cautiously and ensure that your business is adequately prepared for the challenges and opportunities that come with it.
By staying informed, conducting thorough risk assessments, implementing appropriate security controls and developing relevant policies and procedures, small businesses can reap the rewards of AI while minimising its potential downsides. With the right approach, small businesses can leverage AI to enhance performance and deliver more value to their customers.