Embracing Generative AI

What is generative AI?

Generative artificial intelligence (AI) is a type of artificial intelligence that creates new content by learning from existing data. It does so by applying self-supervised or unsupervised machine learning to data sets obtained from sources such as images, audio, video and text. Popular examples of generative AI include ChatGPT, Bard, DALL-E, Midjourney, AutoGPT and BabyAGI.

Generative AI dates back to the 1960s, but it gained widespread popularity only with the launch of ChatGPT in November 2022. Its ability to support work processes across industries has seen it widely adopted to enhance productivity, customer experience and product development. Generative AI tools draw on large language models (LLMs) to create their responses, transmitting and processing data over the Internet as they do so. LLMs also learn from end users' prompts, and data from one end user's input may surface in responses served to other end users who put forward similar prompts to the generative AI tool.


Drawbacks of generative AI

While generative AI and LLMs may be tapped on to enhance work processes and output, they are not without drawbacks. Publicly accessible generative AI tools generally do not tell end users which data sources were drawn on to generate an output, nor are end users any wiser as to how much of their input data is stored, where, and for how long. As a result, a data breach can occur if information meant to be restricted to specific individuals or organisations is fed into the LLM during a query, only to be shared with other public end users later. Further, as any information that individuals or organisations put up on the Internet may be used by LLMs to generate new content, there is a real danger of individuals or organisations being misrepresented, intentionally or unintentionally, and exposed to undesirable consequences.


What can be done to reduce the risks of using publicly accessible generative AI tools?

Companies offering generative AI solutions as part of their suite of products have taken note of these drawbacks and built features into their products to support safe and secure use. For example, Microsoft's Copilot promises to protect end users' data by not training the LLM on tenant data or prompts [1]. Google's Duet AI promises products designed in accordance with Google's AI Principles, which put end users in control of their data and enable them to set the right policies for their organisation [2]. However, such solutions require paid subscriptions and may not be feasible for end users with budget constraints. Moreover, if personal devices are used for work against company policies, it is not possible for companies to prevent their employees from using publicly accessible generative AI tools, or to monitor what information the employees input into them. How, then, may we tap on the affordances of publicly accessible generative AI tools while ensuring safe, secure and responsible use? Here are some guidelines to help prevent potential data breaches or cyber-attacks.


1. Limit the amount of information shared and/or stored, or opt out of model training

Organisations should limit the amount and type of data that they share online or with the public. Confidential and/or proprietary data should never be submitted to any public LLM as a prompt. Organisations must therefore understand how their data will be used, who will have access to it, and whether it will be shared with the model provider's partners. Depending on the sensitivity of the data, organisations can also request that the model provider not share it with partners, even in anonymised form, and that the data be deleted and not used for training or improving any of the provider's models.
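
Where fragments of sensitive material must still appear in a prompt, stripping obvious identifiers before the text leaves the organisation offers a simple safeguard. Below is a minimal sketch in Python, assuming simple regex-based detection and a hypothetical internal project-code format (PRJ-XXXX); a production setup would rely on a dedicated data loss prevention (DLP) or PII-detection tool instead.

    import re

    # Hypothetical patterns; extend with identifiers specific to your organisation.
    PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d\b"),
        "PROJECT_CODE": re.compile(r"\bPRJ-\d{4}\b"),  # assumed internal code format
    }

    def redact(prompt: str) -> str:
        """Replace sensitive matches with placeholders before the prompt is sent."""
        for label, pattern in PATTERNS.items():
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
        return prompt

    print(redact("Email jane.doe@acme.com about PRJ-1234 at +65 9123 4567."))
    # Email [EMAIL REDACTED] about [PROJECT_CODE REDACTED] at [PHONE REDACTED].

The patterns here are deliberately crude; the point is that redaction happens before the prompt is transmitted, not after.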

Organisations whose work deals mainly with highly sensitive and/or confidential data should consider putting in place security and access controls, along with rules that prevent employees from uploading content to, or connecting to, platforms hosting publicly accessible generative AI tools. For these organisations, ensuring data security outweighs the benefits of using free generative AI tools.
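
At the network level, such rules are commonly enforced as an egress blocklist. The sketch below shows the underlying check in Python, assuming outbound traffic passes through an internal gateway that can inspect destination hosts; the domain list is illustrative only, and commercial web filters and secure web gateways provide the same capability off the shelf.

    # Illustrative, non-exhaustive list of generative AI service domains.
    BLOCKED_SUFFIXES = (".openai.com", ".anthropic.com", ".gemini.google.com")

    def is_blocked(host: str) -> bool:
        """Return True if the destination host belongs to a blocked service."""
        host = host.lower().rstrip(".")
        return any(host == s.lstrip(".") or host.endswith(s) for s in BLOCKED_SUFFIXES)

    assert is_blocked("chat.openai.com")          # subdomain of a blocked service
    assert not is_blocked("intranet.example.com") # internal traffic passes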


2. Educate or train your employees on the safe and responsible use of generative AI tools

Organisations should educate their employees on what generative AI is, as well as the potential benefits and drawbacks of using it. Clear guidelines on the use of the organisation's data have to be established, covering, for example, how proprietary or client data should and should not be used, and what kinds of tasks involving a specific type of data may be completed using publicly accessible generative AI tools. These guidelines, as well as the consequences of violating privacy policies or laws, can be conveyed through codes of conduct or regular email reminders to employees. The guidelines can also be worded into user awareness checklists to ensure that the expectations of legal and ethical use of generative AI tools are met.


What else should companies take note of as the use of generative AI tools becomes more pervasive?

1. Keep antivirus software and browser extension patches up to date

Meta, the parent company of Facebook, has warned that hackers have been taking advantage of the huge interest in generative AI tools like ChatGPT to carry out nefarious activities. These include malicious software posing as generative AI tools to trick users into installing malicious code on their devices [3], and bogus Chrome browser extensions, distributed through the official Web Store, that masquerade as the OpenAI ChatGPT service to harvest Facebook session cookies and hijack accounts [4]. When this happens, organisations are at risk of a cyberattack and exposed to a possible data breach. Organisations should therefore keep their antivirus software and browser extension patches up to date and consider providing employees with access to trusted generative AI tools or platforms.
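
Rogue extensions can also be caught by periodically auditing what is actually installed. The sketch below enumerates the Chrome extensions on a Windows machine by reading each extension's manifest; it assumes Chrome's default profile location, which differs on macOS, Linux and managed profiles, so treat it as a starting point rather than a complete inventory tool.

    import glob
    import json
    import os

    EXT_DIR = os.path.expandvars(
        r"%LOCALAPPDATA%\Google\Chrome\User Data\Default\Extensions")

    # On-disk layout is Extensions\<extension id>\<version>\manifest.json
    for manifest_path in glob.glob(os.path.join(EXT_DIR, "*", "*", "manifest.json")):
        with open(manifest_path, encoding="utf-8") as f:
            manifest = json.load(f)
        ext_id = manifest_path.split(os.sep)[-3]  # folder name is the extension ID
        name = manifest.get("name", "?")          # may be a __MSG_ locale placeholder
        print(ext_id, manifest.get("version", "?"), name)

Comparing such an inventory against an approved list makes unexpected or impersonating extensions stand out quickly.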


2. Enhance your employees’ vigilance against and recognition of impersonation attempts

Employees should be reminded that social engineering attacks have become more sophisticated and harder to detect as technology advances, especially with the emergence of generative AI tools. Examples of how these tools have been abused to create deepfakes or imitate writing styles as part of social engineering or phishing attacks to obtain confidential data should be shared. Organisations should also teach employees how to distinguish authentic communication materials sent by their company, partners and clients from questionable ones, for example by regularly reminding employees of key features to check for in such materials. They should conduct regular phishing campaigns to reinforce employees' ability to detect phishing emails and to encourage good cybersecurity practices. Depending on the nature of the work, a standard verification process to screen for potential deepfakes may be introduced, and employees should be reminded to always verify with the official website, organisation or person if in doubt.
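
One simple, automatable complement to this training is screening inbound sender domains for near misses against the organisation's genuine domains, since lookalike domains (for example, a digit substituted for a letter) are a staple of impersonation attempts. The following is a minimal sketch in Python, assuming a short hypothetical allowlist and an illustrative similarity threshold; mail security gateways implement far more robust versions of the same idea.

    from difflib import SequenceMatcher

    TRUSTED_DOMAINS = ["example.com", "examplebank.com"]  # hypothetical corporate domains

    def looks_suspicious(sender_domain: str, threshold: float = 0.8) -> bool:
        """Flag domains that are close to, but not exactly, a trusted domain."""
        domain = sender_domain.lower()
        for trusted in TRUSTED_DOMAINS:
            if domain == trusted:
                return False   # exact match: genuine sender
            if SequenceMatcher(None, domain, trusted).ratio() >= threshold:
                return True    # near miss: possible impersonation
        return False

    print(looks_suspicious("examp1e.com"))  # True  (digit "1" in place of "l")
    print(looks_suspicious("example.com"))  # False (the genuine domain)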


Conclusion

With AI becoming increasingly prevalent in our daily lives, generative AI is here to stay. Although the benefits of using publicly accessible generative AI tools come with associated challenges and risks, organisations should not stop employees from using them outright, as bans will not deter determined employees from utilising these tools via their personal devices. They should instead implement policies and provide clear guidelines on the appropriate use of data with these AI tools.

Organisations can consider providing employees with access to trusted AI platforms, with access to data granted on the principle of least privilege. Employees should also be educated on how to identify impersonation attempts and be equipped with strategies to manage them. Taken together, these measures bring about a win-win situation: organisations can continue to manage their cybersecurity risks effectively, while their employees make use of generative AI tools to manage their workload and improve their work processes and productivity.


References

[1] https://blogs.microsoft.com/blog/2023/03/16/introducing-microsoft-365-copilot-your-copilot-for-work/

[2] https://workspace.google.com/blog/product-announcements/duet-ai

[3] https://www.bangkokpost.com/tech/2562765/hackers-promise-ai-install-malware

[4] https://thehackernews.com/2023/03/fake-chatgpt-chrome-browser-extension.html