Experts Blog

What ChatGPT Means for Your Organization's Future
March 10, 2023
Theresa Payton

From the Desk of Fortalice Solutions CEO, Theresa Payton:

Background

Microsoft recently confirmed it had invested $10 billion into its partnership with OpenAI, the Silicon Valley-based maker of ChatGPT, the artificial intelligence (AI) chatbot launched in November 2022 to sustained media fanfare. The technology is improving every day and already shows promising signs for employers and employees alike. Staring at a blank screen? Use ChatGPT to help generate ideas or different trains of thought. Trying to enhance programming code? Ask your AI assistant to add features to your program. Need historical statistics for a report you are writing? Ask ChatGPT to look back at technology trends. Recently, I even asked it to write a haiku about one of my favorite subjects: Privacy. “In solitude's peace, my heart and mind find release, Privacy, my bliss.” Not half bad.
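To make the “AI assistant” idea concrete, below is a minimal sketch of how a developer might send that kind of coding question to OpenAI's service programmatically. It assumes the openai Python package (pre-1.0 interface) and an API key stored in the OPENAI_API_KEY environment variable; the model name and prompt are illustrative only, and this is a sketch rather than a recommended integration.

```python
# Minimal sketch: asking a conversational AI assistant to suggest an
# improvement to a snippet of code. Assumes the "openai" Python package
# (pre-1.0 interface) and an API key in OPENAI_API_KEY; the model name
# and prompt below are illustrative assumptions.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

snippet = "def add(a, b): return a + b"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model name; substitute your own
    messages=[
        {"role": "system", "content": "You are a helpful programming assistant."},
        {"role": "user", "content": f"Suggest one improvement to this function:\n{snippet}"},
    ],
)

# The reply still needs human review before it goes anywhere near
# production code -- trust, but verify.
print(response["choices"][0]["message"]["content"])
```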

There are obvious benefits associated with this fresh, innovative technology, but as with any “new toy,” it has its limitations and risks as well. As Ronald Reagan used to say: “Trust, but verify.” With this Fortalice Solutions Client Advisory, I hope to provide important takeaways for organizations regarding the use of ChatGPT and other, lesser-known conversational AI platforms. At its most productive, employers and employees can treat ChatGPT as a super search assistant. The quickly evolving technology is remarkable. It uses machine learning and artificial intelligence algorithms and is built on the foundation of OpenAI's GPT-3 family of language models.

While there are some very tangible benefits to ChatGPT, Fortalice believes strongly that organizations need risk assessments, updated policies, and processes to protect intellectual property and company-sensitive information. It’s also important that employees fact-check responses from generative AI tools, and companies should be aware of the risks associated with manipulation, hidden engineering bias, and potential copyright infringement.

Like the current media maelstrom over all things AI, this advisory focuses primarily on ChatGPT. That said, many of the recommendations apply to other conversational AI platforms as well (e.g., ChatSonic, Google Bard, Jasper Chat, OpenAI Playground, Perplexity AI, and YouChat). It should be noted that none of these competitors yet performs all of ChatGPT's key functions.

With powerful new technology come new concerns about security, especially as they relate to the subject of my haiku above: Privacy. ChatGPT's adoption by Snap, as well as competing products from Google and Meta, highlights the growing interest in artificial general intelligence projects.

Key Concerns

Still very much in its infancy, the conversational AI landscape is not yet governed by substantial national or international law or court precedent protecting the organizations and individuals who use the new technology. With that in mind, I want to share the following concerns.

• With any openly available AI service such as ChatGPT, the risk of manipulation is extremely high. Bad actors can feed ChatGPT (or other conversational AI platforms) disinformation and misinformation that they can then use to manipulate the recipients of that information.

• ChatGPT is what’s known as a generative pre-trained transformer (GPT). This means the underlying machine learning model is trained and then tuned by humans to tweak and improve the performance of the tool. With this arrangement, however, you should not dismiss the potential for “Hidden Engineering Bias,” which exists due to the conscious and unconscious biases of the technology’s engineers. For example, in the early days of facial recognition software, there were terrible failures recognizing people of color. These tools are only as good as the engineers who build them and the data lakes on which they have been trained.

• Employees who use this technology in the workplace must fact-check responses from generative AI tools against alternative (i.e., more established) sources. Beyond the facts, employees should also watch for style, monitoring grammar, spelling, and punctuation; many AI responses, for example, come back in passive voice. Meanwhile, employers must have an explicit policy regarding the use of these tools within the workplace. (See below for a sample policy.)

• As the technology improves and its use becomes more commonplace, it is important to always treat conversational AI platforms as assistants or tools. They cannot, and should never, replace critical thinking.

Steps to Take

Whether or not your employees have started using ChatGPT, the technology is here, and it is important that your organization has the necessary policies and procedures in place to address the risks inherent in it. Fortalice recommends the following strategies to help your organization prepare for this latest innovation.

• Risk Assessments: Within your organization, conduct a review of the usage of conversational AI and set your own policies and processes now, before regulators force them upon you later.

• Third-Party Vendor Management: Require your third-party vendors to self-attest and disclose whether they use these types of tools as part of the services they provide to you.

• Intellectual Property and Company-Sensitive Information Risk: OpenAI gathers data from ChatGPT users and could use your and your employees’ inputs to further train and fine-tune ChatGPT. In other words, what you put into ChatGPT can become part of the public domain.

• Policy by Role: Update employee manuals to instruct employees not to enter business-sensitive information (e.g., client names, company confidential information) into search platforms, social media, or conversational AI tools such as ChatGPT. (For illustration, a simple prompt-screening sketch follows this list.)

Caveat: Exceptions to this rule may include employees conducting investigations or marketing campaigns. For these employees, develop specific training that protects your company.

• Copyright and Trademark Infringement: To date, it remains unclear whether the text responses to questions and prompts in ChatGPT are considered “original copy.” In this gray area, it is possible the responses could violate copyrights and trademarks, so be sure your policy includes language directing employees to ask the conversational AI tool to provide its sources.
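Policies like the ones above can also be reinforced with lightweight technical guardrails. Below is a minimal, illustrative sketch of screening a prompt against an organization-maintained list of client names and sensitivity markers before it is ever sent to a conversational AI tool; the blocked terms and function name are hypothetical, and a real control would need to be far more thorough.

```python
# Minimal sketch of a pre-submission guardrail. The blocked terms,
# function name, and behavior are illustrative assumptions, not a
# complete data-loss-prevention control.
import re

BLOCKED_TERMS = [
    "Acme Corp",        # hypothetical client name
    "Project Falcon",   # hypothetical internal codename
    "CONFIDENTIAL",
    "TRADE SECRET",
]

def screen_prompt(prompt: str) -> str:
    """Raise an error if a prompt contains blocked terms; otherwise return it."""
    hits = [t for t in BLOCKED_TERMS if re.search(re.escape(t), prompt, re.IGNORECASE)]
    if hits:
        raise ValueError(f"Prompt blocked; contains restricted terms: {hits}")
    return prompt

# Example: this prompt would be rejected before ever reaching an AI tool.
# screen_prompt("Summarize the CONFIDENTIAL findings for Acme Corp.")
```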

Sample Policy

Employees using ChatGPT or other AI tools are prohibited from referring to confidential, proprietary, or trade secret information or entering such information into AI chatbots or language models. Additionally, to protect our brand promise of confidentiality to clients, employees are also prohibited from entering any client names or client information into AI chatbots or language models. Finally, employees using any AI tools should understand that there is no guarantee these tools are secure or produce accurate information.
