
Should You Ban ChatGPT? No, You Should Accelerate Adoption

Marc Simard | June 23, 2023

Some companies (Apple, Samsung, Verizon, etc.) are limiting - or even banning - the use of generative AI like ChatGPT for developers. Should your company consider doing the same? 

At Setori, we think the answer is a resounding no. While some risks (data leakage, copyright infringement) exist, the benefits (developer satisfaction, productivity gains) vastly outweigh them. Besides, it is impractical to try to put the genie back in the bottle: ChatGPT - and Bard, and Claude, and Bing, and Copilot, and others - are here to stay.

Is your company restricting - or considering limiting - the use of generative AI for developers? If so, we want to talk with you about it. Drop us a line at marc@setori.ai.

Your best engineers are curious, love new tools, and know how to work around rules; they’ll use third-party Large Language Models (LLMs) if they want to. To maximize the impact on your organization, focus on shaping how these tools are used rather than whether they are used.

The risks: Data leakage and IP considerations

According to an exclusive piece by the Wall Street Journal in May, Apple told at least some of its employees not to use ChatGPT or GitHub Copilot due to confidentiality concerns. In an internal memo reviewed by Bloomberg, also in May, Samsung banned ChatGPT for its employees; breaking this ban could lead to “disciplinary action up to and including termination of employment”. And back in February, Verizon announced that it had made ChatGPT inaccessible from its corporate systems because the tool can “put us at risk of losing control of customer information, source code and more”.

Data leaks

On March 20, 2023, ChatGPT had its first data leak. OpenAI - the company behind ChatGPT - shared that a bug “allowed some users to see titles from another active user’s chat history. It’s also possible that the first message of a newly-created conversation was visible in someone else’s chat history”.

There are other failure modes too. Even if ChatGPT itself is not compromised, individual user accounts - which may contain confidential employer information if it was used in prompts - can be hacked. For example, Group-IB found credentials for 26,802 compromised ChatGPT accounts on the dark web in May 2023.

Finally, based on its Terms of Use (as of June 2023), OpenAI reserves the right to use content submitted by non-API users to “improve model performance”. While it is possible to opt out, without proper education and guidance, employees might not take this additional step to protect company information.

Other risks

LLMs are trained on massive datasets, and some of the works in those datasets are copyrighted. While OpenAI, again per its Terms of Use, “assigns to you [the user] all its right, title and interest” in the output of ChatGPT, it is still unclear whether that output will survive copyright challenges in at least some jurisdictions.

Finally, LLMs tend to hallucinate plausible - but ultimately wrong - code. This introduces a new class of subtle bugs that are harder to identify, potentially adding complex issues to codebases. These hallucinations can also be weaponized by bad actors: if ChatGPT consistently hallucinates a non-existent package, an attacker could create that package with malicious code and publish it, waiting for developers to install it.
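One lightweight mitigation is to verify that any LLM-suggested dependency actually exists before adding it to a project - and, beyond that first check, to review its maintainers and release history. Below is a minimal sketch, assuming a Python project, the requests library, and the public PyPI JSON API; the same idea applies to npm, crates.io, and other registries.

```python
# Sketch: sanity-check an LLM-suggested dependency against PyPI before adding
# it to a project. Existence is only a first filter - a squatted package will
# pass it - so also review maintainers, release history, and download counts.
import sys
import requests

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is published on PyPI."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

if __name__ == "__main__":
    pkg = sys.argv[1]
    if package_exists_on_pypi(pkg):
        print(f"'{pkg}' exists on PyPI - still review who maintains it.")
    else:
        print(f"'{pkg}' was not found on PyPI - it may be a hallucination.")
```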

The benefits: Happier, more productive engineers

Despite these risks, generative AI is making developers happier and more productive. According to Stack Overflow’s 2023 Developer Survey (90,000+ respondents), ~70% of developers are using AI - or are planning to use it soon - in their development process. That’s a lot.

“We use ChatGPT to write code….[Software development that] used to take anywhere from eight to 10 weeks…now can be done in less than a week.”

—Girish Mathrubootham, CEO, Freshworks

Developers love these tools too - 77% of users view AI tools for development favourably or very favourably, citing increased productivity (33%), sped-up learning (25%), and greater efficiency (25%) as the most important benefits.

A September 2022 study from GitHub (note that the tools have improved since then) found that 88% of developers felt more productive using GitHub Copilot and that almost three-quarters felt it let them focus on more satisfying work. This is not a story of craftspeople being turned into mindless automatons - 60% of developers felt more fulfilled using Copilot.

This points to twin benefits. As a first-order effect, teams embracing generative AI like ChatGPT could out-deliver their competitors. Beyond that, happier, more fulfilled engineers tend to be more motivated and easier to retain, further compounding the gains.

The path forward: Be proactive, accelerate!

To recap - this year, the majority of developers will be using generative AI in their workflows. It’s making them more productive. And they love it. We suggest taking a proactive approach with your development team to help reduce the risks while increasing the benefits of tools like ChatGPT.

Reducing the risks 

If you do not have a self-hosted implementation of ChatGPT, you can assume that your developers are using the consumer version of the product. At a minimum, make sure they are disabling chat history (‘incognito mode’) and asking OpenAI to exclude their work-related prompts from the training data for future models.

More generally, to reduce risk, write - and broadly communicate - a policy around confidential inputs. Is there a subset of the codebase with heightened confidentiality risk? Make sure developers are not using it as prompt material.

Finally, consider self-hosting a solution with an OpenAI API key. As per OpenAI’s Terms of Use, this will at least ensure that all your employees’ inputs are excluded from future training data by default. We’ll write an article in the future about the best options to do this.
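For illustration, here is a minimal sketch of what that could look like: a small internal chat wrapper that routes prompts through the OpenAI API instead of the consumer ChatGPT UI. It assumes Python, the openai package (pre-1.0 interface), an OPENAI_API_KEY environment variable, and access to the gpt-4 model.

```python
# Minimal sketch of an internal chat loop that routes prompts through the
# OpenAI API rather than the consumer ChatGPT UI. Assumes the `openai`
# Python package (pre-1.0 interface) and an OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask(history: list, prompt: str, model: str = "gpt-4") -> str:
    """Append the prompt to the running conversation and return the reply."""
    history.append({"role": "user", "content": prompt})
    response = openai.ChatCompletion.create(model=model, messages=history)
    reply = response.choices[0].message["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    messages = [{"role": "system", "content": "You are an internal coding assistant."}]
    while True:
        user_input = input("you> ").strip()
        if not user_input:
            break
        print("assistant>", ask(messages, user_input))
```

The key point is that the API key, any logging, and any redaction of sensitive snippets live on infrastructure you control rather than in individual consumer accounts.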

Increasing the benefits

Self-hosting a chat interface with an OpenAI key can also bring additional benefits, like sharing prompts across your workforce and codifying workflows. To further accelerate learning across your organization, create space for people to share how they are using generative AI tools: lunch-and-learns, a session at an off-site, or a retro topic are all good venues. Using AI tools effectively should also become part of your new-developer onboarding.
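One way to codify those workflows is to keep shared prompt templates under version control, so common tasks are phrased consistently across the team. A small illustrative sketch in Python follows; the template names and wording are made up.

```python
# Illustrative sketch: a shared, version-controlled library of prompt templates
# so that common workflows (code review, test writing, ...) are codified once
# and reused across the team. Names and wording are placeholders.
PROMPT_TEMPLATES = {
    "code_review": (
        "You are reviewing a change to our {language} codebase. "
        "Point out bugs, edge cases, and style issues.\n\n{diff}"
    ),
    "write_tests": (
        "Write unit tests for the following {language} code using our "
        "standard test framework.\n\n{code}"
    ),
}

def render(template_name: str, **fields: str) -> str:
    """Fill in a shared template; raises KeyError if a field is missing."""
    return PROMPT_TEMPLATES[template_name].format(**fields)

# Example: render("code_review", language="Python", diff="<paste diff here>")
```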

For workplaces that do not have a self-hosted chat interface, consider covering the cost of a premium subscription to ChatGPT (or other generative AI tools). There is a significant performance gap between GPT-4 and earlier models. At $20/month, it’s a no-brainer if your average developer can save more than 30 minutes using it - at a fully loaded cost of, say, $80 an hour, that half hour is already worth about $40.

As always, feel free to connect if you have any thoughts (marc@setori.ai) or if we can be helpful.

Take care,
Marc