The impacts of ChatGPT — the chatbot launched by OpenAI — are rapidly evolving and, as Early’s editors, we are curious about how they will play out in the world of work. Since its launch in November 2022, ChatGPT has been the focus of fear-mongering, criticism and celebration. So, instead of decisively stepping in any one direction, we’re using this story to tread — optimistically and cautiously — into the ambiguity of this technological development. To do so, we’ve gathered some thoughts from our friends at HR consultancy Bright + Early to help guide the way.

Early Magazine Editors: What was your first impression when you saw ChatGPT stories popping up over the past couple of months?

Eric Hutchinson, Sr. Consultant at Bright + Early: So, firstly, my background before HR is in neuroscience, and I'm really interested in neural networks and how they work. ChatGPT is not connected to any databases. It's not connected to the internet. It doesn't know what it knows. It's a completely different way of interacting with a query system than anything we've done before. ChatGPT is trained on statistical probabilities of what it thinks you want to see. And I find that fascinating.

Trisha Neogi, Compensation Lead at Bright + Early: My first response to ChatGPT was panic. Immediate panic. I am very fearful of AI technology informing or advising human thinking. With AI, humans are the ones training the machines. And, unfortunately, when we train the machines we're already putting in our bias. There are plenty of research papers and case studies showing examples of humans training AI incorrectly and causing disastrous outcomes that have hurt many people.

I think it’s great that ChatGPT is available for public use and that we’re trying to democratize this. But, on the other hand, knowing that you can use it like Google or a search function without knowing where the sources come from is scary. In that situation, you’re kind of losing that critical thinking ability of deciphering what's fake news and what's real news because you're leaning on the AI to make those decisions for you. So, yeah, when I first heard of ChatGPT I was like, “Holy shit. Critical thinking is dying.”

Nora Jenkins Townson, Founder of Bright + Early: This was a big debate on our team. In my view, sometimes there’s an immediate moral panic about new technologies. As with many past technologies, people think AI is going to replace jobs, which I don't think is really happening — capitalism will always find ways to create new jobs, for one reason or another — but what it may be doing is changing people’s jobs. Outside of that particular fear, I do think that there are valid concerns around ChatGPT’s bias and inputs. Overall, though, I’m excited.

Early: What use cases do you see for a technology like this in the workplace or in the HR function specifically?

Eric: ChatGPT does standard writing very well. Basically, if you want a cover letter, ChatGPT can write you a very good one. And oftentimes during the hiring process, we're looking at communication skills and using cover letters as a proxy. But now, with ChatGPT, you can’t know who actually wrote it.

Early: Do you think that would eventually cause people to phase out cover letters, which we all sort of want anyway?

Eric: I think, because most folks are going to be using tools like this, the cover letter will no longer be a useful tool for evaluating communication skills. But, in general, ChatGPT will level the playing field when it comes to writing skills while also removing them as a selection criterion entirely.

I think, ultimately, where we're going to see ChatGPT starting to work inside of HR departments is with quickly spinning up a policy template, a job description, an employee handbook, or a procedure, things like that. You can use ChatGPT as an accelerant for work. It could reduce what would be four hours of work to an hour of work, but only if you already know what you're doing to begin with.

Nora: HR professionals do spend a lot of time writing, and a common complaint from people who work in HR is that they're not able to get a lot of heads-down work done because they get tapped on the shoulder with day-to-day issues. So, it could save them some time as long as they’re still adding a human touch and making sure that the work is accurate.

There are also a lot of potential uses in terms of drafting legal documents that could save companies on lawyer time. I wouldn’t advise that now, since it’s still a bit scary, but in the future, maybe. Another area where I could see it creating change in the tech industry is by saving teams time on coding. While it's not capable of producing code at an architectural level or making design decisions, it can handle some lower-level programming.

Like Eric said, it also might lower the barrier to entry on professional writing. Someone might be brilliant but a poor writer, or not working in their native language, so ChatGPT is a tool they might be able to use.

Early: Could you see yourself eventually using this for some writing work in an HR capacity?

Trisha: Yeah, because we do some form of this already. In the context of our work, ChatGPT could basically create a generic document or piece of communication, sort of like a template. We use some templated documents and policies already in our work, which we then build off of and modify for specific use cases and clients. So, I don't really see the difference between this AI bot developing a template for me and me creating a template myself. In both cases, I think it’s fine as long as the HR professional is using their own critical thinking and not taking the template at face value.

Nora: I think we can see it as similar to something like Grammarly but on steroids. It's a tool in a toolkit that might help us work a bit faster.

Early: What should people be wary of when using ChatGPT in HR?

Nora: I think somebody could be careless and use it to write company policies or do the bulk of their work, and then not review that work for accuracy and bias. That would be a problem.

Eric: What we also might see is a lot of small and medium-sized businesses using this tool in lieu of HR or other professionals. When that happens, we're going to see a lot of wonky policies and things like that, because an expert wasn't interfacing with the tool. Companies that see HR as a bit of a necessary evil may start being like, "Well, no. I've got the ChatGPT. Why do we even need HR? We could write all the policies that we want with this bot." And it's like, sure. But it's a "could versus should" situation.

Early: Can you see a way that this technology could be used maliciously in the context of work?

Trisha: I don't know about malicious intent. But I think the way we value work will change. For example, you might not need a coordinator to draft all your policy work or write your legal documents anymore. That role could become obsolete.

Eric: I think it's good that it's being offered with a democratized level of access. This is something that OpenAI in general is doing well relative to other big players inside of AI, like your Googles of the world, your Amazons and Apples. It’s important that this AI is broadly accessible, because if only very powerful organizations or wealthy individuals have it, that creates a massive power imbalance.

Overall, I see it (AI) being incredibly disruptive. We're going to have people who can do things much faster now, in the same way that we can do something faster with a calculator or a computer. It's likely to change a lot of industries, including HR.

*Interviews condensed for clarity