From the steam engine and electricity to computers and the Internet, technological advances have always disrupted labor markets, eliminating some jobs and creating others.
“Artificial intelligence” is still a misnomer—today’s smart computer systems still don’t actually know anything—but the technology has reached a tipping point where it is poised to affect new classes of jobs: artists and knowledge workers.
In particular, the emergence of large language models and AI systems that are trained on large amounts of text means that computers can now produce human-sounding written language and turn descriptive sentences into realistic images.
The Conversation asked five AI researchers to explain how large language models will affect artists and knowledge workers.
As these experts point out, the technology is far from perfect, which raises problems—from misinformation to plagiarism—that affect human workers.
Creativity for all and loss of skills?
Lynne Parker, Vice Chancellor, University of Tennessee
Large language models are putting creativity and knowledge work within everyone’s reach.
Anyone with an internet connection can now use tools like ChatGPT or DALL-E 2 to express themselves and make sense of large stores of information, for example, by producing text summaries. Especially remarkable is the depth of humanlike expertise that large language models display.
In a matter of minutes, novices can create illustrations for their business presentations, draft marketing pitches, get ideas to overcome writer’s block, or generate new computer code to perform specific functions, all at a level of quality typically attributed only to human experts.
Of course, these new AI tools can’t read minds. A new, simpler kind of human creativity is needed, in the form of written prompts, to get the results the human user is looking for.
Through iterative prompting, an example of human-AI collaboration, the AI system generates successive rounds of output until the person writing the prompts is satisfied with the results.
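The iterative loop described above can be sketched in a few lines of code. This is a minimal illustration, not any tool’s real API: `generate` is a stub standing in for a call to a system like ChatGPT or DALL-E, and the "satisfied" check stands in for a human judging the output.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for an AI generation call; it just echoes
    the prompt so this sketch is self-contained and runnable."""
    return f"[output for: {prompt}]"

def refine_until_satisfied(initial_prompt, satisfied, revise, max_rounds=5):
    """Generate, check, and revise the prompt until the result is accepted."""
    prompt = initial_prompt
    result = generate(prompt)
    for _ in range(max_rounds):
        if satisfied(result):
            break
        prompt = revise(prompt, result)  # the human tweaks the prompt
        result = generate(prompt)
    return result

# Example: keep adding detail to the prompt until the output mentions lighting.
result = refine_until_satisfied(
    "a watercolor fox",
    satisfied=lambda out: "golden light" in out,
    revise=lambda p, _: p + ", in golden light",
)
print(result)
```

In practice the "revise" step is the human creativity the author describes: inspecting each round of output and rewording the prompt accordingly.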
For example, the (human) winner of the recent Colorado State Fair contest in the digital artist category, using an AI-based tool, demonstrated creativity, but not the kind that requires brushes and an eye for color and texture.
While opening the world of creativity and knowledge work to everyone has significant advantages, these new AI tools also have drawbacks.
1- They could accelerate the loss of human skills, such as writing, that will remain important for years to come.
Educational institutions will need to develop and enforce policies on the acceptable use of large language models to ensure fair play and desirable learning outcomes.
2- These AI tools raise questions about the protection of intellectual property.
While humans routinely draw inspiration from existing artifacts in the world, such as architecture and the writing, music, and paintings of others, there are unanswered questions about the proper and fair use by large language models of training examples that are copyrighted or open source.
Ongoing lawsuits debate this issue, which may have implications for the future design and use of large language models.
While the implications of these new AI tools are being debated, the public seems ready to embrace them.
The ChatGPT chatbot quickly went viral, as did the DALL-E mini image generator and others.
That suggests a great deal of untapped creative potential, and the importance of making information and creative work accessible to everybody.
Potential inaccuracies, biases, and plagiarism
Daniel Acuña, Professor of Computer Science, University of Colorado Boulder
I’m a regular user of GitHub Copilot, a tool to help people write computer code, and I’ve spent countless hours playing with ChatGPT and similar AI text generation tools.
In my experience, these tools are good for exploring ideas you haven’t thought of before.
I have been impressed with the model’s ability to translate my instructions into coherent text or code.
They’re useful for discovering new ways to improve the flow of my ideas, or using software packages I didn’t know existed to solve problems.
Once I see what these tools produce, I can assess their quality and edit them thoroughly. I think they raise the bar for what is considered creative.
One problem is their inaccuracies, big and small. With Copilot and ChatGPT, I’m constantly watching for output that is too shallow, for example, thin text or inefficient code, or that is just plain wrong, such as faulty analogies or conclusions, or code that doesn’t run.
If users are not critical of what these tools produce, the tools can be harmful.
Meta recently shut down Galactica, its language model for scientific texts, because it made up “facts” that seemed true.
The concern was that it could pollute the internet with falsehoods delivered in a confident tone.
Another problem is bias. Language models can learn biases present in their training data and reproduce them.
These biases are hard to see in text generation but are very clear in image generation models.
The OpenAI researchers, creators of ChatGPT, have been relatively careful about what the model will respond to, but users often find ways around these barriers.
Even once humans have been surpassed, niche jobs and “handmade” work will remain
Professor of Community Information, University of Michigan
Human beings love to believe that we are special, but science and technology have shown time and time again that this belief is wrong.
It used to be thought that humans were the only animals that used tools, formed teams, or propagated culture, but science has shown that other animals do each of these things too.
Meanwhile, technology has debunked one claim after another that cognitive tasks require a human brain.
Last year, a computer-generated work won an art contest. I think the singularity – the time when computers catch up with and surpass human intelligence – is on the horizon.
How will human intelligence and creativity be valued when machines are smarter and more creative than the brightest people? There will probably be a progression.
In some areas, people still value humans doing things, even though a computer can do them better.
A quarter of a century has passed since IBM’s Deep Blue defeated world champion Garry Kasparov, but human chess competitions—for all their drama—have not gone away.
In other areas, by contrast, human handiwork will be valued less. Readers don’t care whether the image accompanying a magazine article was drawn by a person or a computer: they just want it to be relevant, new, and perhaps entertaining.
This is not a black-and-white matter. Many fields will be hybrid, and some Homo sapiens will find a niche that benefits them, but most of the work will be done by computers.
Consider the manufacturing industry: today much of the work is done by robots, but some people oversee the machines, and there is still a market for handmade products.
If history is any guide, advances in AI will almost certainly mean that more jobs disappear, that creative professionals with uniquely human skills become richer but fewer in number, and that those who own creative technology become the new mega-rich.
There might be a silver lining: as ever more people are left without a decent livelihood, they may band together politically to rein in rampant inequality.
Old jobs will disappear and new ones will emerge.
Mark Finlayson, Professor of Computer Science, Florida International University
Large language models are sophisticated sentence-completion machines: given a sequence of words (“I would like to eat a…”), they return likely continuations.
Some, like ChatGPT, trained on enormous amounts of text (billions of words), have surprised many, including AI researchers, with how realistic, comprehensive, flexible, and context-sensitive their responses are.
Like any powerful new technology that automates a skill, in this case the generation of coherent, if somewhat generic, text, it will affect those who offer that skill in the marketplace.
To imagine what might happen, it is worth recalling the impact of the introduction of word-processing programs in the early 1980s.
Some jobs, such as typist, almost completely disappeared. At the same time, anyone with a personal computer could generate neatly typed documents, greatly increasing productivity.
Technological advances generate new skills
Over the past two decades, biology and medicine have been transformed by the rapid advancement of molecular characterization, such as rapid and cheap DNA sequencing, and the digitization of medicine in the form of apps, telemedicine, and data analysis.
Some technological steps feel bigger than others. In the early days of the World Wide Web, Yahoo relied on human curators to index emerging content.
The advent of algorithms that used information contained in the web’s link patterns to prioritize results radically altered the search landscape, transforming the way people gather information today.
The release of OpenAI’s ChatGPT signals another leap. ChatGPT integrates a state-of-the-art language model adapted to chat in a very easy-to-use interface.
It puts a decade of rapid advances in artificial intelligence at your fingertips.
Just as the skills needed to find information online changed with the advent of Google, the skills needed to get the best results from language models will center on crafting prompts and prompt templates that produce the desired outputs.
A user could create even more specific prompts by pasting in parts of a job description and resume along with explicit instructions, for example, “emphasize attention to detail.”
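As a concrete sketch of such a prompt template, the following hypothetical example splices a job description, resume excerpt, and extra instructions into one prompt; the wording of the template and the cover-letter framing are illustrative assumptions, not any product’s actual format.

```python
# Hypothetical prompt template: the filled-in string is what would be
# sent to a language model. The field names and wording are made up
# for illustration.
TEMPLATE = (
    "Write a cover letter for the following job.\n"
    "Job description: {job}\n"
    "My background: {resume}\n"
    "Instructions: {instructions}\n"
)

prompt = TEMPLATE.format(
    job="Data analyst; SQL and dashboarding experience required.",
    resume="Three years of reporting work using Python and SQL.",
    instructions="Emphasize attention to detail.",
)
print(prompt)
```

The template is reusable: only the pasted-in materials change from one request to the next, which is exactly the kind of prompt-crafting skill the author predicts will matter.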
As with many technological advances, the way people interact with the world will change when AI models become widely available.
The question is whether society will take advantage of this moment to advance equality or to exacerbate differences.