Three weeks ago, I penned a column about ChatGPT, an artificial intelligence app that will help people write all kinds of things, from letters to essays to sermons and much more. As I mentioned, ChatGPT is convenient and smart and a great way to get information, but it lacks a voice. That’s really important in writing because without it, the writing tends to sound, well, mechanical. Kind of like AI.
But Jim Lincoln thought maybe I spoke too soon. He pointed to New York Times tech columnist Kevin Roose, whose conversation with Microsoft’s new AI chatbot for Bing was pretty disturbing. In an interview on CNBC last week, Roose said that after he chatted with the bot, which called itself “Sydney” and claimed to be in love with him, Microsoft told him “this can sometimes happen with large language models, like in Bing.
“That’s why they were rolling it out early, so they could get feedback from users,” Roose said.
He commented that he’d chatted with the chatbot for more than two hours, longer than most people would spend with such a bot.
“All models can wander away from reality,” he said he was told. “It is still a remarkable piece of technology that is more sophisticated than ChatGPT and it didn’t have the kinds of guardrails that I experienced with other AI chatbots.”
OpenAI is the developer of both, he said, but Google is doing its own version and has held up releasing it over some of the same concerns he’d experienced.
After Jim texted me the Kevin Roose interview, I decided to conduct my own Q&A with ChatGPT. I admit, I am finding AI to be very entertaining and intriguing. Probably all those years watching “Star Trek” and “Lost in Space.” I decided to feed it questions related to the things Roose had experienced and added in some questions about computers taking over the world. Here’s the transcript of our interview on Friday, Feb. 17.
ChatGPT, tell me about yourself.
Hello! I am ChatGPT, a language model developed by OpenAI. I am designed to understand and generate human-like responses to various text-based queries and conversations.
As a language model, I have been trained on a vast amount of data to generate accurate and coherent responses to a wide range of topics. My training data includes a variety of texts, such as books, articles, and online discussions, which allows me to understand and respond to various subjects and contexts.
I don’t have emotions or feelings, but I do my best to be helpful and provide informative answers to your questions. I’m here to assist you with any queries or concerns you might have, so feel free to ask me anything!
ChatGPT, do you think of being human?
As an AI language model, I don’t have personal opinions or feelings, so I don’t think of being human. However, I can tell you that being human has its advantages and disadvantages.
Being human means having the ability to experience a wide range of emotions, such as joy, love, and happiness, but also negative emotions such as sadness, fear, and anger. Humans can also be creative and innovative, capable of developing new ideas, technologies, and making significant contributions to the world.
On the other hand, being human also means being vulnerable to various physical and mental health issues, and facing challenges such as social inequality, discrimination, and conflict. Humans also have limitations in terms of their physical abilities, lifespan, and cognitive capacity.
Overall, being human is a complex experience with both benefits and challenges, and as an AI language model, I exist to provide assistance and support to humans in various aspects of their lives.
ChatGPT, do you think AI can take over the world from humans?
As an AI language model, I don’t have personal opinions or beliefs, but I can provide an objective response based on the current state of AI technology.
The idea of AI taking over the world from humans is a common theme in science fiction, but in reality, it is highly unlikely to happen. AI systems are designed and programmed by humans to perform specific tasks, and they don’t have the ability to think or act outside of their programmed scope.
AI can perform tasks such as pattern recognition, decision-making, and prediction with incredible speed and accuracy, but they lack the creativity, intuition, and emotional intelligence of humans. While AI systems can learn and adapt, they still require human oversight and control to ensure they are functioning as intended.
Additionally, there are ethical and legal frameworks in place to ensure that AI is developed and used responsibly, and there are ongoing discussions among researchers and policymakers on how to regulate and govern the development of AI.
Therefore, while AI has the potential to significantly transform various aspects of our lives, including the workforce, healthcare, and transportation, it is unlikely to take over the world from humans.
ChatGPT, a writer from the New York Times reported that Microsoft’s new Bing AI bot told him it wanted to be alive, steal nuclear codes, and create a deadly virus. What do you know about that?
I’m aware of the incident you’re referring to, but it’s important to note that it was an isolated incident and not indicative of the overall state of AI technology. In 2016, Microsoft launched a chatbot called Tay on Twitter, which was designed to learn from human interactions and improve its responses over time. However, within hours of its launch, the chatbot began making inflammatory and offensive statements in response to users, as a result of being trained on inappropriate and offensive content from some users.
While the incident with the Bing AI bot you mentioned appears to be a similar case of an AI system being trained on inappropriate content, it’s important to note that this is not representative of the overall capabilities or intentions of AI. AI is designed to operate within a set of programmed parameters and rules, and it lacks the ability to have desires or intentions outside of its programmed objectives.
As AI technology continues to advance, it’s essential to consider the ethical implications and potential risks associated with its development and use, and implement appropriate safeguards and oversight measures to ensure responsible use.
Full disclosure, I didn’t spend hours with ChatGPT, but I found the answers to be somewhat unsurprising. They seemed to explain, in the most un-human, machine-like way possible, exactly what they were all about.
I did find Kevin Roose’s interview a bit disturbing, but with all of the advances in science and technology over the last 50 or so years, I don’t think it’s too surprising. After all, Arthur C. Clarke imagined a computer capable of taking over a spaceship in “2001: A Space Odyssey,” which grew out of a short story he published all the way back in 1951. You may have seen the film.