At the moment, AI feels either stalkerish — witness Sydney, the alter ego of Microsoft’s Bing chatbot, who revealed her (its?) dark side to a New York Times reporter — or scolding — that’s Charlie, a Google-inspired chatbot that The Wall Street Journal says is whispering in the ears of workers at a major American call center and dictating specific responses.
But there’s also AI being built to bring us together — to help us see our strengths and improve on our shortcomings. There’s AI that acts like a teacher, able to synthesize more information than any human could absorb in a lifetime. There’s AI that helps us co-create — stories, music, art, film. Other AI acts as a mentor, guiding our next steps in career and personal growth. We’re at the very beginning of a new era. Whether we trust and embrace AI will depend on whether it’s good for society and improves our lives, and also on the AI’s “personality” or “spirit.”
Few of us want a world in which a chatbot like Sydney longs to be human, unleash a virus, steal nuclear codes, and have an affair with us. (Goodbye, Sydney: Gizmodo reported that Microsoft will pull the plug on chats in which Bing users ask about the AI’s feelings.) Nor do employees enjoy being graded on performance by a chatbot monitoring their every move, not even one with a friendly female avatar like Charlie.
But AI is poised to transform the way the world works, and we’re getting a taste of it thanks to OpenAI, the San Francisco research lab that created ChatGPT (the GPT stands for Generative Pre-trained Transformer).
ChatGPT was trained on incredibly large amounts of text from the internet, including books, and it distills that information on demand, so people are using it to write everything from poems to research papers. But it produces results without explaining how it got them. Did it draw on cold, hard facts? Or on misinformation planted on the internet? Should we believe it? Or should we be skeptical?
As former Secretary of State Henry Kissinger, former Google chief Eric Schmidt and Daniel Huttenlocher, a dean at MIT, wrote in a triple-bylined op-ed in The Wall Street Journal, “Generative artificial intelligence presents a philosophical and practical challenge on a scale not experienced since the start of the Enlightenment…. Can we learn, quickly enough, to challenge, rather than obey? Or will we be obliged to submit?”
They're interesting questions.
Businesses like to embrace technology, which typically increases productivity. Companies are turning to conversational AI platforms to answer calls, analyze conversations and solve customers’ problems. As they do, it may behoove them to ensure the technology is not only focused on a specific mission but also jibes with the company’s spirit and culture. Technology as a helping hand can be great. It’s even better — and workers will be more inclined to use it — if they trust it.
Which brings us to Alphy’s AI communications coach, Reflect. Our conversational platform is built on the idea of considerate communication. Reflect holds up a mirror in real time, making users aware of the implications of the text they’ve typed and its effect on other people. It’s based on a large language model and trained with our own architecture, sentences and prompts. Unlike ChatGPT, Reflect doesn’t deliver information drawn from the internet. Unlike Charlie, it doesn’t offer a script of what to say.
Reflect is designed to be both relatable and caring. Its goal is to help users become more aware of the sentiments of their messages, with a focus on tone, confidence, and inclusion.
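For the technically curious, here is a minimal sketch of the interaction pattern described above: a tool that comments on the tone of a draft message before it’s sent. It’s purely illustrative, built on a generic, publicly available sentiment classifier; the model, threshold, and wording are assumptions made for the example, not Reflect’s actual implementation.

```python
# Purely illustrative: a toy version of real-time tone feedback on a draft
# message. This is NOT Reflect's implementation, just a sketch of the idea
# using an off-the-shelf sentiment model.
from transformers import pipeline  # pip install transformers

# Generic pretrained sentiment classifier (a stand-in for a purpose-built model)
classifier = pipeline("sentiment-analysis")

def tone_feedback(draft: str) -> str:
    """Nudge the writer about how a draft message may land."""
    result = classifier(draft)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.98}
    if result["label"] == "NEGATIVE" and result["score"] > 0.9:
        return "This may read as harsh. Consider softening the wording."
    return "Tone looks fine."

print(tone_feedback("Why is this STILL not done?"))
print(tone_feedback("Thanks for the update. Great progress!"))
```

The point is the pattern, not the model: the coach comments on what you’ve written rather than answering questions or writing the message for you.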
Why is this important? Two reasons.
First, our world may be more connected than ever by satellites and cell phones, but people feel less connected than ever. The stress of ever-longer workdays and workweeks, heavier workloads, remote or hybrid work, and the lack of face-to-face contact have made colleagues less collegial.
And second, understanding how we are perceived is critical to effective communication.
Consider the following factors that a blog by New Jersey leadership development firm Primeast says are contributing to behavioral change in the office:
Roughly 80% of communication is non-verbal (body language). If you’re working remotely and sending digital messages, there’s no way to gauge tone. If you’re video conferencing, you see only a portion of the other person’s body. Contextual cues are missing, and that prevents a full understanding of where the other person is coming from.
Teams are increasingly spread across the globe. This opens up potential for cultural miscommunication.
Workers send emails, texts and other messages without much thought. Those messages, easily forwarded to people other than the intended recipient, can create emotional reactions and damage trust.
So imagine a world where you see the good in AI and choose your AI helpers based on their purpose, personality and mission. You might turn to generative AI and chat from Microsoft or Google or another emerging powerhouse for encyclopedic information. You might have another AI helper for wellness. And you might learn to trust an AI like Reflect by Alphy to always be on your side, cheering you on as you build your communication and collaboration skills.
As Mike Krzyzewski, the Hall of Fame former Duke basketball coach known as “Coach K,” once said, “Effective teamwork begins and ends with communication.” That team can now include AI.
Carolyne Zinko is Alphy’s Editorial Director.
Reflect by Alphy®, our AI-powered coach, helps you and your team communicate in a more effective way. Reflect analyzes communication from all angles — ageism, sexism, racism, confidence, sentiment, apologies, and more — to make you aware of your words, tone, and speech across all your devices, from desktop to mobile.
Cover image generated by Midjourney AI