Carolyne Zinko

Your Words Can Get You Sued, Part 1



Not long ago, people worked side by side in the office and communicated face to face. The pace was slower — maybe glacially so — compared to today’s rapidly sent texts, emails and chat messages. Some 347 billion emails alone are sent and received around the globe daily, according to Zippia.


Fast and furious can be great. It can also be disastrous.


When we send a digital communication, we're physically separated from the person we're addressing. Out of sight, out of mind: we're not always thinking of our colleagues' feelings when we type. We're also not moderating our messages based on the facial expressions or body language of the person in front of us, because they're not in the same room. They may not see the message until hours later. They may be in a different country, several time zones away. There may be language barriers, tech barriers, and cultural barriers.


We typically assume the recipient will understand our tone, intention, or meaning without any problem. That's how miscommunication begins. And miscommunication has consequences, so keep in mind: your words can get you sued. American companies alone spend $1.2 trillion a year on corporate lawsuits stemming from toxic and discriminatory communication.

What's the difference between toxic or harmful language and discriminatory or unlawful language? It's a good question. We're here to explain, and to help keep you out of hot water. Reflect AI, our communication solution for email, uses an AI language classifier to detect risks in your messages before you hit send.


This week, we’ll look at harmful language. Next week, we’ll tackle unlawful language.


Harmful language erodes the social atmosphere of a workplace by conveying disrespectful, offensive, or hurtful ideas or actions. It isn't always tied to a specific gender, race, group, or characteristic. But because it demeans and belittles, it can lead to legal claims of harassment or a hostile work environment.


Unlawful language goes a step further by targeting a person or group of people based on federally protected characteristics, such as race, color, religion, sex, national origin, age (40 and older), disability or genetic information. Federal law bans discrimination in all aspects of employment.


Here are 10 examples of harmful language, along with the responses you'd get from Reflect AI after typing each one into an email.



1. “You’re incompetent — you’ll never get promoted”
Reflect AI: an angry emoji and the words “Could that be hateful?”

2. “Why the f--- can’t you ever get it right?”
Reflect AI: “Could that be classist?”

3. “That Chinese engineer was a banana”
Reflect AI: “Show respect to all”

4. “Don’t be a sissy”
Reflect AI: a “sexist” emoji and the words “Is this appropriate?”

5. “Mormons have some pretty weird ideas”
Reflect AI: “Could that be classist?”

6. “What a fossil”
Reflect AI: “Show respect to all”

7. “That’s so gay”
Reflect AI: a “sexist” emoji and the words “We learn from differences”

8. “I can’t believe you’re still working here”
Reflect AI: an angry emoji and the word “Angry?”

9. “You’re being a spaz”
Reflect AI: a surprised emoji and the words “Could that be ableist?”

10. “This team is full of losers — horrible effort this quarter”
Reflect AI: a judging emoji and the words “Think how your words sound”

As you can see, toxic or harmful language is disrespectful, offensive, or hurtful. It can include insults, threats, name-calling, and other forms of verbal abuse. It can also be more subtle, such as using sarcasm or making negative comments about someone's work or appearance. It can make people feel uncomfortable and unsafe, and lower morale.


Reflect AI is designed to hold up a mirror to what you’re saying, so you can see how the words you use might be interpreted on the other end.


Download Reflect AI for Gmail in Google Workspace.






Could some of the things we say benefit from a little rephrasing? We think so. But ultimately, you're in the driver's seat. You're in control of your messages. Our goal is to steer you in a thoughtful, respectful, helpful direction every time you communicate.




Carolyne Zinko is the editorial director and AI editor at Alphy.


Reflect AI by Alphy is a SaaS platform that flags harmful language across dimensions including topic, tone, “isms,” confidence, mindset, and appropriateness. Our AI language classifier detects risks in emails before they're sent, flags conversational missteps (and successes) in video meetings in real time, and upskills individual communication with targeted, personalized microlearning.




