“Often caste jokes are made by my colleagues without realizing my Dalit background. And they would expect I would laugh along with them because they assume I belong to a ‘high’ caste like themselves.”
“A very educated man told me he could no longer date me because his mother wanted him to marry someone from the same caste.”
“In my undergrad in Iowa, there were many incidents where other Indian kids from rich, upper caste and urban backgrounds would make casteist jokes on me. One in particular was that a few of them were planning to go to visit the local zoo. One of them said, ‘I don’t need to go to the zoo to see animals, we get to see K. (myself) everyday.’”
If you're unfamiliar with India's caste system, these statements might sound strange. But for many Indians, they evoke deep-rooted biases and prejudices. They're real-world examples of caste-based discrimination that has found its way to America, just a few of many collected in Caste in the United States, a ground-breaking 2018 survey of 1,500 South Asians published by Equality Labs, a California nonprofit.
India's caste system, a social hierarchy based on birth with Brahmins at the top, has been around for millennia. India outlawed caste discrimination in 1948, but undercurrents persist. For many, caste determines their relationships, profession, and even their sense of self-worth. The study showed a palpable fear among many Dalits — the lowest rung on the caste ladder, often referred to as the “untouchables” — of being “outed,” making them constantly wary, even in a land they assumed would be free of such bias.
In the U.S., discrimination on the basis of race, color, religion, sex, national origin, age and disability has been illegal for decades. But caste-based discrimination has only recently been recognized in diaspora communities like Seattle and Silicon Valley. Tech giant Cisco faced a lawsuit in 2020 over alleged caste-based discrimination in its workplace, and the case is still not fully settled. In February 2023, Seattle became the first U.S. city to outlaw caste discrimination. California could be the first state to do so, with a bill now on the governor's desk that would ban caste-based discrimination.
The words we use can have profound impacts, shaping perceptions, behaviors, and relationships. That’s evident in the experiences of South Asian immigrants in the U.S., especially those with Dalit ancestry.
In today's global workforce, where colleagues come from a wide variety of backgrounds, inclusion isn't just about understanding cultural festivals or dietary habits. It's also about grasping deeper socio-cultural nuances, like the caste system, and how words can hurt. Just as body weight discrimination is being addressed in cities across America, acknowledging caste-based biases is crucial to promoting better communication, more positivity, and greater success in our dealings with others.
At Alphy, we understand that communication barriers aren't just about language. They're often about unspoken biases and prejudices. That's why we've developed Reflect AI, a platform that flags unlawful, harmful and helpful language and gives real-time feedback to help users become more aware of how their messages might be interpreted by others.
These are examples of phrases Reflect AI would flag if you typed them in an email:
"Is he really worth hanging out with, considering his Dalit background?"
"He's great at his job — he’s a Brahmin, after all.”
"She's different from what I expected, given her Shudra family history."
Words carry weight, just as biases do. Holding onto caste prejudices limits the potential of skilled individuals. It dampens innovation and stifles creativity in teams. When employees feel marginalized, their productivity and morale suffer, which can harm organizational success and even lead to legal challenges.
It's essential to remember that most of us don't intend to hurt or offend. Sometimes, it's sheer ignorance. By understanding and addressing our biases — whether they're about body weight, caste, or any other factor — we can create more harmonious and productive workplaces. Remember, just as every individual carries a unique weight, every word we use carries its own weight, too.
Carolyne Zinko is editorial director and AI editor at Alphy.
Reflect AI by Alphy is a SaaS platform that flags harmful language across dimensions including topic, tone, “isms,” confidence, mindset and appropriateness. Our AI language classifier detects risks in emails before they're sent, flags conversational missteps (and successes) in video meetings in real time, and upskills individual communication with targeted, personalized microlearning.