Amanda Nurse

Not A Scary, Faceless AI: How Alphy’s Reflect AI Stands Out


[Image: multiple smiling faces overlaid with computer interfacing]

Artificial intelligence is often made out to be a scary beast that will only get bigger and more dangerous. It’ll take over people’s jobs, pretend to be your girlfriend, fabricate photos, or respond to queries with hallucinations. But does this AI operate on its own? Is it beyond human control? These are good questions, and the answers can be no and no. That’s the case at Alphy, where our AI is tightly controlled. It does only what it’s asked to do. And what we ask it to do is pretty pioneering.


Reflect AI detects harmful, unlawful, and unethical language in workplace communication in real time. It alerts employees to problematic language and lets them self-correct before hitting “send.” This safety net protects people and companies from the slip-ups that lead to litigation. And it’s built on a foundation created by humans and curated by humans. Why is that critical? Keep reading.


Human Checks at Every Step


Contrary to the image of AI as a detached, self-operating entity, Reflect AI relies heavily on human-generated training content and human oversight to ensure it is effective and accurate. Letting the machines train themselves leads to utter dysfunction, as a recent study showed. 


Researchers at the University of Cambridge in England have found that systems trained on synthetic data alone quickly spiral out of control and experience model collapse, according to a recent report in the journal Nature. That’s because the AI model doesn’t learn from reality, but from its own perception of reality. It starts repeating common words, and common errors, in the data it was trained on, leading to both homogenization and gibberish. It’s also less likely to reproduce words that were uncommon in the original data, which can lead to bias. As study co-author and AI researcher Zakhar Shumaylov put it: “The message is, we have to be very careful about what ends up in our training data [or] things will always, provably, go wrong.” Or, as University of California, Berkeley researcher Hany Farid told the journal, “If a species inbreeds with their own offspring and doesn’t diversify their gene pool, it can lead to a collapse of the species.”


Alphy addresses these concerns by incorporating rigorous human checks throughout the development of our AI. For starters, our team is composed of people of various racial, ethnic, and religious backgrounds and sexual orientations from around the world, who write sentences that train the classifier to detect racism, sexism, workplace discrimination, threats and intimidation, unfair lending, and more. These thousands of unique thoughts contribute to shaping our large language model. Any synthetic sentences created for additional training are evaluated, and rewritten if necessary, by the team of humans, and the large language model is meticulously maintained by our AI architect.
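To make that human-in-the-loop gate concrete, here is a minimal sketch in Python. The names (TrainingSentence, gate_for_training, reviewer) are our illustrative assumptions, not Alphy’s actual pipeline: the point is simply that synthetic text never enters the training set until a person has evaluated it.

```python
from dataclasses import dataclass

@dataclass
class TrainingSentence:
    text: str
    source: str              # "human" or "synthetic"
    approved: bool = False   # synthetic text starts unapproved

def gate_for_training(sentences, reviewer):
    """Admit only human-written sentences, or synthetic ones a person
    has evaluated and, if necessary, rewritten."""
    training_set = []
    for s in sentences:
        if s.source == "human":
            training_set.append(s)
            continue
        revised = reviewer(s.text)  # a human returns corrected text, or None to reject
        if revised is not None:
            training_set.append(TrainingSentence(revised, "synthetic", approved=True))
    return training_set
```

Gating on provenance like this is exactly what the Cambridge study suggests: the model keeps learning from reality (human judgment) rather than from its own output.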


What’s more, our AI doesn't have "conversations" with users. Reflect AI was designed to hold up a mirror to people to show them how the words they're using might be perceived by others. We do that by flagging problematic language and issuing concise feedback, such as “Could that be sexist?” or “Could that be age discrimination?” or “Are you angry?” It will never tell you to put glue on your pizza, because it can’t. 
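As a rough illustration of that flag-and-feedback loop (a toy sketch, not Alphy’s code), a pre-send check might look like the following. The keyword rules stand in for the trained classifier; the feedback prompts echo the examples above.

```python
# Toy pre-send check: keyword rules stand in for the trained classifier.
FEEDBACK = {
    "sexism": "Could that be sexist?",
    "age_discrimination": "Could that be age discrimination?",
}

def classify(message):
    """Stand-in for a trained classifier; returns detected categories."""
    labels = []
    if "sweetheart" in message.lower():
        labels.append("sexism")
    if "too old" in message.lower():
        labels.append("age_discrimination")
    return labels

def pre_send_check(message):
    """Return concise prompts so the writer can self-correct before sending."""
    return [FEEDBACK[label] for label in classify(message)]

print(pre_send_check("He's too old to learn the new system."))
# -> ['Could that be age discrimination?']
```

Note what is absent: there is no open-ended generation step, which is why the tool cannot wander off into glue-on-pizza territory.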


Foundations for Customer Protection


Reflect AI’s training is also deeply rooted in the law, which is crucial in an era where both AI ethics and workplace compliance are gaining prominence. Too many AI systems appear to operate lawlessly, with no grounding in human norms or legal standards. Remember Google’s AI recommending that people eat rocks for nutrients? Or the AI that drafted a lawyer’s legal brief for the courtroom, complete with fake citations? If AI training isn’t grounded in ethical standards, how useful is it, really? And how dangerous?


We initially trained Reflect AI on U.S. workplace discrimination laws, as spelled out by the U.S. Equal Employment Opportunity Commission. Reflect AI now also detects unfair credit practices, based on federal U.S. fair lending laws, including the Equal Credit Opportunity Act and the Consumer Credit Protection Act.
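One way to picture that grounding, purely as an assumed sketch: each detection category carries a reference to the framework it was trained against, so every alert can be traced back to a legal source. The category names and schema below are illustrative, not Alphy’s.

```python
# Illustrative mapping from detection categories to the legal frameworks
# named above; the schema is an assumption, not Alphy's actual design.
LEGAL_BASIS = {
    "workplace_discrimination": "U.S. EEOC-enforced workplace discrimination laws",
    "unfair_credit_practices": "Equal Credit Opportunity Act; "
                               "Consumer Credit Protection Act",
}

def basis_for(category):
    """Trace an alert back to the framework its training was grounded in."""
    return LEGAL_BASIS.get(category, "no mapped framework")

print(basis_for("unfair_credit_practices"))
```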


This evolution underscores the importance of integrating real-world legal frameworks into AI training so that AI systems are grounded, ethical, and accurate enough to handle real-world queries without errors or hallucinations. That’s why our AI can address a broad spectrum of regulatory challenges in real time.


Driven by Real Human Needs


Reflect AI was built to address the human need to communicate clearly, efficiently, and safely. By alerting users in real time with feedback about potentially problematic language, it helps prevent inadvertent slips due to cultural or language gaps as well as intentional remarks that can land employees in legal hot water. This user-centric, mentoring approach reflects a broader industry shift toward making AI systems more responsive to human communications and interactions.


Reflect AI is also adaptable, providing tailored protections and functionalities. For the Australian market, we’ve trained it on popular slang, as well as the names of hundreds of groups of Indigenous Australians, to detect racism against First Nations peoples. And for the financial sector, we’ve created a spin-off, Restricted Lists by Alphy, a new solution that protects banking confidentiality and mitigates regulatory, legal, and deal risk with greater detection accuracy and far fewer unnecessary alerts.


The bottom line: AI can be daunting and unpredictable. But Reflect AI is not that scary AI. It’s a model of responsible, human-centered technology: a tool built with human checks, grounded in legal safeguards, and driven by the real needs of the people who use it. Reflect AI isn’t here to take over; it’s here to help people communicate better, learn from their mistakes, and ultimately protect themselves in an increasingly complex world.


Amanda Nurse is the editorial and operations coordinator at Alphy.


Reflect AI by Alphy is an AI communication compliance solution that detects and flags language that is harmful, unlawful, and unethical in digital communication. Alphy was founded to reduce the risk of litigation from harmful and discriminatory communication while helping employees communicate more effectively.


Contact us for a demo at sales@alphyco.com.
