Our lab was testing constrained LLM architectures when we realized something surprising: by redefining how the AI “thinks,” we could structurally prevent bias — not just detect it.
So we googled around and realized it could be useful to HR, so we had Claude make some fake JDs and resumes that it said were loaded with biases, and... it worked.
Instead of filtering or training the AI, we have managed to erase demographic and proxy bias (names, schools, ZIP codes) at the cognitive level. It’s pretty cool. Testing shows near-100% blocking of known bias vectors, with full audit trails.
We made a demo video comparing the base model with one running this... firewall, as we're calling it. It's here, if you're interested.
But... now what? We're not an HR company. We weren't even really focusing on this. But it strips away all prestige/race/class/gender biases and evaluates strictly on merit, and that seems pretty important.
Is this valuable to practitioners? Should we be talking to compliance teams, product leads, DEI groups? Supposedly it could even help with EU AI Act compliance?
We’d love to hear from anyone at Personio (or anyone using Personio) who can point us in the right direction.
Thanks!
Very interesting @Chris.Fii !
As you can imagine, AI is something that comes up quite a bit in this community. It’s no surprise, considering how often People pros have to hear about it.
I’d love to hear some thoughts from folks in our community like @damayantichowdhury09 (who’s written an excellent contribution about AI here), @Naturally Mindful , @JHBEM , @nina.johansson, @HRJoy , @SabbuSchreiber , @HRHappiness, @Nathan Jolly
Wow, @Chris.Fii, this is really impressive if it works as described. Erasing bias at the cognitive level sounds like the kind of disruptive innovation HR desperately needs!
A few questions that come to mind:
- How do you define and implement bias removal at the cognitive level? Are there risks of losing useful context or nuance?
- How adaptable is this to different hiring workflows and ATS platforms? Integration is often a dealbreaker.
- Have you tested this with real-world hiring data or just synthetic samples?
- What does the audit trail look like in practice? Transparency and explainability are critical for compliance.
- Can this help with subtle or systemic biases beyond explicit proxies like names and ZIP codes?
I’m not a recruiter myself, but as someone who cares about fair processes, I’m really interested to see how this evolves and if it can scale beyond the lab.
If you decide to take this further or run pilots, I’d definitely be interested in hearing updates. Please keep the community posted!
@SabbuSchreiber @Moe
Hey Sabbu! Thanks so much for the thoughtful questions.
Quick answers:
1. Cognitive-level bias removal: We realized that an LLM is not one of Pavlov's dogs, so instead of trying to train it with a reward system (which fails ~30% of the time), we treat it like a computer program. We give it rules and logic gates that it accepts, and for this firewall, the lines of code tell it that bias words simply don't exist. Think of it like how dogs can't see certain colors - the bias-enabling information simply doesn't exist in the AI's processing reality. In logical terms: IF university name, THEN <null>. Context for job performance remains intact.
2. Real-world testing: Yes! We've tested on actual hiring scenarios across tech, healthcare, and education. The results have been eye-opening - qualified candidates who were previously filtered out are now ranking appropriately based on their achievements. When prestige markers are stripped away, it’s fascinating who actually becomes the most qualified.
3. Audit trail: Every evaluation includes a unique ID and shows exactly what criteria were met/not met with quantified evidence. Fully traceable, no black box. Available instantly.
4. Systemic biases: This is where it shines - catches coded language humans miss. "Cultural fit," "refined environment," "executive presence" - these literally don't exist in the AI's reality. A real example: healthcare job wanted someone "comfortable with elite clientele." Our system only saw measurable achievements. Result? Community health director with 50% efficiency gains ranked above yacht club member with connections but minimal impact.
5. Integration: Currently running as a plug-and-play API that can work with any hiring workflow. ATS integration is on the roadmap. It works on Claude, ChatGPT, DeepSeek, and Gemini.
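Since points 1 and 3 are the ones people ask about most, here's a toy Python sketch of how the "IF proxy marker, THEN <null>" rules and the audit record fit together. To be clear, this is illustrative only - the patterns, field names, and criteria below are made up for the example, not our production code:

```python
import json
import re
import uuid

# Toy illustration of the "IF <proxy marker>, THEN <null>" rules:
# proxy and coded-language markers are removed before evaluation, so the
# model never sees them. All patterns here are illustrative examples only.
PROXY_PATTERNS = [
    re.compile(r"\b(Harvard|Stanford|MIT|Oxford)\b", re.IGNORECASE),        # prestige schools
    re.compile(r"\b\d{5}(?:-\d{4})?\b"),                                    # US ZIP codes
    re.compile(r"\bcultural fit\b|\bexecutive presence\b", re.IGNORECASE),  # coded language
]

def firewall(text: str) -> str:
    """Replace known bias vectors with a neutral null token."""
    for pattern in PROXY_PATTERNS:
        text = pattern.sub("<null>", text)
    return text

def evaluate(resume: str, criteria: dict) -> dict:
    """Score the firewalled resume and emit a traceable audit record."""
    clean = firewall(resume)
    return {
        "evaluation_id": str(uuid.uuid4()),   # unique ID per evaluation
        "input_seen_by_model": clean,         # exactly what was evaluated
        "criteria": {
            name: {"met": keyword.lower() in clean.lower(), "evidence": keyword}
            for name, keyword in criteria.items()
        },
    }

record = evaluate(
    "MD from Harvard, ZIP 02138, led 50% efficiency gains",
    {"measurable impact": "efficiency gains"},
)
print(json.dumps(record, indent=2))
```

The key design point is that the redaction happens before the model is involved at all, so there's nothing for it to "ignore" - and the audit record captures exactly the text that was scored.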
The best part? We're seeing complete inversions in some cases - candidates who would typically be auto-rejected are ranking in top tiers based purely on their measurable impact.
Would love to keep you posted as we scale! We're looking for pilot partners who care about both fairness AND finding the best talent (turns out those goals align when bias is removed).
We have an API demo ready. We would love to partner with HR teams interested in eliminating bias from the hiring process. The biggest challenge we're finding is that many people have been burned by 'bias reduction' solutions that only partially work, so there's understandable skepticism about whether full bias elimination is even possible. So... thank you for being open to something genuinely different! 
Just adding some other folks to the conversation, in case they’ve got insights to add:
@brittbosma @Gianluca @fmason @HannahPorteous-Butler @LegoMD @rstambolieva @Sasi Vignesh @berat can @AnnaCzarnik @Laure.vanpelt
Hi @Chris.Fii,
Thanks for the detailed reply, this is really interesting! 
The real-world testing and the audit trail definitely sound great! I get what you mean about the skepticism: a lot of teams have tried bias-reduction tools that overpromised, so strong proof and transparent reporting will be key here. Seeing an anonymised before-and-after dataset from your pilots would make the impact and the full-elimination claim feel much more real (just as you showed in the demo above).
I am also curious how the system deals with cases where certain background info is actually relevant, like regulated professions that require specific credentials. That would be important for wider HR use.
I will share this with our recruiter too so they can have a look. Please keep us posted as things progress, I think a lot of us will be watching with interest 
Great. For this:

“I am also curious how the system deals with cases where certain background info is actually relevant, like regulated professions that require specific credentials. That would be important for wider HR use.”
The system absolutely handles regulated professions. It recognizes and validates required credentials (MD, JD, RN, CPA, etc.) - it just doesn't care WHERE someone got them. So a doctor is verified as having an MD, but whether it's from Harvard Medical or State University becomes irrelevant. The licensing/certification requirements are preserved while removing the prestige bias. This is actually one of our favorite features - ensuring compliance while eliminating elitism!
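To make that concrete, here's a hypothetical sketch of the idea - the regexes and the required-credential list are invented for illustration, not our actual rules:

```python
import re

# Hypothetical sketch: required licences/credentials are verified and kept,
# while the granting institution is nulled out before ranking.
REQUIRED_CREDENTIALS = {"MD"}                    # e.g. for a physician role
CREDENTIAL = re.compile(r"\b(MD|JD|RN|CPA)\b")
INSTITUTION = re.compile(r"\bfrom\s+[A-Z][\w .]*?(?:University|College|School)\b")

def check_candidate(resume: str) -> tuple[bool, str]:
    """Return (licensing requirement met, prestige-stripped text)."""
    held = set(CREDENTIAL.findall(resume))
    stripped = INSTITUTION.sub("from <null>", resume)
    return REQUIRED_CREDENTIALS <= held, stripped

ok, text = check_candidate("MD from Harvard Medical School, board certified")
print(ok, text)  # the MD survives, the school name does not
```

So the compliance-critical fact ("holds an MD") is checked before the prestige marker is stripped, which is how both requirements coexist.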
I definitely have interest in this, and our devs use Claude - perhaps I should get myself a licence... I'll take a look, please keep me posted on progress. The incidence of bias is worrying, and balancing the automation, the AI and the human is critical for us.
We are exploring all kinds of AI at the moment to enhance our offering to the business, and specifically to try to assist with the volume of CVs now being sent due to the ease of applications. (TY so much, easy apply/lazy apply...!) @njee let's keep an eye on this as part of our investigations.
As an aside for the Personios amongst us, I'm keen to better understand how the roadmap for AI in the Personio ATS is likely to progress. I'm hearing things on webinars and from our account manager (shout out to @Natalie Sena) about candidate summaries, but what's essential now is a way to search the database and talent pool for existing candidates that match new criteria. @Moe, are you able to give us some human intelligence on what might be happening?
@Chris.Fii - I'd love to get involved and help pilot/test this if you are looking for UK people.
Fiona - fmason@rdt.co.uk
This sounds amazing @Chris.Fii! It looks like this could be a game-changer for people working with AI, especially within the scope of recruitment. Moreover, it would benefit employees on the hunt for jobs as well.
If biases are removed from the picture on a cognitive level, I imagine people will be more open to using AI for the purpose of conducting smart work wherever necessary.
I would love to stay up to date whenever you decide to run a pilot test for this. Please do let us know how it turns out and how we can further benefit from an AI that holds a clean slate without any biases!
Hi @fmason! This is a totally fair question. I’m liaising with my colleagues and will get you an answer asap!