Cyber Strategy

Building Inclusive AI: A Call to Action to Cybersecurity Leaders

May 8, 2025
QUICK SUMMARY

As artificial intelligence becomes increasingly embedded in cybersecurity operations, the need for inclusive and equitable design has never been more urgent. This article explores how AI systems that fail to reflect diverse human experiences can unintentionally exclude, misinterpret, or even harm users—especially those from underrepresented or neurodiverse backgrounds. With insights from responsible tech leader Meghan Maneval, it offers actionable strategies for cybersecurity professionals to design, train, and govern AI tools with intentional inclusivity. From adaptive interfaces to transparent oversight, building inclusive AI isn’t just ethical—it’s essential to ensuring effective and secure technology for all.

The Next Frontier of Responsible AI

As artificial intelligence transforms corporate and cybersecurity operations and decision-making, there’s a growing imperative that often gets overlooked: inclusivity. AI tools must reflect and respect the full spectrum of human conditions, identities, and lived experiences—not just to be fair, but to be truly effective.

Meghan Maneval, a leading voice in responsible technology and governance, says it best:

“Inclusive AI is not a luxury—it’s a necessity. If our systems aren’t built for everyone, they’re not secure for anyone. Implementing AI responsibly is making it equitable and fair.”

The Challenge: Biased Algorithms in Security Tech

Cybersecurity platforms increasingly leverage AI for threat detection, user behavior analytics, and policy enforcement. But for end users, including cybersecurity employees themselves, AI that isn’t designed with inclusivity in mind can:

  • Exclude marginalized groups through biased training data
  • Misinterpret behaviors due to cultural or neurological diversity
  • Over-police or under-protect specific populations
  • Create “invisible barriers” to entry in technical roles

In short, AI that isn’t trained on data representative of all populations risks returning inaccurate or poor decisions, disenfranchising underrepresented groups, or causing cognitive overload, particularly for neurodivergent users.

Meghan cannot stress enough how important it is for AI tool innovators to be intentional about a tool’s design, how it functions, and the inclusiveness of the data it draws from. “Meet the neurodiverse individual where they are,” Meghan offers. All end users must be considered, along with the people whose data the generated results are built on. This is a monumental task, since generative AI draws from the entire internet.

Both how an AI tool functions and the results it generates must account for the fact that people request and process information in different ways.

Insights: How to Make AI More Inclusive

Here are concrete ways cybersecurity professionals can contribute to building more inclusive AI platforms:

1. Design with Neurodiversity and Accessibility in Mind

AI interfaces and content generation tools should be designed for cognitive diversity. That means readable fonts, non-triggering colors, and logical workflows. It also means making the intentional design choices that foster inclusivity, as Meghan consistently advocates.

To offer an example, Meghan pointed to the Web Content Accessibility Guidelines (WCAG), standards created to ensure website accessibility that cover considerations such as font styles and site colors. But WCAG was created long after websites had become prolific.

“We need to get ahead of this when it comes to AI,” Meghan states. “AI tools need to allow for adaptability from day one.” This means features such as subtitles, font size, color schemes, and animation can be adjusted by the end user to limit or eliminate distractions and accommodate different ways of processing information.
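As a rough illustration of what adaptability from day one might look like, here is a minimal sketch of user-controlled display preferences for a hypothetical security dashboard. The names and defaults are illustrative assumptions, not drawn from any particular product.

```python
from dataclasses import dataclass

@dataclass
class AccessibilityPreferences:
    """Hypothetical per-user display settings for a security dashboard.

    Each field corresponds to one of the adjustable features described
    above: subtitles, font size, color scheme, and animation.
    """
    subtitles_enabled: bool = True       # captions on video walkthroughs
    font_scale: float = 1.0              # 1.0 = default size; users can scale up
    color_scheme: str = "high-contrast"  # e.g. "high-contrast", "reduced-saturation"
    animations_enabled: bool = False     # motion off by default to limit distraction

def render_alert(message: str, prefs: AccessibilityPreferences) -> dict:
    """Build a render spec that honors the user's preferences."""
    return {
        "text": message,
        "font_scale": prefs.font_scale,
        "color_scheme": prefs.color_scheme,
        "animate": prefs.animations_enabled,
    }
```

The point is not the specific fields but the pattern: every display decision is read from the user’s preferences rather than hard-coded.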

Designers should try to break their models through extensive testing to surface bias and inequities, then work to correct the breakdown so that the data and results can be trusted.
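One concrete way to “break” a model in testing is to compare its error rates across user groups. The sketch below uses fabricated evaluation records and an illustrative threshold; both are assumptions for demonstration only.

```python
from collections import defaultdict

def false_positive_rates(records):
    """Compute the false-positive rate per group.

    records: iterable of (group, predicted_malicious, actually_malicious).
    """
    false_positives = defaultdict(int)  # benign events wrongly flagged
    benign_totals = defaultdict(int)
    for group, predicted, actual in records:
        if not actual:
            benign_totals[group] += 1
            if predicted:
                false_positives[group] += 1
    return {g: false_positives[g] / benign_totals[g] for g in benign_totals}

# Fabricated evaluation data: (group, flagged?, truly malicious?)
records = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
]
rates = false_positive_rates(records)
if max(rates.values()) - min(rates.values()) > 0.1:  # illustrative threshold
    print("Possible bias: false-positive rates diverge across groups:", rates)
```

A detector that flags one group’s benign behavior far more often than another’s is exactly the kind of breakdown this testing is meant to catch.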

2. Varied Training Data = Better Threat Detection

Ensure training datasets include behaviors and language patterns from a global and diverse user base. Match your results and decisions to the audiences for which your tool is intended. This reduces false positives and blind spots in anomaly detection.

As an example, Ms. Maneval explained that she has trained her own GPT exclusively on her published work so that it can identify her voice. When she uses it to generate content, it leverages those examples of “good” to produce writing in her tone and style. That result, however, is hyper-personalized to Ms. Maneval. If a global organization trained its AI only on its internal data, the results would exclude external or divergent individuals. Training AI on varied data ensures it has diverse examples of “good” to rely on.
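Before training, it can help to audit the dataset’s composition against the audiences the tool is meant to serve. A minimal sketch, with hypothetical source labels and an illustrative minimum-share threshold:

```python
from collections import Counter

def audit_coverage(samples, required_sources, min_share=0.05):
    """Flag intended data sources that fall below a minimum share.

    samples: one source label per training example.
    required_sources: the sources the tool's audience spans.
    min_share: illustrative floor; real thresholds depend on the use case.
    """
    counts = Counter(samples)
    total = len(samples)
    gaps = {}
    for source in required_sources:
        share = counts.get(source, 0) / total if total else 0.0
        if share < min_share:
            gaps[source] = share
    return gaps

# Hypothetical case: internal data dominates, external examples are scarce.
samples = ["internal"] * 950 + ["partner"] * 40 + ["public"] * 10
print(audit_coverage(samples, ["internal", "partner", "public"]))
# -> {'partner': 0.04, 'public': 0.01}
```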

3. Transparent, Auditable Governance by Mixed Committees

Organizations like NIST and ISO are developing standards for ethical AI. Your AI deployments should include regular audits to ensure compliance with these standards and with broad, inclusive benchmarks. The role of governance is to make inclusive AI data a standard, not an option. It comes back to that ever-vital concept of intentionality. But how should AI be governed? “This is the million-dollar question,” Meghan answers.

AI does not have one industry-specific model. Healthcare has HIPAA as a standard for that arena; the challenge now is creating the right standards across all industries. Frameworks like WCAG have not yet been holistically applied to AI.

There’s no universal framework for AI. “You have to start somewhere,” says Meghan. Make a list of who within your organization uses AI and how they use it. Look for opportunities to audit the platforms in use and the results they generate. Variety in the room matters: ensure that your AI auditors, oversight boards, and ethics panels include individuals from a wide range of backgrounds, disciplines, and perspectives.
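Starting somewhere can be as simple as a structured inventory. Here is a hypothetical sketch of the “who uses AI, and how” list Meghan suggests, in a form that can be reviewed and audited over time; the fields and records are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class AIUsageRecord:
    """One row in a hypothetical organizational AI-usage inventory."""
    team: str
    platform: str
    purpose: str
    data_sources: list[str] = field(default_factory=list)
    last_audit: str = "never"

inventory = [
    AIUsageRecord("SOC", "threat-detection model", "alert triage",
                  ["network logs"], last_audit="2025-01"),
    AIUsageRecord("HR", "internal chatbot", "policy questions",
                  ["policy documents"]),
]

# A simple first audit pass: surface platforms that have never been reviewed.
for record in inventory:
    if record.last_audit == "never":
        print(f"Audit needed: {record.team} / {record.platform}")
```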

4. Inclusive Content Generation Tools

From automated phishing simulations to policy drafts, content-generating AI tools should use inclusive language, avoiding bias around gender, ethnicity, or ability. Tools like OpenAI’s moderation API can help detect problematic output.
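As an illustration, a minimal call to OpenAI’s moderation endpoint with the official openai Python package might look like the following; the model name shown is the current general-purpose moderation model and may change over time.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Screen generated text (e.g., a phishing-simulation email draft)
# before it is sent to employees.
response = client.moderations.create(
    model="omni-moderation-latest",
    input="Draft phishing-simulation email text goes here.",
)

result = response.results[0]
if result.flagged:
    # Inspect the per-category flags to see what tripped the filter.
    print("Flagged for review:", result.categories)
```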

Be careful with chatbots, too. Some can overwhelm users and make an issue worse. The end user should be given opportunities to adjust the chat experience and its appearance.

As AI platforms become integral to network security—generating alerts, predicting attacks, and managing access—governance must catch up. Organizations like The Cyber Guild are leading the charge in developing ethical frameworks that ensure fairness, accountability, and equity.

We need more champions like Meghan advocating for proactive AI governance models that enforce inclusivity across the development lifecycle—from ideation and coding to implementation and ongoing audit.

Key Takeaways

  • Inclusive AI enhances security accuracy and trust.
  • Bias in AI tools can lead to systemic vulnerabilities.
  • Cybersecurity leaders must adopt ethical governance models and representative, population-aware datasets.
  • Thought leaders like Meghan Maneval are paving the way for actionable change.

Join the AI Inclusivity Revolution

Join The Cyber Guild to stay informed about how you can drive more inclusive, equitable, and secure AI systems across all industries, especially cybersecurity. Be part of the movement ensuring that no one is left behind in our digital future.


Meghan Maneval is a passionate and visionary leader with nearly 20 years of experience in governance, risk, security, and compliance. As the self-proclaimed “Risk Optimist,” she develops and executes business strategy, innovates and designs new solutions for the Governance, Risk, and Compliance (GRC) industry, and evangelizes the benefits and value of emerging technologies and industry growth.

Meghan is a thought leader, public speaker, and author who leverages her strong technical background and extensive knowledge of GRC to educate, advocate, and influence the adoption of highly secure and scalable technology solutions. She has extensive experience in requirements gathering, testing, and promoting SaaS and mobile applications in highly regulated industries, as well as implementing and monitoring technical, physical, and administrative security mechanisms.

She’s committed to fostering a collaborative community where open conversations about risk drive insight and innovation and where diversity, inclusion, and belonging are core values.

ABOUT THE AUTHOR
Susan Powell

Marketing Director with Interstate Moving | Relocation | Logistics