Using open source to support explainable AI in the public sector

Gaining better visibility into the AI process can improve “explainable AI,” the ability of machines to clearly demonstrate and explain the rationale behind their recommendations, leading to increased trust in those systems.2 Agencies can now get closer than ever to this goal by:

  • Bringing together teams with diverse talents and expertise to build solutions that meet these criteria, similar to the creation of DevOps teams.
  • Creating AI solutions on stable, trusted, and open platforms that allow public sector organizations to gain better visibility into AI data modeling and analysis processes while minimizing uncertainty and risk.

In this whitepaper, we examine how open source software, along with the core cultural tenets of the open source community, can help the public sector achieve its AI objectives. With the right combination of technology and development methodology, agencies can build more transparent AI solutions faster, resulting in greater efficiency and more accurate, trusted decisions.

Red Hat
