The FDA is trying to catch up with AI… AI, you’ve got some explaining to do!

Finally, the FDA is making an effort to catch up with the times by taking the first step in formulating a framework for AI in medical applications.

Call me crazy, but Explainable AI (XAI) should be a big part of the conversation. The current generation of AI lacks the basic capability of letting the user know how it arrived at its conclusion (we are forced to trust the AI blindly).

This is what is called the “black box problem.”

Due to the “black box problem,” the current generation of AI technologies faces ethical, legal, and trust barriers to mass adoption.

Now laws such as the European Union’s GDPR and recently proposed US legislation are bringing this lack of transparency to the forefront.

The current lack of transparency becomes even more acute in mission-critical AI applications, such as those in the medical field, where life-and-death decisions need to be made and justified.

Would you fully trust an AI’s decision that you should undergo a medical procedure to remove an organ because there is a 75% probability of a certain diagnosis? How did it arrive at this conclusion? How is the AI sure you don’t fall in the other 25%?

This is where XAI shines. XAI can explain to its users why it arrived at its conclusion, and why it rejected the alternatives. In essence, it opens the AI black box and tears down these trust, ethical, and legal adoption barriers.
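To make the idea concrete, here is a minimal sketch of what “opening the black box” can look like. It assumes a purely hypothetical linear diagnostic model (the feature names and weights are illustrative, not clinical): because the model is linear, each feature’s contribution to the prediction is simply its weight times its value, so the 75%-style probability above can be decomposed into the individual reasons that produced it.

```python
import math

# Hypothetical linear diagnostic model. The features and weights below are
# illustrative assumptions for this sketch, not real clinical parameters.
WEIGHTS = {"tumor_size_mm": 0.08, "marker_level": 0.05, "age": 0.01}
BIAS = -3.0

def predict_with_explanation(patient):
    """Return (probability, per-feature contributions) for a patient dict.

    For a linear model, the score decomposes exactly into one additive
    contribution per feature, so the prediction explains itself.
    """
    contributions = {f: WEIGHTS[f] * patient[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-score))  # logistic link
    return probability, contributions

patient = {"tumor_size_mm": 30.0, "marker_level": 20.0, "age": 60}
prob, why = predict_with_explanation(patient)

# Rank the features that drove this prediction, most influential first.
ranked = sorted(why.items(), key=lambda kv: abs(kv[1]), reverse=True)
for feature, contribution in ranked:
    print(f"{feature}: {contribution:+.2f}")
print(f"probability: {prob:.0%}")
```

Real-world models are rarely this simple, which is why XAI research has produced techniques (feature attribution, surrogate models, counterfactual explanations) that approximate this kind of decomposition for complex black-box models.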

At the Brain Mechanics Foundation we are leading the way in XAI product R&D, and we are looking for exceptional individuals to contribute to this Open Source XAI project.

If you are interested in helping shape the future of Open Source Health, please check out our R&D Projects page to explore the ways you could contribute to this and other keystone health technology projects.