With the data science community striving to decode the so-called black-box notion attached to Machine Learning-based solutions, Deep Learning poses an even bigger challenge. While building any Machine Learning or Deep Learning solution, domain expertise plays a huge role in the selection of features and in subsequently explaining the model’s mechanisms. Across the research community, there has been some success in the feature engineering of numeric and textual data; however, replicating that success for visual content is still at a nascent stage. This talk provides an in-depth look at some of the challenges of improving AI’s explainability and highlights possible techniques for overcoming those challenges.