The area I’m personally hoping will make big progress is explainability. Currently a lot of AI systems operate as a ‘black box’: a model can tell you that the object in an image is a dog, but it can’t explain why, because its inner workings are too intricate to read off directly. Explainability research is trying to push away from this opacity. The SHAP values method is an exciting recent development in the area: it applies cooperative game theory to a trained model, treating each input feature as a ‘player’ and computing how much each one contributed to a given prediction, which gives humans a clear, per-feature account of why the model decided what it did. A lot of the fear around AI comes down to the fact that we find its decisions difficult to explain, so I think there should definitely be a push towards making AI more understandable.
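
To make that concrete, here’s a minimal sketch of what computing SHAP values can look like in practice, using the open-source `shap` package alongside scikit-learn. The choice of dataset and model here is purely illustrative, not part of the SHAP method itself:

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple "black box" model on a built-in regression dataset
# (both chosen just for illustration).
data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# TreeExplainer computes Shapley values efficiently for tree ensembles:
# each feature is a "player", and its SHAP value is its average marginal
# contribution to the prediction over coalitions of the other features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:1])  # explain the first sample

# The SHAP values plus the baseline (expected value) sum to the model's
# prediction, giving a per-feature breakdown of why it output what it did.
for name, contribution in zip(data.feature_names, shap_values[0]):
    print(f"{name}: {contribution:+.3f}")
print("baseline:", explainer.expected_value)
print("prediction:", model.predict(data.data[:1])[0])
```

Reading the output, each line says how much that feature pushed this particular prediction up or down relative to the baseline, which is exactly the kind of human-readable account the black box was missing.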