In our experiment, we focus on decision making using game theory. We apply game-theoretic principles to model the relationships among rating actions, news, market signals, and decision making. We propose using design capability indices to help teams make a ranged set of decisions rather than single, specific ones, as sketched below.
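A minimal sketch of the game-theoretic framing, assuming a simplified two-player normal-form game. The players ("agency" and "market"), the strategy labels, and the payoff numbers are hypothetical illustrations for exposition, not values from our experiments; the `ranged_decision` helper is likewise an invented name showing how a tolerance turns a single best response into a ranged decision set.

```python
import numpy as np

# Row player: rating action (upgrade, hold, downgrade).
# Column player: market response (buy, sell).
agency_payoff = np.array([[3, 1],
                          [2, 2],
                          [0, 4]])
market_payoff = np.array([[2, 0],
                          [1, 1],
                          [1, 3]])

def pure_nash_equilibria(a, b):
    """Return (row, col) strategy pairs where neither player can improve unilaterally."""
    equilibria = []
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            row_best = a[i, j] >= a[:, j].max()   # row player has no better reply
            col_best = b[i, j] >= b[i, :].max()   # column player has no better reply
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

def ranged_decision(payoff, col, tol=1.0):
    """All row strategies within `tol` of the best payoff against column `col`:
    a ranged decision set rather than a single prescribed action."""
    column = payoff[:, col]
    return np.flatnonzero(column >= column.max() - tol)

print(pure_nash_equilibria(agency_payoff, market_payoff))  # pure equilibria of the toy game
print(ranged_decision(agency_payoff, col=0))               # near-optimal actions if the market buys
```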
Modern machine learning models are highly flexible but lack transparency. Can we devise methods to explain the predictions of such models without restricting their expressiveness? Can we do so even if we know nothing about their architecture, i.e., if they are "black boxes"? In this project, we are developing methods that explain the predictions a model makes, rather than constraining the model itself to be interpretable. We are particularly interested in providing explanations for the predictions of complex machine learning models that operate on structured data, such as sentences, trees, or graphs. For example, we use statistical input-output analysis to learn to interpret predictions of sequence-to-sequence models, such as those used in machine translation and dialogue systems.
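A minimal sketch of statistical input-output analysis on a black box, assuming a perturbation-based setup: the input is perturbed, only the model's outputs are observed, and a linear surrogate summarizes how each part of the input drives the prediction. The `black_box` function, the example sentence, and the word scores are hypothetical stand-ins, not our actual models or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(tokens):
    """Hypothetical opaque predictor: scores a sentence; its internals are treated as unknown."""
    score = {"great": 1.0, "boring": -1.0, "plot": 0.2}
    return sum(score.get(t, 0.0) for t in tokens)

sentence = ["great", "movie", "boring", "plot"]

# Perturb the input: randomly mask tokens, record which tokens were kept (X)
# and what the black box predicted on each perturbed input (y).
n_samples = 500
X = rng.integers(0, 2, size=(n_samples, len(sentence)))
y = np.array([black_box([t for t, keep in zip(sentence, row) if keep])
              for row in X])

# Fit a linear surrogate to the input-output pairs: its coefficients estimate
# each token's contribution to the black-box prediction.
design = np.column_stack([X, np.ones(n_samples)])   # add intercept column
coef, *_ = np.linalg.lstsq(design, y, rcond=None)

for token, weight in zip(sentence, coef[:-1]):
    print(f"{token:>8}: {weight:+.2f}")
```

The same recipe applies when the black box is a sequence-to-sequence model: the explanation is fit only to observed input-output behavior, never to the model's internals.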
Our goal is to do fundamental research, invent new technology, and create forecasting frameworks in order to drive a human-centered approach to artificial intelligence.