The Neuro-symbolic AI Lab

Reasoning & Explainable AI

Research

The Reasoning & Explainable AI lab aims to develop systems capable of complex, abstract, and flexible inference.

We operate at the interface between neural and symbolic AI methods, aiming to enable the next generation of explainable, data-efficient, and safe AI systems. Our research investigates how combining latent and explicit data representation paradigms can deliver better inference over data.

Our current research areas include:

  • Inference & Explanations
    • Natural language inference
    • Abstractive inference
    • Explanation generation
    • Explainable question answering
    • Scientific inference & explanations
  • Neuro-symbolic models
    • Multi-hop reasoning
    • Semantic control
    • Semantic probing
  • Extraction & Representation
    • Sentence & discourse representation
    • Open information extraction
    • Knowledge graphs
    • Scalable knowledge-based inference
  • AI applications in cancer research