Blog posts

  • Automatically Generating Abstractions for Planning

    An effective hierarchical decomposition of a problem solves a task at a lower level without violating the conditions established at the more abstract, higher levels of the hierarchy. Knoblock (1994) formalizes this intuition as the ordered monotonicity property. This post briefly explains that property and describes how to learn the hierarchy using a sufficient condition for it.

  • Logical Neural Network

    Riegel et al. (arXiv 2020) propose building a neural network by adding one neuron for each logical gate and literal in a logic formula, thereby creating a neural framework for logical inference. This article reviews their work. It was written jointly with Siwen Yan, as part of the course on Neuro-Symbolic systems by Prof. Sriraam Natarajan.

  • Augmenting Neural Networks with First-order Logic

    Declarative knowledge in the form of first-order rules is used extensively in ILP to reduce dependency on data. Since deep neural networks are data hungry, can first-order rules reduce their data requirements? This post reviews the work by Li and Srikumar (ACL 2019), which attempts to answer this research question.

  • Types of Neuro-Symbolic Systems

    I attended the AAAI 2020 conference in NY, and one of the most influential talks at that conference (for me, of course!) was the address by Prof. Henry Kautz on The Third AI Summer. In that talk, he presented a taxonomy of future neural and symbolic approaches. This article is my attempt to summarize that taxonomy.

  • Tools for Causal Inference

    I read The Book of Why last year, and recently the reading group at Starling Lab read The Seven Tools of Causal Inference by Prof. Pearl. I was a little taken aback by how much I had forgotten about causal inference, so I felt the need to jot down my understanding for later reference. This article summarizes my current understanding of the tools presented in the paper, based on both the paper and the book.

  • Active Feature Elicitation

    Natarajan et al. (IJCAI 2018) is one of the most exciting papers from the Starling Lab (in my opinion, of course!). It formalizes a unique problem setting called Active Feature Elicitation: the task of selecting the best set of examples for which the missing features should be actively queried. This blog post summarizes my understanding of that paper.

  • Attacking GNN with Meta Learning

    This article reviews a very exciting ICLR 2019 paper, Adversarial Attacks on Graph Neural Networks via Meta Learning. It was originally written as part of a class assignment at UT Dallas.

  • Learning Symbolic Representations for planning

    In pursuit of learning a planner from data, I ended up reading Konidaris et al. (JAIR 2018). Getting through this paper was an onerous task that I would not like to repeat, so here are my notes on the key concepts from the paper that are relevant for learning high-level, abstract planners.

  • Deep Relational RL

    Relational RL has not made much of a splash in the real world because it is often easier to write a planner than to learn a relational RL agent. This might be about to change with the recent achievements of graph-based relational reasoning approaches. This article summarizes my understanding of the pioneering work of Zambaldi et al. (ICLR 2019) on deep relational RL.

  • Relational Network

    An overview of Santoro et al. (NeurIPS 2017).