Abstract:
Robot grasping is an important component of manipulation tasks and is considered one of the primary ways for robots to interact with their environment [1]. A grasp planner computes a valid grasp for a given object, where a valid grasp is one that prevents the object from falling [2]. The problem arises when the robot needs to grasp an object for a specific task: a grasp that is valid in general may not be valid for that task. For instance, when a robot needs to pour a liquid from one container into another, a top grasp is technically valid, but considering the pouring action, it is not. Existing grasp-generation methods are mostly based on analytical and geometrical solutions. These approaches perform well at generating valid grasps, but not task- or action-specific ones. To generate an action-specific grasp, a robot needs to understand the task and its prerequisites, which can be grounded in social norms or technical considerations. These norms and values act as constraints that the robot must take into account while planning a task-specific valid grasp. Incorporating task-specific constraints into grasp planning requires the robot to have cognitive skills along with geometrical ones. The required cognitive skillset includes, but is not limited to, perception, short- and long-term memory, contextual awareness, situation awareness, semantics, and cultural understanding along with social norms. A cognitive architecture is required to integrate all these features into an executable framework. The challenge is to design and implement such a cognitive architecture so that the robot can learn and perform task-specific grasps under the given constraints. This research intends to develop such an artifact, enabling the robot to perform task-specific grasps. At this stage of the research, the intention is not to develop a domain-independent task-specific grasp planner.
Therefore, the contribution will be applied and validated using objects from the kitchen domain.