This project addresses motion planning for wall climbing. In wall climbing, an agent starts from an initial pose and uses protrusions on the wall (called holds) to lock onto as it climbs toward the finish hold at the top. The goal in this project is for the agent to reach the finish hold while moving as naturally as possible. We show that joint-angle constraints and center-of-mass control contribute significantly to natural motion synthesis. Local-minimum poses are avoided by using random-sampling-based, asymptotically globally optimal inverse-kinematics solves; coupled with gradient descent, these make the agent snap to holds reliably. The classical climbing techniques of switching pivots and matching hands are also implemented. Predicting the neck position for a two-arm agent is non-trivial and a good candidate for learning-based methods; we propose a neural-network policy trained with a cross-entropy optimizer for this task, and we visualize and provide insights into the learned policy. Using these methods, the agent is able to send the routes reliably with natural-looking motion.
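
The random-sampling-plus-gradient-descent snapping described above can be pictured with a small sketch. The following Python example is illustrative only, not the project's actual code: it samples random initial joint configurations for a planar two-link arm, runs gradient descent on the end-effector-to-hold distance from each sample, clips to joint-angle limits, and keeps the best solution. The link lengths, joint limits, function names (`forward_kinematics`, `ik_to_hold`), and hyperparameters are all assumptions made for the sketch.

```python
# Minimal sketch (not the project's code): random-restart gradient-descent IK
# for a planar two-link arm, with joint-angle limits enforced by clipping.
import numpy as np

LINK_LENGTHS = np.array([1.0, 0.8])          # illustrative link lengths
JOINT_LIMITS = np.array([[-np.pi, np.pi],    # shoulder limits (assumed)
                         [0.0, 2.6]])        # elbow limits (assumed)

def forward_kinematics(q):
    """End-effector position of a planar two-link arm."""
    x = LINK_LENGTHS[0] * np.cos(q[0]) + LINK_LENGTHS[1] * np.cos(q[0] + q[1])
    y = LINK_LENGTHS[0] * np.sin(q[0]) + LINK_LENGTHS[1] * np.sin(q[0] + q[1])
    return np.array([x, y])

def ik_to_hold(hold, restarts=50, steps=200, lr=0.1, rng=np.random.default_rng(0)):
    """Random-restart gradient descent on the squared distance to the hold."""
    best_q, best_err = None, np.inf
    for _ in range(restarts):
        # Random initial pose sampled inside the joint limits
        # (the random sampling that avoids local-minimum poses).
        q = rng.uniform(JOINT_LIMITS[:, 0], JOINT_LIMITS[:, 1])
        for _ in range(steps):
            # Forward-difference numerical gradient of the squared error.
            err = forward_kinematics(q) - hold
            grad = np.zeros(2)
            eps = 1e-5
            for i in range(2):
                dq = np.zeros(2)
                dq[i] = eps
                e_plus = forward_kinematics(q + dq) - hold
                grad[i] = (e_plus @ e_plus - err @ err) / eps
            # Gradient step, then clip to respect the joint-angle constraints.
            q = np.clip(q - lr * grad, JOINT_LIMITS[:, 0], JOINT_LIMITS[:, 1])
        final_err = np.linalg.norm(forward_kinematics(q) - hold)
        if final_err < best_err:
            best_q, best_err = q, final_err
    return best_q, best_err

q_star, residual = ik_to_hold(np.array([1.2, 0.9]))  # hypothetical hold position
print("joint angles:", q_star, "residual:", residual)
```

With many random restarts the best of the converged solutions approaches the global optimum as the number of samples grows, which is the sense in which the abstract's sampling-based IK is asymptotically globally optimal; a full climbing agent would apply the same idea to a higher-dimensional joint configuration with center-of-mass and collision terms in the objective.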