Local linearization
If \(x = f(q)\) maps a joint configuration \(q\) to task-space coordinates \(x\), then a small update satisfies
\[
\Delta x \approx J(q)\,\Delta q,
\]
where \(J(q) = \partial f / \partial q\) is the manipulator Jacobian. The inverse problem is to find a joint increment \(\Delta q\) that reduces a task error \(e = x_d - x\), where \(x_d\) is the desired task-space pose.
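As a concrete sketch of this linearized update, the loop below runs Newton-style IK for a planar two-link arm; the arm, its unit link lengths, the starting configuration, and the target are illustrative assumptions, not from the text.

```python
import torch

def fk(q):
    """Forward kinematics of a planar two-link arm with unit link lengths."""
    x = torch.cos(q[0]) + torch.cos(q[0] + q[1])
    y = torch.sin(q[0]) + torch.sin(q[0] + q[1])
    return torch.stack([x, y])

def jacobian(q):
    """Analytic 2x2 Jacobian of fk with respect to the joint angles."""
    s1, s12 = torch.sin(q[0]), torch.sin(q[0] + q[1])
    c1, c12 = torch.cos(q[0]), torch.cos(q[0] + q[1])
    return torch.stack([torch.stack([-s1 - s12, -s12]),
                        torch.stack([ c1 + c12,  c12])])

q = torch.tensor([0.3, 0.8])        # initial guess
x_d = torch.tensor([1.2, 0.7])      # reachable target inside the workspace
for _ in range(50):
    e = x_d - fk(q)                             # task error
    dq = torch.linalg.solve(jacobian(q), e)     # Newton step: J dq = e
    q = q + dq
```

Each iteration solves \(J \Delta q = e\) exactly, which is fine away from singularities; the damped variant discussed next replaces this solve when \(J\) becomes ill-conditioned.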
Damped least squares
Near singularities the pseudoinverse solution \(\Delta q = J^{+} e\) becomes numerically unstable: small singular values of \(J\) are inverted, so tiny task errors can demand enormous joint increments. Damped least squares regularizes the problem:
\[
\Delta q = J^\top \left(J J^\top + \lambda^2 I\right)^{-1} e.
\]
The damping factor \(\lambda\) trades exactness for robustness: larger values bound the step size at the cost of slower convergence. In practice this is often the better choice for real manipulators, especially when sensor noise and model mismatch are unavoidable.
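A minimal numerical illustration of this trade-off; the near-singular Jacobian, error vector, and damping value below are contrived assumptions chosen to make the effect visible.

```python
import torch

# A Jacobian close to singular: the second row is nearly dependent on the first.
J = torch.tensor([[1.0, 0.0],
                  [1.0, 1e-3]])
e = torch.tensor([0.0, 1.0])    # task error

# Undamped pseudoinverse step: inverts the tiny singular value and blows up.
dq_pinv = torch.linalg.pinv(J) @ e

# Damped least-squares step: lambda keeps the step bounded.
lam = 0.1
dq_dls = J.T @ torch.linalg.solve(J @ J.T + lam**2 * torch.eye(2), e)

print(dq_pinv.norm(), dq_dls.norm())
```

The undamped step has norm on the order of \(1/\sigma_{\min}\) (here roughly \(10^3\)), while the damped step stays below 1; the price is that the damped step no longer drives the error to zero in one solve.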
Redundancy resolution
Redundant manipulators allow secondary objectives in the Jacobian null space, for example posture regularization, joint-limit avoidance, or obstacle biasing.
import torch  # J, error, dq_secondary, lam, I_task, I_joint assumed defined
# Damped least-squares step toward the task error (primary objective)
dq_primary = J.T @ torch.linalg.solve(J @ J.T + lam**2 * I_task, error)
# Secondary objective projected into the null space of J
dq_null = (I_joint - J_pinv @ J) @ dq_secondary
dq = dq_primary + dq_null
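To see the null-space property concretely, here is a self-contained version of the snippet above for a redundant case (3 joints, 2-D task); the random Jacobian, error, and secondary objective are illustrative assumptions.

```python
import torch

# Redundant manipulator: 3 joints, 2-D task => a 1-D null space.
torch.manual_seed(0)
J = torch.randn(2, 3, dtype=torch.float64)
error = torch.tensor([0.1, -0.2], dtype=torch.float64)
dq_secondary = torch.tensor([0.5, 0.0, -0.5], dtype=torch.float64)  # e.g. posture bias
lam = 0.05
I_task = torch.eye(2, dtype=torch.float64)
I_joint = torch.eye(3, dtype=torch.float64)
J_pinv = torch.linalg.pinv(J)

dq_primary = J.T @ torch.linalg.solve(J @ J.T + lam**2 * I_task, error)
dq_null = (I_joint - J_pinv @ J) @ dq_secondary
dq = dq_primary + dq_null

# The null-space term is invisible to the task: J @ dq_null is (numerically) zero,
# so the secondary objective cannot disturb the primary task.
print(J @ dq_null)
```

The projector \(I - J^{+}J\) annihilates any component of `dq_secondary` that would produce task-space motion, which is exactly what makes secondary objectives safe to add.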
This makes inverse kinematics relevant well beyond textbook derivations: it runs inside practical robot control stacks, often beneath motion planners or high-level policies.