In fact, one of my students, Dieter Vischer, who was working on this project, also worked at DLR on some aspects of torque control, and there is a very good connection between the Stanford robotics lab and DLR; that collaboration was very fruitful.

In fact, DLR were the first to produce an amazing machine, the lightweight robot, that brought torque control not only to the lab but, together with Ralph Caupe [sp?], somehow moved it further. So, I believe torque control is essential to render the robot compliant, to allow the robot to interact with the world, and also to think about robots in a different way.

It's no longer about programming the robot; it's really about thinking about the interaction of the robot with dynamic forces or contact forces.

So, we moved forward with the development of torque control and, as I said, we moved to take manipulation to the field, to the environment, by combining mobility and manipulation, and then we started looking at interaction between platforms to do collaborative work. So, Romeo and Juliet were working together. Sometimes Romeo would be doing a lot of domestic work, like ironing, vacuuming, and all of that, but sometimes the two would be interacting to carry a heavy object, and the concept there was somewhat different from just saying, well, we built robots and they will do the work.

The idea was that we wanted to create a machine that can assist the human and work with the human, and you can represent the concept as follows: the human provides the brain and the robot provides the muscle. You put the two together and you have an amazing synergy. So, first of all, to provide the muscle with the robot means you need to do a lot of coordination of many different motions.

You're creating sufficient autonomy for the two robots to carry out a task, to control internal forces, to do all of that, and to move and interact with the human, to follow the guidance of the human. Now the human is just lightly touching the robot to position it, and all the load is carried by the robot.

So, that was the concept of a robot that is assisting the human, but at a very, very high level. And, that started our idea of human-guided motion and human-robot interaction. So, we were probably among the first few groups to look into the problem of haptic rendering. There was the group of Ken Salisbury at MIT working on that problem, and here at Stanford we developed a comprehensive approach to dealing with the problem of collision detection.

So, in haptics what you need to do is represent your objects virtually in the machine, graphically, and you want to find where your finger or hand is with respect to these objects, so you need to do collision detection. And, objects are represented with polyhedra, so you have a lot of things to check, and that means you are going to spend a lot of time doing collision detection.

At the time, robotics and haptics came together through what I was just describing earlier about elastic planning. So, elastic planning started with the idea of what we called the bubble band, or elastic band. The elastic band was the idea that we would like to model the free space around the path: we have a trajectory that we plan, but instead of just planning for one path we are going to plan for a tunnel of free space.

And, in this tunnel you can imagine spheres that are intersecting. So, from one sphere of free space to the next sphere, if they are intersecting, then you have a passage. Now, a very fast way of computing distances from where you are to the obstacles is to represent those obstacles with a combination of spheres. You can put the whole world in one sphere, then break it into two, and break those into two again, and this sphere hierarchy lets you go from the root, where you are doing the check, down to the leaves that are in contact.
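To make the sphere-hierarchy idea concrete, here is a minimal sketch (my illustration, not the original implementation): obstacle geometry is approximated by leaf spheres, each internal node's sphere encloses its two children, and a distance query prunes any subtree whose bounding sphere cannot improve on the best distance found so far. All names and values are placeholders.

```python
import numpy as np

class SphereNode:
    def __init__(self, center, radius, children=()):
        self.center = np.asarray(center, dtype=float)
        self.radius = float(radius)
        self.children = list(children)

def build_tree(leaves):
    """Agglomerate leaf spheres pairwise into a binary sphere hierarchy."""
    nodes = list(leaves)
    while len(nodes) > 1:
        merged = []
        for i in range(0, len(nodes) - 1, 2):
            a, b = nodes[i], nodes[i + 1]
            center = 0.5 * (a.center + b.center)
            radius = max(np.linalg.norm(center - a.center) + a.radius,
                         np.linalg.norm(center - b.center) + b.radius)
            merged.append(SphereNode(center, radius, (a, b)))
        if len(nodes) % 2:            # carry an unpaired node up one level
            merged.append(nodes[-1])
        nodes = merged
    return nodes[0]

def min_distance(point, node, best=np.inf):
    """Descend the hierarchy, skipping subtrees that cannot beat 'best'."""
    d = np.linalg.norm(point - node.center) - node.radius
    if d >= best:
        return best                    # prune: this subtree is too far away
    if not node.children:
        return min(best, max(d, 0.0))  # leaf sphere: exact distance
    for child in node.children:
        best = min_distance(point, child, best)
    return best

# Usage: distance from a query point (e.g. the haptic probe) to the obstacles.
leaves = [SphereNode(c, 0.05) for c in np.random.rand(64, 3)]
root = build_tree(leaves)
print(min_distance(np.array([2.0, 0.0, 0.0]), root))
```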

The same concept that we were using in motion planning, in elastic planning, we applied to haptics. And, we developed collision detection that was, at the time, one of the fastest techniques for resolving collisions with a large number of objects, and at the time that was amazing because, as you know, haptics requires very fast interaction.

Now, that technique became part of other collision-detection algorithms that actually make use of combined sphere hierarchies, bounding boxes, and other things. The other problem in haptics is the problem of collision resolution, and collision resolution is something that we know is difficult: how you resolve the collision, how you simulate the collision.

Now, simulating collisions between free objects is something that we have studied and know how to do, but the problem of collision, or multi-collision, between multiple links, that is, where you are making contact at multiple points, is a problem that we don't really know how to solve, because you have constraints. So, in fact, Baraff, one of the researchers who looked at this problem in the context of graphics and simulation, resolved it in a very interesting way: just before the collision you have the velocities, then you remove the joints, and now you have free bodies colliding.

You collide. After the collision, you put back the joints; that is, you enforce the constraints again and remove the directions of motion that are not possible, and handling those constraints takes time.

Now, in our approach, again, this is the interaction of robotics with haptics. In robotics, we know how to compute the effective mass at a given point on the robot, because we are interested in controlling this point in contact. We need to be able to stabilize these points, so this is information that comes from projecting the dynamics along that direction. I can take this effective mass and that effective mass and make them collide; I replace the robot with two masses.
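As a hedged illustration of that effective-mass idea: in the operational-space formulation, the joint-space inertia matrix M(q), projected through the point Jacobian J(q), gives the inertia Lambda actually felt at the contact point, and the colliding links can be replaced by these point masses. The numbers below are made-up placeholders for a small arm.

```python
import numpy as np

def effective_inertia(M, J):
    """Operational-space inertia at a task point: Lambda = (J M^-1 J^T)^-1."""
    return np.linalg.inv(J @ np.linalg.inv(M) @ J.T)

M = np.diag([2.0, 1.5, 0.8])        # placeholder joint-space inertia (3 joints)
J = np.array([[1.0, 0.7, 0.2],      # placeholder 2x3 Jacobian of the contact point
              [0.0, 0.9, 0.5]])
Lambda = effective_inertia(M, J)
print(Lambda)                        # the "mass" the other body feels at this contact
```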

Now, if I have multiple collisions I will have two masses here, two masses there, two masses there, and in addition I have the interaction between those masses. So, that led to a very, very effective solution for resolving multi-collisions between multiple links. We saw elastic planning progressing. A lot of people who were working in bipedal locomotion were depressed, I remember.

And, a lot of people actually became really energized and excited: robotics is moving. I think that was a really big challenge to the community, coming from a company that built a machine with those capabilities. Now, Honda really produced one of the most remarkable mechanical systems and developed one of the most interesting machines to perform biped walking, stable biped walking.

But obviously, they needed much more to really do useful things with the robot, and one of the things that we explored with Honda, just in the following year, I remember, was to try to take the capabilities we developed for Romeo and Juliet to Asimo. So, in [year], we started the project, and we've been working with Honda on pursuing, implementing, and developing the capabilities of the Honda robot, of a humanoid robot, to go beyond just walking, to enable a humanoid robot to interact with the world, to be useful, to do things.

And, that was the direction that our work has taken in the last almost 10 years, I mean 12 years. And, that was one of the major projects that we have and it is really remarkable that a company like Honda kept a long-term relationship with a laboratory to explore this direction.

So, what were the challenges? First of all, if you have been to an Asimo show you would have heard the following: they tell you that with Asimo there is one rule, and that is, do not touch the robot. That is, this robot is controlled to balance, but it is not controlled to be disturbed by any external force, unless the controller is going to accommodate that, and that requires a specific way of doing it.

So, Asimo is essentially a position-controlled robot. What we had to do, first of all, was take our concepts and models from the structure that we had with the mobile manipulation of Romeo and Juliet, which was essentially a [unclear].

But then, we have all kinds of redundancy that remains, so if the arm is fixed the body can still move. So, we developed a technique that makes use of an idea similar to how a human performs a task: you're moving your hand, you're reaching with your hand, and your body, the posture, is following.

You're controlling the posture, but the task has priority. We started building those priorities. Two students worked with me on this: first of all, Jaeheung Park and Luis Sentis. Both of them contributed to this work, exploring the contact and the strategies for dealing with the priority control of the different tasks and subtasks that we are controlling. I mean, when you think about a humanoid robot, you think you need to control the posture, but you have to be consistent with the tasks, not interfere with the tasks.

At the same time, you have constraints, you have to be consistent with those constraints. You have joint limits. You have obstacles. You have to balance.

You have so many things to do, and people very often go and say, alright, okay, I'll build a controller to deal with this problem, I'll build a controller to deal with that problem, and I'll put them together, and they will start fighting each other. So, what you end up doing is reprogramming everything and choreographing every motion to produce what we think of as robotic motion.

So, what we've been doing is to create a framework that explicitly, from the beginning, integrates contact, integrates constraints, integrates the posture control, the task control, the tasks at multiple points. You're controlling the center of mass. You're controlling pressure. You have all the degrees of freedom and all the constraints together. So, your robot is inside a field of forces, trying to move away from all these different constraints to perform a task. Now, how do we control this robot?

You said torque, torque control. Do you know a humanoid robot with torque control? Alright, we're back to square zero. So, what do we do? Well, you develop the simulator. You develop the controller. We spent many, many years building SAI, our dynamic simulation system with collision detection and resolution, so we are simulating the environment, we're simulating the interaction with the environment, we're reproducing all the contact forces, and we are controlling the robot using that framework, and it works beautifully.
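A minimal sketch of the task-priority computation behind such a framework (a simplification using a plain pseudoinverse null-space projector; the dynamically consistent version in the operational-space literature also involves the inertia matrix, and all numbers here are placeholders):

```python
import numpy as np

def prioritized_torque(J1, F1, J2, F2, n_joints):
    """Primary task gets its torque directly; the secondary task is projected
    into the null space of the primary one so it cannot disturb it."""
    tau1 = J1.T @ F1
    N1_T = np.eye(n_joints) - J1.T @ np.linalg.pinv(J1).T   # torque-space projector
    tau2 = J2.T @ F2                                         # e.g. posture torque
    return tau1 + N1_T @ tau2

J1 = np.array([[1.0, 0.8, 0.3, 0.1]])        # placeholder hand-task Jacobian (1x4)
J2 = np.eye(4)                                # posture "task" in joint space
tau = prioritized_torque(J1, np.array([5.0]),
                         J2, np.array([0.1, 0.0, -0.1, 0.0]), n_joints=4)
print(tau)
```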

Now, taking it to Asimo, again, is a big challenge because you have to deal with the fact that Asimo is controlled with position control. But once you identify the problem, you are most of the way there; I mean, in research the most difficult part is finding the problem to solve. Once you decide, once you set your mind to the problem, the solution will come and you solve it.

So, we said, alright, we need to find a way to go around the servo loop of the position control of Asimo. We came up with an idea which is called the position-to-torque transformer.

And, the position-to-torque transformer is essentially an idea to fool the servo controller, to make it think that it's following a trajectory while it's actually producing the right torques. So, it's like inverting the controller and driving it in a way that makes it compliant, and we managed to make it compliant; we managed to render Asimo really, really almost like a robot with compliant joints to control. But, all of it is done in terms of thinking about the robot as a system subjected to all kinds of dynamic forces, external forces, so it is moving in force space.
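The transcript does not give Honda's actual implementation, but the inversion idea can be sketched as follows: if a joint's inner servo applies tau = Kp (q_cmd - q) - Kd qdot, then commanding q_cmd = q + (tau_des + Kd qdot) / Kp makes the servo output approximately the torque tau_des requested by the whole-body controller, so the position-controlled joint behaves as if it were torque controlled. Gains and values below are assumed.

```python
import numpy as np

def position_to_torque_command(q, dq, tau_des, Kp, Kd):
    """Joint position command that drives a PD position servo to produce tau_des."""
    return q + (tau_des + Kd * dq) / Kp

q, dq = 0.3, 0.05        # measured joint position [rad] and velocity [rad/s]
Kp, Kd = 400.0, 5.0      # servo gains (assumed known or identified)
tau_des = 2.0            # torque requested by the whole-body controller [Nm]
q_cmd = position_to_torque_command(q, dq, tau_des, Kp, Kd)
print(q_cmd)             # send this command to the position-controlled joint
```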

And, through that mechanism we never talk about inverse kinematics, whereas most industrial robotics, most robotics, is actually developed with the idea that, alright, I need to move like this in space.

Well, let me compute what joint motion I'm going to have, let me find the inverse kinematics, and then let me control those joints. That worked very well for free-space motions, but once you start making interactions and contacts with the environment you are going to have a problem controlling the joint motions and the forces at those contacts. In a way, task-oriented control unifies motion control and force control, or contact control, in the same space, in the workspace.

And, this is also really interesting because this is almost like how we humans do it. We think in the workspace, and we operate and control relationships in that space.

Yeah, yeah, exactly. And, by the way, understanding models of the hand dynamics makes you say, alright, now my skill is related to the hands, I mean to what the hands are doing. So, I can develop the skill independently of the robot. When I'm going to do learning, when I'm going to learn human skills, I'm not talking about a robot that has to establish a connection between motor actions and resulting motions, so that my learning would depend on that specific robot.

Rather, I'm going to just think about the forces and moments applied at the hands and the resulting behavior of those objects, and then I can take that strategy and apply it to any robot, and the relationship is very simple: forces and moments here are related to torques using what we call the Jacobian transpose. The Jacobian transpose captures the moment arms of the different joint axes. So, basically, the major direction is to think about how you can produce those tasks in their own representations, and then you can transform them through the Jacobian to produce those forces and controls as needed for different robots, just by modeling the dynamics and the kinematics of that robot.
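A small sketch of that robot-independence: the same hand-level skill, here a force the hand should apply, maps to different joint torques on different robots purely through each robot's own Jacobian transpose, tau = J^T f. The Jacobians below are random placeholders.

```python
import numpy as np

f_hand = np.array([4.0, 0.0, -1.0])    # desired force at the hand, in task space

J_robot_a = np.random.rand(3, 7)        # placeholder Jacobian of a 7-joint robot
J_robot_b = np.random.rand(3, 6)        # placeholder Jacobian of a 6-joint robot

tau_a = J_robot_a.T @ f_hand            # joint torques realizing the skill on robot A
tau_b = J_robot_b.T @ f_hand            # joint torques realizing the skill on robot B
print(tau_a.shape, tau_b.shape)         # (7,) and (6,): same skill, two robots
```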

Well, I know you're going to ask me, what did you do in the new millennium, right? You are going to ask me that? So, one thing besides Asimo that happened was really very interesting. So, when you have a humanoid robot, you're interested in understanding how we can be better inspired by the human, how we can go further in exploring human skills and building controllers that are proven to work in the human.

So, when we really considered that, we realized that either we need a way to look at every task a human is doing, try to copy that task, and reproduce it with the robot, or rather to go and identify the way humans perform those tasks. I mean identify in the sense that we need to model, to have a dynamic model and a kinematic model of the human performing these tasks, and to look at the musculoskeletal control of the human and how humans are handling these different things.

So, we started studying that, which took us to the human, and what we discovered there was that a lot of the techniques and algorithms developed for robotics, for articulated-body systems, are amazingly powerful when applied to musculoskeletal systems, because we developed recursive algorithms, very efficient algorithms, that make use of generalized coordinates in very efficient ways.

We were able to reproduce and reconstruct human motion very quickly, in real time, whereas a lot of the algorithms and software that the biomechanics community uses are still far from the optimal characteristics and efficiency that are needed to create real-time interactions.

A lot of the measures, a lot of the characteristics and metrics we developed in robotics, can be applied to the human. So, we started applying robotics to human models, and we started doing a very interesting analysis that led us to a lot of interesting conclusions, not anymore about robots. That is, there is always this idea that robotics is about robots. Actually, robotics is this body of science that we are developing, with algorithms and models that efficiently allow us to explore articulated-body systems and high-dimensional spaces, and that take us through that analysis with insight coming from the physical world, but really taking us to those high-dimensional systems and their nonlinear, interactive nature. Humans are more complicated than most robots, that we know, but we have been able to apply those techniques.

I mean, those techniques that we developed for robots, we applied to the human, and the result is really, really interesting. So, here is the first result. The first result that we found was related to something very basic, which is to say: if I am going to push an object, say you want to push a table, what posture will the body take to push the table? If I am going to lift or pull a weight, what would be the most appropriate posture for that specific direction? It turned out that if you are doing this repeatedly, and if you really discover the proper posture to push or to pull or to apply a force, you are using a posture of your body, and this posture determines how much effort your muscles need to produce that force; the effort varies depending on what posture you are taking.

So, if you're drinking a cup of coffee, drinking a cup of coffee, this is a very common posture. We never do it this way, we do not do it that way, we do it here. And, the reason for that is that there is something special about this configuration, some forty-some degrees, and it is the fact that at this configuration the effort associated with the muscles that you are using is minimized.
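Purely as a toy illustration of the kind of computation behind an effort-minimal posture (all numbers are invented; the real analysis uses a full musculoskeletal model): sweep a candidate elbow angle, compute the gravity torque needed to hold the cup, normalize it by a posture-dependent muscle moment arm, and pick the angle with the smallest squared muscle force.

```python
import numpy as np

m, g, l = 0.3, 9.81, 0.30                        # cup mass [kg], gravity, forearm length [m]
angles = np.radians(np.arange(10, 130))          # candidate elbow angles

tau_gravity = m * g * l * np.cos(angles)         # elbow torque needed to hold the cup
moment_arm = 0.02 + 0.03 * np.sin(angles)        # invented posture-dependent moment arm [m]
muscle_force = np.abs(tau_gravity) / moment_arm  # force the muscle must produce
effort = muscle_force ** 2                       # squared-force effort measure

print(np.degrees(angles[np.argmin(effort)]))     # effort-minimal angle in this toy model
```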

So, it's sort of like when we are building a machine: what do we do? We build the machine and we use the machine in a way that makes use of its mechanical advantage.

Furthermore, we found that neural activation related to stiffness-change and absolute stiffness can be localized to adjacent but disparate anatomical locations.

We also show that classical finger-tapping experiments activate a swath of cortex and are not suitable for localizing stiffness perception. Our results demonstrate that decorrelating motor and sensory neural activation is essential for characterizing somatosensory cortex, and establish particle-jamming haptics as an attractive low-cost method for fMRI experiments.

Neuroimaging artifacts in haptic functional magnetic resonance imaging

Haptic fMRI experiments have the potential to induce spurious fMRI activation where there is none, or to make neural activation measurements appear correlated across brain regions when they are actually not.

Here, we demonstrate that performing three-dimensional goal-directed reaching motions while operating the Haptic fMRI Interface (HFI) does not create confounding motion artifacts. To test for artifacts, we simultaneously scanned a subject's brain with a customized soft phantom placed a few centimeters away from the subject's left motor cortex. The phantom captured task-related motion and haptic noise, but did not contain associated neural activation measurements. We quantified the task-related information present in fMRI measurements taken from the brain and the phantom by using a linear max-margin classifier to predict whether raw time-series data could differentiate between motion planning and reaching.
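As a hedged sketch of that analysis step (not the authors' code, and with synthetic data): a feature matrix X holds raw voxel time-series windows, y labels each window as planning or reaching, and a linear max-margin classifier is cross-validated to see how much task-related information the measurements carry.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_windows, n_voxels = 200, 500
X = rng.normal(size=(n_windows, n_voxels))   # synthetic fMRI time-series windows
y = rng.integers(0, 2, size=n_windows)       # 0 = motion planning, 1 = reaching

clf = LinearSVC(C=1.0, max_iter=10000)       # linear max-margin classifier
scores = cross_val_score(clf, X, y, cv=5)    # near chance (~0.5) on pure noise
print(scores.mean())
```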

We also localized artifacts due to the haptic interface alone by scanning a stand-alone fBIRN phantom while an operator performed haptic tasks outside the scanner's bore, with the interface at the same location. The stand-alone phantom had lower temporal noise and had a similar mean classification accuracy, but a tighter distribution (bootstrap Gaussian fit), compared to the brain phantom.

A challenging problem in motor control neuroimaging studies is the inability to perform complex human motor tasks given the Magnetic Resonance Imaging (MRI) scanner's disruptive magnetic fields and confined workspace.

The synthesis of human motion is a complex procedure that involves accurate reconstruction of movement sequences, modeling of musculoskeletal kinematics, dynamics and actuation, and characterization of reliable performance criteria.

Many of these processes have much in common with problems found in robotics research. Task-based methods used in robotics may be leveraged to provide novel musculoskeletal modeling methods and physiologically accurate performance predictions. In this paper, we present (i) a new method for the real-time reconstruction of human motion trajectories using direct marker tracking, (ii) a task-driven muscular effort minimization criterion, and (iii) new human performance metrics for dynamic characterization of athletic skills.

Dynamic motion reconstruction is achieved through the control of a simulated human model to follow the captured marker trajectories in real-time. The operational space control and real-time simulation provide human dynamics at any configuration of the performance.
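A minimal sketch of the marker-tracking control idea described above (names, gains, and the single-marker setup are my assumptions): each captured marker defines a task point on the simulated model, a task-space PD force pulls the model point toward the marker, and the Jacobian transpose maps that force to the generalized forces that drive the simulation.

```python
import numpy as np

def marker_tracking_force(x_model, xd_model, x_marker, J, kp=200.0, kd=20.0):
    """Generalized forces pulling one model point toward its captured marker."""
    f = kp * (x_marker - x_model) - kd * xd_model   # task-space PD force
    return J.T @ f                                  # map to generalized coordinates

J = np.random.rand(3, 10)                           # placeholder Jacobian, 10-DoF model
tau = marker_tracking_force(np.zeros(3), np.zeros(3),
                            np.array([0.1, 0.0, 0.2]), J)
print(tau.shape)
```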


