What role does machine learning play in robotics?

In the first lecture of Andrew Ng's Machine Learning open course, I was deeply impressed by the demonstration of reinforcement learning being used to train robots and helicopters to learn new skills.

So, what applications does machine learning have in robotics? This article gives a brief introduction to the question.

1. Computer vision

Because "robot vision" involves not only computer algorithms, but some people think that the correct term is machine vision or robot vision. Robotics or engineers must also choose camera hardware to allow the robot to process physical data. Robot vision is closely related to machine vision, which is used to guide robotic guidance and automated inspection systems. Minor differences between them may be in kinematics applied to robot vision, including the ability to reference frame calibration and the physical impact of the robot on its environment.

The influx of large amounts of data, in particular the visual information available on the web (including annotated/labeled photos and videos), is driving advances in computer vision, which in turn contributes to machine-learning-based structured prediction techniques that power robot-vision applications such as the identification and sorting of objects. One branch is unsupervised anomaly detection; an example is a system that can find and assess silicon-chip defects using a convolutional neural network, designed by researchers at the Biomimetic Robotics and Machine Learning Lab, part of the non-profit Assistenzrobotik e.V. in Munich. Extra-sensing technologies such as radar, lidar and ultrasound have likewise driven the development of 360-degree vision systems for autonomous vehicles and drones.
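
The core idea of unsupervised anomaly detection can be sketched very simply: fit statistics to unlabeled "normal" data, then flag readings that deviate too far. The following is a minimal illustrative stand-in for the CNN-based chip-inspection system described above; the feature names and numbers are invented for the example.

```python
# Minimal sketch of unsupervised anomaly detection on sensor readings:
# learn per-feature mean/std from "normal" samples, then flag any sample
# whose z-score exceeds a threshold.
from statistics import mean, stdev

def fit_normal_model(samples):
    """Learn per-feature (mean, std) from unlabeled 'normal' samples."""
    features = list(zip(*samples))
    return [(mean(f), stdev(f)) for f in features]

def is_anomaly(model, sample, threshold=3.0):
    """Flag a sample if any feature deviates beyond `threshold` sigmas."""
    return any(abs(x - mu) / sigma > threshold
               for x, (mu, sigma) in zip(sample, model))

# Hypothetical chip measurements: (voltage, current)
normal = [(1.0, 0.50), (1.1, 0.52), (0.9, 0.48), (1.05, 0.51), (0.95, 0.49)]
model = fit_normal_model(normal)
print(is_anomaly(model, (1.02, 0.50)))  # typical reading -> False
print(is_anomaly(model, (2.50, 0.50)))  # defective chip  -> True
```

A real inspection system would replace the hand-picked features with features learned by a convolutional network, but the flag-what-deviates logic is the same.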

2. Imitation learning

Imitation learning is closely related to observational learning, a behavior exhibited by infants and toddlers. It also falls under the umbrella of reinforcement learning, the challenge of getting an agent to act in the world so as to maximize its rewards. Bayesian or probabilistic models are a common feature of this machine learning approach. The question of whether imitation learning could be applied to humanoid robots was posed as far back as 1999.

Imitation learning has become an integral part of field robotics, where the mobility demands of settings outside the factory, such as construction, agriculture, search and rescue, and the military, make manually programming robotic solutions challenging. Examples include inverse optimal control methods, or "programming by demonstration (PbD)", which CMU and other organizations apply to humanoid robots, legged locomotion, and off-road navigation over rough terrain. Researchers at Arizona State University published a video two years ago showing a humanoid robot using imitation learning to acquire different grasping skills.
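
The simplest form of programming by demonstration is behavior cloning: record (state, action) pairs from a human operator, then replay the action of the most similar demonstrated state. The sketch below uses a 1-nearest-neighbor policy with made-up state features; real PbD systems (e.g. inverse optimal control) are far richer.

```python
# Behavior-cloning sketch: a policy that imitates recorded demonstrations.

def clone_policy(demonstrations):
    """demonstrations: list of (state, action) pairs recorded from a human."""
    def policy(state):
        # Pick the action whose demonstrated state is closest to `state`.
        _, action = min(
            demonstrations,
            key=lambda sa: sum((a - b) ** 2 for a, b in zip(sa[0], state)))
        return action
    return policy

# Hypothetical demonstrations: (distance_to_obstacle, heading_error) -> command
demos = [((5.0, 0.0), "forward"),
         ((1.0, 0.0), "stop"),
         ((3.0, 0.8), "turn_left"),
         ((3.0, -0.8), "turn_right")]
policy = clone_policy(demos)
print(policy((4.5, 0.1)))  # closest to the "forward" demonstration
```

Nearest-neighbor lookup generalizes poorly to unseen states, which is exactly why field-robotics work layers probabilistic models and cost learning on top of raw demonstrations.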

Bayesian belief networks have also been applied to forward learning models, in which the robot learns about its motor system or external environment without prior knowledge. An example is "motor babbling", explored by the Language Acquisition and Robotics Group at the University of Illinois at Urbana-Champaign (UIUC) with Bert, the "iCub" humanoid robot.
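
Motor babbling itself is easy to sketch: the robot issues random motor commands, records the observed outcomes, and uses the recorded pairs as a learned forward model, with no prior knowledge of its own kinematics. The one-joint "plant" below is a hypothetical stand-in for a real robot body.

```python
# Motor-babbling sketch: learn a forward model from random exploration.
import random

def true_plant(command):
    # Unknown to the robot: the joint responds nonlinearly to the command.
    return command * 0.9 + 0.05 * command ** 2

def babble(n_trials, rng):
    """Explore random commands, recording (command, observed_angle) pairs."""
    return [(cmd, true_plant(cmd))
            for cmd in (rng.uniform(-1.0, 1.0) for _ in range(n_trials))]

def predict(model, command):
    """Forward model: outcome of the nearest previously-babbled command."""
    _, outcome = min(model, key=lambda co: abs(co[0] - command))
    return outcome

rng = random.Random(0)
model = babble(200, rng)
print(abs(predict(model, 0.5) - true_plant(0.5)))  # small prediction error
```

After enough babbling the lookup model predicts outcomes of novel commands well; probabilistic versions replace the lookup with a Bayesian model that also reports its own uncertainty.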

3. Self-supervised learning

Self-supervised learning methods enable robots to generate their own training examples in order to improve performance; this includes using a priori training and data captured at close range to interpret "ambiguous long-range sensor data." Such methods have been incorporated into robots and optical devices that can detect and reject objects (such as dust and snow); identify vegetation and obstacles in rugged terrain; and analyze and model vehicle dynamics in 3D scenes.
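
The key trick is that the labels come from the robot itself, not a human: a trusted short-range sensor labels terrain, and those labels train a classifier for the ambiguous long-range readings. A minimal sketch, with invented sensor values:

```python
# Self-supervised labeling sketch: near-field ground truth trains a
# threshold classifier for long-range readings.

def train_from_near_field(near_samples):
    """near_samples: (long_range_reading, traversable) pairs, where the
    label comes from the short-range sensor, not from a human."""
    free = [r for r, ok in near_samples if ok]
    blocked = [r for r, ok in near_samples if not ok]
    avg = lambda xs: sum(xs) / len(xs)
    # Decision boundary halfway between the two class means.
    return (avg(free) + avg(blocked)) / 2

def classify(boundary, long_range_reading):
    return long_range_reading < boundary  # True -> predicted traversable

near = [(0.2, True), (0.3, True), (0.25, True), (0.8, False), (0.9, False)]
boundary = train_from_near_field(near)
print(classify(boundary, 0.35))  # reads like traversable terrain -> True
print(classify(boundary, 0.95))  # reads like an obstacle -> False
```

Because the robot keeps generating labeled pairs as it drives, the classifier can be retrained continuously without any manual annotation.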

Watch-Bot is a concrete example, created by researchers at Cornell and Stanford, that uses a 3D sensor (Kinect), a camera, a laptop and a laser pointer to detect "normal human activity", learning the patterns through probabilistic methods. Watch-Bot uses the laser pointer to point out the target as a reminder (for example, milk left in the refrigerator). In initial tests, the robot was able to successfully remind humans 60% of the time (it has no concept of what it is doing or why), and the researchers extended the experiment by allowing their robots to learn from online videos (in a project called RoboWatch).

Other examples of self-supervised learning methods applied to robotics include a road-detection algorithm for a forward-looking monocular camera based on a road probability distribution model (RPDM) and fuzzy support vector machines (FSVMs), designed at MIT for autonomous vehicles and other on-road mobile robots.

Autonomous learning, a variant of self-supervised learning involving deep learning and unsupervised methods, has also been applied to robotics and control tasks. A team at Imperial College London, collaborating with researchers at the University of Cambridge and the University of Washington, created a new way to accelerate learning by incorporating model uncertainty (via probabilistic models) into long-term planning and controller learning, thereby reducing the effects of model errors when learning new skills.
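
The idea of folding model uncertainty into control can be shown in a few lines: among candidate actions, penalize those whose predicted outcome the learned model is unsure about, so planning avoids regions where the model is likely wrong. The costs and variances below are illustrative numbers, not from the cited work.

```python
# Uncertainty-aware action selection sketch: minimize predicted cost plus
# a penalty proportional to the model's variance about that prediction.

def choose_action(candidates, risk_weight=1.0):
    """candidates: (action, predicted_cost, model_variance) triples."""
    def score(c):
        _, cost, variance = c
        return cost + risk_weight * variance
    return min(candidates, key=score)[0]

candidates = [
    ("fast_path", 1.0, 4.0),   # cheap, but the model is very uncertain
    ("safe_path", 2.0, 0.1),   # slightly costlier, well-understood
]
print(choose_action(candidates))                  # -> safe_path
print(choose_action(candidates, risk_weight=0))   # ignore uncertainty -> fast_path
```

Methods like the one described above go further, propagating uncertainty through multi-step predictions, but the trade-off between expected cost and model confidence is the same.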

4. Assistive and medical technologies

An assistive robot is a device that can sense, process sensory information, and perform actions that benefit people with disabilities and the elderly (although smart assistive technologies also suit the general population, for example driver-assistance tools). Movement-therapy robots provide diagnostic or therapeutic benefits. Both are (unfortunately) still largely confined to the laboratory, as they remain too costly for most hospitals in the US and abroad.

Early examples of assistive technologies include DeVAR, a desktop vocational assistant robot developed in the early 1990s by Stanford University and the Palo Alto Veterans Affairs Rehabilitation Research and Development center. More recent examples of machine-learning-based robotic assistive technologies are in development, including assistive machines with greater autonomy, such as the CNIC robot arm (developed at Northwestern University), which perceives the world through a Kinect sensor. These efforts are more complex: smarter assistive robots can adapt to user needs more easily, but also require partial autonomy (that is, shared control between the robot and the human).

In the medical field, robot learning methods are advancing rapidly, although they are not yet available in many medical institutions. Through Cal-MR, the Center for Automation and Learning for Medical Robotics, a network of researchers and physicians at multiple universities created the Smart Tissue Autonomous Robot (STAR). Through autonomous learning and innovations in 3D sensing technology, STAR is able to suture "pig intestines" (used in place of human tissue) with better precision and reliability than the best human surgeons. The researchers and doctors emphasize that STAR will not replace surgeons, who will continue to handle emergencies for the foreseeable future, but it offers significant benefits in performing similar types of delicate surgery.

5. Multi-agent learning

Coordination and negotiation are key components of multi-agent learning, which involves machine-learning-based robots (or agents; the technique has been widely applied to games) that adapt to a shifting landscape of other robots/agents and find "equilibrium strategies". Examples of multi-agent learning approaches include no-regret learning tools, which involve weighted algorithms that "boost" learning outcomes in multi-agent planning, and learning in market-based, distributed control systems.
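
The weighted algorithms behind no-regret learning can be sketched with a multiplicative-weights update: an agent keeps a weight per strategy and shrinks the weights of strategies that incur loss, gradually shifting toward better responses. The strategy names and loss numbers below are illustrative.

```python
# Multiplicative-weights sketch, the style of weighted algorithm used in
# no-regret learning: w_i <- w_i * (1 - eta * loss_i), then renormalize.

def update_weights(weights, losses, eta=0.5):
    new = {s: w * (1 - eta * losses[s]) for s, w in weights.items()}
    total = sum(new.values())
    return {s: w / total for s, w in new.items()}

weights = {"cooperate": 0.5, "defect": 0.5}
# Repeatedly observe that "defect" draws retaliation (high loss).
for _ in range(10):
    weights = update_weights(weights, {"cooperate": 0.1, "defect": 0.9})

print(max(weights, key=weights.get))  # the agent shifts toward "cooperate"
```

Run against an adapting opponent rather than fixed losses, the same update provably bounds the agent's regret, which is what makes it attractive for multi-agent planning.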

A more specific example is an algorithm for distributed agents or robots, created by researchers at MIT's Laboratory for Information and Decision Systems in late 2014. Robots collaborated to build a better, more inclusive learning model than any single robot could (each processing smaller chunks of information that were then combined), constructing a knowledge base for exploring buildings and their room layouts.

Each robot builds its own catalog and combines it with the data sets of other robots, and the distributed algorithm outperformed the standard algorithm in creating this knowledge base. Although not a perfect system, this machine learning approach allows robots to compare catalogs or data sets, reinforce mutual observations and correct each other's omissions or over-generalizations, and will undoubtedly play a near-term role in several robotic applications, including teams of autonomous land and airborne vehicles.
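
The catalog-merging step can be sketched as a union of per-room observation sets: omissions in one robot's catalog are filled in by the others. The room and feature labels below are invented for illustration.

```python
# Catalog-merging sketch: robots union their room-feature catalogs so the
# combined knowledge base covers what any one robot missed.

def merge_catalogs(*catalogs):
    """Union per-room feature sets observed by different robots."""
    merged = {}
    for catalog in catalogs:
        for room, features in catalog.items():
            merged.setdefault(room, set()).update(features)
    return merged

robot_a = {"kitchen": {"sink", "stove"}, "office": {"desk"}}
robot_b = {"kitchen": {"fridge"}, "hallway": {"door"}}

merged = merge_catalogs(robot_a, robot_b)
print(sorted(merged["kitchen"]))  # ['fridge', 'sink', 'stove']
print(sorted(merged))             # ['hallway', 'kitchen', 'office']
```

The MIT work goes well beyond a plain union (the robots reconcile probabilistic models, not just sets), but the payoff is the same: the merged model is more complete than any individual one.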
