The year is 2015, and MEMS (Microelectromechanical Systems) technology is a growing field that requires more automated tools to lower the cost of production. The current industry standard, tele-operated 3D manipulation of MEMS parts to create new devices, is a labor-intensive and expensive process. Using computer vision as the main feedback tool to recognize parts on a chip, it is possible to program a closed-loop system that instructs a computer to pick and assemble parts on the chip without the aid of a user. To make this process viable, new chip designs, robotic systems, and computer vision algorithms working alongside motion controllers were developed. This work shows in detail the hardware, software, and processes in place to make it possible.
One of the main and recent problems in Malaysian hospitals is the lack of surgeons and specialists, especially in rural areas. The shortage of specialised surgeons in such regions, particularly in the niche of orthopaedics, causes more fatalities and amputations due to time constraints in attending to patients. Broken limbs due to accidents can be treated and recovered from, but severed blood vessels result in blood loss and lead to amputation or, even worse, fatalities. A mobile robotic system known as OTOROB is designed and developed to allow orthopaedic surgeons to be virtually present in such areas to attend to patients. The developed mobile robotic platform requires a flexible robotic arm vision system that can be controlled remotely by the surgeon. Virtual presence alone is insufficient if a clear view cannot be obtained. Thus, a flexible robotic arm with a vision system as its end effector is designed, developed, and tested in real time. Fuzzy logic is implemented in the control system to provide safety for the robotic arm articulation. The safety systems of the robotic arm consist of a Danger Monitoring System (DMS), an Obstacle Avoidance System (OAS), and a Fail Safe and Auto Recovery System (FSARS).
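The abstract does not give the OAS rule base; the sketch below is a minimal, hypothetical illustration of how a fuzzy controller of this kind could map obstacle distance to a safe arm speed. The membership ranges and output singletons are invented for the example, not taken from OTOROB:

```python
def ramp_up(x, a, b):
    # Membership rises linearly from 0 at a to 1 at b
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def ramp_down(x, a, b):
    return 1.0 - ramp_up(x, a, b)

def tri(x, a, b, c):
    # Triangular membership: rises a -> b, falls b -> c
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def arm_speed(dist_cm):
    # Fuzzify obstacle distance into NEAR / MEDIUM / FAR sets
    near = ramp_down(dist_cm, 10, 30)
    medium = tri(dist_cm, 20, 45, 70)
    far = ramp_up(dist_cm, 50, 80)
    # Rule base: NEAR -> stop (0.0), MEDIUM -> slow (0.3), FAR -> full (1.0)
    total = near + medium + far
    if total == 0.0:
        return 0.0  # no rule fires: default to the safe action (stop)
    # Defuzzify by the weighted average of the output singletons
    return (near * 0.0 + medium * 0.3 + far * 1.0) / total
```

Because memberships overlap, the commanded speed degrades smoothly as an obstacle approaches instead of switching abruptly, which is the usual motivation for fuzzy control in a safety layer.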
"Manipulation" refers to a variety of physical changes made to the world around us. Mechanics of Robotic Manipulation addresses one form of robotic manipulation, moving objects, and the various processes involved: grasping, carrying, pushing, dropping, throwing, and so on. Unlike most books on the subject, it focuses on manipulation rather than manipulators. This attention to processes rather than devices allows a more fundamental approach, leading to results that apply to a broad range of devices, not just robotic arms. The book draws both on classical mechanics and on classical planning, which introduces the element of imperfect information. The book does not propose a specific solution to the problem of manipulation, but rather outlines a path of inquiry.
Vision-based hand tracking and gesture recognition is an extremely challenging problem due to the intricate nature of hand gestures, which is one reason that available computer vision algorithms are computationally complex. In this research work a new methodology for 3D human hand gesture detection and recognition is proposed, which can be used for natural and intuitive human-computer interaction and other robotic systems. The proposed method is based on morphology approaches to solve the problem of human hand tracking and gesture recognition of 3D objects from a single silhouette image. This new method was applied and tested on the simulated Manipulated Robotic System (UniMAP Robot Manipulator Simulation System), allowing the robotic system to act as an intelligent system that tracks a human hand in 3D space and estimates its orientation and position in real time, with the goal of ultimately using the algorithm with a robotic spherical wrist system. During the experiments there was no need for continuous camera calibration, and the experimental results show that the proposed method is robust, unlike other approaches that use costly learning functions or generalization methods.
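The thesis's exact morphology pipeline is not spelled out in the abstract; as a toy illustration, binary erosion, dilation, and their composition into an opening (here with a hypothetical 3×3 cross structuring element) are the kind of operations used to clean a hand silhouette before further analysis:

```python
CROSS = ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))  # 3x3 cross structuring element

def dilate(img):
    # Set every pixel in the structuring element around each foreground pixel
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if img[y][x]:
                for dy, dx in CROSS:
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        out[yy][xx] = 1
    return out

def erode(img):
    # Keep a pixel only if the whole structuring element fits inside the shape
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = 1 if all(
                0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                for dy, dx in CROSS) else 0
    return out

def opening(img):
    # Morphological opening: erosion then dilation; removes speckle noise
    # smaller than the structuring element while keeping larger regions
    return dilate(erode(img))
```

On a silhouette image, an opening like this removes isolated noise pixels while leaving the hand blob intact for subsequent contour or orientation analysis.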
Suturing in laparoscopic surgery is a challenging and time-consuming task that presents haptic, motor, and spatial constraints for the surgeon. As a result, there is variability (as large as 50%) in surgical outcome when performing basic suturing tasks such as knot tying, stitching, and tissue dissection. The goal of this thesis is to develop a standardized, proof-of-concept, automated robotic suturing system that performs a side-to-side anastomosis with image guidance and dynamic trajectory control. A passive alignment tool is created for rigidly constraining needle pose, and robust computer vision algorithms are used to track surface features and the suture needle. A robotic system integrates these components to autonomously pass a curved suture needle through sequential loops in a tissue pad phantom.
Technology based on intelligence is the future of science. A good intelligent system can be built with smart sensing and a good knowledge base. Over the last decade, face recognition has received considerable attention. Owing to its natural expressive behavior, the face is the most meaningful part of the human body, and it can be observed easily. A face localization method should be simple, efficient, and accurate. Different face localization techniques are available at present.
Micro-electromechanical systems (MEMS) are micromachines that allow computation, sensing, mobility, and manipulation at small scales, down to the size of microns. During the past decade, MEMS technology has allowed the development of many advanced devices that have found their way to the market. Ever-increasing needs for MEMS are fueled by the exponential growth of markets such as cell phones, gaming, communications, and military applications. Both quality and price requirements put stringent specifications on new MEMS devices. Feedback control techniques facilitate reliably meeting these specifications. This book provides the reader with control strategies and design techniques in a collection of practical examples, covering topics such as dynamical modeling of MEMS devices, dynamic control for performance improvement, and improved MEMS design based on control system analysis.
A wide selection of stereo matching algorithms has been evaluated for the purpose of creating a collision avoidance module. The algorithms varied greatly in accuracy, and only a few were fast enough for further use. Two computer vision libraries, OpenCV and MRF, were evaluated for their implementations of various stereo matching algorithms. In addition, OpenCV provides a wide variety of functions for creating sophisticated computer vision programs and was evaluated on this basis as well. Two low-power platforms, the Pandaboard and the BeagleBone Black, were evaluated as viable platforms on which to develop a computer vision module, and were compared against an Intel platform as a reference. Based on the results gathered, a fast but simple collision detector could be made using the simple block matching algorithm found in OpenCV, and a more advanced detector could be built using semi-global stereo matching; these were the only implementations that were fast enough. The other energy minimization algorithms (graph cuts and belief propagation) did produce good disparity maps, but were too slow for any realistic collision detector.
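The block matching principle behind OpenCV's fast stereo implementation can be illustrated on a single scanline: for each pixel in the left image row, slide a small window along the right row and pick the horizontal shift (disparity) with the lowest sum of absolute differences. This pure-Python sketch is illustrative only, not the library's implementation, and the window size and disparity range are arbitrary:

```python
def disparity_scanline(left, right, window=2, max_disp=8):
    # SAD block matching along one rectified scanline.
    # left, right: lists of grey-level values for corresponding image rows.
    n = len(left)
    disps = []
    for x in range(n):
        best_d, best_cost = 0, float('inf')
        # A point at column x in the left image appears at x - d in the right
        for d in range(min(max_disp, x) + 1):
            cost = 0
            for k in range(-window, window + 1):
                xl, xr = x + k, x - d + k
                if 0 <= xl < n and 0 <= xr < n:
                    cost += abs(left[xl] - right[xr])
            if cost < best_cost:
                best_cost, best_d = cost, d
        disps.append(best_d)
    return disps
```

Large disparities correspond to nearby objects, which is exactly what a collision detector thresholds on; the semi-global and energy-minimization methods mentioned above add smoothness constraints across pixels at a much higher computational cost.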
This research report brings together present trends in advanced welding robots, robotic welding, artificial intelligence, and automatic welding. It includes important technical subjects on welding robots such as intelligent technologies and systems, and design and analysis. Modeling, identification, and control of the welding process are presented, as well as knowledge-based systems for welding and tele-robotic welding. Other topics covered are sensing and data fusion, computer vision, and virtual-reality applications of the welding process. An overview of intelligent and flexible manufacturing systems is given in addition to artificial intelligence technologies for industrial processes.
Vision is a very important sense to humans and animals, and it is mimicked for robots with cameras. However, conventional cameras have a small field of view, which means that important difficulties (such as motion ambiguity, occlusion, and lack of information) may be encountered, particularly in robotics. This book is therefore dedicated to omnidirectional vision systems, especially catadioptric cameras, which allow these difficulties to be overcome. A catadioptric camera is a special type of omnidirectional system: it is composed of a camera and a convex mirror, permitting a great enlargement of the field of view and thus acquiring much more information from the environment. The objective of this book is to study the role of omnidirectional vision in robotic applications such as low-level feature extraction, target tracking, motion estimation, and 3D reconstruction, while addressing the fundamental issues pertaining to omnidirectional vision. The book is accompanied by “OmniToolbox v2.1”, a Matlab toolbox dedicated to omnidirectional vision (freely available on the authors' website), so that readers can start working with omnidirectional images directly.
Computer vision-based gender detection from facial images is a challenging and important task for computer vision researchers. Automatic gender detection from a face image has potential applications in visual surveillance and human-computer interaction (HCI) systems. Human faces provide important visual information for gender perception. The system described in this book can automatically detect a face in an input image, and the detected facial area is taken as the region of interest (ROI). Image processing techniques and algorithms are then applied to that ROI to identify the gender of the face image. The experimental results described in Chapter 4 of this book show that the accuracy of the system is more than 80%.
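The specific image processing applied to the ROI is detailed in the book, not the abstract; the sketch below only illustrates the generic pipeline step named there: crop the detected face box out of the image and reduce it to a fixed-length feature vector (here a hypothetical grey-level histogram) that a trained gender classifier could consume. The function names and the histogram feature are assumptions for illustration:

```python
def crop_roi(img, box):
    # box = (x, y, w, h) as a face detector would typically return it;
    # img is a 2-D list of grey-level values in 0..255
    x, y, w, h = box
    return [row[x:x + w] for row in img[y:y + h]]

def intensity_histogram(roi, bins=8):
    # Normalised grey-level histogram of the ROI; a trained classifier
    # would map a feature vector like this to a gender label
    hist = [0] * bins
    count = 0
    for row in roi:
        for v in row:
            hist[min(v * bins // 256, bins - 1)] += 1
            count += 1
    return [c / count for c in hist]
```

Normalising the histogram makes the feature independent of the face box size, so faces detected at different scales yield comparable vectors.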
Computer and machine vision involve the automatic extraction, manipulation, analysis, and classification of images or image sequences, usually within special- or general-purpose computing systems. The purpose is to obtain useful information about the world with a view to carrying out some task. This book concerns image analysis, image classification, histograms, feature extraction, machine vision techniques, N-tuple operators, segmentation, and pattern recognition.
Presented in this thesis is the design and development of a novel real-time HCI system built on a unique combination of a pair of data gloves with fibre-optic curvature sensors to acquire finger joint angles, a hybrid tracking system based on inertial and ultrasound sensors to capture hand position and orientation, and a stereoscopic display to provide immersive visual feedback. The potential and effectiveness of the proposed system are demonstrated through a number of hand-gesture-based applications, namely virtual object manipulation and visualisation, direct sign writing, and finger spelling. For virtual object manipulation and visualisation, the system is shown to allow a user to manipulate and visualise virtual objects in 3D space using natural hand gestures in real time. For direct sign writing, the system is shown to be able to display immediately the corresponding SignWriting symbols signed by a user using a range of complex hand gestures. Furthermore, for finger spelling, the system is shown to be able to recognise, in real time, five vowels signed by two hands using British Sign Language.
Vision is perhaps the most important sense for humans. It consists of processing images of scenes so as to make explicit what needs to be known about them. Among the various complex tasks accomplished by the human visual system, representing and understanding the content of an observed scene are fundamental; these tasks, indeed, allow humans to interpret their surroundings. Computer vision aims to build robust and reusable vision systems that act by taking into account the visual content of images and videos. Just as learning is an essential component of biological visual systems, the design of machine vision systems that learn and adapt represents an important challenge in modern computer vision research. This book focuses on some key ingredients useful for representing images for scene recognition, image retrieval, and content-based learning.