
The fundamentals of image processing and machine vision are covered at the beginning of this manual, followed by guidance on the correct selection of cameras, frame grabbers and vision appliances. Finally, you will be shown how to select and integrate all the varying components into a professional, working system. Despite the advances in technology, don't expect your machine vision system to have the versatility and brilliance of a human...yet. But if you apply the key concepts in this manual to your machine vision application, you should have a reliable and effective solution.


Chapter 1: Overview


 

Machine Vision is an evolving technology with many applications in the manufacturing industry for improving quality and safety, as well as enhancing process efficiency.  Machine Vision specializes in the capture, digitization, manipulation and analysis of images.  In this initial chapter we will discuss the basic principles involved in Machine Vision, as well as the historical developments that have led to its growth and use.

 

Learning objectives

  • To understand the importance of vision to life
  • To appreciate the importance of vision in the manufacturing industry
  • To define Machine Vision
  • To present a basic block diagram of a Machine Vision system
  • To identify the technologies that have contributed to Machine Vision
  • To compare Machine Vision and Computer Vision
  • To understand the importance of Machine Vision in the manufacturing industry
  • To compare Machine Vision and Human Vision
  • To survey the application areas for Machine Vision systems

 

1.1       Importance of vision to life 

Sight, or vision, is the most valuable of all the senses to human beings.  Vision has always been, and remains, the primary sensing facility that does not rely on physical contact.  The ability to see allows a living, mobile organism to hunt for food, search for a mate, and look for a place to rest safely from predators and other potential dangers.  All this is achieved with a minimum expenditure of energy.  When one goes to market, one selects food items such as fruits and vegetables, relying on their appearance to assess their quality.  We visually inspect our clothes, cars and homes for dirt, grime and stains.  We decorate our homes with wallpaper, curtains and wall frames.  We select flowers and ornaments to create a visual environment that suggests tranquility, elegance or excitement, whichever we think most appropriate.  We use our eyes to move around safely.  Many of the games we play rely heavily or entirely on our vision.  We conduct commerce almost exclusively by exchanging messages written on pieces of paper or displayed on a computer screen.  Teaching, learning, earning, cooking, and purchasing: each activity involves the use of our eyes and our vision.  We do so many things with our eyes during the light of day that the threat of losing our eyesight is dreaded almost as much as death itself.  Vision is inherently clean, safe and hygienic, as it does not rely on physical contact between the observer and the specimen.  Vision is also very versatile: human beings are able to detect subtle changes of shape, texture, shade and color.  A human police inspector or crime scene investigator can examine a highly complex and ambiguous situation, and can almost always make appropriate decisions based on the visual evidence.

Vision remains the most dominant human sense, and visual inspection is an important activity in the manufacturing industry.  Our subject mainly involves the design of an artificial vision system that attempts to emulate one particular human activity: the inspection and manipulation of highly variable natural objects and human artifacts.  As stated above, animal and human vision is fast, clear, safe, hygienic and versatile.  Our aim is to design a system which possesses all of these advantages: a machine that can sense its environment visually in order to perform some useful task.  Research efforts have been aimed at building sophisticated machines that can accomplish automated inspection, assembly, manufacture and other tasks by sensing the environment optically: this is called MACHINE VISION.  Machine Vision has evolved from an exotic technology, created out of the excellence of the human brain, into a technology of considerable practical and commercial value.  It can provide excellent assistance over a wide area of the manufacturing industry.  For the purpose of understanding and further reference, Machine Vision can be explained as follows:

 

Machine Vision:

Machine Vision deals mainly with applications involving highly variable objects.  It is concerned primarily with the automation of the visual inspection and handling of natural materials and products which are characterized by ill-defined shape, size, color or texture.  Food manufacturing forms an important area of application where natural materials and other highly variable objects are processed; other such areas include agriculture, fishing, horticulture, mining, catering, footwear, pharmaceuticals and clothing.  Many engineering manufacturing tasks also require the handling and processing of highly variable objects.  The electronics industry deals with high-precision components such as solder joints, flexible cables and flying-lead wire connectors (resistors, capacitors, inductors, diodes etc.), and for such precise applications Machine Vision can offer accuracy.  Clearly, we create certain difficulties when we apply Machine Vision where highly variable objects must be handled or where high-precision vision is required.  Machine Vision has been applied in industry to tasks such as inspecting close-tolerance engineering artifacts during, or shortly after, manufacture.  This visual sensing enables machines to perform a wide variety of tasks for the manufacturing industry, including inspecting, coating, grading, sorting, matching, locating, guiding, recognizing, identifying, reading, classifying, verifying, measuring, controlling, calibrating and monitoring.  Vision can be used to examine raw materials, feed stocks, tools, partially machined and finished products, coatings, labels, packaging, transport systems, waste materials and effluents.

When we apply Machine Vision techniques to the inspection and handling of manufactured goods, we can be confident with regard to results and expectations. Certainly this is true for high-precision objects, such as those from the plastic molding industry.  The same confidence applies when we deal with products that are mass produced and need to be similar to items of the same type, such as in the pharmaceutical or food industry.  Natural products often exhibit a degree of similarity which is harder to define objectively than the similarity found in manufactured goods: nature's concept of similarity is looser than that used in the manufacturing industry.  Similarity is handled by defining the "class", which captures the similarity among objects of one type and the dissimilarity between objects of different types.  Broadening the concept of "class" is possible with the help of allied subjects such as Pattern Recognition, Artificial Intelligence, Cybernetics and Robotics, all of which have achieved their greatest success with close-tolerance products.  With the influence of all these advances, Machine Vision has reached a clear and certain level of maturity. 

From the above discussion it becomes apparent that it is difficult to define Machine Vision satisfactorily, since it is concerned with a diverse range of technologies and applications. The form a vision system takes depends entirely upon the application, but for the sake of reference we can define Machine Vision as follows:

Machine Vision:  Machine Vision is concerned with the engineering of integrated mechanical-optical-electronic-software systems for examining natural objects and materials, human artifacts and manufacturing processes. It does so in order to detect defects, improve quality and maintain operating efficiency. Machine Vision is also concerned with the safety of both products and processes, and is used to control the machines employed in manufacturing.

Implicitly, the above definition makes clear that Machine Vision is a multidisciplinary undertaking, necessarily involving designers in mechanical, optical, electrical, electronic (analog and digital) and software engineering, as well as mathematical analysis.  Integrating these various technologies to create a harmonious, unified system is of paramount importance to Machine Vision.  To make the machine highly efficient for vision, designers keep the following points foremost in their minds:

  • Our aim is to design Machine Vision, not Computer Vision.  It should be noted that Machine Vision is the result of computer engineering combined with other areas, such as mechanical, optical and electrical engineering. Although the two terms create fundamentally different images in the human mind, the areas are clearly related to each other.
  • Design ideas should be focused on industrial application.
  • Machine Vision is a branch of systems engineering; it is not a science.
  • Machine Vision is more concerned with verification than with identification. For example, Machine Vision answers questions such as "Is it similar?" rather than questions such as "What is it?"
  • The essence of designing a Machine Vision system is to keep Human Vision foremost in mind:  "Which components play important roles in human vision?"; "How does vision create an image and make the task of verification and identification easier?"
  • A successful Machine Vision system almost always requires the integration and harmonization of several different areas of engineering technology.

 

Keeping these points in mind, we can understand the concept of Machine Vision; but to understand Machine Vision from an engineering point of view, one must understand every step that takes place after capturing the image.  For that, we can consider the basic block diagram of the Machine Vision system:

 

Figure 1.1

Basic Block diagram for the Machine Vision system

 

From the diagram we can see how many fields are interrelated in Machine Vision.  Observing the blocks, it is clear that designers should have a knowledge of electronic, mechanical, optical and software engineering.  Along with these, there are many other important fields involved, each with its own research importance.  The first important decision is where to mount the camera.  If an application requires a mobile camera, then a robot may be used: obviously, in this instance, one must know about robots and robotics. After capturing the image of the object, feedback must be given to the robot or to the other components; for this task, Cybernetics is required.  Feedback is necessary in every instance, and in order to extract feedback, the captured image has to be processed and identified; this is where Pattern Recognition comes into the picture.  This recognition is performed with the help of Artificial Intelligence.  For image and text display, Computer Vision as well as some software is required.  It is therefore preferable to have some knowledge of all of these fields.

 

1.2       Role Played By Different Areas In Machine Vision

From the above figure it is clear that Machine Vision is a specialized research area of systems engineering. As well as Computer Vision, Machine Vision incorporates the essence of Cybernetics, Robotics, Pattern Recognition and Artificial Intelligence.  In combination, these fields form the foundation of Machine Vision.

Cybernetics:

Cybernetics takes its name from the Greek word for steersman, but the word can carry several other meanings, including pilot, governor, rudder and, in a sense, government.  Its aim is to derive common functional models for industrial systems, in the realization that the science of an observed system cannot be separated from the observing system: the field grew out of interest in both the observed system and the observing system.  Cybernetics describes language, art, performance or intelligence in a convenient way, rather than in a strictly scientific way.  Implementation may involve software or hardware. For our purposes, we can define Cybernetics in the following terms:

Cybernetics is the study of communication and control, typically involving regulatory feedback, whether in machines, humans or animals.

Efforts are made to apply this regulatory feedback system of communication and control to industry.  In other words, Cybernetics involves machines communicating with other machines, or with humans, and implies that machines are able to learn.  Automation involves machines acting on received orders and on selected objectives; it also covers industrial progress in which machines are able to control themselves as well as communicate with other machines or with human beings.

Theoretically, Cybernetics is based on variety, circularity, process and observation. Variety forms the foundation of information, communication and control theories, and emphasizes multiplicity, alternatives, differences, choices, networks and intelligence rather than force and singular necessity.  Circularity is involved wherever feedback is required; in more recent times, feedback in this broad sense has been called recursion and iteration in computing, involving self-referential organization.  In a general sense, feedback is an autonomous system of production, and it is this circular form which enables Cybernetics to explain systems from within, without recourse to higher principles and without expressing preferences.  All Cybernetics theories involve process and change, arising from the notion of information as the difference between two states of uncertainty.  A main feature of Cybernetics is that it explains such systems in terms of circular causality and feedback loops, while taking into account regulation processes and equilibrium conditions.  Observation includes the decision-making tasks for the given system; it is, in essence, information processing and computing.

Early contributions to Cybernetics were mainly technological, and gave rise to feedback control devices, communication technology, automation of production processes and computers.  Interest soon shifted to the numerous sciences involving man, applying Cybernetics to processes of cognition and to practical pursuits such as the development of information and decision systems, management, government, and the understanding of complex forms of communication and computer networks.  These practical applications of Cybernetics helped to form the idea of Machine Vision, and all four basic pillars (variety, circularity, process and observation) are thoroughly involved in Machine Vision as well. 

 

Artificial Intelligence:

Artificial Intelligence is predicated on the presumption that knowledge is a commodity that can be stored inside a machine, and that the application of such stored knowledge to the real world constitutes intelligence.  Artificial Intelligence is usually defined as the science of making machines do things that would require intelligence if done by a human.  Artificial Intelligence is said to have succeeded, within limited domains, when machines are made intelligent with the help of computers; it is essentially a process whereby computers are trained to become intelligent.  Artificial Intelligence is an attempt to discover and describe aspects of human intelligence that can be simulated by machines.  Early machines were able to do things such as play games, identify visual or auditory patterns and prove mathematical theorems.  The extent to which a machine can do these things is still limited, i.e. the intelligence of a machine is still limited.  Artificial Intelligence is an attempt to develop a mathematical theory describing the abilities and actions of things exhibiting intelligent behavior, a theory which can then serve as the basis for an intelligent machine. 

It is always best to begin with questions such as: "What is intelligence?"; "How can machines and their behavior be described mathematically?"; "In which way can a machine be made intelligent?"; and finally, "When can a machine be called an intelligent machine?"

Intelligence is the ability to adapt one's behavior and apply it in a form suitable to different circumstances. Human intelligence, for example, is not a single ability or cognitive process; it should be thought of as an array of separate components.  Artificial Intelligence mainly focuses on learning, reasoning, problem solving, perception and language understanding. 

Learning can be described in a number of ways, one of which is trial and error.  Learning of this kind is relatively easy to implement on a computer; more challenging is the problem of implementing what is called generalization.  Reasoning involves drawing inferences appropriate to the situation at hand; a program cannot be said to reason simply because it has the ability to draw inferences.  In perception, the environment is scanned by various sense organs, which may be real or artificial, and processes internal to the perceiver analyze the scene into objects, their features and their relationships.  Language is a system of signs having meaning by convention, and language understanding is the ability to interpret such signs.  Problem solving is the heart of Artificial Intelligence.

Problem solving relies on a representation which can easily be manipulated by a computer.  Such a representation reduces the problem to a model within the data structures of the computer: a problem is represented by constructing a model that is analogous to the original problem, and the idea is to solve the problem by solving its model representation.  Model representation is used because it reduces the original problem to a set of states that are easier to understand and manipulate.  As an example, consider the problem of explaining the function of a diode to a non-technical person. Even if we start from solid-state electronics, we may not be sure of their comprehension after explaining the whole operation; however, if we describe the diode as a switch, that person can easily understand the same operation.  Problem solving in Artificial Intelligence works in the same way.  The problem is represented by workable states, and these states are then manipulated by operators according to a control strategy.  There are two methods of problem representation: state space representation (a graph or search tree) and problem reduction representation.  In state space representation, working from the initial state towards a final state is called forward reasoning; solving a problem by solving sub-problems, i.e. working from the final state back to the initial one, is called backward reasoning.  The search techniques used can be fitted into two categories: blind searches and heuristic searches.
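As a minimal sketch of blind (uninformed) state space search, the following Python fragment performs a breadth-first search from an initial state to a goal state. The function names and the toy water-jug problem used to exercise it are our own illustrative assumptions, not taken from the text.

from collections import deque

def breadth_first_search(initial, goal_test, successors):
    # Blind (uninformed) search over a state space.
    # initial    -- the starting state
    # goal_test  -- function returning True for a goal state
    # successors -- function returning the states reachable in one move
    # Returns the list of states from the initial state to a goal, or None.
    frontier = deque([[initial]])     # paths still waiting to be explored
    visited = {initial}               # states already generated
    while frontier:
        path = frontier.popleft()     # FIFO order gives breadth-first behaviour
        state = path[-1]
        if goal_test(state):
            return path               # forward reasoning has reached the goal
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Toy state space: measure exactly 2 litres using a 4-litre and a 3-litre jug.
def jug_successors(state):
    a, b = state                      # current contents of the two jugs
    return {(4, b), (a, 3), (0, b), (a, 0),                 # fill or empty a jug
            (a - min(a, 3 - b), b + min(a, 3 - b)),          # pour jug A into jug B
            (a + min(b, 4 - a), b - min(b, 4 - a))}          # pour jug B into jug A

print(breadth_first_search((0, 0), lambda s: s[0] == 2, jug_successors))

A heuristic search would differ only in replacing the first-in, first-out frontier with one ordered by an estimate of the remaining cost to the goal.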

From the above discussion it is clear that Artificial Intelligence has a strong science-fiction connotation.  It forms a vital branch of computer science, dealing with intelligent behavior as well as learning and adaptation in machines.  Artificial Intelligence is concerned with producing machines that automate tasks requiring intelligent behavior.  

At this stage one question must come to mind: "Are Cybernetics and Artificial Intelligence not the same thing?" Another valid question: "Is one about robots and the other about computers?"  It is not as straightforward as that.  Artificial Intelligence uses computers to strive towards the goal of machine intelligence, and considers the implementation to be the important result.  Cybernetics uses the limits of how we know and what we know to understand the constraints of problems, and considers the most powerful understanding of information to be the important result. 

Both Artificial Intelligence and Cybernetics have, in the past, fallen from the headlines, only to return when the topic of machine intelligence became topical again.  Cybernetics had a head start compared with Artificial Intelligence, but for some time Artificial Intelligence dominated Cybernetics, until repeated failures to achieve intelligent machines finally caught up with it.  These difficulties in Artificial Intelligence led to a renewed search for solutions that mirror the earlier approaches of Cybernetics.

 

Robotics:

Robotics brings together several branches of engineering.  Robotics is the study of the technology associated with the design, fabrication, theory and application of robots: the art, knowledge base and know-how of designing, applying and using robots in human endeavors.  A robotic system consists not only of robots, but also of the other devices and systems used together with the robots to perform the necessary tasks.  Robots may be used in manufacturing environments, in underwater and space exploration, for aiding the disabled, or even for fun.  In any capacity, robots can be useful, but they need to be programmed and controlled.  Robotics is an interdisciplinary subject that benefits from mechanical engineering, electrical and electronic engineering, computer science, biology and many other disciplines.

The word robot was originally used to describe an intelligent mechanical device in the form of a human; more generally, a robot is a mechanical device that can perform physical tasks.  A robot has a number of links attached serially to each other by joints, where each joint can be moved by some type of actuator.  A robot may perform a task under the direct control of a human, or it can be controlled by a computer which is pre-programmed for the specific task; it can be re-purposed simply by changing the program of the computer.  The term robot can refer to a wide range of machines, the common feature being that they are all capable of movement and can be used to perform physical tasks.  Robots are available in many forms, including the humanoid robot, which mimics the human form and methods of movement, and the industrial robot, whose appearance is dictated by the function it is to perform.  Robots can be grouped into three classes: mobile robots, industrial robots and self-reconfigurable robots, which possess the ability to configure themselves to the task at hand.
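As a minimal sketch of the link-and-joint idea described above, the following Python fragment computes where the tip of a two-link planar arm ends up for a given pair of joint angles (its forward kinematics). The link lengths and angles are arbitrary illustrative values, not taken from the text.

import math

def forward_kinematics(theta1, theta2, l1=0.4, l2=0.3):
    # Tip position of a two-link planar arm.
    # theta1, theta2 -- joint angles in radians, as set by the actuators
    # l1, l2         -- link lengths in metres
    # Returns the (x, y) position of the end effector.
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Example: shoulder joint at 30 degrees, elbow joint at 45 degrees.
print(forward_kinematics(math.radians(30), math.radians(45)))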

A manipulator or rover, end effector, actuators, sensors, controller, processor and software are the components of every robot.  The main components are the sensors, which detect the state of the environment; the actuators, which modify the state of the environment; and the control system, which drives the actuators based on the environment as reported by the sensors.  The appearance and capabilities of robots vary vastly, but all robots share the feature of a mechanical, movable structure under some form of control.  Various tasks need to be completed when designing a robot.  First, one must select an appropriate material for the body, such as wood, metal or plastic.  Then attention must be paid to the mechanics: mounting the wheels on axles and connecting them to the motors while keeping the body in balance.  The next step involves the selection of the electronics or electrical components, as power is needed for the motors and the sensors must be connected to the microcontroller.  Finally, we need software to interpret the sensors and to drive the robot.

All types of robots are growing in complexity and their use in industry and in the home is also increasing.  The main use for industrial robots has so far been in the automation of mass production industries, where the same definable task needs to be performed repeatedly in exactly the same fashion.  Robots are very suited to such tasks, because the task can be accurately defined and must be performed in the same way every time, with little need for feedback to control the exact process being performed.  Robots are also useful in environments which are unpleasant or dangerous to work in: for example, in areas where humans need protection (e.g. cleaning of acidic waste), bomb disposal, in space, in mining, or working under water.

Robot designers should select the ideal sensor for the application.  The camera is the final sensing element of every Machine Vision system, and its external controller interacts with the robot for guidance, control and inspection.  Machine Vision can therefore be considered an extension of Robotics, or of robot sensing.  As illustrated in the basic block diagram of the Machine Vision system, the camera is the most important part of the system.  For the camera to work faithfully, pixel-by-pixel scanning is required; this scanning may be performed with a single (line) array or with a two-dimensional array, depending upon the desired robotic application.

 

Pattern Recognition:

Every living organism possesses the quality of pattern recognition, though the process of recognition differs between organisms.  Humans are able to recognize other humans by sight, voice or other specific habits. Dogs and other animals can recognize each other by smell, even from long distances – a skill which eludes humans!  A blind person recognizes items by hearing, smell and the sense of touch. Recognition is not the only method of identification of objects: for example, if we recall an object or instance from the past and relate it to a similar object in the present, this association is also a recognition process performed by the brain.  At this juncture, one may feel the need for a definition of Pattern Recognition.  Pattern Recognition is a field within the area of machine learning: the scientific discipline whose aim is the classification of objects into a number of categories or classes.  It can be defined as the act of taking raw data and taking an action based on the category of the data.  The object which is inspected for "recognition" is called a pattern.  Depending upon the requirements of the given application, these objects can be images, signal waveforms, or any measurement or observation data; they can be sets of points defining multidimensional spaces which need to be classified.  Generally, we refer to these objects as patterns.

Pattern Recognition strives to classify data, based either on prior knowledge or on statistical information extracted from the patterns.  In the main, the Pattern Recognition problem is one of classification between differences in a population; take, for example, a group of girls.  One may wish to classify these girls into different age groups (5-8 years, 8-10 years, 10-12 years and 12-15 years). By grouping in this way, the recognition process finally ends up as a classification process.  The trend towards information handling and retrieval has become important in the industrial and post-industrial phases, as well as in the automation of industrial production, and this trend has given rise to new research into Pattern Recognition.

A Pattern Recognition system consists of sensors which give information about the object for the required classification; a feature extraction mechanism, carried out by the associated computer on the statistical or symbolic data available from the sensors; and finally a classification scheme implemented in the system, which performs the actual job of recognition or classification.  The classification or description scheme is usually based on the availability of a set of patterns that have already been classified or described, in statistical form, from the sensor.  This set of patterns is termed the training set, and the resulting learning strategy or classification scheme is characterized as supervised learning.  Learning can also be unsupervised, when the system is not given prior knowledge of the patterns but instead classifies objects according to the regularities or similarities found in sequentially occurring objects.  There are two main approaches to classification in Pattern Recognition: one statistical and the other structural.  Statistical Pattern Recognition is based on statistical characteristics of the patterns, assuming that the characteristics are generated in a probabilistic manner; structural Pattern Recognition is based on the structural interrelationships of specific features.
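As a minimal sketch of supervised, statistical pattern recognition, the following Python fragment trains a nearest-centroid classifier from a small labelled training set and then classifies a new measurement. The feature values and class names are invented for illustration only.

import numpy as np

# Hypothetical training set: each pattern is a feature vector
# (e.g. object area, mean grey level) with a known class label.
training_patterns = np.array([[120.0, 0.80], [115.0, 0.75],
                              [ 60.0, 0.30], [ 55.0, 0.35]])
training_labels = ['bolt', 'bolt', 'washer', 'washer']

def train_nearest_centroid(patterns, labels):
    # Supervised learning step: compute the mean feature vector of each class.
    classes = sorted(set(labels))
    return {c: patterns[[lab == c for lab in labels]].mean(axis=0) for c in classes}

def classify(centroids, pattern):
    # Assign the pattern to the class whose centroid is nearest (Euclidean distance).
    return min(centroids, key=lambda c: np.linalg.norm(pattern - centroids[c]))

centroids = train_nearest_centroid(training_patterns, training_labels)
print(classify(centroids, np.array([118.0, 0.77])))   # expected output: bolt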

Pattern Recognition is an integral part of most machine intelligence systems built for decision making.  Machine Vision is an area in which Pattern Recognition is of particular importance.  A Machine Vision system captures images via a camera and analyzes them to produce a description of what is imaged.  A typical application of Machine Vision is in the manufacturing industry, either for automated visual inspection or for automation in the assembly line.  For example, manufactured objects on a moving conveyor pass an inspection station, where a camera is situated in order to detect defects; the images are then analyzed on a computer, where the Pattern Recognition system classifies each object according to its pattern.

 

Computer Vision:

The basic idea behind Computer Vision is that, for many applications, a computer could be instructed more naturally through images than through a keyboard and a mouse.  Indexing huge collections of images by hand is a task that is both labor intensive and expensive. Any analysis of text or images requires a combination of high-level concept creation with the processing and interpretation of inherent visual features.  In the area of intellectual access to visual information, the interplay between human and machine image-indexing methods influences Computer Vision.  The aim of Computer Vision is to provide computers with functions possessing the characteristics of human vision: to give computers human-like perception capabilities, so that they can sense the environment, understand the sensed data, take appropriate actions, and learn from the experience in order to enhance future performance.

Computer Vision is the study and application of methods which allow computers to understand image content, or the content of multidimensional data in general.  Computer Vision complements biological vision: it extracts useful information from image data for a specific purpose, and this information is then presented to a human expert or undergoes further processing for process control.  The data given to Computer Vision is normally a digital grey-scale or color image, or it may take the form of two or more images.  Computer Vision studies originated in different fields, so there is no standard formulation for solving Computer Vision problems. There is an abundance of methods for solving the various Computer Vision tasks, but the methods are usually very task specific and can seldom be generalized over a wide range of applications.  Computer Vision is a subfield of Artificial Intelligence in which image data is fed into a system as an alternative to text-based input for controlling the behavior of the system.  Some of the learning methods used in Computer Vision are based on learning techniques developed within Artificial Intelligence. 

It is often possible to extract information about motion and other phenomena by analyzing images of them with Computer Vision.  The field draws on the extensive study of eyes, and of the neurons and brain structures devoted to the processing of visual stimuli, in both humans and animals.  The many existing methods for processing one-variable signals (typically temporal signals) can be extended in a natural way to the processing of two-variable or multi-variable signals in Computer Vision.  Many Computer Vision research topics can be studied from a mathematical point of view; Computer Vision can therefore be considered an extension of physics, neurobiology, signal processing, statistics, mathematics and so on.  Computer Vision and image processing are related fields without a clear distinction between them, and Computer Vision makes use of many methods of image processing. One difference that does exist is that image processing deals with the transformation of images, producing one image from another, or producing low-level images such as edge (boundary) images from a source image, whereas Computer Vision uses models and assumptions about the images to extract information from them.

A Computer Vision system can be divided into four parts: image acquisition, processing, feature extraction and registration.  Image acquisition is simply the capture of an image or image sequence into the imaging system; the imaging system must be set up beforehand.  In processing, the image is treated with low-level operators: noise is reduced and the volume of data is decreased, using image processing methods such as sampling, filtering and segmentation.  Feature extraction involves dividing the data into different sets according to features which ought to be invariant to disturbances such as lighting conditions, camera position, noise and distortion.  The registration process is concerned with establishing correspondence between the features in the acquired set and the features of known objects in a model database.  Beyond this, Computer Vision tasks can be divided into five categories: object recognition, optical character recognition, tracking, scene interpretation and ego-motion.  Object recognition is the detection of objects or living beings appearing in the image, together with estimation of their location.  Optical character recognition takes pictures of printed or handwritten text and converts them into computer-readable text.  Tracking involves following an object through an image sequence.  Scene interpretation is the creation of a model from an image.  Finally, ego-motion determines the motion of the camera itself. 
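A minimal sketch of the acquisition, processing, segmentation and feature-extraction stages is given below, using the OpenCV library (version 4.x is assumed). The file name and parameter values are placeholders, and the registration stage against a model database is omitted.

import cv2

# Acquisition: load a grey-scale image (in a real system this would come
# from a camera or frame grabber rather than from a file).
image = cv2.imread('part.png', cv2.IMREAD_GRAYSCALE)
if image is None:
    raise SystemExit('placeholder image file not found')

# Processing: a low-level operator to suppress noise before analysis.
smoothed = cv2.GaussianBlur(image, (5, 5), 0)

# Segmentation: separate the object from the background by automatic thresholding.
_, binary = cv2.threshold(smoothed, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Feature extraction: find object outlines and compute simple shape features
# that could later be registered against a model database.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)
    print(f'object: area={area:.0f} px, perimeter={perimeter:.0f} px')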

Computer Vision covers a vast range of applications, such as microscopy imaging, x-ray imaging and angiography in the different branches of medical science, as well as applications in space vehicles.  Another application area is industry, where information is extracted for the purpose of supporting the manufacturing process; the main purposes are quality control and automatic inspection for defects.  For these kinds of tasks, Computer Vision is installed in combination with Machine Vision.

 

1.3       Machine Vision vs Computer Vision

It has been stated above that Machine Vision and Computer Vision are not the same, even though they may create that impression.  In the main, Machine Vision is implemented using hardware as well as software, whereas, in many instances, Computer Vision is the result of software only.  For a better understanding of the differences, refer to the table below:

Table 1.1

Difference between Machine Vision and Computer Vision

1.  Machine Vision: The inspiration behind Machine Vision research comes from industrial experience; it is the result of practical requirements.
    Computer Vision: It is motivated by research and progress in computer science; it can be regarded as the result of academic progress.

2.  Machine Vision: Comparing theoretical and practical approaches to Machine Vision, the practical approach dominates in every sense.
    Computer Vision: A theoretical approach is more useful in many instances, as any Computer Vision system requires much mathematical calculation.

3.  Machine Vision: To implement a Machine Vision system, in almost all cases the designer has to implement hardware; hardware is therefore much more important.
    Computer Vision: From the definition itself, it is clear that dedicated hardware is not required.

4.  Machine Vision: From the definition it is clear that one can achieve Machine Vision largely by implementing hardware, so as to avoid heavy reliance on software and algorithms.
    Computer Vision: It is the result of academic effort: algorithms are the strengthening pillars of Computer Vision, and it involves much mathematical calculation based on those algorithms.

5.  Machine Vision: A Machine Vision system installed for one product can accommodate new products with small changes.
    Computer Vision: A Computer Vision system is algorithm based; if the product is changed, a new system or algorithm has to be built, i.e. new software.

6.  Machine Vision: Machine Vision can handle any type of material: human artifacts such as plastic, wood, metal, glass etc.
    Computer Vision: In practice the input can be any kind of computer data file acceptable to the computer.  The Computer Vision designer is rarely able to control the image acquisition environment, or to redefine the application to make it tractable.

7.  Machine Vision: Machine Vision cannot model human vision very well.
    Computer Vision: Computer Vision can model human vision very well.

8.  Machine Vision: Attractive features of Machine Vision: it is very easy to use once implemented; it can accommodate new products, so it becomes very cost effective; if installed faithfully it is consistent and reliable for the manufacturer; and compared to human vision it is much faster.
    Computer Vision: It is algorithm based, so it is not easy for non-technical people to use.  If installed faithfully it is very easy to use, but faults are difficult to detect.  Its consistency and efficiency are judged in specific ways, such as accuracy of measurement or probability of recognising critical features.  It is also fast compared to human vision, but not necessarily faster than Machine Vision, because it depends upon the speed of the computer.

9.  Machine Vision: It is multidisciplinary.
    Computer Vision: It is not multidisciplinary.

10. Machine Vision: It gives satisfactory and faithful results; it provides a good solution for the replacement of human vision.
    Computer Vision: Depending upon the product requirement, it provides optimum results and the best solution for the replacement of human vision.

11. Machine Vision: It is a branch of systems engineering.
    Computer Vision: It has a background of mathematics and computer science, and often uses the theoretical and academic approach of both.

12. Machine Vision: It can be a human-interactive system.  For an interactive system, an experienced vision engineer is required; for the target system in the factory, a low-skill technician is required during set-up, and the system is autonomous in inspection mode.
    Computer Vision: A skilled operator is required; it relies on a user with specialist skill.  Depending upon the application area, suitably skilled persons can be chosen.

13. Machine Vision: For an interactive system the operator should be skilled; depending upon the application, the skill level may vary from medium to high.  The target system in the factory should cope with a low skill level.
    Computer Vision: Operator skill level depends wholly upon the application and, of course, upon the user.

14. Machine Vision: The output of a Machine Vision system is available as a simple signal to control external equipment or to perform the specific task for which the system is installed, for example a simple accept/reject signal or a signal to a robot.
    Computer Vision: The output is a complex signal for a human.  As the Computer Vision system accepts data in a form the computer understands, it also gives its output in a form the computer understands for the specific task; that signal is difficult for humans to interpret.

15. Machine Vision: Processing speed is determined by the type of system.  If it is a human-interactive system, the speed is decided by the human; for the target manufacturing system, speed is decided by the speed of production.
    Computer Vision: Here speed is of less importance; it depends mainly upon the user and, of course, upon his skills, so human interaction plays an important role in deciding the speed.  The speed of a Computer Vision system also depends upon the speed of the computer and its memory.

16. Machine Vision: The cost of the Machine Vision system is of prime importance; cost depends mainly upon the application and the manufacturing industry.
    Computer Vision: The cost of the Computer Vision system is of less importance, because the system remains largely the same; only the software and algorithms change from application to application.

 

1.4       Why do we need Machine Vision systems?

The comparison between Computer Vision and Machine Vision shows that a Machine Vision system can be adapted to the product requirement; in other words, the same system can be used for different products.  Industrial applications such as spray painting and inspection involve the same kind of mechanical work being performed repeatedly.  Once the task is specified, human interference is not required, and this kind of task can be assigned to a Machine Vision system.  By applying Machine Vision we can save money as well as time, because a non-interactive Machine Vision system is much faster than a human. 

Consider a food processing factory where the manufacturer was packing bags of nuts.  Usually the nuts are green or various shades of green, so the Machine Vision system designer set those colors in the system.  The system then rejected all nuts of a slightly brown shade.  One cannot say that the system failed, because it can be re-configured by changing its color settings; the lesson is that the designer has to keep every aspect of the application in mind when designing a system for the food industry, since materials that are biological in origin change with time.  In this kind of industry, if one does not wish to install a Machine Vision system, then a dedicated worker is required to inspect each item.  This takes much more time than a vision system, as well as requiring high levels of energy. 
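A minimal sketch of this kind of colour-based accept/reject decision is shown below; the hue limits and sample values are invented for illustration and are not taken from the text.

import colorsys

# Hypothetical acceptance band: hues from yellow-green to green, expressed
# as fractions of the colour circle (0.0 to 1.0), as used by colorsys.
HUE_MIN, HUE_MAX = 0.17, 0.45

def accept_nut(r, g, b):
    # Return True if the average colour of a nut falls inside the accepted hue band.
    # r, g, b are mean pixel values in the range 0-255.
    hue, _lightness, _saturation = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    return HUE_MIN <= hue <= HUE_MAX

print(accept_nut(110, 160, 60))   # greenish sample  -> True  (accept)
print(accept_nut(150, 110, 60))   # brownish sample  -> False (reject)

Re-configuring the system for a different crop would then be a matter of changing HUE_MIN and HUE_MAX rather than rebuilding the whole installation.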

There are situations where the objects are very complicated and hard to handle, where fluid or semi-fluid has been spilt, or where objects have different textures, shapes, sizes or coloring.  In all of these applications, a Machine Vision system can do a better job because it offers features such as classification and identification; once designed, it will identify the appropriate items for which it was installed. 

 

1.5       Does Machine Vision simulate Human Vision?

Machine Vision is not about understanding or implementing human or animal vision: the two are very different and easy to distinguish from each other.  The vision system designer should keep in mind that he is not supposed to take his cues from the natural vision system, i.e. he is not supposed to imitate the phenomena of human or animal sight when designing image processing algorithms.  People usually see the world as they want to see it, according to their own thought processes, so everybody who has the ability to see regards themselves as an expert in vision.  Here a key distinction comes into the picture: a Machine Vision system should be designed with its application in mind, not with the human vision system in mind.  Human vision models are based on introspective self-analysis. 

Scientific studies of human vision have not yet provided any information that enables us to achieve significant improvements in the design of a factory-floor inspection system.  Primate vision is extremely subtle and cannot yet be emulated by a machine.  As an example, any human can recognize a beautiful face, but no machine has yet been invented that can do so: a subjective characteristic such as beauty is effectively impossible for a machine to identify.  Vision accounts for about 70% of the data carried by the human nervous system, and the human visual system is a refined network of billions of nerve cells.  No machine that we can conceive of building could match the connectivity that exists between these neurons.  The main differences between Machine Vision and Human Vision are as follows:

Table 1.2

Difference between Machine Vision and Human Vision

1.  Machine Vision: Machine Vision can operate from the gamma-ray to the microwave range.
    Human Vision: Human vision can only identify within the visible light spectrum.

2.  Machine Vision: Machine Vision sensors are available with millions of pixels and up to 8192 scan lines.
    Human Vision: Approximately 4000 x 4000 pixels.

3.  Machine Vision: The sensor for this system is very small.
    Human Vision: The sensor is very large.

4.  Machine Vision: It can also perform quantitative tasks, measuring size, length and area very precisely.
    Human Vision: For quantitative measurements, human vision is not preferred because the fear of human error is always present.

5.  Machine Vision: If the data is not available, the Machine Vision system cannot reach a decision; it cannot cope with unseen objects.
    Human Vision: When data is not available, human vision can predict results using judgment; it copes with unseen objects very well.

6.  Machine Vision: Once the task is specified, the Machine Vision system can do the same task thousands of times with the same accuracy.
    Human Vision: The quality of repetitive work is poor because of fatigue and boredom, i.e. human error.

7.  Machine Vision: By definition the device draws on many branches of engineering, but it does not have intelligence; we can say it has low intelligence.
    Human Vision: It does everything depending upon its intelligence; we can say it has high intelligence.

8.  Machine Vision: Lighting levels are closely controlled; flexibility is almost nil.
    Human Vision: The lighting capabilities of human vision are highly flexible; depending upon the requirement, it adapts to widely varying levels.

9.  Machine Vision: The minimum lighting level is equivalent to a cloudy, moonless night.
    Human Vision: The minimum lighting level is equivalent to quarter-moon light; with extended dark adaptation it can see at even lower levels.

10. Machine Vision: Strobe lighting and lasers can be used, but good screening is required for safety.
    Human Vision: It is very unsafe for a human to work with strobe lighting and lasers.

11. Machine Vision: Consistency in repetitive and quantitative work is good.
    Human Vision: Consistency in quantitative and repetitive work is very poor.

12. Machine Vision: Economically this is moderate compared to human vision; cost also depends upon the application.
    Human Vision: In any application this is cheap compared to Machine Vision.

13. Machine Vision: Running cost is low.
    Human Vision: Running cost is high.

14. Machine Vision: Inspection cost per unit is low.
    Human Vision: Inspection cost per unit is high.

15. Machine Vision: The ability to program it is limited; special interfaces make the task easier.
    Human Vision: Speech is much more effective for instructing a human in any application.

16. Machine Vision: It is versatile, with the ability to cope with multiple views in space and time.
    Human Vision: Human vision is not versatile in this respect; the ability to cope with different views in space is limited.

17. Machine Vision: Able to work in hazardous environments.
    Human Vision: It is not easy to use human vision in a hazardous environment; humans need protection.

18. Machine Vision: Non-standard scanning methods such as line scan, circular scan, random scan, spiral scan and radial scan can be implemented.
    Human Vision: Such non-standard scanning methods are not possible.

19. Machine Vision: Storage in its own memory without back-up is possible; it has very good image-storing capacity.
    Human Vision: Image-storing capacity is very poor, though it can be increased with aids such as photography or digital image stores.

20. Machine Vision: Numerous optical aids are available.
    Human Vision: Optical aids are limited compared to Machine Vision.

 

From this discussion we can come to the conclusion that Machine Vision must be designed on engineering principles and not by slavishly following the precedents found in human vision.

The main problem in a Machine Vision system is providing the link between seeing and understanding; the concept of knowing what to expect solves this problem.  We can therefore say that Machine Vision does not set out to emulate human vision.  In the future it will be possible to build a machine which can see like a human, but an industrial Machine Vision engineer is likely to regard any new understanding that biologists obtain about human or animal vision as interesting, yet largely useless.

 

1.6       Application Areas of Machine Vision

The principal industrial applications of Machine Vision are inspection, robot guidance, process monitoring and control. 

 

Inspection

Detecting faults in piece parts, and in continuous materials made in strip or web form, are two of the prime areas for applying Machine Vision systems.  Faults may occur in length, size, shape, area, volume, color, texture, scratches, cracks, labeling, chemical composition, oil, staining, water damage, proportions, mass, assembly, foreign bodies, material defects, incomplete machining, coating and so on.  The task of inspection comes in two forms: the specific task of detecting faults (as listed above) and, in a more general sense, a range of applications such as counting, grading, sorting and calibrating.  It should be noted that there is no sharp distinction between Automated Visual Inspection and Robot Vision, since many inspection tasks require parts manipulation and many object-handling applications require identification and verification.

 

Robot Vision

For our purposes, we can regard a robot as any machine that can move around safely and is equipped with a camera, gripper and other tools under the control of a computer; the equipment within the robot should also be computer controlled.  According to this definition, numerically controlled milling machines, lathes, drills, electronic component insertion machines, graph plotters, gantry robots and robotic arms can all be considered robots.  Early industrial robots were blind, but were useful in repetitive tasks where the form and posture of the work piece were predictable.  They were not able to handle unknown situations; in other words, their ability to handle changing circumstances was limited, even though their positional accuracy was very high.  A visually guided robot, by contrast, may be expected to operate in an environment where the parts delivered to it are in an unknown position or posture, or may be variable in size or shape.  Vision has proven ideal for locating objects and for directing robots towards them.  Without sensors, robots were dangerous to their environment and to themselves; vision is one of the sensing methods that addresses this.  Visually guided robots first see, and then decide whether it is safe to move or not.  An intelligent, visually guided robot can even pick up items which randomly come into its path, even though this is not its assigned task. 

 

Process Monitoring and Visual Control

Process control is possible with the help of any sensing method, by way of feedback or feed-forward.  A camera can be placed anywhere it can observe the product or process, so visual sensing can provide process control.  In addition to controlling an accept/reject mechanism, two other outputs are possible: feedback to adjust the operating parameters of the manufacturing machine located upstream, and feed-forward to machines downstream.  In either case, the camera is placed so that it observes partially made products between separate stages of the process.  With the help of a Machine Vision system it is possible to observe the manufacturing process itself, including tools and waste materials during operations such as turning, milling, welding, grinding, mixing of fluids and suspensions, fluid flow, filling of pies, smoke plumes, raw or waste material flow, drills, welding electrodes, thermal imaging of manufacturing plants, and so on.
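A minimal sketch of the feedback path, in which a vision measurement is turned into a correction for an upstream machine setting, is given below. The target width, gain and function names are hypothetical, not taken from the text.

def upstream_correction(measured_width_mm, target_width_mm=25.0, gain=0.5):
    # Proportional feedback sketch: convert the vision system's measurement
    # into a change to apply to an upstream machine setting (e.g. a roller gap).
    error = target_width_mm - measured_width_mm
    return gain * error

# Example: the camera measures a strip 25.6 mm wide, so the controller asks
# the upstream machine to reduce its setting by 0.3 mm.
print(f'correction to upstream setting: {upstream_correction(25.6):+.1f} mm')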

There is a far greater potential for applying Machine Vision in this way than has hitherto been realized.

 
