Technical Paper on Brain Machine Interface

“No technology is superior if it tends to overrule human faculty. In fact, it should be the other way around.”

Imagine that you have to control a machine in a remote area where a human cannot survive for long. In such conditions we can turn to the BRAIN-MACHINE INTERFACE. It is similar to robotics, but it is not exactly a robot: in a robot the interface is between a sensor and a controller, whereas here the interface is between a human and a machine. In the present wheelchair, movements are made by the patient controlling a joystick, and only forward, reverse, left and right movements are possible. But if the patient is paralyzed, it is difficult for the patient to make these movements at all. Such a condition can be addressed by this approach.

The main objective of this paper is to interface humans and machines; by doing this, several devices can be controlled. This paper explains how human and machine can be interfaced and surveys research on enabling paralyzed people to control devices with their minds.

Introduction:

The core of this paper is operating machines from a remote area. In the given BMI DEVELOPMENT SYSTEMS, the brain is connected to a client interface node through neural interface nodes. The client interface node is connected to a BMI SERVER, which controls remote ROBOTS through a host control.

Brain Study:

In previous research, it was shown that a rat wired into an artificial neural system can make a robotic water feeder move just by willing it. But the latest work sets new benchmarks because it shows how to process more neural information at a faster speed to produce more sophisticated robotic movements. That the system can be made to work using a primate is also an important proof of principle.

Scientists have used brain signals from a monkey to drive a robotic arm. As the animal stuck out its hand to pick up food from a tray, an artificial neural system linked into the animal's head reproduced that activity in the mechanical limb.

It was an amazing sight to see the robot in my lab move, knowing that it was being driven by signals from a monkey's brain. It was as if the monkey had a 600-mile (950-km) long virtual arm. The rhesus monkeys consciously controlled the movement of a robot arm in real time, using only signals from their brains and visual feedback on a video screen. The animals appeared to operate the robot arm as if it were their own limb. This achievement represents an important step toward technology that could enable paralyzed people to control "neuroprosthetic" limbs, and even free-roaming "neurorobots", using brain signals. Importantly, the technology developed for analyzing brain signals from behaving animals could also greatly improve rehabilitation of people with brain and spinal cord damage from stroke, disease or trauma.

By understanding the biological factors that control the brain's adaptability, clinicians could develop improved drugs and rehabilitation methods for people with such damage. The latest work is the first to demonstrate that monkeys can learn to use only visual feedback and brain signals, without resort to any muscle movement, to control a mechanical robot arm in both reaching and grasping movements.

Signal Analysis using Electrodes:

A brain-signal recording and analysis system enabled the researchers to decipher brain signals from monkeys in order to control the movement of a robot arm. In the experiments, an array of microelectrodes, each smaller than the diameter of a human hair, was implanted into the frontal and parietal lobes of the brains of two female rhesus macaque monkeys. They implanted 96 electrodes in one animal and 320 in the other. The researchers reported their technology of implanting arrays of hundreds of electrodes and recording from them over long periods.

The frontal and parietal areas of the brain are chosen because they are known to be involved in producing multiple output commands to control complex muscle movement.

The faint signals from the electrode arrays were detected and analyzed by a computer system developed to recognize the patterns of signals that represented particular movements of an animal's arm.
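The paper does not give the decoding algorithm behind this pattern recognition; a common choice in the BMI literature is a linear decoder fitted by least squares, mapping electrode firing rates to movement velocity. The sketch below is a minimal illustration on synthetic data; the sizes and values are invented, not taken from the experiments.

```python
import numpy as np

# Minimal sketch of a linear neural decoder: firing rates from N electrodes
# are mapped to a 2-D velocity by a weight matrix fitted with least squares.
# All data here is synthetic; only the 96-electrode count echoes the paper.

rng = np.random.default_rng(0)
n_samples, n_electrodes = 1000, 96

true_w = rng.normal(size=(n_electrodes, 2))      # hidden mapping to (vx, vy)
rates = rng.poisson(lam=5.0, size=(n_samples, n_electrodes)).astype(float)
velocity = rates @ true_w + rng.normal(scale=0.5, size=(n_samples, 2))

# Fit decoder weights on the "training" block
w_hat, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decode a new window of firing rates into a velocity command
new_rates = rng.poisson(lam=5.0, size=(1, n_electrodes)).astype(float)
vx, vy = (new_rates @ w_hat)[0]
print(f"decoded velocity command: ({vx:.2f}, {vy:.2f})")
```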

Experiments:

The experiments conducted for Brain-Machine Interface are:

Monkey Experiment:

The goal of the project is to control a hexapod robot (RHex) using neural signals from monkeys at a remote location. To explore the optimal mapping of cortical signals to RHex's movement parameters, a model of RHex's movements was generated and human arm control was used to approximate cortical control. In preliminary investigations, the objective was to explore different possible mappings, or control strategies, for RHex. Both kinematic (position, velocity) and dynamic (force, torque) mappings from hand space were explored and optimal control strategies were determined. These mappings will be tested in the next phases of the experiment to ascertain the maximal control capabilities of the prefrontal and parietal cortices.

Initially, output signals from the monkeys' brains were recorded and analyzed as the animals were taught to use a joystick both to position a cursor over a target on a video screen and to grasp the joystick with a specified force. After the animals' initial training, however, the cursor was made more than a simple display: it now incorporated into its movement the dynamics, such as inertia and momentum, of a robot arm functioning in another room. While the animals' performance initially declined when the robot arm was included in the feedback loop, they quickly learned to allow for these dynamics and became proficient in manipulating the robot-reflecting cursor. The joystick was then removed, after which the monkeys continued to move their arms in mid-air to manipulate and "grab" the cursor, thus controlling the robot arm.

After a series of psychometric tests on human volunteers, the strategy of controlling a model of RHex depicted above using the human hand was determined to be the easiest to use and the fastest to learn. The flexion/extension of the wrist is mapped to angular velocity, and the linear translation of the hand is mapped to linear (fore/aft) velocity. The monkeys are being trained to use this technique to control a virtual model of RHex.
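The mapping just described reduces to two proportional control laws. The sketch below is a toy rendering of that strategy; the function name and gain values are invented for illustration and are not from the project.

```python
def rhex_command(wrist_angle_deg: float, hand_translation_m: float,
                 k_turn: float = 2.0, k_drive: float = 1.5):
    """Map hand posture to RHex velocity commands, per the strategy above.

    wrist_angle_deg:    flexion (+) / extension (-) of the wrist
    hand_translation_m: fore (+) / aft (-) displacement of the hand
    k_turn, k_drive:    illustrative gains (assumptions, not measured values)
    """
    angular_velocity = k_turn * wrist_angle_deg      # turning rate
    linear_velocity = k_drive * hand_translation_m   # fore/aft speed
    return angular_velocity, linear_velocity

print(rhex_command(wrist_angle_deg=10.0, hand_translation_m=0.05))
```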

The most amazing result, though, was that after only a few days of playing with the robot in this way, the monkey suddenly realized that she didn't need to move her arm at all. The arm muscles went completely quiet; she kept the arm at her side and controlled the robot arm using only her brain and visual feedback.

Our analyses of the brain signals showed that the animal learned to assimilate the robot arm into her brain as if it were her own arm. Importantly, the experiments included both reaching and grasping movements, both derived from the same sets of electrodes.

The neurons from which we were recording could encode different kinds of information. It was surprising to see that the animal could learn to time the activity of the neurons to control different types of parameters sequentially. For example, after using a group of neurons to move the robot to a certain point, these same cells would then produce the force output that the animal needed to hold an object.

Analysis of the signals from the animals' brains as they learned revealed that the brain circuitry was actively reorganizing itself to adapt.

Analysis of Outputs:

It was extraordinary to see that when we switched the animal from joystick control to brain control, the physiological properties of the brain cells changed immediately. And when we switched the animal back to joystick control the very next day, the properties changed again.

Such findings tell us that the brain is so amazingly adaptable that it can incorporate an external device into its own 'neuronal space' as a natural extension of the body. In fact, we see this every day when we use any tool, from a pencil to a car: we incorporate the properties of that tool into our brain, which makes us proficient in using it. Such findings of brain plasticity in mature animals and humans are in sharp contrast to traditional views that only in childhood is the brain plastic enough to allow for such adaptation.

The finding that their brain-machine interface system can work in animals will have direct application to clinical development of neuroprosthetic devices for paralyzed people.

There is certainly a great deal of science and engineering to be done to develop this technology and to create systems that can be used safely in humans. However, the results so far lead us to believe that these brain-machine interfaces hold enormous promise for restoring function to paralyzed people.

The researchers are already conducting preliminary studies of human subjects, in which they are analyzing brain signals to determine whether those signals correlate with those seen in the animal models. They are also exploring techniques to increase the longevity of the electrodes beyond the two years they have currently achieved in animal studies, to miniaturize the components, to create wireless interfaces, and to develop grippers, wrists and other mechanical components of a neuroprosthetic device.

In their animal studies, they are proceeding to add an additional source of feedback to the system, in the form of a small vibrating device placed on the animal's side that will tell the animal about another property of the robot. Beyond the promise of neuroprosthetic devices, the technology for recording and analyzing signals from large electrode arrays in the brain will offer unprecedented insight into brain function and plasticity.

We have learned in our studies that this approach will offer important insights into how the large-scale circuitry of the brain works. Since we have total control of the system, we can, for example, change the properties of the robot arm and watch in real time how the brain adapts.

Brain Machine Interface in Human beings:

The approach of this paper is to control the operations of a robot by means of a human brain without any physical link.

The brain signals are picked up by electrodes from the frontal and parietal lobes. The signals are conveyed by these electrodes and processed by the unit, which contains a BMI development system. The brain (i.e., the microelectrodes in the frontal and parietal lobes) is connected to a client interface through neural interface nodes, which in turn is linked with the BMI server that controls the host device.
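Purely as an illustration, the signal path described above can be sketched as a chain of software stages. Every class name, data format and decision rule below is an assumption; the paper does not specify the implementation.

```python
# Hypothetical sketch of the BMI signal path: electrodes -> neural interface
# node -> client interface node -> BMI server -> host device.

class NeuralInterfaceNode:
    def acquire(self):
        # In a real system: sampled microelectrode potentials from the
        # frontal and parietal lobes. Here: a fixed dummy sample.
        return [0.12, -0.05, 0.33]

class ClientInterfaceNode:
    def __init__(self, node):
        self.node = node
    def package(self):
        return {"signals": self.node.acquire()}

class BMIServer:
    def decode(self, packet):
        # Stand-in decode rule: the mean signal sign picks a command.
        mean = sum(packet["signals"]) / len(packet["signals"])
        return "forward" if mean > 0 else "stop"

class HostDevice:
    def execute(self, command):
        print(f"host device executing: {command}")

client = ClientInterfaceNode(NeuralInterfaceNode())
HostDevice().execute(BMIServer().decode(client.package()))
```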

In the present wheelchair, movements are made by the patient controlling a joystick, and only forward, reverse, left and right movements are possible. But if the patient is paralyzed, he is unable to control the wheelchair at all. So this technology is a marvelous gift to help them.

Conclusion:

Thus this technology is a boon to this world. By adopting it, many biomedical difficulties can be overcome and many of our dreams will come true.


Technical Paper on Beam Robotics & Nervous Networks

The field of ROBOTICS has been a fascination since the advent of computational technologies. To induce life into robos, complex and powerful electronic components are required; hence advanced knowledge and substantial funds are needed to build even small robots. These are hurdles for beginners in the field. They can be overcome by adopting a new philosophy called BEAM ROBOTICS, formulated by Mark W. Tilden. Here minimal electronics are used and, using solar power, miniature creatures are created first, from which new prototypes can be evolved. Unlike conventional robos, which use costly microprocessor-controlled architectures, these have interconnections of elementary circuits called NERVOUS NETWORKS. A reconfigurable central network oscillator is utilized for autonomous and independent operation of components. Further, the approach favors the development of legged robos. The nervous technology provides (1) pulse delay circuits (neurons), interconnected in closed loops, which generate square waves, and (2) pulse neutralization circuits. The central sequencing network and limb circuits control the direction of the motors and thereby the motion of the legs. The advantage is that the use of microprocessors and costly components is eliminated and the processes are localized and self-sustaining. Thus beginners and students can implement innovative ideas without advanced knowledge, skill or funds.

Introduction:

Today the field of ROBOTICS is a fascination for the men of science. The final zeal of any robo scientist is to create robos that have the ability to think and act by themselves. To achieve this, mechanical components stimulated by powerful electronic circuits, plus computer chips that store the programs and control everything, are required. For beginners entering this arena there are many stumbling blocks: a deep knowledge of the subject, a great deal of research and financial support are required, as the complex circuits and microprocessors cost a fortune. Mark W. Tilden formulated a new philosophy that enables even children to enter this fascinating arena. BEAM Robotics is the brainchild of this man. It is a new field in robotics. It uses minimalist electronics to create elegant mechanical creatures. BEAM devices come in infinite shapes and sizes. The brains used to control BEAM "life forms" are nervous networks, which are very simple and contain no microprocessors. By wiring in basic sensors to influence the nervous network, we can control how the robot behaves. These sensors include light detectors, touch feelers, heat sensors and just about anything you can think of.

The nervous network is an interconnection of basic elemental circuits called 'pulse delay circuits', each acting like a neuron, generating a square wave and hence functioning as an oscillator. The most significant characteristic of a nervous network is the absence of microprocessors and other complicated circuitry for locomotion. The nervous network in robotic limb control is simple and autonomous, and any complex circuitry incorporated in the robot can be fully dedicated to the actual purpose of the robot rather than its locomotion.

Problems of Conventional Robots:

Robots are particularly useful in applications which pose a hazard to living beings, for example in security functions, dealing with toxic materials, working in hazardous environments, and so on. To date the most successful designs have involved wheeled devices. However, wheeled devices have very limited utility in many environments, for example in rough or soft terrain. Moreover, any wheeled device is restricted to largely horizontal travel, since traction relies entirely on the force of gravity.

On the other hand, legged devices are capable of traveling on virtually any type of terrain and, if properly equipped, are able to climb vertically.
  • For autonomous legged creatures to move and react effectively within their environment, precise synchronizing control circuitry and the ability to adapt to new conditions as they arise are required.
  • Until now, all attempts to create such a device have involved elaborate arrangements of feedback systems utilizing complex sensor inputs and extensive control and sequencing circuitry hard-wired to one or more central processors.
  • Such a robot is extremely complex and expensive to build, even to accomplish very simple tasks. Moreover, due to the complexity of such a device and its heavy reliance on a central processing system power requirements are enormous, and a relatively minor problem, such as injury to a limb, is likely to cause total system failure. Such walking devices are accordingly impractical for other than experimental or educational uses. 

Solution:

The nervous technology overcomes the given problems and other disadvantages by providing a completely different control system approach. Rather than utilizing a central processor to process sensor information and actively drive all mechanical processes, the robot utilizes a reconfigurable central network oscillator to sequence the processes of the device's limbs, each of which is itself autonomous. Once activated, each limb sequentially executes its processes independent of the central sequencer.



The nervous technology further provides a pulse delay circuit with a delay of variable duration which, connected to a second pulse delay circuit, acts as an artificial "neuron". The central and limb-actuating processes are achieved by a number of such "neurons" connected in series. The delay duration is determined merely by an analog bias input to one or more "neurons", which may be controlled remotely or in response to local sensor stimulation.

The nervous network is made of a basic elemental circuit called the pulse delay circuit (neuron). The neuron diagram is given below.

It is made of simple electronic components like the resistor, capacitor and inverter. The capacitor forms a "differentiating element" in a circuit and responds to changes in input voltage. The inverter gives an output, which is the exact opposite of the input. So if a high input is given a low output is obtained and vice-versa. The resistor and the capacitor induce a time delay between the input and the output, and the delay is determined by the time constant RC. Hence the delay can be controlled by varying the value of the resistance and the capacitance.

The preferred value of capacitance is 0.1 μF, with resistance between 10 kΩ and 5 MΩ, for a propagation delay of 0.25 to 1 s. A low value of capacitance increases efficiency.
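As a rough check on these figures, the delay of a single pulse delay circuit can be estimated from the RC charging curve: if the inverter switches near half the supply voltage, the delay is about RC·ln 2. The threshold model below is an assumption, so the numbers are indicative only.

```python
import math

def pdc_delay(r_ohms: float, c_farads: float, vth_fraction: float = 0.5):
    """Approximate pulse-delay-circuit delay: the time for the RC node to
    charge to the inverter's switching threshold (assumed at half supply)."""
    return r_ohms * c_farads * math.log(1.0 / (1.0 - vth_fraction))

C = 0.1e-6  # 0.1 uF, the preferred value quoted above
for r in (10e3, 1e6, 5e6):
    print(f"R = {r:10.0f} ohm -> delay ~ {pdc_delay(r, C) * 1e3:8.2f} ms")
```

With C = 0.1 μF this gives roughly 0.35 s at 5 MΩ; the exact delay in hardware also depends on the inverter's real threshold and the surrounding circuit.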



Similarly, if many neurons are connected in series with one another, with the output of the last neuron connected to the input of the first, they form a closed-loop oscillator in which alternate neurons have similar states. The output of the circuit goes high and low repeatedly. This is one type of nervous network. Many more complex nervous networks exist.
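To make the circulating "process" concrete, here is a toy discrete-time simulation of a single pulse traveling around a closed loop of pulse delay circuits. Real nervous nets are analog, so the fixed time step and the per-neuron delay values are simplifications chosen for illustration.

```python
# Toy simulation of one "process" (pulse) circulating a closed loop of
# pulse delay circuits. Each PDC holds the pulse for its own delay, then
# hands it to the next neuron; this is how limb actions get sequenced.

delays = [0.25, 0.25, 0.5, 1.0]   # per-neuron delays in seconds (illustrative)

def run(loop_delays, total_time=4.0, dt=0.25):
    t, active, remaining = 0.0, 0, loop_delays[0]
    while t < total_time:
        print(f"t={t:4.2f}s  active neuron: C{active + 1}")
        t += dt
        remaining -= dt
        if remaining <= 0:
            active = (active + 1) % len(loop_delays)
            remaining = loop_delays[active]

run(delays)
```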

The Pulse Delay Circuit is Shown.

Certain additions to the basic neuron have been made. The resistors R1 may be referenced to ground, as shown in the figure in which case the PDC's will respond only to positive logic data and will be triggered by the leading edge of a pulse at the input of the inverter. Alternatively, resistor R1 may be referenced to the source voltage, in which case the PDC's will respond only to negative logic data and will be triggered by the trailing edge of a pulse at the input of the inverter. Below the inverter is the output waveform of the nervous neuron. It is a square wave.

This wave essentially takes on a life of its own, and is often called a PROCESS. Depending on the network's initialization circuitry, we can have one or more active processes running around in it. The native state for a "raw" nervous net at power-up is saturation: there are half as many active processes as there are neurons (alternate neurons are active at any given time).

Another elemental component of the nervous network is the pulse neutralization circuit. The diagram of the Pulse Neutralization Circuit (PNC) is shown.

It differs from the pulse delay circuit in that the positions of the resistor and the capacitor have been interchanged. This is actually a neural neuron, and it is here that the nervous network incorporates features from the neural network. The circuit is a modified low-pass filter, permitting only signals of low frequency, i.e. signals of longer duration, to pass through. The PNC can take any of the three configurations shown in the diagram. It is an effective circuit for controlling the introduction of pulses to the central sequencing loop.

These are the two principal circuits used in a legged robot built on nervous technology. Explained below is the implementation of the nervous technology in a four-legged robot.

The robot has two main nervous networks, the first being a central sequencing loop and the second a limb control circuit.

The diagram of the central sequencing loop is shown below.

The central sequencing loop has four neurons forming a closed loop. The signal input is given to the first neuron C1. The biasing resistor is connected to the second neuron C2. Between the third neuron C3 and the fourth neuron C4 is connected the pulse neutralization circuit. As mentioned earlier the signal goes high and low at the output of every neuron. This signal output can be given as the input to every limb control circuit connected between the neurons.
The limb control circuit is given below.
The limb circuit has four neurons N1-N4 connected in series. The input from the central sequencing loop is given to the first neuron. The four-neuron limb circuit can run two motors, one for horizontal movement and the other for vertical movement. Each motor is driven by a motor driver, which is a buffer chip providing amplified output to the motor. The motor driver also acts as an XOR gate and hence activates the motor only if the inputs at its two terminals are opposite; which input is high determines the direction of rotation. The motor is connected as shown.



Consider a signal from the central sequencing loop. When it reaches the output of neuron N1 after a time delay, the junction J1 is high and junction J3 is low. Hence the driver turns the motor 1 in the forward direction. When the signal reaches the output of neuron N2 after a time delay, the junction J2 is high and junction J4 is low. Hence the driver turns the motor 2 in the forward direction. When the signal reaches the output of neuron N3 after a time delay, the junction J3 is high and junction J1 is low. Hence the driver turns the motor 1 in the reverse direction. When the signal reaches the output of neuron N4 after a time delay, the junction J4 is high and junction J2 is low. Hence the driver turns the motor 2 in the reverse direction.
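The driver rule in this sequence can be tabulated directly: the motor is energized only when its two junction inputs differ, and the high input selects the direction. A minimal sketch of the XOR logic, with the junction values taken from the four steps described above:

```python
def motor_state(j_fwd: int, j_rev: int) -> str:
    """XOR motor driver: the motor runs only when the two junction inputs
    differ; whichever input is high selects the direction."""
    if j_fwd == j_rev:
        return "off"
    return "forward" if j_fwd else "reverse"

# One row per neuron firing N1..N4 (motor 1 reads J1/J3, motor 2 reads J2/J4)
steps = [
    {"J1": 1, "J2": 0, "J3": 0, "J4": 0},  # N1: motor 1 forward
    {"J1": 0, "J2": 1, "J3": 0, "J4": 0},  # N2: motor 2 forward
    {"J1": 0, "J2": 0, "J3": 1, "J4": 0},  # N3: motor 1 reverse
    {"J1": 0, "J2": 0, "J3": 0, "J4": 1},  # N4: motor 2 reverse
]
for i, j in enumerate(steps, start=1):
    print(f"N{i}: motor1={motor_state(j['J1'], j['J3']):7s} "
          f"motor2={motor_state(j['J2'], j['J4'])}")
```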

By this sequence of turns the motors would have moved the limb back, lifted it, moved the limb forward and then dropped it. This is basically how the limb moves. If every limb makes this pattern with a time delay the robot basically walks or even runs if the time delay is less. The above two circuits are built into the robot whose basic top view is given below.

The central sequencing loop along with the limb circuit and the PNC forms the overall control circuit of the robot, which is given above.

The central sequencing loop has the four neurons C1-C4. Between every two neurons of the central sequencing loop is the limb circuit with four neurons and motors with their drivers. The PNC is connected to the central loop. A sensor-stimulated pulse of any duration less than the time constant of the PNC will have no effect, while a sustained stimulation will introduce a single pulse to the loop. If the sensor is stimulated for a sufficiently long time the PNC will activate and neutralize all pulses in the central sequencing loop. This is a simple remote-control activator circuit operated by an infrared emitter. It will be apparent that other kinds of remote or local sensors can be employed in a similar fashion. It is preferable to have the source potential applied to the input of an inverter in the central sequencing loop while powering up, for some short period of time. Thus, upon power-up the device executes one full cycle of its processes, essentially "settling in" to a ready mode, before all pulses are neutralized. A pulse may then be injected into the central sequencing loop through a sensor-controlled PNC or directly from the source potential, at the input of any PDC in the loop, initiating all processes.

By biasing the PDC's in the central sequencing loop to fire at predetermined intervals, movement of each limb is initiated at the appropriate time. The speed of the firing sequence down the chain of each limb control circuit is similarly determined. However, except for the timing of the initiating pulse at the input of the proximal PDC, each limb control circuit operates completely independently of the central loop. In the central sequencing loop, these pulses can be neutralized to stop all motion by applying the source potential directly to the output of any inverter in the loop; this prevents the capacitor from discharging and effectively breaks the firing chain to the next PDC. A single pulse can be generated by applying the source potential directly to the input of any inverter in the loop; this drains the next following capacitor, which, upon charging when the source connection is removed, will fire the next following PDC to start the pulse propagation sequence. Once a pulse is propagating around the central sequencing loop, the limb control circuits are initiated automatically in the manner described above. Through remote or local control, source applied to any inverter input will start the device, and source applied to any inverter output will stop it.
It is easy to realize that for every limb just two neurons are needed, and hence many limbs can be added easily using the 74xx240 or the 74xx14 inverter ICs. No additional complex circuitry is required for further legs. The motor driver can be a 74xx245 octal buffer chip. Also, the need for external feedback is eliminated, though the PNC can be incorporated into the limb to provide feedback in complex terrain handling. Internal feedback exists between the motors depending on the load; this is also called impex feedback.

This is the incorporation of nervous networks into robotics.

Results:

  • The application of nervous networks in robotics to nullify complex circuitry in the control of the locomotion of the robot has been achieved.
  • A judicious distribution of the various types of PNC's throughout the central sequencing circuit and the limb control circuits will integrate the various limb processes for smoother performance, and will facilitate the use and effects of many different types of sensors to render the device fully autonomous.
  • It will be apparent that a walking device embodying the nervous network will have applications in many industries.
    • Such a walking device could patrol secured premises with a video camera transmitting signals to a remote recorder; could carry out cleaning and maintenance functions in inaccessible areas such as pipes, or in hazardous areas such as nuclear reactors.
    • Equipped with a brush it could perform simple household chores such as dusting and cleaning floors. Because of its versatility and low cost, the potential applications are unlimited.
  • The number of combinations and permutations of the circuits described herein is believed to be infinite, but the principles involved remain the same.

Advantages of this Technology:

  • The pulse delay circuit is very inexpensive and all components are presently available "off the shelf". Power requirements are very small.
  • The control circuits simplify mechanical process controls to mere pulse trains, requiring no microprocessor, so that if a microprocessor is utilized it can be virtually entirely dedicated to task planning and information retrieval.
  • The process controllers are self-stabilizing, and since each limb is essentially autonomous it is unnecessary to hardwire all actuators and sensors to the central torso; moreover, if a limb is damaged or malfunctions it can be removed from the sequence automatically, without affecting the central sequencing processes or the operation of any other limb.

Future of Beam Robotics (Conclusion):

The future of BEAM is as bright as a clear sky. From a hobby it will emerge as a branch of study. New walking mechanisms, touch and vision systems, and robots encrusted with photodiode scales are some recent innovations. Eventually, BEAM roboticists hope to see all sorts of tiny robotic creatures lurking in the shadows of our lives, performing menial and repetitive tasks with hive-like efficiency. Swarms of BEAM bots could vacuum your home and workplace (picture a colony of dung beetles wrangling dust bunnies), scrub out toxic chemical tanks, hunt down insect pests, re-seed the rain forest, and terrify the cats, dogs, and kids in your neighborhood. The possibilities are endless.

Resources:

  • Robotics & Automation - Mark W. Tilden.
  • Pulse Digital Circuits and Switching Waveforms – Millman and Taub.
  • Linear Integrated Circuits – Ramakant A. Gayakwad
  • www.solarbotics.com
  • www.beam-online.com
  • www.beam-india.solarbotics.net

Technical Paper on Artificial Photosynthesis

Photosynthesis is the process by which plants survive, and it acts as the greatest sink for carbon dioxide. This paper shows how the photosynthesis reaction in plants can be carried out artificially with the help of electronic components. It starts by introducing the basic reactions that occur in a plant during photosynthesis, then gives information about the photovoltaic cell. It also deals with four steps describing the process of Artificial Photosynthesis (AFP). First, the chlorophyll in a leaf is replaced with a photovoltaic cell, which releases electrons when excited by solar energy. The second step is to split the water molecule, which is done artificially through biomimetic engineering by fabricating a new core with the right geometry for water splitting. The third step is the bio-energy transfer that occurs through adenosine triphosphate (ATP) and nicotinamide adenine dinucleotide phosphate (NADPH). The last step is to convert CO2 into other organic compounds. The paper then turns to the microfabrication of AFP.

Introduction:

It is still unclear where most of our energy will come from in the longer-term future. Solar power cannot produce industrial quantities of electricity, while the tide is turning against wind turbines because they spoil the landscape and too many would be needed to replace conventional generators. Nuclear energy remains in the doldrums. Fossil fuels continue to threaten global warming.
But a promising new contender is emerging: the harnessing of photosynthesis, the mechanism by which plants derive their energy. The idea is to create artificial systems that exploit the basic chemistry of photosynthesis in order to produce hydrogen or other fuels both for engines and electricity. Hydrogen burns cleanly, yielding just water and energy. There is also the additional benefit that AFP could mop up any excess carbon dioxide left over from our present era of profligate fossil fuel consumption.

As we learned in school, photosynthesis is the process by which plants extract energy from sunlight to produce carbohydrates, and ultimately proteins and fats, from carbon dioxide and water, releasing oxygen into the atmosphere as a by-product. The evolution of photosynthesis in its current form made animal life possible by producing the oxygen we breathe and the carbon-based foods we eat. Photosynthesis does this on a massive scale, converting about 1,000bn metric tons of carbon dioxide into organic matter each year and yielding about 700bn metric tons of oxygen.
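The two figures quoted above are consistent with the overall stoichiometry: each mole of CO2 fixed releases one mole of O2, so the O2 mass is 32/44 of the CO2 mass. A quick check:

```python
# Mass check: one mole of O2 (32 g) is released per mole of CO2 (44 g) fixed.
M_CO2, M_O2 = 44.0, 32.0              # molar masses, g/mol
co2_fixed_bn_tons = 1000.0            # figure quoted in the text
o2_released = co2_fixed_bn_tons * (M_O2 / M_CO2)
print(f"{o2_released:.0f} bn metric tons of O2")  # ~727, close to the ~700 quoted
```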

What is Photosynthesis:

Photosynthesis is the process of converting light energy to chemical energy and storing it in the bonds of sugar. This process occurs in plants and some algae. Plants need only light energy, CO2, and H2O to make sugar. The process of photosynthesis takes place in the chloroplasts, specifically using chlorophyll, the green pigment involved in photosynthesis. Chlorophyll looks green because it absorbs red and blue light, making these colors unavailable to be seen by our eyes.

Natural photosynthesis carries out the following overall reaction in the carbon fixation process:
CO2 + H2O + [Light Energy] => O2 + Carbohydrate (This is the source of the O2 we breathe.)
When a pigment absorbs light energy, the energy can either be dissipated as heat, emitted at a longer wavelength as fluorescence, or it can trigger a chemical reaction. Certain membranes and structures in photosynthetic organisms serve as the structural units of photosynthesis because chlorophyll will only participate in chemical reactions when the molecule is associated with proteins embedded in a membrane. Photosynthesis is a two-stage process, and in organisms that have chloroplasts, two different areas of these structures house the individual processes. A light-dependent process (often termed light reactions) takes place in the grana, while a second light-independent process (dark reactions) subsequently occurs in the stroma of chloroplasts. It is believed that the dark reactions can take place in the absence of light as long as the energy carriers developed in the light reactions are present.

The first stage of photosynthesis occurs when the energy from light is directly utilized to produce energy carrier molecules, such as ATP and NADPH. In this stage, water is split into its components, and oxygen is released as a by-product. The energized transport vehicles are subsequently utilized in the second and most fundamental stage of the photosynthetic process: production of carbon-to-carbon covalent bonds, in which the enzyme Rubisco plays an important role. The second stage does not require illumination (a dark process), and is responsible for providing the basic nutrition for the plant cell, as well as building materials for cell walls and other components. In this process, carbon dioxide is fixed along with hydrogen to form carbohydrates, a family of biochemicals that contain equal numbers of carbon atoms and water molecules. Overall, the photosynthetic process does not allow living organisms to utilize light energy directly, but instead involves energy capture in the first stage followed by a second stage of complex biochemical reactions that converts the energy into chemical bonds.

What is AFP?


Figure 1 gives an overview of the process of AFP. Energy in the form of light is collected by a series of chromophores that absorb light of progressively longer wavelength (lower energy) at each successive level. A large number of chromophores at each energy level increases the probability of light absorption, and with proper placement of the chromophores the excitation energy is funneled to a single spot, the reaction centre. Electron transfer to an electron acceptor (A) creates an initial charge separation. Subsequent transfer of an electron from an electron donor (D) to the reaction centre creates the final charge-separated state. The electron and the corresponding "hole" formed by the loss of an electron may then be used for chemical reactions, be it the production of ATP and O2 in natural systems, or H2 and O2 in artificial systems. The benefit of both natural and artificial systems is clear: sunlight is converted into useful forms of energy.

Chemical reactions need energy to power them, delivered as electrons at an electrical potential, or voltage. Plants are in effect solar cells, converting light into electrical energy. But for this to be sustainable, plants need a constant source of electrons, and this has to be an element or compound already present in the plant.

It takes about 2.5 volts to break a single water molecule down into oxygen along with negatively charged electrons and positively charged protons. It is the extraction and separation of these oppositely charged electrons and protons from water molecules that provides the electric power. In plants, chlorophylls evolved to harvest light, along with a complex labyrinth of proteins that conduct the photons to a suitable centre where this crucial water-splitting takes place. In plants, oxygen is the only by-product of this process, but researchers realized some years ago that the reaction could be tweaked to produce hydrogen as well. Still, tweaking photosynthesis to produce hydrogen rather than electrical energy is the easy bit.

Thus we require a source of potential energy along with separated negative and positive charges, and we need a device that fulfills these requirements; the photovoltaic cell satisfies them. Organic cells are much preferred, as they are light in weight and thin in structure, which helps in making nano-scale pieces.

PhotoVoltaic Cell:

A photovoltaic cell uses semiconductor material to transform light into electrical energy. Photons from light hitting the material excite electrons, releasing them from their atoms into the material. Once excited, the electrons are able to move freely within the material, and the semiconductor then serves to force them in the desired directions. By creating a junction of a p-type and an n-type semiconductor, an electrical potential is created: electrons move from the n-type to the p-type while positive charges move from the p-type to the n-type, so the n-type material gains a positive charge and the p-type gains a negative charge. When an electrical circuit connects the p-type and n-type ends, a difference in electrical potential drives a current. Figure 2 shows the operation of a photovoltaic cell.
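A standard way to make this quantitative is the ideal single-diode model, I = I_L − I_0·(exp(qV/kT) − 1), where I_L is the light-generated current. The sketch below uses illustrative parameter values that are assumptions, not figures from this paper.

```python
import math

def pv_current(v: float, i_light=3.0, i_sat=1e-9, temp_k=300.0) -> float:
    """Ideal single-diode model of a photovoltaic cell:
    I = I_L - I_0 * (exp(qV / kT) - 1)."""
    q, k = 1.602e-19, 1.381e-23   # electron charge (C), Boltzmann constant (J/K)
    return i_light - i_sat * (math.exp(q * v / (k * temp_k)) - 1.0)

for v in (0.0, 0.3, 0.5, 0.55):   # cell voltages in volts
    print(f"V = {v:4.2f} V -> I = {pv_current(v):6.3f} A")
```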

This type of cell can be manufactured in many different ways. A monocrystalline semiconductor is much like the ideal type described above: it has a pure p-type crystal placed on a pure n-type semiconductor crystal. This type of cell is the most efficient in terms of turning energy into electricity, but it is expensive to manufacture because it is costly to produce large crystals of semiconductor material. A far more cost-effective material is the polycrystalline cell, which consists of small grains of crystals randomly oriented to each other. Because the smaller crystals, much easier to manufacture, are simply placed together, it is much cheaper. However, energy is lost as electrons must maneuver between the different crystals, so this form of cell has a lower efficiency. Because it is the most economical, however, it is the one commonly used today.

Process to do AFP:

  1. In nature, photosynthesis is the process by which plants take light, water, and carbon dioxide, and transform them into energy and food. There are four main steps that need to be mimicked in order for AFP to work. First, a way to harvest the solar energy, or light from the sun, must be found. Currently, there seem to be two major rival processes: silicon technology versus organic photovoltaics. The latter refers to a process that would imitate the natural process by using material analogous to chlorophyll. Basically, this material would be a thin membrane that captures the light and then passes the photons on to the next step. At present, silicon technology produces up to 33% efficiency in converting sunlight into electricity [4]; this process basically uses micro solar panels. Even though organic photovoltaics have only reached about 8%, their potential efficiency is better than what can be done with silicon technology. Since it is more efficient for light absorption to have thin layers of organic photovoltaics, what better way to paint them on than with a digital fabrication technique such as a nano form of continuous deposition? The digital data would provide the accuracy needed for a uniform thickness while fabbing the appropriate mixture of materials to mimic chlorophyll. This process, of course, would not become available until digital fabrication was able to perform on the nanoscale.
  2. The second process that needs to be mimicked is the one by which the plant uses the photon to split the water molecule into hydrogen and oxygen. Until recently, attempts at replicating this process in the laboratory had failed, because the geometry has to be exactly right. In a plant there is a "complex labyrinth of proteins to conduct the photons to a suitable centre where this crucial water-splitting takes place" [5]. A recent breakthrough identified the precise location and arrangement of the few critical molecules of manganese, oxygen and calcium, within the core of the plant's photosynthesis engine, that make this water-splitting possible. Hence, to perform this artificially there is a need to fabricate the whole environment on a chip, which can be done through nanotechnology. Fabricating the films and paths for the photons to pass through and follow before they reach the core is a crucial step, but being able to fabricate the perfect geometry of the core itself, an approach known as "biomimetic" engineering, would be invaluable. Nano digital fabrication would be a major breakthrough at this level, as can be seen from the advancement in the implementation of AFP that would take place.
  3. The third step that needs to be mimicked is the "bio-energy" transfer that occurs through ATP and NADPH, on which progress has been slow. An important protein in that process is Rubisco, which waterproofs the reaction site that is selective for carbon dioxide. If the part of the protein that is there solely to waterproof the site could be artificially fabricated, allowing the rest of the protein to be downsized in order to speed up the process, it would be a major breakthrough. So far, no process is accurate or efficient enough to accomplish this. Could digital fabrication of this area be the answer? It would have to occur for each reaction of carbon dioxide. This would not be something that could be fabricated into an object or circuit as mentioned before, but would rather have to be done constantly under a nano-fabber. That does not make this process any less important; as stated before, the results would be astounding. The process would flow in perhaps greenhouse labs where solar panels would collect the sunlight, transfer the protons, split the water molecule, and, when the time came for the Rubisco protein to surround the carbon dioxide, a special organic membrane could be fabbed around the reaction site as a genetically altered Rubisco catalyzed the process more efficiently.
  4. Finally, the last process is converting the carbon dioxide into other products. This area has a long way to go, but when accomplished will be able to produce proteins for food, alcohols for fuel, and chemical intermediates. The applications of AFP are vast and carry life changing benefits, but it will take digital fabrication to bring them about.

Fabbing AFP into MicroCircuits:

So for this process of AFP, digital fabrication could build the "antenna" structure that captures the light (similar to a solar panel), and coat it with the artificial material that would replace the chlorophyll. As stated before, these organic photovoltaics mimicking chlorophyll would need to be "painted" on in thin layers onto these antennas. An example of how digital fabrication would perform this can be seen in the example of building a couch. The original manufacturing process of a couch flows from building the frame, to attaching cushioning material, to covering that material with a choice fabric. Digital manufacturing of a couch would not flow the same way. It would not build it part by part; rather, it would start from the ground and work its way up, simply changing the material it was using as it went along. For example, it might start with wood for the frame; then, when it reached what would be the cushion, it would simply change to a material with squishable yet resilient properties.

Likewise when it came to what would be the fabric to cover the chair cushion it would switch to a material that could enclose and protect the previous material, with properties that were tough and resistant, yet comfortable. Now, analogous to this process would be many processes on the nano and micro level such as the antenna and the organic photovoltaic. The nano digital fabricator might start out with a base material for the antenna and then switch to the organic photovoltaic material that would in essence be like being painted on to any surface needed.

Applications:

  • The main aim behind developing AFP is this: just as natural photosynthesis prepares food from CO2, sunlight and water, AFP mimics it and prepares food from the sunlight, water and CO2 available in the atmosphere, thus reducing the CO2 content of the atmosphere. It thereby acts as an artificial sink and protects against the greenhouse effect.
  • Byproducts of the process could be a valuable alternative fuel, methane, or even food in the form of starches and sugars.
  • If a digitally manufactured product that could perform photosynthesis existed, it would not only help to reduce the Greenhouse effect, but would also reduce pollution as well as create alternative food and fuel sources.
  • In the process of AFP, digital manufacturing would not only have to fab the artificial materials necessary for the process, but also, in essence, program them to work together. For example, the fabber would need to manufacture the nanotube [8] that draws in the air, the nano solar panel that takes in the light, the chemical dispenser that adds to the mix for the chemical reaction of CO2 and O2 into energy or food, and an output tube, and then set it all up before finally hitting the run button. Furthermore, microcircuits will need to be fabbed on all the nano photosynthesis plants in order to drive the photosynthesis process by instructing each of its steps.
  • Using nanotechnology to create natural processes such as photosynthesis is of critical significance to the future of our world. Since photosynthesis is the basis of our food and energy supply; a supply which is running increasingly short in our “space-age” world, finding ways to artificially create this process using digital manufacturing should be of utmost importance. This, together with novel uses of photosynthetic principles for other purposes, make it likely that photosynthesis and its applications will help to shape an increasingly broad area of exciting discoveries and innovative ideas.
  • But the harnessing of photosynthesis, the mechanism by which plants derive their energy, shows much potential in solving this energy problem. By creating artificial systems that exploit the basic chemistry of photosynthesis, additional processes can be devised to produce hydrogen or other fuels, both for engines and for electricity. Since hydrogen burns cleanly, it yields only water and energy. AFP could also "mop up" any excess carbon dioxide left over from our present era of "profligate fossil fuel consumption".
  • If the Rubisco molecule could be made smaller, more efficient, and faster, the benefits would be incredible: "…if we can improve the efficiency of Rubisco from 1.5% to 1.6% and introduce this into crops, just the benefits to mankind in terms of higher production, better turnover rate in agriculture, would be absolutely mind-boggling."
  • A major advancement in digital manufacturing technology may one day be the actual fabrication of food.

Conclusion:

Could micro or nano digital fabrication be the breakthrough process by which these artificial photosynthesis reaction centers will finally be implemented in electronic circuits? Currently, the process of artificial photosynthesis can only be performed in the laboratory, but it is hypothesized that digital fabrication will be the process that moves it from the laboratory to the rest of the world, and then the sky is the limit. Nano digital fabrication might not solve the problem of how to make the ATP and NADPH storage molecules, or how to artificially convert carbon dioxide to other carbon products (both processes yet to be successfully accomplished), but when those problems are solved, digital fabrication will be the key to implementing them in electronic circuits, devices, and other processes that will greatly improve the quality of life for everyone.

References:

  • Prospect Magazine by Philip Hunter or www.futurehi.net.
  • www.mapageweb.com
  • www.eere.energy.gov/solar/photovoltaics.html
  • www.abc.net.au/rn/science/buzz/stories.html
  • www.futurehi.net/archieves/000159.html
  • www.geocites.com/flipy_nicki.html
  • www.ioffe.rssi.com
  • www.personal.rdg.ac.uk
  • www.vafps.org/e-commercefuzzy.html

Technical Paper on Finger Print Recognizer using Fuzzy Evolutionary Programming

A fingerprint recognizing system is built with two principal components: the fingerprint administrator and the fingerprint recognizer. Fingerprints are identified by their special features, such as ridge endings, ridge bifurcations, short ridges, and ridge enclosures, which are collectively called the minutiae. This paper explains the fingerprint characteristics that are used to identify individuals and the process of minutiae extraction. The fingerprint administrator uses the method of gray-scale ridge tracing, backed up by a validating procedure, to extract the minutiae of fingerprints. The fingerprint recognizer employs the technique of fuzzy evolutionary programming to match the minutiae of an input fingerprint with those from a database.

Introduction:

The fingerprints of an individual are unique and normally unchanged during the whole life. Fingerprint recognition has been widely used in criminal identification, access authority verification, financial transfer confirmation, and many other civilian applications. In the old days, fingerprint recognition was done manually by professional experts, but this task has become more difficult and time-consuming. In this paper, we explain the method of direct gray-scale minutiae detection, improved by a backup validating procedure to eliminate false minutiae. For minutiae matching, we employ the technique of fuzzy evolutionary programming, which has been used successfully in speaker identification, image clustering, and fuzzy algebraic operations.

Finger Print Characteristics:

A fingerprint is a textural image containing a large number of ridges that form groups of almost parallel curves (Figure 1). It has been established that a fingerprint's ridges are individually unique and are unlikely to change during the whole life.

Although the structure of ridges in a fingerprint is fairly complex, it is well known that a fingerprint can be identified by its special features such as: 

Ridge endings: points where a ridge ends abruptly in the middle, as shown in fig 2(a).
Ridge bifurcations: points where a ridge divides in the middle, as shown in fig 2(b).
Short ridges: small lines present between two ridges, as shown in fig 2(c).
Ridge enclosures: loops formed between ridges, as shown in fig 2(d).

These ridge features are collectively called the minutiae of the fingerprint. A full fingerprint normally contains 50 to 80 minutiae. According to the Federal Bureau of Investigation, it suffices to identify a fingerprint by matching 12 minutiae.

Minutiae Extraction:

For convenience, we represent a fingerprint image in reverse gray scale. That is, the dark pixels of the ridges are assigned high values, whereas the light pixels of the valleys are given low values. Figure 3 shows a section of ridges in this representation.

In a fingerprint, each minutia is represented by its location (x, y) and the local ridge direction; Figure 4 shows the attributes of a fingerprint's minutia. The process of minutiae detection starts with finding a summit point on a ridge, and then continues by tracing the ridge until a minutia, which can be either a ridge ending or a bifurcation, is encountered.

Finger Print Recognition:

The primary purpose of our fingerprint recognizing system is to calculate the matching degree of the target fingerprint with the images in a database and to decide if it belongs to a particular individual. A fingerprint is said to match one image in the database if the degree of matching between its minutiae and that of the image in the database is higher than some prespecified acceptance level. The method of calculating this matching degree is based on our fuzzy evolutionary programming technique.
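The paper's matcher is based on fuzzy evolutionary programming, whose details are not reproduced here. Purely as an illustration of the fuzzy-matching idea, the sketch below scores each target minutia by a Gaussian membership of its distance (in position and angle) to the nearest database minutia and averages the memberships; the tolerances and the acceptance rule are assumptions.

```python
import math

def fuzzy_match_degree(target, reference, sigma_pos=8.0, sigma_ang=0.3):
    """Illustrative fuzzy matching degree between two minutiae sets.

    Each minutia is (x, y, theta). Every target minutia receives a
    membership in [0, 1] from its nearest reference minutia; the overall
    degree is the mean membership. This sketches the fuzzy-matching idea
    only, not the paper's fuzzy evolutionary programming algorithm."""
    degrees = []
    for (x, y, th) in target:
        best = 0.0
        for (xr, yr, thr) in reference:
            d2 = (x - xr) ** 2 + (y - yr) ** 2        # squared position gap
            a2 = (th - thr) ** 2                       # squared angle gap
            mu = math.exp(-d2 / (2 * sigma_pos**2) - a2 / (2 * sigma_ang**2))
            best = max(best, mu)
        degrees.append(best)
    return sum(degrees) / len(degrees)

a = [(10, 20, 0.5), (40, 55, 1.2), (70, 30, 2.0)]
b = [(12, 21, 0.45), (41, 53, 1.25), (90, 90, 0.0)]
print(f"matching degree: {fuzzy_match_degree(a, b):.2f}")  # accept above a set level
```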

Conclusion:

We have presented a fingerprint recognizing system that uses the method of gray-scale ridge tracing, backed up by a validating procedure, to detect a fingerprint's minutiae, and that employs the technique of fuzzy evolutionary programming to match two sets of minutiae in order to identify a fingerprint. The experimental results show that the system is highly effective with relatively clean fingerprints. However, for poorly inked and badly damaged fingerprints, the system appears to be less successful.

In order to handle such fingerprints, we suggest the addition of a preprocessing component that also adopts the fuzzy evolutionary approach to reconstruct and enhance the fingerprints before they are processed by the system. It is also possible to connect the system to a live fingerprint scanner that obtains a person's fingerprint directly and sends it to the system for identification.

References:

  1. Arcelli, C., and Baja, G.S.D., "A Width Independent Fast Thinning Algorithm", IEEE Trans. Pattern Analysis and Machine Intelligence.
  2. Baruch, O., "Line Thinning by Line Following", Pattern Recognition Letters, Vol. 8, No. 4, 1988, pp. 271-276.




Technical Paper on Thresholding Digital Image Processing for Vision Systems

Robotic systems are gaining popularity in industry today, owing to the better precision, speed and quality they achieve. The most important element of these systems is their vision system. Several techniques are used to help vision systems improve their performance. This paper presents one such technique, thresholding, used to digitize a given image for processing by the vision computer. Thresholding is the most popular technique in industrial robotics applications owing to its low cost and ease of implementation, and also because in most industrial applications the lighting at the scene of interest can be controlled. This paper provides the background information needed to understand the concept of image analysis using vision systems, and deals with image processing techniques and thresholding.

Keywords:

CCD camera, vision, thresholding, gray level, image processing.

Introduction:

Machine vision has now been an active area of research for more than 35 years and it is gradually being introduced for real world application. Most of the applications which were
developed and built in the seventies and the eighties were based on dedicated methods combined with use of specific application knowledge. I.e., in a typical application special sensory equipment such as laser range cameras was used often in combination with well-controlled lighting (i.e., artificial or structured lighting). For description of geometric information or motion it was often assumed that the environment was constrained to a limited number of well-defined objects which were modeled a priori and most of their characteristics were utilized in the image processing and analysis.Little insight into the general problem of image based scene description and interpretation was gained from these applications, as the applications to a large extent were based directly on image-derived features. The approaches generally lacked robustness and often became ill posed for even a slight variation in the conditions of the original application.Very little in the way of general “high level” algorithms came out of this. The reconstruction approach set forward by Marr in his now famous book “Vision” thus received little attention in terms of use in industry.Recent research has, however, indicated that some of the robustness and the ill-posed problem may be eliminated if the algorithms are applied in the context of a controllable  sensor system. The explicit control of the sensory system to improve robustness and eliminate ill-posed conditions is often termed “active vision” .In addition it has been suggested that the general machine vision problem of providing a full 3-D symbolic description of the environment without any prior knowledge is much too hard to be solved at this time and that robust solutions may be found provided that task specific knowledge is utilized in the design and processing for a specific application. Such an approach to machine vision is termed “purposive vision”. The aim of purposive vision is not to default to the strategy for construction of application, which was adopted in the seventies and the eighties, but rather to complement “general” machine vision techniques with domain specific information to facilitate control of the entire system so as to provide the needed robustness. Control is thus a significant issue in purposive vision. A significant application area for machine vision is in robotics. Much of the work in robotics has been based on use of sensory modalities such as ultrasonic sonars, as it has been difficult to obtain sufficiently good depth data using “shape from X” techniques.The introduction of a priori information may, however, change this situation. For use in well-known scenarios, it is possible to construct a model of the environment and subsequently compare sensor readings with predictions obtained from the model  environment. Progress in areas such as CAD modeling has implied that it today is possible to integrate CAD systems into the control of robots. The introduction of such models implies at the same time that it is possible to exploit machine vision methods, as the needed a priori information may be extracted from the CAD model .To ensure that the systems constructed may be used not only for one specific application but also rather for a variety of applications the trend is towards use of layered control. 
In layered control, the hardware of the robot is interfaced to the rest of the system through device-level software. This software transforms robot-specific commands and feedback into a standard representation which may be shared by several different platforms; it thus becomes simple to change robots without a complete redesign of all the software. Above the device level is a set of control layers which handle path planning, control and the associated perception. In robot control, at least for mobile robots, there is a trend towards the use of a set of such layers, each responsible for a specific task. For example, one layer may be responsible for "survival", making sure that the robot does not bump into objects in the environment and that it moves away if it is on a collision course with another object. Another layer might be responsible for constructing a map of the environment to facilitate navigation or localization of target objects. The use of different layers for different tasks differs from the approach traditionally used in robotics, where control is integrated in a single "perceive-plan-control" cycle. The two approaches to controlling a robot are illustrated in figure 1.
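To make the layered approach concrete, the following minimal Python sketch (all class and method names here are hypothetical, invented purely for illustration) shows a device level that hides robot-specific commands, with a high-priority "survival" layer that can override a lower-priority mapping layer:

    # Minimal sketch of layered robot control (hypothetical names, for illustration).

    class DeviceLevel:
        """Translates standard commands into robot-specific ones, so the
        layers above never depend on a particular robot platform."""
        def send_velocity(self, linear, angular):
            # A real driver would emit the robot-specific command here.
            print(f"robot <- v={linear:.2f} m/s, w={angular:.2f} rad/s")

    class SurvivalLayer:
        """Highest-priority layer: stop and turn away when an obstacle is near."""
        def propose(self, sonar_range_m):
            if sonar_range_m < 0.5:        # collision course
                return (0.0, 1.0)          # stop forward motion, turn away
            return None                    # no objection, defer to other layers

    class MappingLayer:
        """Lower-priority layer: drive towards the next waypoint of a map."""
        def propose(self, sonar_range_m):
            return (0.3, 0.0)              # cruise forward

    device = DeviceLevel()
    layers = [SurvivalLayer(), MappingLayer()]   # ordered by priority

    def control_step(sonar_range_m):
        for layer in layers:                     # first layer with an opinion wins
            command = layer.propose(sonar_range_m)
            if command is not None:
                device.send_velocity(*command)
                return

    control_step(sonar_range_m=2.0)   # mapping layer drives
    control_step(sonar_range_m=0.3)   # survival layer overrides

In this simple arbitration scheme the first layer with an opinion wins; real systems use more elaborate coordination between layers, but the separation of tasks is the same.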



Introduction to Vision Systems:

The typical vision system consists of a camera and digitizing hardware, a digital computer, and the hardware and software necessary to interface them. This interfacing hardware and software is often referred to as a preprocessor. The operation of the vision system consists of three functions:
1. Sensing and digitizing image data.
2. Image processing and analysis.
3. Application.
The sensing and digitizing functions involve the input of vision data by means of a camera focused on the scene of interest. Special lighting techniques are frequently used to obtain an image of sufficient contrast for later processing. The image viewed by the camera is typically digitized and stored in computer memory. The digital image is called a frame of vision data, and is frequently captured by a hardware device called a frame grabber. These devices are capable of digitizing images at the rate of 30 frames per second. Each frame consists of a matrix of data representing projections of the scene sensed by the camera; the elements of the matrix are called pixels. The number of pixels is determined by a sampling process performed on each image frame.
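As a rough sketch of the sensing and digitizing function, the following Python snippet grabs and inspects one frame using OpenCV (the camera index, and whether a camera is present at all, depend on the installed hardware; this is illustrative, not part of the original system):

    import cv2

    # Open the default camera (index 0); a frame grabber typically
    # delivers such digitized frames at about 30 per second.
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()          # one digitized frame: a matrix of pixels
    cap.release()

    if ok:
        rows, cols = frame.shape[:2]
        print(f"frame of {rows} x {cols} pixels")   # the sampled matrix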

A single pixel is the projection of a small portion of the scene, reducing that portion to a single value. The value is a measure of the light intensity for that element of the scene, and each pixel intensity is converted to a digital value. The digitized image matrix for each frame is stored and then subjected to the image processing and analysis functions for data reduction and interpretation of the image. Typically, an image frame is thresholded to produce a binary image, and various feature measurements then further reduce the data representation of the image.
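A hedged sketch of this reduction chain, thresholding a gray image and then shrinking it to a few feature measurements with OpenCV's connected-component analysis (the file name and threshold value are hypothetical):

    import cv2

    gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file
    # Threshold to a binary image, then reduce it to a few feature measurements.
    _, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    for i in range(1, n):                        # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        cx, cy = centroids[i]
        print(f"object {i}: area={area} px, centroid=({cx:.1f}, {cy:.1f})")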

Image Processing versus Image Analysis:

Image processing relates to the preparation of an image for later analysis and use. Images captured by a camera or by a similar technique (e.g., a scanner) are not necessarily in a form that can be used by image analysis routines. Some may need improvement to reduce noise, others may need to be simplified, and still others may need to be enhanced, altered, segmented, or filtered. Image processing is the collection of routines and techniques that improve, simplify, enhance, or otherwise alter an image. Image analysis is the collection of processes in which a captured image, prepared by image processing, is analyzed in order to extract information about the image and to identify objects or facts about the objects or their environment.

Two- and Three-Dimensional Images:

Although all real scenes are three-dimensional, images can be either two- or three-dimensional. Two-dimensional images are used when the depth of the scene or its features need not be determined. As an example, consider defining the surrounding contour or silhouette of an object: in that case it is not necessary to determine the depth of any point on the object. Another example is the use of a vision system for inspection of an integrated circuit board. Here, too, there is no need to know the depth relationship between different parts; since all parts are fixed to a flat plane, no information about the surface is necessary, and a two-dimensional image analysis and inspection will suffice. Three-dimensional image processing deals with operations that require motion detection, depth measurement, remote sensing, relative positioning and navigation. All three-dimensional vision systems share the problem of coping with many-to-one mappings of scenes to images; to extract information from such scenes, image processing techniques are combined with artificial intelligence techniques. In this paper we consider a vision system for two-dimensional image processing only.

Acquisition of Images:

There are two types of vision cameras: analog and digital. Analog cameras are no longer very common, but are still around; they used to be standard at television stations. Digital cameras are much more common and are mostly similar to each other. A video camera is simply a digital camera with an added videotape-recording section; otherwise, the mechanism of image acquisition is the same as in other cameras that do not record the image. Whether the captured image is analog or digital, in vision systems the image is eventually digitized. In digital form, all data are binary and are stored in a computer file or memory chip.


Vidicon Camera:

A vidicon camera is an analog camera that transforms an image into an analog electrical signal. The signal, a variable voltage (or current) versus time, can be stored, digitized, broadcast, or reconstructed into an image. With the use of a lens, the scene is projected onto a screen made up of two layers: a transparent metallic film and a photoconductive mosaic that is sensitive to light. The mosaic reacts to the varying intensity of light by varying its resistance; as the image is projected onto it, the magnitude of the resistance at each location varies with the intensity of light. An electron gun generates and sends a continuous cathode beam through two pairs of capacitors (deflectors) that are perpendicular to each other. Depending on the charge of each pair of capacitors, the electron beam is deflected up or down, and left or right, and is projected onto the photoconductive mosaic. At each instant, as the beam hits the mosaic, the charge is conducted to the metallic film and can be measured at the output port. The voltage measured at the output is V = IR, where I is the current (of the beam of electrons) and R is the resistance of the mosaic at the point of interest.


Digital Camera:

A digital camera is based on solid-state technology. As with other cameras, a set of lenses is used to project the area of interest onto the image area of the camera. The main part of the camera is a solid-state silicon wafer image area that has hundreds of thousands of extremely small photosensitive areas, called photosites, printed on it. Each small area of the wafer is a pixel. As the image is projected onto the image area, a charge develops at each pixel location of the wafer that is proportional to the intensity of light at that location. Accordingly, a digital camera is also called a charge coupled device (CCD) camera or, depending on the technology, a charge injection device (CID) camera. The collection of charges, if read sequentially, is a representation of the image pixels. The wafer may have as many as 520,000 pixels in an area with dimensions of a fraction of an inch (3/16 × 1/4 in.). Obviously, it is impossible to have direct wire connections to all of these pixels to measure the charge in each one. Instead, 30 times a second the charges are moved to optically isolated shift registers next to each photosite, shifted down to an output line, and read. The result is that every 1/30 of a second the charges at all pixel locations are read sequentially and stored or recorded. The output is a discrete representation of the image – a voltage sampled in time – as shown in figure (a); figure (b) shows the CCD element of a VHS camera. Similar to CCD cameras for visible light, long-wavelength infrared cameras yield a television-like image of the infrared emissions of the scene.
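The sequential readout can be mimicked in a few lines of NumPy; the random array below merely stands in for the charges accumulated at the photosites, and the frame size is arbitrary:

    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in for the charge accumulated at each photosite during one exposure.
    charges = rng.random((480, 640))

    # Shift-register-style readout: row by row into one sequential stream,
    # produced once every 1/30 s in a real CCD.
    stream = charges.reshape(-1)        # 307,200 samples, one per pixel
    print(stream[:5])                   # first few "voltages" off the chip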

Digital Images:

The sampled images from the aforementioned process are first digitized through an analog-to-digital converter (ADC) and then either stored in the computer storage unit in an image format such as TIFF, JPG, or BMP, or displayed on a monitor. The stored information is a collection of 0's and 1's that represent the intensity of light at each pixel; a digitized image is nothing more than a computer file containing these 0's and 1's, sequentially stored to represent the intensity of light at each pixel. The file can be accessed and read by a program, duplicated and manipulated, or rewritten in a different form. Vision routines generally access this information, perform some function on the data, and either display the result or store the manipulated result in a new file. An image that has different gray levels at each pixel location is called a gray image. The gray values are digitized by a digitizer, yielding strings of 0's and 1's that are sequentially displayed or stored. A color image is obtained by superimposing three images of red, green, and blue hues, each with a varying intensity and each equivalent to a gray image (but in a colored state); when such an image is digitized, it will similarly have strings of 0's and 1's for each hue. A binary image is one in which each pixel is either fully light or fully dark – a 0 or a 1. To achieve a binary image, in most cases a gray image is converted by using the histogram of the image and a cutoff called a threshold. The histogram gives the distribution of the different gray levels; one can pick the value that best separates the image with the least distortion and use it as a threshold, assigning 0 ("off") to all pixels whose gray levels are below the threshold and 1 ("on") to all pixels whose gray levels are above it. Changing the threshold changes the binary image. The advantage of a binary image is that it requires far less memory and can be processed much faster than gray or colored images.
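A minimal NumPy sketch of this histogram-and-threshold procedure (the image is a random stand-in and the cutoff of 100 is arbitrary; in practice the cutoff would be read off the histogram):

    import numpy as np

    rng = np.random.default_rng(1)
    gray = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)  # stand-in gray image

    hist, _ = np.histogram(gray, bins=256, range=(0, 256))  # gray-level distribution
    threshold = 100                                # arbitrary cutoff for illustration
    binary = (gray >= threshold).astype(np.uint8)  # 1 ("on") above, 0 ("off") below

    print(hist.argmax(), binary.mean())  # most common gray level; fraction of "on" pixels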

Image Processing Techniques:

Image processing techniques are used to enhance, improve, or otherwise alter an image and to prepare it for image analysis. Usually, during image processing, information is not extracted from the image. The intention is to remove faults, trivial information, or information that may be important but not useful, and to improve the image. As an example, suppose that an image was obtained while the object was moving; as a result, the image is blurred. It would be desirable to reduce or remove the blur before information about the object (such as its nature, shape, location, orientation, etc.) is determined. Again, consider an image that is corrupted by reflections from direct lighting, or an image that is noisy because of low light. In all these cases it is desirable to improve the image and prepare it before image analysis routines are used.
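For instance, an image that is noisy because of low light might be cleaned with a median filter before any analysis routine is applied, as in this OpenCV sketch (the file names are hypothetical):

    import cv2

    noisy = cv2.imread("low_light_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    # A 5x5 median filter suppresses salt-and-pepper noise while preserving edges,
    # preparing the image for the analysis routines that follow.
    cleaned = cv2.medianBlur(noisy, 5)
    cv2.imwrite("cleaned_frame.png", cleaned)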

The various techniques employed in image processing and analysis are:
1. Image data reduction
2. Segmentation
3. Feature extraction
4. Object recognition

This paper primarily deals with the process of segmentation; further discussion of the other techniques is therefore omitted.

Segmentation:

Segmentation is the generic name for a number of different techniques that divide an image into segments representing its constituents. In segmentation, the objective is to group areas of an image having similar characteristics or features into distinct entities representing parts of the image. One of the most important such techniques, and the one this paper deals with, is thresholding.

Thresholding:

Thresholding is a binary conversion technique in which each pixel is converted into a binary value, either black or white. This is accomplished by utilizing a frequency histogram of the image and establishing what intensity (gray level) is to be the border between black and white. To improve the ability to differentiate, special lighting techniques must often be employed. It should be pointed out that the above method of using a histogram is only one of a large number of ways to threshold an image. Such a method is said to use a global threshold for the entire image. When it is not possible to find a single threshold for an entire image, an alternative approach is to partition the total image into smaller rectangular areas and determine a threshold for each window being analyzed. For this paper, images of a weld pool were taken in real time and digitized using the thresholding technique. The images were thresholded at various values, as well as at the optimum value, to show the importance of choosing an appropriate threshold; two such sample images are shown here, and they clearly illustrate the effect of the chosen threshold. The optimum threshold is determined from the histogram.
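Both the global and the windowed variants are easy to express with OpenCV; the sketch below applies a histogram-derived global (Otsu) threshold and a per-window (adaptive) threshold to the same image. The file name and the 31 × 31 window size are illustrative, and this is not the weld pool data referred to above:

    import cv2

    gray = cv2.imread("pool.png", cv2.IMREAD_GRAYSCALE)   # hypothetical image

    # Global threshold: one cutoff for the whole image, picked automatically
    # from the histogram by Otsu's method.
    otsu_value, global_bin = cv2.threshold(gray, 0, 255,
                                           cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Windowed threshold: a separate cutoff is computed for each 31x31
    # neighbourhood, for scenes where no single global cutoff works.
    adaptive_bin = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                         cv2.THRESH_BINARY, 31, 5)

    print(f"histogram-derived global threshold: {otsu_value}")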

Conclusion:

Thresholding is the most widely used technique for segmentation in industrial vision applications. The reasons are that it is fast and easily implemented and that the lighting is usually controllable in an industrial setting. In this paper, as an example, a weld pool image was digitized using the thresholding technique, and the effect of choosing various thresholds was demonstrated. The technique can also be applied to scenes in which multiple objects occupy the viewport.

References:


  • C. V. Sriram, C. L. V. Prasad and M. M. M. Sarkar, "Minimization and Quantification of Arc Interference in Robotic Welding," Proceedings of MEAK 2K2 – National Conference on CAD/CAM.
  • K. S. Fu, R. C. Gonzalez and C. S. G. Lee, Robotics: Control, Sensing, Vision and Intelligence, McGraw-Hill Book Company.
  • Mikell P. Groover, Mitchell Weiss, Roger N. Nagel and Nicholas G. Odrey, Industrial Robotics: Technology, Programming and Applications, McGraw-Hill Book Company.
  • Robert J. Schilling, Fundamentals of Robotics: Analysis and Control, Prentice-Hall of India Private Limited.
  • Saeed B. Niku, Introduction to Robotics: Analysis, Systems, Applications, Pearson Education Asia.
  • P. A. Janakiraman, Robotics and Image Processing: An Introduction, Tata McGraw-Hill Publishing Company.