
“Simplicity is the ultimate sophistication.” — Leonardo da Vinci

Saturday, 3 June 2017

Nature of logical statements and postulates.

Postulates are propositions that are accepted without a definite proof; they are usually statements concluded from experiment or observation. A logical statement is defined as an assertion obtained by applying rules that are defined to be correct. In the context of this article, the term ‘rule’ is used interchangeably with the term ‘logical statement’, and the term ‘rule-set’ is always used interchangeably with the phrase ‘a set of logical statements’. By definition, a logical statement is a statement derived from fixed principles that are known to be correct. Contrary to the popular belief that every statement can be proved, in reality not all statements can be proven. This is because, in the end, every rule is constructed from fixed rules, and to prove the correctness of those fixed rules we would need yet more fixed rules.
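The idea of deriving new statements by mechanically applying fixed rules can be sketched as a tiny forward-chaining inference loop. This is a hypothetical illustration (the facts and rules here are made up), not part of the original article:

```python
# A minimal forward-chaining rule system: facts are strings,
# rules map a frozenset of premises to a conclusion.
rules = {
    frozenset({"A"}): "B",          # rule: A -> B
    frozenset({"B", "C"}): "D",     # rule: B and C -> D
}

def derive(facts, rules):
    """Repeatedly apply rules until no new statement can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules.items():
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(derive({"A", "C"}, rules)))  # -> ['A', 'B', 'C', 'D']
```

Note that the rules themselves are taken as given: the loop can only derive what the fixed rules allow, which is exactly why the correctness of the rules cannot be established from within the system.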

Thursday, 15 December 2016

Neural Networks

Neural Network Library (work in progress) 

               -By K Sreram  

I have recently written a simple neural-network library and posted it on GitHub. It uses the feed-forward and back-propagation algorithms. It is still far from a complete neural-network tool. The link to the source:
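As a rough sketch of what such a library does (this is hypothetical illustrative code, not the actual GitHub source), a single-hidden-layer network with feed-forward and back-propagation can be written in a few dozen lines:

```python
import math
import random

random.seed(1)
sig = lambda z: 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# A tiny 2-2-1 network with biases; weights stored as plain lists.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # input -> hidden
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]                      # hidden -> output
b2 = 0.0

def forward(x):
    """Feed-forward pass: returns (hidden activations, output)."""
    h = [sig(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(2)]
    o = sig(sum(w2[j] * h[j] for j in range(2)) + b2)
    return h, o

def train(data, lr=1.0, epochs=4000):
    """Back-propagation with squared-error cost E = 0.5 * (o - t)^2."""
    global b2
    for _ in range(epochs):
        for x, t in data:
            h, o = forward(x)
            d_o = (o - t) * o * (1 - o)                 # output error term
            for j in range(2):
                d_h = d_o * w2[j] * h[j] * (1 - h[j])   # hidden error term
                w2[j] -= lr * d_o * h[j]
                b1[j] -= lr * d_h
                for i in range(2):
                    w1[j][i] -= lr * d_h * x[i]
            b2 -= lr * d_o

# Learn the OR function as a sanity check.
train([([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)])
```

After training, `forward([0, 0])` returns an output below 0.5 and the other three input pairs return outputs above 0.5.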

Wednesday, 24 August 2016

Significance of learning algorithms in robotics

                                     Neural Network systems in robotics
                                                         Copyright © 2016 K Sreram, all rights reserved
Abstract— Robotic systems with dynamic adaptability are becoming increasingly popular in research, specifically because they let us build in functionalities that cannot be implemented using conventional algorithms. Even if an attempt were made to use human statisticians in place of learning algorithms, the statisticians would need to spend an extensive amount of time deducing the behaviour and properties of the data, and would soon be further required to re-adapt the system to a completely new data-set. This requirement is almost completely avoided with learning algorithms such as neural networks. This paper outlines the application of neural networks and other learning algorithms in automated systems and robotics, and explains how robots with the capacity to dynamically adapt (using a learning algorithm) prove vital in solving real-life problems.
Keywords—neural networks; robotics; dynamic adaptation; statistics; data-mining; unsupervised learning
Robots now play a major role in automating tasks in many industries. In production industries, robots replace humans on various hazardous tasks which involve interacting with live machines. But most of the manual tasks that have been successfully replaced by robots are the kind that can be carried out by a pre-defined set of rules. Robots have not yet reached the point of replacing tasks that cannot be performed using the same set of rules more than once. Robots with learning algorithms are still in the early stages of development. Though learning algorithms are not yet prominent in robots, they prove extremely useful for categorizing uncategorized data and filtering white noise from real-time data; they are not, however, efficient enough to make decisions based on the data they receive. This paper presents the use of learning algorithms in robots with an example design of a robot that changes the pressure of its wheels' suspension to reduce the intensity of the robot's vibration as it moves over irregular surfaces. It also discusses robots that do not use learning algorithms but can still be extensively useful in solving vital real-life problems.
A combination of distance sensors at the robot's front end measures the distance between the point where the sensors are placed and the ground. Irregularities in the ground cause a deviation from the expected distance measure; these deviations, together with the velocity of the robot (which must be taken into account because the Doppler effect causes the frequency of the detected sound to shift), can be used to construct the pattern of the irregularities in the ground. Learning algorithms such as deep neural networks can be used to form an approximate correlation between these parameters and the required change in the pressure in the wheels.
The implementation of the dynamically alterable suspension system is more of a mechanical concern, so in this paper we concentrate on constructing the irregularity pattern of the surface from parameters like the velocity of the robot and the distance values measured by the first, second and third sensors. This information is in turn used by static algorithms (along with parameters like the time required for changing the pressure of the suspension) to determine the amount by which the robot's suspension should change. A learning algorithm alters its own behaviour to adapt to the given training data-set. The process of adapting to training data can be thought of as learning a relation y = f(x) (for values of x in a specific domain) such that future values input to the system cause the function to return approximately the expected output.
A learning algorithm can generally be expressed as y = f(x; θ), where θ is a set of parameter values that changes with time, x is the set of inputs and y is the set of outputs. The algorithm tries to adapt to a given training data-set and progressively changes the values in θ to fit the training data. One such algorithm that learns from input data-sets is the neural network.
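The idea of progressively adjusting θ can be illustrated with a one-parameter least-squares fit. This is a minimal hypothetical sketch (the data points are made up), not an algorithm from the paper:

```python
# Fit y = theta * x to data by gradient descent on squared error.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x
theta = 0.0
lr = 0.02

for _ in range(500):
    # dE/dtheta for E = 0.5 * sum((theta*x - y)^2)
    grad = sum((theta * x - y) * x for x, y in data)
    theta -= lr * grad

print(round(theta, 2))  # converges near 2.0
```

Each iteration nudges θ against the gradient of the error, which is exactly the "progressively changes the values in θ" behaviour described above, in its simplest possible form.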
Robots like the mechanical arm conditionally execute certain processes based on explicitly defined rules. But when it is required to process external information containing a lot of white noise and make a decision based on that information, learning algorithms are required. Learning algorithms work with indefinite rules; such rules are complexly defined, with many constraints, and these algorithms learn to adapt and enhance their purpose. Examples of robots that use learning algorithms include self-driving cars, map-construction robots, and feature-recognition systems (both visual and audio). These robots are directly subjected to real-world input data. Learning algorithms take in real-world data and filter out the white noise so that the obtained information can be accurately classified. Manually defining constraints for the filtration (or classification) process is generally difficult for human statisticians, so the best alternative is to devise an algorithm that accepts these real-world data-sets and their respective classifications and gradually enhances its ability to classify similar data. There are two types of learning algorithms: supervised learning algorithms and unsupervised learning algorithms.
Supervised learning algorithms expect both the data and its expected classification (or response) as the training data-set, and try to form a correlation between the input data and the expected output data. Unsupervised learning algorithms do not require a specifically customized learning data-set for adapting themselves; these algorithms capture the pattern underlying the data input to the system rather than forming a correlation between two data-sets. Unsupervised learning algorithms are used for clustering similar data-sets, for computing the density of similar data-sets input to the system, and for defining a subset of the data-set that must be allowed to enter the system (a transformation process).
Supervised learning algorithms aid in making decisions based on the scenario prevailing in the external environment, while unsupervised learning algorithms help in recognizing common patterns in the data input to the system. The most widely used learning algorithm is the neural network. The design of this algorithm was originally inspired by the functionality of the human brain. Though the system was designed in an attempt to emulate the brain, the difference in functionality between the brain and a neural network system is quite drastic. The detailed functionality of a single biological neuron is still not fully understood, and some scientists believe a single neuron is far more complex than assumed. But in a neural network system, each neuron has a simple activation function, which computes the result of the neuron by taking the input as its parameter. The connection between any two neurons is a weighted directed link.
The neurons are separated into layers, where the initial layer acts as the “input layer” and the final layer acts as the “output layer”. When the system is computed, the values set at the input layer go through a series of transformations before they reach the output layer, where they become ready to be used immediately in making conditional decisions. For instance, feature-recognition algorithms of this kind are used in tracking abnormal structures in medical image data (either X-rays or continuous image feeds, i.e. videos).
Many mechanical robots that move objects or drive in screws, nails and other fasteners at a faster and more efficient rate are used in industries in place of skilled manual labourers. Likewise, surgical robots that are remotely controlled by a surgeon seated in a console room do not require any specific machine-learning mechanism. Any task that cannot be represented by a large volume of data cannot be handed over to machine learning. Moreover, machine-learning techniques are not yet reliable enough to be used in surgical robots.
This unreliability stems from the ambiguity in the system's structure, which is what produces the nearly accurate output. It means that, at any point in time, it is hard (or impossible) for humans to predict the behaviour of the learning algorithm. Though machine learning is not used in sensitive areas such as surgical robots, these algorithms prove extensively beneficial in detecting patterns in images and signals.
Remote bomb disposal robots:  
These robots are remotely controlled by police officials to enter areas implanted with bombs and to locate them; the bomb-disposal system either defuses the bomb or disposes of it by carrying it to a safer place. Like surgical robots, these robots are remotely controlled by an official. In addition, they can be directed into areas that are completely inaccessible to humans to find and inspect bombs.
Such remotely controlled robots have a console which is used to operate the robot. They carry multiple cameras placed at different ends to provide a wider view of the field and to perform the required task efficiently. Robotic arms with specialized equipment are used in defusing the bomb (for example, the water-jet disrupter). Each command or set of commands is communicated to the robot over an encrypted channel, in such a way that the same radio-signal pattern cannot be broadcast twice.
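One common way to ensure the same signal pattern is never accepted twice is a counter-based message-authentication scheme, where each packet carries a monotonically increasing counter and a keyed tag. The sketch below illustrates the idea only; the key, commands and packet layout are hypothetical, not the robots' actual protocol:

```python
import hmac
import hashlib

KEY = b"shared-secret"          # hypothetical pre-shared key

def sign(counter, command):
    """Tag a command with an HMAC over (counter, command)."""
    msg = counter.to_bytes(8, "big") + command
    return msg + hmac.new(KEY, msg, hashlib.sha256).digest()

class Receiver:
    def __init__(self):
        self.last = -1          # highest counter accepted so far

    def accept(self, packet):
        msg, tag = packet[:-32], packet[-32:]
        if not hmac.compare_digest(tag, hmac.new(KEY, msg, hashlib.sha256).digest()):
            return False        # forged packet: bad tag
        counter = int.from_bytes(msg[:8], "big")
        if counter <= self.last:
            return False        # replayed packet: counter not fresh
        self.last = counter
        return True

rx = Receiver()
p = sign(1, b"arm:extend")
print(rx.accept(p))   # True  - fresh packet
print(rx.accept(p))   # False - identical broadcast rejected
```

Because the counter is covered by the tag, an eavesdropper cannot re-send a captured packet (the counter is stale) or alter the counter (the tag would no longer verify).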
Surgical robots:
Like the bomb-disposal robot, surgical robots are controlled by a surgeon from a console. The robotic arms have high precision and hence increase accuracy several times over; also, because the direct operation is performed by machines, the surgical process is more hygienic than when performed directly by a human doctor.
Neural networks are self-adapting algorithms inspired by the functioning of the brain's neurons. Though they do not reproduce the exact functionality of biological neurons, they are able to correlate large uncategorized data-sets and find patterns in them. If such tasks were performed by human statisticians, they would be highly time-consuming and expensive.
Working of neural network system:
A neural network system is a directed weighted graph formed by a network of nodes referred to as neurons. An algorithmic neuron, like a biological neuron, has synaptic connections; the strength of these synaptic connections is determined by weight values. Let the network graph be represented as G = (V, E). Then, when we compute the result of the system for an input x, we get an output y = N(x). Let the vertices be formed by three sets, V = I ∪ H ∪ O, where I is the set of input-layer neurons, O is the set of output-layer neurons and H is the set of hidden-layer neurons. For simplicity, let us assume that the connection between any two successive layers of the system is bijective.
To retain the bijective nature of the system, let us also consider each of these layers to be of dimension m × m, with the hidden neuron set separated into k layers; that is, H = H_1 ∪ H_2 ∪ … ∪ H_k. And again, let us assume each of these layers is ordered into an m × m matrix of neurons. Each neuron in each layer has an “activation function” which helps the system learn nonlinear data-sets. Let us represent any neuron in the system as n(l, i), where l is the layer number and i is the neuron number within that layer. Then the neuron n(l, i) connects with the neurons n(l + 1, 1), …, n(l + 1, s), where s = m × m represents the size of a layer. Let us denote the weight of a connection as w(l, i, j), where i represents the neuron in the l-th layer and j represents the connecting neuron in the (l + 1)-th layer.
Each neuron is assumed to hold the result of its activation function as well as the input it received. For each neuron n(l, j) in the system, the input it receives is computed as

x(l, j) = Σ_i w(l − 1, i, j) · y(l − 1, i),

and the output is computed as y(l, j) = f(x(l, j)), where f is the activation function of the neuron. The same method is used to compute the result layer by layer, from the input neurons through to the output neurons. The values that get set at the output neurons are returned as the network result. The system dynamically changes its behaviour by altering the weights so as to accurately represent the training data-set fed to it. For any given pair of an input and an expected output, the system first computes the network result and then measures the error in the network output against the expected output.
The error is

E = √( Σ_i (t_i − o_i)² ),

where t_i is the expected output and o_i is the network output. Because it isn't quite efficient to compute the square root, the cost function generally used is

E = ½ Σ_i (t_i − o_i)².
The system can use prominent methods like gradient descent to alter the weight values so as to reduce the error in the system.
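For a single sigmoid neuron, the gradient-descent update on the cost above can be written out directly. The inputs, target and learning rate below are made-up numbers for illustration:

```python
import math

sig = lambda z: 1.0 / (1.0 + math.exp(-z))

# One sigmoid neuron with two inputs, trained by gradient descent
# on the cost E = 0.5 * (t - o)^2.
w = [0.5, -0.5]
x = [1.0, 2.0]
t = 1.0          # expected output
lr = 0.5

for _ in range(200):
    o = sig(sum(wi * xi for wi, xi in zip(w, x)))
    # dE/dw_i = -(t - o) * o * (1 - o) * x_i, so step in the opposite direction
    for i in range(2):
        w[i] += lr * (t - o) * o * (1 - o) * x[i]

print(round(sig(sum(wi * xi for wi, xi in zip(w, x))), 2))  # close to the target 1.0
```

Back-propagation applies this same derivative rule layer by layer, using the chain rule to pass each neuron's error term back to the weights that feed it.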
Situations where these neural network systems are used:
Unlike robots without machine-learning algorithms, robots with machine-learning algorithms tend to behave more unpredictably. This is because of the constant alteration of their system, which enables them to change and re-adapt to the environment. For these systems to function at all, they must be subjected to a large volume of data; any problem that cannot be reduced to a large volume of data cannot be handed over to learning algorithms.
Data-mining and pattern recognition are used extensively in areas like feature detection, evaluating marketing trends, and finding correlations in large, uncategorized data such as real-time social-network data (which can include video, audio or text).
This system will be required to compute the amount by which the volume of each wheel's suspension should be increased or decreased, to ensure that the smoothness of the movement is maintained. In this example we consider a neural network with one hidden layer. The input must contain the velocity of the robot, the weight of the robot, and the distance inputs provided by four sensors placed right in front of each of the robot's four wheels. These sensors give the distance from the ground at a particular point in time. The system must also know the current volume of each of the four suspension systems in the robot. So the neural network will have ten inputs (four for the sensors, four for the current volumes, and two parameters for the velocity and the weight) and four outputs (stating the change in pressure in each of the wheel suspensions).
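The shape of that controller can be sketched as follows. The weights here are random placeholders and the sensor reading is invented, so this shows only the structure (10 inputs, one hidden layer, 4 outputs), not a trained system:

```python
import math
import random

random.seed(0)
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

# Suspension controller shape: 10 inputs -> 20 hidden neurons -> 4 outputs.
N_IN, N_HID, N_OUT = 10, 20, 4
w1 = [[random.uniform(-0.5, 0.5) for _ in range(N_IN)] for _ in range(N_HID)]
w2 = [[random.uniform(-0.5, 0.5) for _ in range(N_HID)] for _ in range(N_OUT)]

def suspension_net(inputs):
    """inputs: 4 sensor distances, 4 current volumes, velocity, weight."""
    h = [sig(sum(w * v for w, v in zip(row, inputs))) for row in w1]
    return [sig(sum(w * v for w, v in zip(row, h))) for row in w2]

# Hypothetical reading: distances, volumes, velocity, robot weight.
out = suspension_net([0.2, 0.3, 0.25, 0.2, 1.0, 1.0, 1.1, 0.9, 0.5, 1.2])
print(len(out))  # 4 pressure-change signals, one per wheel
```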
A brief introduction to the mechanical functionality of the robot's suspension system:
To minimize the violent vibration caused by running a robotic vehicle over an irregular surface, a specialized suspension system based on a hydraulic mechanism is required. This suspension system must be able to alter the height of the robot's wheels as the robot moves. Altering the volume of fluid within the hydraulic suspension causes a change in the pressure exerted on the wheels, which eventually changes their height. This change in pressure with volume can be represented as a linear relation as long as the temperature remains the same. The volume can be altered by introducing a piston system, which alters the pressure by changing the volume of the fluid container. The expression below shows the relationship between the change in pressure and the change in volume of the fluid:

ΔP = −K · ΔV / V

where ΔP is the change in pressure, ΔV is the change in volume, V is the original volume and K is the bulk modulus of the fluid. It follows from this expression that decreasing the volume of the container (compressing the fluid) causes a linear increase in the pressure applied to the wheels, and conversely, increasing the volume causes the pressure to decrease. The length to which the wheel can be lowered in this robot must be larger than in ordinary vehicles.
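Plugging hypothetical numbers into the bulk-modulus relation (the fluid properties and volumes below are assumed values for illustration):

```python
# Pressure change from compressing hydraulic fluid: dP = -K * dV / V.
K = 2.2e9       # bulk modulus of a typical hydraulic fluid, Pa (assumed)
V = 1.0e-3      # container volume, m^3 (assumed)
dV = -1.0e-6    # piston reduces the volume by 1 cm^3

dP = -K * dV / V
print(dP)       # about 2.2e6 Pa: pressure rises when the fluid is compressed
```

A reduction of just 0.1% in volume thus produces a pressure change on the order of megapascals, which is why a small piston stroke is enough to adjust the wheel height.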
The neural network design for the robot’s learning system
The hidden layer must use the sigmoid activation function and may contain up to twenty neurons; the output layer must contain four neurons. The data-sets need not be fed into the system separately; instead, we may define a “reward” measure which helps the system reinforce the combinations of weight values that are more optimal. This kind of neural network system is an example of an unsupervised learning system. The vibration caused by the robot can be measured by vibration sensors, and the information obtained can be used directly for the reward system.
The more vibration detected, the lesser the reward; and the lesser the reward, the more volatile the system becomes. This is the situation of having a cost function that does not depend on an expected output data-set; such a system is an unsupervised learning system. The cost function can be defined as E = c / R, where R is the reward and c is a constant coefficient. For simplicity, let us define the coefficient as one and rewrite the equality as E = 1 / R, or R = 1 / E. In situations such as these, where an unsupervised learning algorithm is used, the process resembles a “trial and error” mechanism where the robot tries out various possibilities until it succeeds.
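This trial-and-error behaviour, with volatility tied to the reward, can be sketched as a random hill-climb. The `reward` function below is a made-up stand-in for the vibration sensor (the "ideal" parameters are invented), so this shows only the mechanism, not the real robot:

```python
import random

random.seed(0)

def reward(params):
    """Hypothetical stand-in for the vibration sensor: the closer the
    suspension parameters are to an (unknown) ideal, the less vibration
    and the higher the reward R = 1 / E."""
    ideal = [0.3, 0.7, 0.5, 0.4]
    vibration = sum((p - q) ** 2 for p, q in zip(params, ideal))
    return 1.0 / (vibration + 1e-9)

# Trial-and-error search: keep a random perturbation only if it
# increases the reward; perturb harder when the reward is low.
params = [0.0, 0.0, 0.0, 0.0]
best = reward(params)
for _ in range(5000):
    step = min(0.5, 1.0 / best)           # low reward -> more volatile
    trial = [p + random.uniform(-step, step) for p in params]
    r = reward(trial)
    if r > best:
        params, best = trial, r
```

Note how the step size shrinks as the reward grows: a well-adapted system perturbs itself less, exactly the volatility behaviour described above.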
Learning algorithms, though not yet widely used in robotics, are the subject of a great deal of research. The main reason robots with learning capability are not widely deployed is the risk associated with them. If the system is only used for filtering and categorizing data this is not much of a problem, but if the system is to be used in places such as operating theatres, the ambiguity present in the system may not be desired. Unsupervised learning mechanisms are incorporated when the system interacts with the environment in real time and the cost function depends greatly on validation from the environment (or a fitness function). In such cases, the volatility of the system is varied based on the validation it receives from the environment.

Since the image-equations aren't visible, see this pdf file for the article
