Contact me: sreramk360@gmail.com
Wednesday, 21 December 2016
Neural Networks

Thursday, 15 December 2016
Neural Network Library (work in progress)
-By K Sreram
I recently wrote a simple neural network library and posted it on GitHub. It implements the feed-forward and back-propagation algorithms. It is still far from a complete neural network tool. The link to the source: https://github.com/Aplokodika/MachineLearning/tree/master/src
Wednesday, 24 August 2016
Significance of learning algorithms in robotics
Neural Network systems in robotics
Copyright © 2016 K Sreram, all rights reserved
Abstract— Robotic systems with dynamic adaptability are becoming increasingly popular in research, mainly because they give us the opportunity to build in functionalities that cannot be implemented with conventional algorithms. Even if human statisticians were used in place of learning algorithms, they would have to spend an extensive amount of time deducing the behaviour and properties of the data, and they would soon be required to re-adapt the system to a completely new data-set. This requirement is almost completely avoided with the introduction of learning algorithms such as neural networks. This paper outlines the application of neural networks and other learning algorithms in automated systems and robotics, and also explains how robots with the capacity to dynamically adapt (using a learning algorithm) prove vital in solving real-life problems.
Keywords— Neural networks; robotics; dynamic adaptation; statistics; data mining; unsupervised learning
INTRODUCTION
Robotics now plays a major role in automating tasks in many industries. In production industries, robots replace humans in various hazardous tasks that involve interacting with live machines. But most of the manual tasks successfully taken over by robots are the kind that can be carried out by a pre-defined set of rules. Robots have not yet reached the point of replacing tasks that cannot be performed using the same set of rules more than once. Robots with learning algorithms are still in the early stages of development. Though learning algorithms are not yet prominent in robots, they prove extremely useful in categorizing uncategorized data and filtering white noise from real-time data. On their own, however, learning algorithms are not yet reliable enough to make decisions based on the data they receive. This paper presents the use of learning algorithms in robots with an example design of a robot that changes the pressure of its wheel suspensions to reduce the intensity of its vibration as it moves over irregular surfaces. This paper also describes robots that do not use learning algorithms but can still be extensively useful in solving vital real-life problems.
A combination of distance sensors at the robot's front end measures the distance between the point where the sensors are placed and the ground. Irregularities in the ground cause a deviation from the expected distance measure; these deviations, taken together with the velocity of the robot (which must be considered because the Doppler effect causes the frequency of the detected sound to increase), can be used to construct the pattern of the irregularities in the ground. Learning algorithms such as deep neural networks can then be used to approximate a correlation between these parameters and the expected change in the pressure of the wheels.
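To make this concrete, here is a minimal Python sketch of how such an irregularity profile could be computed from the sensor readings; the baseline distance, the sample values, and the simple velocity correction are illustrative assumptions, not taken from the article:

EXPECTED_DISTANCE = 0.30  # metres from sensor to flat ground (assumed)

def irregularity_profile(readings, velocity, correction=0.01):
    """Deviation of each reading from the flat-ground baseline,
    with a simple velocity-dependent term standing in for the
    Doppler-related error mentioned above."""
    return [EXPECTED_DISTANCE - (r - correction * velocity)
            for r in readings]

# Three front-sensor readings taken while moving at 1.5 m/s:
print(irregularity_profile([0.28, 0.33, 0.30], velocity=1.5))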
The implementation of the dynamically alterable suspension system is more of a mechanical concern, so in this paper we concentrate on constructing the irregularity pattern of the surface from parameters such as the velocity of the robot and the distance values measured by the first, second, and third sensors. This information is in turn used by static algorithms (along with parameters like the time required to change the pressure of the suspension) to determine the amount by which the robot's suspension changes. Learning algorithms alter their own behaviour to adapt to the required training data-set. The process of adapting to learning data can be thought of as learning a relation $y = f(x)$ (for values of $x$ in a specific domain) such that future values inputted to the system cause the function to return approximately the expected output.


A learning algorithm can be generally expressed as $y = f(W, x)$, where $W$ is a set of parameter values that changes with time, $x$ is the set of inputs, and $y$ is the set of outputs. This algorithm tries to adapt to a given training data-set and progressively changes the values in $W$ to fit the training data. One such algorithm that learns from inputted data-sets is the neural network.
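As a minimal illustration of this form $y = f(W, x)$, here is a gradient-descent sketch in Python; the training data, the single-weight model, and the learning rate are illustrative assumptions:

train = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # pairs (input x, expected y)
w = 0.0      # the parameter set W, here a single weight
rate = 0.05  # learning rate

for _ in range(200):
    for x, t in train:
        y = w * x                # f(W, x)
        w += rate * (t - y) * x  # nudge W to reduce the squared error

print(w)  # approaches 2.0, the relation underlying the data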
Robots like the mechanical arm conditionally execute certain processes based on explicitly defined rules. But when external information containing a lot of white noise must be processed and a decision made from it, learning algorithms are required. Learning algorithms work with indefinite rules; such rules are complexly defined, with many constraints. These algorithms learn to adapt and enhance their purpose. Some examples of robots that use learning algorithms include self-driving cars, map-construction robots, and feature-recognition systems (both visual and audio). These robots are directly subjected to real-world input data. Learning algorithms take in real-world data and filter out the white noise in order to accurately classify the obtained information. Manually defining constraints for the filtration (or classification) process is generally difficult for human statisticians, so the best alternative is to devise an algorithm that accepts these real-world data-sets and their respective classifications and gradually enhances its ability to classify similar data. There are two types of learning algorithms: supervised learning algorithms and unsupervised learning algorithms.
Supervised learning algorithms expect both the data and its expected classification (or response) as the training data-set, and they try to form a correlation between the input data and the expected output data. Unsupervised learning algorithms do not require a specifically customized learning data-set; these algorithms capture the pattern underlying the data inputted to the system rather than forming a correlation between two data-sets. Unsupervised learning algorithms are used for clustering similar data-sets, for computing the density of similar data-sets inputted to the system, and for defining a subset of the data-set that must be allowed to enter the system (a transformation process).
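The contrast can be sketched in a few lines of Python; the data and the one-dimensional split used for "clustering" are illustrative assumptions:

# Supervised: each training input carries an expected response.
labelled = [(0.1, "low"), (0.2, "low"), (0.9, "high")]

# Unsupervised: only inputs are given; the algorithm looks for
# structure, here by splitting the values around their mean.
inputs = [0.1, 0.2, 0.9, 0.8]
mean = sum(inputs) / len(inputs)
clusters = {x: "A" if x < mean else "B" for x in inputs}
print(clusters)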
Supervised learning algorithms aid in making decisions based on the scenario prevalent in the external environment, while unsupervised learning algorithms help in recognizing the common patterns in the data inputted to the system. The most widely used learning algorithm is the neural network. The design of this algorithm was originally inspired by the functioning of the human brain. Though the system was designed in an attempt to emulate the brain, the difference in functionality between the brain and a neural network system is quite drastic. The detailed functionality of a single biological neuron is still not fully understood, and some scientists believe a single neuron is far more complex than assumed. In a neural network system, however, each neuron has a simple activation function, which computes the result of the neuron by taking its input as the parameter. The connection between any two neurons is a weighted directed link.
The neurons are separated into layers: the initial layer acts as the "input layer" and the final layer acts as the "output layer". When the system is computed, the values set in the input layer go through a series of transformations before reaching the output layer, at which point the result is ready to be used immediately in making conditional decisions. For example, feature-recognition algorithms are used for tracking abnormal structures in medical image data (either X-rays or continuous image feeds, i.e., videos).
EXAMPLES OF ROBOTS THAT DO NOT CONTAIN LEARNING MECHANISMS
Many mechanical robots that move objects and drive in screws, nails, and other peripherals at a faster and more efficient rate are used in industries in place of manual skilled labour. Similarly, surgical robots that are remotely controlled by a surgeon seated at a console do not require any specific machine-learning mechanisms. Any task that cannot be represented by a large volume of data cannot be taken over by machine learning. Moreover, machine-learning techniques are not yet reliable enough to be used in surgical robots.
This unreliability comes from the ambiguity in the system's structure, which is responsible for producing the nearly accurate output. It means that, at any point in time, it is hard (or impossible) for humans to predict the behaviour of the learning algorithm. Though machine learning is not used in sensitive areas such as surgical robots, these algorithms prove extensively beneficial in detecting patterns in images and signals.
Remote bomb disposal robots:
These are robots remotely controlled by police officials to enter places implanted with bombs and locate them; the bomb-disposal system either defuses the bomb or disposes of it by carrying it to a safer place. These robots, like surgical robots, are remotely controlled by an official, and they can also be directed into areas completely unreachable for humans in order to find and inspect bombs.
Such remotely controlled robots have a console from which the operator controls them. These robots contain multiple cameras placed at different ends to provide a wider view of the field and to help perform the required task efficiently. Robotic arms with specialized equipment are used to defuse the bomb (for example, a water-jet disrupter). Each command, or set of commands, is communicated to the robot over an encrypted channel, in such a way that the same radio-signal pattern cannot be broadcast twice.
Surgical robots:
Like the bomb-disposal robot, surgical robots are controlled by a surgeon from a console. The robotic arms have high precision and hence increase the accuracy several times over; also, because the direct operation is performed by machines, the surgical process can be more hygienic than when done by a human doctor alone.
NEURAL NETWORK SYSTEMS
These are self-adapting algorithms inspired by the functioning of the brain's neurons. Though they do not reproduce the exact functionality of biological neurons, they are able to correlate large uncategorized data-sets and find patterns in them. If such tasks were performed by human statisticians, the work would be highly time-consuming and expensive.
Working of a neural network system:
A neural network system is a directed weighted graph formed by a network of nodes referred to as neurons. An algorithmic neuron, like a biological neuron, has synaptic connections; the strengths of these synaptic connections are determined by weight values. Let the network graph be represented as $G = (V, E)$. Then, when we compute the result of the system, we get $y = F(G, x)$. Let the vertices $V$ be formed by three sets, $V = V_{in} \cup V_{hid} \cup V_{out}$, where $V_{in}$ is the set of input-layer neurons, $V_{out}$ is the output layer, and $V_{hid}$ is the set of hidden-layer neurons. For simplicity, let us assume that any two successive layers obtained from the system are bijective.
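A small Python sketch of this representation, with the vertex sets and weighted directed edges named as above (the specific neurons and weight values are illustrative assumptions):

V_in = ["i1", "i2"]    # input-layer neurons
V_hid = ["h1", "h2"]   # hidden-layer neurons
V_out = ["o1"]         # output-layer neurons

# E maps a directed edge (source, target) to its weight value,
# the strength of that synaptic connection.
E = {
    ("i1", "h1"): 0.4, ("i1", "h2"): -0.6,
    ("i2", "h1"): 0.1, ("i2", "h2"): 0.8,
    ("h1", "o1"): 0.7, ("h2", "o1"): -0.3,
}

G = (V_in + V_hid + V_out, E)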
To retain the bijective nature of the system, let us also consider each of these layers to be of dimension $n \times m$, with the hidden neuron set separated into $L$ layers; that is, $V_{hid} = H_1 \cup H_2 \cup \dots \cup H_L$. And again, let us assume each of these layers to be ordered into an $n \times m$ matrix of neurons. Each neuron in each layer has an "activation function", which helps the system learn nonlinear data-sets. Let us represent any neuron in the system as $N_{l,i}$, where $l$ is the layer number and $i$ is the neuron number within that layer. Then the neuron $N_{l,i}$ connects with the neurons $N_{l+1,j}$, $j = 1, 2, \dots, s_{l+1}$, where $s_{l+1}$ represents the size of the layer $l+1$. Let us denote the weight of a connection as $w^{(l)}_{i,j}$, where $i$ represents the neuron in the $l$-th layer and $j$ represents the connecting neuron in the $(l+1)$-th layer.
Each neuron is assumed to hold the result of its activation function as well as the input it received. For each neuron $N_{l,i}$ in the system, the input it receives is computed as $net_{l,i} = \sum_{j} w^{(l-1)}_{j,i} \, o_{l-1,j}$. Now, the output is computed as $o_{l,i} = \phi(net_{l,i})$, where $\phi$ is the activation function of the neuron. The same method is used to compute results from the input neurons through to the output neurons. The values that get set at the output neurons are returned as the network result. The system dynamically changes its behaviour by altering the weights so as to accurately represent the training data-set fed to it. For any given pair of an input $x$ and an expected output $t$, the system first computes the network result and, based on the output it produces, measures the error of the network output against the expected output.


The error can be written as $E = \sqrt{\sum_i (t_i - o_i)^2}$, where $t_i$ is the expected output and $o_i$ the network output. Because it is not efficient to compute the square root, the cost function generally used is $E = \frac{1}{2}\sum_i (t_i - o_i)^2$.
The system can use prominent methods such as gradient descent to alter the weight values so as to reduce the error in the system.
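Putting the pieces together, here is a minimal Python sketch of the forward pass and one gradient-descent update for the cost $E = \frac{1}{2}\sum_i (t_i - o_i)^2$ with sigmoid activations; the layer sizes, input values, and learning rate are illustrative assumptions:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))  # input layer (3) -> hidden layer (4)
W2 = rng.normal(size=(4, 2))  # hidden layer (4) -> output layer (2)

x = np.array([0.5, -0.2, 0.8])  # network input
t = np.array([1.0, 0.0])        # expected output
rate = 0.5

# Forward pass: net input, then activation, layer by layer.
o_h = sigmoid(x @ W1)
o = sigmoid(o_h @ W2)

# Backward pass: propagate the error derivative through the layers.
delta_o = (o - t) * o * (1 - o)               # dE/dnet at the output
delta_h = (delta_o @ W2.T) * o_h * (1 - o_h)  # dE/dnet at the hidden layer

W2 -= rate * np.outer(o_h, delta_o)  # gradient-descent weight updates
W1 -= rate * np.outer(x, delta_h)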
Situations where these neural network systems are used:
Unlike robots without machine-learning algorithms, robots that have them tend to behave more unpredictably. This is because of the constant alteration of their system, which enables them to change and re-adapt to the environment. For these systems to function at all, they must be subjected to a large volume of data. Any problem that cannot be reduced to a large volume of data cannot be handed over to learning algorithms.
Data mining and pattern recognition are extensively used in areas like feature detection, evaluating marketing trends, and finding correlations in large, uncategorized data such as real-time social-network data (which can include video, audio, or text).
A ROBOT THAT LEARNS TO MOVE SMOOTHLY ON IRREGULAR SURFACES
This system needs to compute the amount by which to reduce or increase the volume for each of the wheels, to ensure that the smoothness of the movement is maintained. In this example, we use a neural network system with one hidden layer. The input must contain the velocity of the robot $v$, the weight of the robot $W$, and the distance inputs provided by four sensors placed right in front of each of the robot's four wheels. These sensors give the distance of the ground from them at a particular point in time. The system must also have the current volume of each of the four suspension systems in the robot. So the neural network system will have ten inputs (four for the sensors, four for the current volumes, and two for the velocity and the weight) and four outputs (stating the change in pressure in each of the wheel suspensions).
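A minimal Python sketch of how these ten inputs could be packed for the network; all names and values are illustrative assumptions:

import numpy as np

def build_input(velocity, weight, sensor_dists, suspension_vols):
    """Pack velocity, robot weight, the four front-sensor distances
    and the four current suspension volumes into one input vector."""
    assert len(sensor_dists) == 4 and len(suspension_vols) == 4
    return np.array([velocity, weight, *sensor_dists, *suspension_vols])

x = build_input(1.2, 35.0, [0.29, 0.31, 0.30, 0.28], [1.0, 1.0, 0.9, 1.1])
print(x.shape)  # (10,) -- fed to a network with four outputs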


A brief introduction to the mechanical functionality of the robot's suspension system:
To minimize the violent vibration caused by running a robotic vehicle over an irregular surface, a specialized suspension system based on a hydraulic mechanism is required. This suspension system must be able to alter the height of the robot's wheels as the robot moves. Altering the volume of fluid within the hydraulic suspension causes a change in the pressure exerted on the wheels, which eventually changes their height. This change of pressure with volume can be treated as a linear relation as long as the temperature remains the same. The volume can be altered by introducing a "piston system", which alters the pressure by changing the volume of the fluid container. The expression below shows the relationship between the change in pressure and the change in volume of the fluid.
$P_1 V_1 = P_2 V_2$ (at constant temperature), so for a small change in volume the change in pressure is approximately linear: $\Delta P \approx -\frac{P}{V}\,\Delta V$.
The neural network design for the robot's learning system
The hidden layers must use the sigmoid activation function $\phi(x) = \frac{1}{1 + e^{-x}}$. The hidden layer may contain up to twenty neurons, and the output layer must contain four neurons. The data-sets need not be fed into the system separately; rather, we may define a "reward" measure for the system, which helps it favour the combinations of weight values that are more optimal. This kind of neural network system is an example of an unsupervised learning system. The vibration caused by the robot can be measured by vibration sensors, and the information obtained from these can be used directly for the reward system.

The more vibration detected, the lower the reward; and the lower the reward, the more volatile the system becomes. This is the situation of having a cost function that does not depend on an expected output data-set; such a system is an unsupervised learning system. The cost function can be defined as $E = \frac{k}{R}$, where $R$ is the reward and $k$ a coefficient. So for simplicity, let us define the coefficient as one and rewrite the equality as $E = \frac{1}{R}$, or $R = \frac{1}{E}$. In situations such as these, where an unsupervised learning algorithm is used, the process resembles a "trial and error" mechanism in which the robot tries out various possibilities until it succeeds.
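A minimal Python sketch of this trial-and-error adaptation, where a lower reward makes the weight perturbations more volatile; the stand-in reward function and the noise scales are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=10)

def reward(w):
    # Stand-in for the vibration-sensor reward: here, smaller
    # weights "vibrate" less and therefore earn more reward.
    return 1.0 / (1.0 + np.sum(w ** 2))

best = reward(weights)
for _ in range(500):
    volatility = 1.0 - best  # low reward -> larger, more volatile steps
    trial = weights + rng.normal(scale=0.1 + volatility, size=10)
    r = reward(trial)
    if r > best:             # keep only the changes that help
        weights, best = trial, r

print(best)  # approaches 1.0 as the vibration cost falls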



CONCLUSION
Learning algorithms, though not yet widely used in robotics, are the subject of a great deal of research. The main reason robots with learning capability are not widely deployed is the risk associated with them. If the system is only used for filtering and categorizing data, there is not much of a problem; but if the system is to be used in places such as operating theatres, the ambiguity present in it may not be acceptable. Unsupervised learning mechanisms are incorporated when the system interacts with the environment in real time and the cost function depends greatly on validation from the environment (or a fitness function). In such cases, the volatility of the system varies with the validation it receives from the environment.
Monday, 11 July 2016
Why increasing complexity is not good?
“Simplicity
is the ultimate sophistication.”
— Leonardo da Vinci
Why is complicating things wrong?
- K Sreram
Common sense often deceives us into overrating something abnormally complicated as a work of extraordinary significance. It is natural for most people to accept what they don't understand as a "great work of art" and give it undue importance. People also tend to underestimate what they do know, and what they are really capable of, unless they compare themselves with others. This fascination with difficult or complicated tasks builds the desire to attempt them, until they stop seeming difficult. But when you judge someone's work by searching for whatever impresses you, your judgment becomes deceptive, because you judge them based on what you are really good at and what you know quite well. So being impartial is more difficult than it seems when it comes to judging others for their work.
Wednesday, 20 January 2016
Code Archive: Sudoku Puzzle solver
The following code is published under the GNU General Public License:
The documentation explaining this can be found here: http://www.codeproject.com/Articles/1032477/Sudoku-puzzle-Solver
Author's contact: sreramk@outlook.com
Wednesday, 6 January 2016
Getting marks in exams
Scoring marks in exams
It is every student's dream to be appreciated in the community they live in. It may be their classroom or their home, but they all feel they deserve more respect than they are given.
Why do students want to get marks? And are their desires real?
Monday, 4 January 2016
The unknown reality behind IQ tests
For most people, their career depends on clearing public exams. There is a lot of competition out there for getting the job they want. Usually, exams conducted nationwide are used to shortlist the candidates appearing for a job interview. Whether it is a corporate company or a government organization, the number of people willing to take up a post is alarmingly high in third-world countries. If you observe closely, people seeking a post in these organizations don't choose their jobs based on personal interest, but rather take them up through others' recommendations and social pressure. So corporate companies and government organizations need a consistent and reliable method for screening the huge number of people seeking jobs, and for shortlisting them to a very small number.
Friday, 1 January 2016
How robots think?
We have all seen robots in science-fiction movies, where they fight wars, act as independent leaders, create new machines, and are in fact more intelligent and stronger than humans. But why aren't we hearing about such robots in the news or witnessing one before our eyes? Because that level of intelligence is impossible to attain with the current state of technology. All that we have created are robots that blindly follow a set of commands and do specific tasks in factories, at times being directly controlled by a human. The more "intelligent" robots among the ones we have developed identify their masters, obstacles, or definite vector lines correctly and provide a definite response to them.
These are all that our current-day scientists have: a bunch of machines that automate certain tasks traditionally known to have been taken care of by humans alone. Investigating the question "why can't we create a robot that thinks like us?" shows that the major limitation is designing a system that comprehends the huge amount of distorted and random information given to it as input. While we walk along a roadway, we observe many complex objects and navigate through them without any trouble. It is almost impossible to gain such accuracy in an artificially intelligent machine; it would require months of dedicated work to create a machine that does even half of what we do while walking on the road. Robots think using decision-making trees that have a definite response programmed for each particular situation they are presented with (and there are many other methods researchers use for this purpose). It is impossible to program all possible situations into a robot, as it would take forever. So scientists use algorithms that learn (learning algorithms). These learning algorithms modify their response to the situations presented to them, slightly each time, to make their response more adaptive and accurate. This is a one-sentence description of what learning algorithms are; but there is intensive ongoing research in the field of artificial intelligence, so such a brief description of learning algorithms is not enough, even for an article aimed at the general reader.
So to talk about how robots think, we need a brief view of how learning algorithms work. There are many kinds of learning algorithms. These algorithms map immediate situations to the required or optimal response. Usually, such algorithms have methods for evaluating how well their response has worked (or how close their response was to the unknown, hypothetical "perfect" response). This kind of function, which judges how well the response worked, is easier to create than the response itself. Depending on how accurate the response was, the algorithm changes itself to make its response more accurate. It is quite complicated to think of an algorithm that alters itself; practically, the algorithm just alters a value, or a set of values, that tunes how the program responds to situations. Put that way, it seems more feasible to realize in practice. Most of the time, the problem is not just how to choose the best response, but how to extract information about the situation from the environment without letting white noise influence the decision. The input could be an image, video footage, or a soundtrack. Information grasped directly from the environment contains a huge number of random errors; the same scene photographed twice will never look the same (on a precise scale; not to us humans, but to computers that try to match the two images head-on).
To understand this better, let's look at an example of a learning system. Say you want to develop a really good user interface for an Android application, and you have become so creative that you want people of different age groups to have different UIs for the same application. Having five UIs, you have to make the system learn which UI best suits a particular user. The information you get (and need) is the user's age, gender, and location. Of course, you could prompt the user for more information to make the program's judgement more accurate, but to keep things simple, let's assume each user is prompted to provide only these three pieces of information.
In this case, the "situation" is the set of information comprising the age, gender, and location of the user. The "response" is the answer to the question of which UI must be assigned to a user. When the user signs in, the age, gender, and location are obtained from him. The system then uses the frequency of usage and the opt-out frequency (a value measuring how many new users the company lost moments or days after they started using the application) to measure how well the UI encourages the user to keep using the application. But if a person stops using an application, it could be for any reason: he may not have wanted the application anymore, or he may have installed the wrong application for his purpose. At the same time, the information entered by the user (his age, gender, and location) could also be wrong, as not all humans are truthful.
So how will the system manage such anomalies in the accuracy of the information provided? And how will the system learn which UI is best suited to a particular person? These are the problems handled by a learning algorithm. Those familiar with learning algorithms may have heard the terms "neural network" and "genetic algorithm" quite often. Computer science is a very new subject, still in its early stages of development (though we have many fancy devices in the market). So we have to use knowledge from other subjects, developed by humans over many centuries, to improve this branch of study. Whenever possible, computer science learns by looking around: look at ants and observe their marvellous coordination as a colony to do the same with programs; look at birds and observe how they fly, or how selfishly they share their work while flying, to design an intelligent computer program that imitates them. This is rather like how kung fu was founded by watching how snakes and monkeys fight!
In the same way, the genetic algorithm was developed by observing how species develop new features over many generations, through genetic mutation, to adapt to their environment. And the neural network algorithm was developed as an attempt to design a system that more or less imitates how our brain works (though it is argued that it does not exactly resemble the functioning of our brain).
So to define the system, $f_u$ is taken as the usage frequency and $f_{out}$ as the opt-out frequency. We have three parameters: the location, the user's gender, and the user's age. The information provided by the user is therefore a point in a three-dimensional graph, with each axis having a specific domain. Gender can have only two values, age can range from zero to about one hundred, and the location can be an integer value indexing each possible location (for instance, each number is assigned a country, state, or city). It is preferable for nearby regions in the list of locations to be assigned values that are close to each other.
The gender, age, and location are plotted on the X, Y, and Z axes respectively. Let's divide the X axis into two major parts, the Y axis into five parts (each part covering a particular age range; for example, the first range being the age group from 0 to 12 and the second from 13 to 19, and so on), and the Z axis into five parts. On the whole, we then have $5 \times 5 \times 2 = 50$ unit regions. Each of these 50 regions holds a set of 5 values, each value giving the weight of the connectivity of that particular region to one of the five possible UIs. By altering these values, we alter the probability of assigning a particular UI to a user.
Thereby, each region is mapped to each of the UIs. At any point in time, $w = \frac{k f_u}{f_{out}}$. Each time a user with a particular UI comes to use the application, the mapping between the region his information falls under and the UI he uses gets strengthened. Each time someone uninstalls the application, the connectivity in the region his information falls under (in the graph) is weakened. So if a new user starts using the application, our algorithm will know exactly which user interface to give him. These weight values simply act as probability weights in choosing a UI. For example, assume the information $i$ provided by a user falls under the region $r_i$. There are five values in the region $r_i$, each expressing the weight of the connection to one of the five UIs. Let these five values be represented as $w_1, w_2, \dots, w_5$. Then $p_w(1) = \frac{w_1}{\sum_j w_j}$, $p_w(2) = \frac{w_2}{\sum_j w_j}$, and so on; where $p_w(1)$ is the probability of choosing the first UI.
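A minimal Python sketch of this scheme; the region boundaries, the reinforcement amounts, and the weight floor are illustrative assumptions:

import random

weights = {}  # (gender, age_band, location_band) -> 5 UI weights

def region(gender, age, location):
    # Crude illustrative banding: two genders, five age bands,
    # five location bands, giving the 50 regions described above.
    return (gender, min(age // 13, 4), location % 5)

def choose_ui(gender, age, location):
    w = weights.setdefault(region(gender, age, location), [1.0] * 5)
    return random.choices(range(5), weights=w)[0]  # probability weights

def feedback(gender, age, location, ui, kept_using):
    w = weights[region(gender, age, location)]
    # Strengthen the mapping on continued use, weaken on opt-out.
    w[ui] = max(0.1, w[ui] + (0.5 if kept_using else -0.5))

ui = choose_ui("m", 25, 3)
feedback("m", 25, 3, ui, kept_using=True)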
We have now seen how a learning algorithm can be used to make a choice out of five given options. The choices a robot has to make will not be as simple as these, but fundamentally this is how a computer takes decisions: it tunes its values based on what it observes and, in turn, behaves accordingly.
People may say that it is impossible to achieve human-level intelligence in a computer, but we can’t know that for sure. Who knows, there might be a super-intelligent robot waiting for us in the near future!
Copyright © 2016 K Sreram, all rights reserved.