
“Simplicity is the ultimate sophistication.” — Leonardo da Vinci
Contact me: sreramk360@gmail.com

Friday, 1 January 2016

How do robots think?

We have all seen robots in science-fiction movies, where they fight wars, lead independently, build new machines, and in fact behave more intelligently and act more powerfully than humans. So why aren’t we hearing about such robots in the news, or witnessing one before our eyes? Because that level of intelligence is impossible to attain with the current state of technology. All we have created are robots that blindly follow a set of commands and perform specific tasks in factories, at times under direct human control. The more “intelligent” robots among those we have developed can correctly identify their masters, obstacles or marked guide lines, and produce a definite response to them.

That is all our current-day scientists have: a collection of machines that automate certain tasks traditionally handled by humans alone. Investigating the question “why can’t we create a robot that thinks like us?” shows that our major limitation is designing a system that comprehends the huge amount of distorted, random information given to it as input. While walking along a road, we observe many complex objects and navigate through them without any trouble. It is almost impossible to reach that accuracy in an artificially intelligent machine; it would take months of dedicated work to create a machine that does even half of what we do while walking down the road. Robots think using decision trees that have a definite response programmed for each situation they encounter (and there are many other methods researchers use for this purpose). Programming every possible situation into a robot is impossible, as it would take forever, so scientists use algorithms that learn (learning algorithms). A learning algorithm slightly modifies its response to the situations presented to it each time, making its responses more adaptive and accurate. That is a one-sentence description of what learning algorithms are. But research in artificial intelligence is intensive and ongoing, so such a brief description is clearly not enough, even for an article aimed at the general reader.

So to talk about how robots think, we need a brief view of how learning algorithms work. There are many kinds of learning algorithms. These algorithms map immediate situations to the required or optimal response. Usually, such algorithms have a method for evaluating how well their response worked (or how close it came to the unknown, hypothetical “perfect” response). This kind of judging function is easier to create than the response itself. Depending on how accurate the response was, the algorithm changes itself to make its next response more accurate. An algorithm that alters itself sounds complicated; in practice, the algorithm just alters a value, or a set of values, that tunes how the program responds to situations. That is far easier to realize in practice. Most of the time, the problem is not just choosing the best response, but extracting information about the situation from the environment without letting noise influence the decision. The input could be an image, a video clip or a soundtrack. Information grasped directly from the environment contains a huge number of random errors; the same scene photographed twice will never look the same (on a precise scale: not to us humans, but to computers that try to match the two images head-on).

To understand this better, let’s look at an example of a learning system. Say you want to develop a really good user interface for an Android application, and you are creative enough to want different UIs for different kinds of people within the same application. With five UIs available, you have to make the system learn which UI best suits a particular user. The information you get (and need) is the user’s age, gender and location. Of course, you could prompt the user for more information to make the program’s judgement more accurate, but to keep things simple, let’s assume each user is prompted for only these three pieces of information.

In this case, the “situation” is the set of information comprising the user’s age, gender and location. The “response” is the answer to the question of which UI to assign to that user. When the user signs up, his age, gender and location are obtained. The system then uses the usage frequency and the opt-out frequency (a value measuring how many new users the company lost moments or days after they started using the application) to measure how well the UI encourages the user to keep using the application. But if a person stops using an application, it could be for any reason: he may no longer want it, or he may have installed the wrong application for his purpose. At the same time, the information the user entered (his age, gender and location) could also be wrong, as not all humans are truthful.

So how will the system manage such anomalies in the accuracy of the information provided? And how will it learn which UI best suits a particular person? These are the problems a learning algorithm handles. Those familiar with learning algorithms will have heard the terms “neural network” and “genetic algorithm” quite often. Computer science is a very young subject, still in its early stages of development (though we have many fancy devices in the market), so we borrow knowledge from other fields, which humans have been developing for centuries, to improve this branch of study. Whenever possible, computer science learns by looking around. Look at ants and observe their marvellous coordination as a colony, then do the same with programs. Look at birds and observe how they fly, or how selfishly they share the work of flying, to design an intelligent program that imitates them. This is rather like how Kung Fu forms were devised by watching how snakes and monkeys fight!

In the same way, the genetic algorithm was developed by looking at how species develop new features over many generations, through genetic mutation, to adapt to their environment. And the neural network algorithm was developed as an attempt to design a system that more or less imitates how our brain works (though it is argued that it does not exactly resemble the brain’s functioning).

Now let’s get back to solving the UI problem. We may use probability in choosing the right UI for the right person, and we may also have a way to adapt how much each probability value is altered, to make the decision more accurate. It is also good to let the user choose between UIs in the early stages of the application’s development. To begin with, each new user who signs up is assigned a UI at random, with the probability distribution even across all five choices of UI. We then map the information provided by the user to the UI assigned to him, and strengthen that mapping in inverse proportion to the opt-out frequency and in direct proportion to the usage frequency.

To define the system, let $f_u$ be the usage frequency and $f_{out}$ the opt-out frequency. We have three parameters: the user’s location, gender and age. The information provided by the user is therefore a point in a three-dimensional graph, with each axis having a specific domain. Gender can take only two values, age can range from zero to about one hundred, and location can be an integer indexing each of the possible locations (each number assigned to a country, state or city). It is preferable for nearby regions in the list of locations to be assigned values close to each other.

The gender, age and location are plotted on the X, Y and Z axes respectively. Let’s divide the X axis into two parts; let’s further divide the Y axis into five parts (each covering a particular age range; for example, the first being ages 0 to 12, the second 13 to 19, and so on) and the Z axis into five parts. On the whole we have $2 \times 5 \times 5 = 50$ regions. Each of these 50 regions holds a set of 5 values, each value showing the weight of that region’s connection to one of the five possible UIs. By altering these values, we alter the probability of assigning a particular UI to a user.

Thereby, each region is mapped to each of the UIs. At any point in time, the weight is $w = \frac{k f_u}{f_{out}}$ for some constant $k$. Each time a user with a particular UI uses the application, the mapping between the region his information falls under and the UI he uses is strengthened. Each time someone uninstalls the application, the connectivity of the region his information falls under (in the graph) is weakened. So when a new user starts using the application, our algorithm has a good estimate of which user interface to give him. These weight values simply act as probability weights in choosing a UI. For example, suppose the information $i$ provided by a user falls in the region $r_i$. The region $r_i$ holds five values, each expressing the weight of the connection to one of the five UIs. Let these five values be $w_1, w_2, \ldots, w_5$. Then $p_w(1) = \frac{w_1}{\sum_j w_j}, p_w(2) = \frac{w_2}{\sum_j w_j}, \ldots$ and so on, where $p_w(1)$ is the probability of choosing the first UI.

We have now seen how a learning algorithm can be used to make a choice among five given options. The choices a robot must make will not be as simple as these, but fundamentally, this is how a computer takes decisions: it tunes its values based on what it observes and, in turn, behaves accordingly.

People may say that it is impossible to achieve human-level intelligence in a computer, but we can’t know that for sure. Who knows, there might be a super-intelligent robot waiting for us in the near future! 

Copyright (c) 2016 K Sreram, all rights reserved.

Wednesday, 16 December 2015

What exactly caused the Chennai rains this December (2015)?
                                    Article author: K Sreram
Heavy rain of a kind Chennai had not seen for over a hundred years is important to consider if we want to understand our world’s changing climate. We have all heard of global warming, in which the global surface temperature gradually increases as a consequence of the greenhouse effect. The global average temperature rose by about 1 °C from 1880 to 2010. Does that value sound small? It probably is not, because this figure is the average global surface temperature: on average, every place on the map experiences a temperature hike of 1 °C, and the net heating is proportional to the Earth’s surface area, so it eventually sums up to a huge value. Can the Chennai rains be related to this temperature hike? Yes, they can! Global warming does not just melt glaciers in the polar regions and raise the sea level.

Saturday, 21 November 2015

How chess games work

People have always wondered how to play like a grandmaster. When the question is “how do I get into IIT?”, the simple answer everyone knows is: study hard, solve all the problems, and make sure you solve them on time during the exam. But when it comes to chess, I found it really hard to answer “what do players actually do to win the game?”, or what makes a grandmaster play better than anyone else. For IIT-JEE, “know it all and solve problems on time” will do; for chess, it is much harder to pin down how a good game should be played.

Chess is a battle in which you must not make mistakes. You might already know that any standard chess engine running on an ordinary machine will easily defeat the world’s top grandmasters. If you have played against such an engine, you would think: “Fine, that’s obvious. The computer is unimaginably strong; beating it is impossible.” Not so fast: if a super-grandmaster plays ten games against the computer, he is bound to lose most of them, but not all of them. This clearly shows that not every chess game played by a computer is unbeatable. And if you are someone (like me) who has tried to improve by playing against the engine, failed unbelievably, finally managed to scrape a draw after several “reverse moves”, then realised you had mistakenly set a low “fixed search depth”, and you badly want to know how chess works, you have come to the right place.

So what is so different about chess grandmasters, or indeed any strong player (especially if, as an ordinary player, you have had the awesome opportunity of playing against a grandmaster)? Let me tell you the secret. Make sure you do the following every time you play:

Thursday, 5 November 2015

Writing a good story
While reading a story, we tend to feel as if its events are real. That is not exactly true: if the hero is in a dangerous situation, you don’t feel “shocked” or “scared” the way you would if it were real, but you do know how it feels to be in that situation and be shocked or scared. Now let’s move on to the part where we actually write our story.

Saturday, 3 October 2015

Bellman Ford Algorithm implementation in C++

Shortest path algorithm: Elaborated
Email: sreramk@outlook.com
Shortest path algorithms are used in many real-life applications, especially applications involving maps, and in artificial intelligence algorithms which are NP in nature. A graph is a collection of nodes connected by edges and can be expressed as $G = (V, E)$, where $V$ stands for the set of vertices and $E$ for the set of edges. These vertices and the connecting edges together can be imagined to form a geometrical structure. In some cases, for each edge $(u, v)$ (here $u$ and $v$ are vertices which are connected) there is a weight value $w(u, v)$ assigned to it. This can also be called the “distance” or “cost” of connecting the two nodes. A shortest path algorithm traces the minimum distance (or cost) between two nodes which are either directly or indirectly connected. The weight values along each possible path from the source node to the destination node are summed up, and the path with the minimum sum is chosen as the shortest path.

We may represent a weighted graph as $G = (V, E, W)$, where the extra parameter $W$ represents the set of weight values across the edges: $w(u, v)$ is the weight of the edge $(u, v)$. Usually, shortest path algorithms have time complexity of at least $O(|V|)$, because every node needs to be visited at least once. In the case of AI algorithms, the vertices are generated based on specific rules, and the number of vertices is usually a function of depth; because most nodes have more than one connection, the relation between the depth and the number of nodes may even be non-polynomial.

Dijkstra’s algorithm has time complexity $O((|V| + |E|) \log |V|)$ with a binary heap (note: $|V|$ represents the number of nodes and $|E|$ the number of edges). Dijkstra’s algorithm holds only when $w(u, v) \ge 0$ for every edge. But by the definition of a weighted graph given above, weight values can also be negative, so a modified algorithm that also solves problems with negative-weight edges is required, at the cost of greater runtime complexity. Dijkstra’s algorithm is a greedy algorithm which selects the best possible path to every node in the graph originating from the source, but it fails on problems that have negative weight values. In scenarios where negative weights are present, the Bellman Ford algorithm is used instead. Its runtime complexity is $O(|V| \cdot |E|)$, which is far greater than Dijkstra’s. Generally, $|E| \ge |V| - 1$ unless there are disjoint nodes or subgraphs; unless stated otherwise, the graphs discussed here are assumed to contain no disjoint sets or nodes.

Sunday, 2 August 2015

Fundamentals of calculus chapter 4: More on differential equations

Differential equations have no immediate means of solution. Because a differential equation is obtained by differentiation, it is easier to form the differential equation than to find its solution. If our motive is purely to solve differential equations, our best approach is to study how expressions “behave” when we differentiate them; we may then trace that sequence of steps backwards to recover the original expression.
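The back-tracing idea can be seen in the simplest case. To solve $\frac{dy}{dx} = ky$, recall how $e^{kx}$ “behaves” under differentiation: it reproduces itself, scaled by $k$. Guessing that form and verifying it gives the solution:

```latex
\frac{d}{dx}\left( C e^{kx} \right) = k \, C e^{kx}
\quad\Longrightarrow\quad
y = C e^{kx} \ \text{satisfies}\ \frac{dy}{dx} = ky \ \text{for any constant } C.
```

Observing the behaviour of an expression under differentiation, and then tracing that step backwards, is exactly the approach described above.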
