With AI and machine learning now moving out of the realm of science fiction into reality, it's time for enterprises to recognize the potential in machine learning.
It's estimated that more than 50% of large enterprises are already conducting their own experiments in AI.
With an increase in the number of connected devices and the expanding Internet of Things, the amount of data generated is growing by leaps and bounds.
According to Cisco's Visual Networking Index initiative, the Internet has entered the "zettabyte era." A zettabyte equals 1 sextillion bytes, or 1,000 exabytes. Cisco projected that global Internet traffic would reach 1.1 zettabytes per year by the end of 2016, and that by 2019 it would hit 2 zettabytes per year.
United we stand: working together
The only way we can deal with this flood of data and extract vital information from it is through machine learning. Since we are still in the learning phase with this new technology, most tech giants have open sourced their projects to build acceptance and understanding.
Machine Learning is here to stay

AI timeline
A brief look back in history highlights certain key points in the journey of artificial intelligence.
In case you are interested in a more in-depth timeline, check out the Timeline of Computer History.
1941 Isaac Asimov's short science fiction story introduces the Three Laws of Robotics. This is believed to be the first use of the term 'robotics'.
1943 Two scientists, Warren S. McCulloch and Walter H. Pitts, publish the paper 'A Logical Calculus of the Ideas Immanent in Nervous Activity', which has since become the foundation for the study of artificial neural networks.
1948 Norbert Wiener publishes the book 'Cybernetics', which influences research into AI and control systems.
1950 Alan Turing proposes a standard test to answer the question 'Can machines think?' He argued that if a machine's answers to questions cannot be distinguished from those of a human, then the machine can be said to 'think'.
1956 John McCarthy, Marvin Minsky, and Claude Shannon organize a summer research workshop at Dartmouth with leading thinkers of the time, christening the field of study 'artificial intelligence'.
1958 John McCarthy at MIT invents the programming language LISP (List Processing), which has helped simplify the kind of symbolic programming required to mirror human thinking.
1965 A Stanford team led by Ed Feigenbaum, Joshua Lederberg and Carl Djerassi creates DENDRAL, an artificial intelligence program designed to solve problems using the 'if-then' rules format.
1977 C3PO and R2D2 become the face of the future of robotics.
1986 The LMI Lambda LISP workstation makes its debut, cheaper and faster than earlier LISP machines.
1997 IBM's Deep Blue defeats Grandmaster Garry Kasparov, winning two games to Kasparov's one, with the other three games ending in draws.
2000 Honda introduces its humanoid robot ASIMO (Advanced Step in Innovative Mobility). It could walk at 1 mph, climb stairs and change direction after detecting hazards.
2005 The Stanford Racing Team wins the 2005 DARPA Grand Challenge, with their autonomous vehicle 'Stanley' completing the 132-mile desert course in under 7 hours with no human intervention.
2011 IBM's Watson defeats former Jeopardy! champions in a televised exhibition match.
2011 Siri marks her entry into our world with the Apple iPhone 4S.
Making it work for you
Like any new technology, AI and machine learning are at the inception stage and have a long way to go before becoming an integral part of the workplace.
While enterprises will do well to keep track of these developments and stay prepared, here are a few ways they can prepare for a seamless transition in the future:
⦁ Keep analyzing: Machine learning depends on the data it is fed and the algorithms it uses to analyze that data. You have to understand these algorithms to ensure that the right data is being provided for optimal results.
⦁ Keep it open: Encourage employees to get involved in the process. Let them use the data to see how their performance can be improved. This will help debunk the myth that machines will take their jobs in the future.
⦁ Keep experimenting: The algorithms might point to a solution that would not have been tried in the normal course of business. At the risk of sounding preachy, it is essential to keep an open mind and trust the system even when it goes against the normal practices to date.
⦁ Keep customers first: Machine learning has great potential to improve customer experiences. A good example is how Amazon uses the technology to improve its product recommendations. It is important to pass such benefits on to customers so that everyone gains from the new technology.
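To make the Amazon example above a little more concrete, here is a toy sketch of the "customers who bought X also bought Y" idea behind many recommendation engines. The function name and the basket data are hypothetical, and real systems use far more sophisticated item-to-item collaborative filtering; this is only a minimal illustration of how data plus a simple algorithm yields a recommendation.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_recommend(baskets, target_item, k=2):
    """Recommend the k items most often bought together with target_item.

    Counts how often each pair of items appears in the same basket,
    then ranks the items that co-occur with the target.
    """
    pair_counts = Counter()
    for basket in baskets:
        # Count each unordered pair of distinct items in the basket once.
        for a, b in combinations(sorted(set(basket)), 2):
            pair_counts[(a, b)] += 1

    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == target_item:
            scores[b] += n
        elif b == target_item:
            scores[a] += n
    return [item for item, _ in scores.most_common(k)]

# Hypothetical purchase history: each inner list is one customer's basket.
baskets = [
    ["laptop", "mouse", "bag"],
    ["laptop", "mouse"],
    ["laptop", "keyboard"],
    ["mouse", "keyboard"],
]
recs = cooccurrence_recommend(baskets, "laptop")
print(recs)  # 'mouse' ranks first: it co-occurs with 'laptop' most often
```

Even at this scale the pattern is visible: the quality of the recommendation depends entirely on the data collected and on understanding what the algorithm counts, which is exactly the "keep analyzing" point above.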
A robotic future
The biggest fear most employees have is of being replaced by machines. However, as with any new idea or technology, people need to take the time to understand it and learn with it in order to make it work. There are bound to be hiccups along the way. But without the human element, any technology would just remain another cog in the machine.