Deep Learning and Artificial Intelligence

Originally published at CSB Tech Blog


Over the past decade, AI-inspired innovations have matured into the hottest tech trend of the new millennium. During that time, computer vision has improved by leaps and bounds, along with machine translation, speech recognition, and many types of data analytics. While we’ve enjoyed dramatic improvements in our online experience, machine learning has been working quietly in the background, which makes these advances seem all the more sudden, as if they happened overnight. Although the applications of AI research do not possess an independent intelligence, with today’s processing power their capacity for automation is unparalleled.

Deep learning is a branch of machine learning (ML) that emerged from the increased storage capacity, access to data, and processing power available in the new millennium. The impressive applications now reaching every industry are the result of over 70 years of research and development.




A Brief History of Artificial Intelligence


When computers were first being conceived, we were already wondering how to design them to behave intelligently. In 1946, Alan Turing produced the first detailed design of a stored-program computer. In the following years he concentrated intensely on the problem of artificial intelligence, and in 1950 he proposed the now-famous test for determining whether a machine can think. The same year the Turing Test was introduced, Isaac Asimov’s I, Robot featured humanoid robots with artificial intelligence. Asimov, along with other science fiction authors of that era, inspired researchers and excited the imagination of a generation about the potential of artificial intelligence.

During this time, research in neuroscience inspired computer systems called artificial neural networks (ANNs). In an attempt to mirror the way our brains work, the connections between artificial neurons grow stronger when the neurons fire together. Meanwhile, other researchers were working on the idea of creating programs capable of learning. Ten years after the birth of the computer, the field of artificial intelligence was officially born at a workshop at Dartmouth College in 1956. A few years later, Arthur Samuel coined the term “machine learning” for the science of using statistics and probability theory to create algorithms that improve their performance at a given task through experience rather than explicit instructions. ANNs are considered a branch of ML and are among the earliest techniques in the field of AI.
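That “grow stronger when fired together” rule can be sketched in a few lines of Python. This is only an illustrative Hebbian-style update; the array sizes and learning rate are arbitrary assumptions, not a description of any historical system.

    import numpy as np

    # Illustrative Hebbian-style update: connections between neurons that
    # activate together are strengthened. Sizes and learning rate are arbitrary.
    rng = np.random.default_rng(0)
    pre = rng.random(4)             # activations of 4 "input" neurons
    post = rng.random(3)            # activations of 3 "output" neurons
    weights = np.zeros((3, 4))      # connection strengths (output x input)

    learning_rate = 0.1
    weights += learning_rate * np.outer(post, pre)   # co-active pairs grow stronger
    print(weights)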

Cycles of Hype


Since the beginning of AI research, the technology has gone through cycles of hype, leading to over-inflated expectations, disappointment, and loss of funding. In the 1950s, the world was introduced to computers playing human games, solving algebra word problems, and learning languages. The intelligence displayed by these machines seemed incredible to people at the time, and there was great confidence among governments, enterprises, and academia that these demonstrations would quickly lead to practical applications. By the 1970s, after failing to deliver on that early promise, AI fell into disfavor. Another hype cycle began in the 1980s, when a few successful commercial applications brought a renewal of interest. However, the field was still not mature, and there were difficulties in getting algorithms to do the work expected of them.

The Deep Learning Revolution


In the early 2000s, neural networks had fallen so far out of favor that it became exceedingly difficult to get research on the topic published. In 2006, a small group of researchers, led by Geoffrey Hinton, set out to re-brand neural networks as deep learning. Their paper, A fast learning algorithm for deep belief nets, sparked a revival of research into neural networks. It proposed training many more layers than before, with results far exceeding previous attempts. Three years later, Stanford researchers published Large-scale deep unsupervised learning using graphics processors. The GPU became the key to unlocking ANNs and other AI techniques, producing results up to 70 times faster than previous approaches and dramatically reducing the time required to perform an experiment. A GPU is so much faster than a CPU for this work because a CPU is designed to switch quickly between many different tasks, while a GPU devotes all of its power to repeatedly performing the same type of operation in parallel.
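As a rough illustration of that difference, a minimal PyTorch sketch can time the same matrix multiplication on a CPU and, when one is available, on a GPU. The matrix sizes are arbitrary assumptions, and the timings will vary widely by hardware.

    import time
    import torch

    # Time the same large matrix multiplication on CPU and (if present) GPU.
    a = torch.randn(4096, 4096)
    b = torch.randn(4096, 4096)

    start = time.time()
    torch.matmul(a, b)
    print(f"CPU: {time.time() - start:.3f}s")

    if torch.cuda.is_available():
        a_gpu, b_gpu = a.to("cuda"), b.to("cuda")
        torch.cuda.synchronize()        # wait for the copy to finish
        start = time.time()
        torch.matmul(a_gpu, b_gpu)
        torch.cuda.synchronize()        # wait for the kernel to finish
        print(f"GPU: {time.time() - start:.3f}s")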

In 2009, researchers released ImageNet, a dataset created to map out the world of images, and the following year it became the basis of an annual competition. Deep learning methods entered that competition and, with a breakout win in 2012, began to dominate it, drawing a great deal of attention to AI research. By 2016, deep learning had gone mainstream, and it now plays a role in our everyday lives.


Behind the Scenes


Machine learning happens in a few stages, including data processing, training, deployment, and monitoring. First, you must determine what questions you want to ask, or what problems you need to solve. Next, you must determine which data is likely to provide the answer. Then the data must be gathered and prepared: extraneous information is removed, incomplete records are handled, and the result is checked for completeness and correct formatting. Data selection and preparation often require running tests on small samples and transforming the data multiple times.
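A minimal sketch of that preparation step, assuming a hypothetical transactions.csv file with amount, date, and free-text notes columns, might look like this:

    import pandas as pd

    # Hypothetical input file and column names, used only to illustrate
    # the typical preparation steps.
    df = pd.read_csv("transactions.csv")

    df = df.drop(columns=["notes"])             # remove extraneous information
    df = df.dropna(subset=["amount", "date"])   # discard incomplete records
    df["date"] = pd.to_datetime(df["date"])     # enforce a consistent format
    df["amount"] = df["amount"].astype(float)

    sample = df.sample(frac=0.1, random_state=42)   # small sample for quick tests
    print(sample.describe())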

Once the data is ready, an algorithm must be selected. There are four main categories of ML models: Supervised, Unsupervised, Semi-Supervised, and Reinforcement algorithms (a brief sketch contrasting the first two categories follows the list).

  • Supervised algorithms are used when there is a target output. These models are fed labeled data until they produce the desired results, which may include classification, prediction, and anomaly detection.
  • Unsupervised algorithms find relationships in unlabeled data and can help to eliminate variables.
  • Semi-supervised algorithms are fed a small amount of labeled data and then taught to classify a large batch of unlabeled data.
  • Reinforcement algorithms use trial and error to improve the performance of a software agent within a specific context.
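Here is a minimal sketch contrasting the first two categories using scikit-learn; the dataset is synthetic and purely illustrative.

    from sklearn.cluster import KMeans
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Synthetic, purely illustrative data: 200 samples, 4 features, 2 classes.
    X, y = make_classification(n_samples=200, n_features=4, random_state=0)

    # Supervised: the model is trained on labeled data (X with targets y).
    clf = LogisticRegression().fit(X, y)
    print(clf.predict(X[:5]))       # predicted class labels

    # Unsupervised: the model sees only X and looks for structure on its own.
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(clusters[:5])             # cluster assignments, no labels involved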

These models are continually trained with new data, and in the search for an optimal output the data may be reformatted multiple times. There are even algorithms that monitor the progress of the primary algorithms, notifying technicians in case of an anomaly.
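Such a monitoring layer can be as simple as comparing a model's recent accuracy against a baseline and raising an alert when it drifts too far. The baseline, threshold, and alert below are purely hypothetical placeholders.

    # Hypothetical baseline and allowed drop; a real system might page a technician.
    BASELINE_ACCURACY = 0.92
    MAX_DROP = 0.05

    def check_model_health(recent_correct: int, recent_total: int) -> None:
        accuracy = recent_correct / recent_total
        if accuracy < BASELINE_ACCURACY - MAX_DROP:
            print(f"ALERT: accuracy dropped to {accuracy:.2%}")
        else:
            print(f"OK: accuracy at {accuracy:.2%}")

    check_model_health(recent_correct=430, recent_total=500)   # 86% -> alert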

Algorithmic Bias


AI-influenced applications are quickly changing the enterprise landscape. ML has vastly improved translation and voice recognition apps, and it also offers a versatile way to interpret financial data and improve business practices. It’s easy to get excited about the possibilities; however, algorithms are only as good as their designers and their data. Without proper training, machine learning will not produce the desired results. One way things go wrong is by using old datasets that no longer reflect the current market. Incomplete or unrepresentative data can lead to biased results. We tend to assume that computer code won’t share our biases, but experience has proven otherwise.
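One simple, admittedly incomplete, way to catch that kind of skew before training is to compare how groups and outcomes are represented in the data. The DataFrame and column names here are hypothetical.

    import pandas as pd

    # Hypothetical training data: which group each record belongs to and
    # whether it received the positive outcome.
    df = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
        "approved": [ 1,   1,   0,   1,   0,   0,   0,   0 ],
    })

    print(df["group"].value_counts(normalize=True))   # how often each group appears
    print(df.groupby("group")["approved"].mean())     # approval rate per group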

Machine learning is currently used to help determine medical care and to make legal and financial decisions, and some of these applications have been found to contain bias. For example, one legal product performs a risk assessment of defendants and is used to inform bail and sentencing decisions. An independent study found that it labeled African American defendants who did not go on to commit further crimes as higher risk, while white defendants who did commit new crimes were labeled as lower risk. In 2015, Google’s advertising engine was found to be far less likely to show women ads for high-paying jobs. These are just two examples of how racism, sexism, and other types of bias can slip into this far-reaching technology.

Understandably, there is much concern surrounding the lack of transparency in machine learning practices. AlgorithmWatch, a non-profit research and advocacy group, was founded to address those concerns. The group works specifically to evaluate and increase the transparency of algorithmic decisions with social relevance. It networks with experts from a variety of cultures and disciplines to explain this complex subject to the general public and to develop strategies that mitigate the effects of bias in algorithmic decision making. This fledgling industry will require regulation and purposeful strategies to avoid the danger of bias in the algorithms themselves and in their training data. Society as a whole needs to be actively engaged with the values going into automated decision making. That can be a difficult task, as knowledge of these processes is not widespread and we usually see only their results.


Welcome to the Future


While computers aren’t gaining the ability to think, they can process volumes of information at a rate that’s difficult to comprehend. That ability has become increasingly important as the modern world creates more data than it can possibly process, and the amount of data produced keeps growing. Today, one of the most important tasks in artificial intelligence is creating algorithms that can extract value from massive datasets. That fact inspired Harvard Business Review to declare data scientist the “sexiest job of the 21st century.”

Applications of artificial intelligence bring an incredible amount of automation to our lives. For example, algorithms can process an organization’s entire financial ledger and look for anomalies, allowing human auditors to focus on the specific cases that require attention. Google Translate introduced an AI-driven translation engine that is continually improving, using user-submitted translations to train and refine the software. We now have voice-activated digital assistants that are quickly becoming indispensable and will soon be standard in the workplace. Of course, you’ve noticed how good social media is at guessing who you might know, or how Netflix knows what type of movie you might like to watch.
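The ledger example can be sketched with an off-the-shelf anomaly detector. The transaction amounts below are made up, and scikit-learn's IsolationForest merely stands in for whatever a real auditing product would use.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical ledger: mostly routine amounts plus a couple of outliers.
    amounts = np.array([120, 98, 110, 105, 99, 101, 97, 5000, 115, 102, -3000])
    amounts = amounts.reshape(-1, 1)

    detector = IsolationForest(contamination=0.2, random_state=0)
    flags = detector.fit_predict(amounts)    # -1 marks an anomaly, 1 marks normal

    for amount, flag in zip(amounts.ravel(), flags):
        if flag == -1:
            print(f"Flag for human review: {amount}")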

There are currently applications that can convincingly colorize black-and-white images, increase the resolution of low-res photos (like in CSI, but for real), and recognize the contents of images. These computer vision applications are already impressive, and we know they’ll continue to improve; what we don’t know is how incredible they’ll become. These are just a few examples of the automation that’s growing more powerful every day.

This technology is creating opportunities in finance, entertainment, education, science, law, and every other field you can imagine. These new tools are automating some jobs out of existence, but they are expected to create as much work as, if not more than, they replace. As with any major innovation, AI comes with challenges, risks, and the potential for great rewards. Learning more and taking advantage of the opportunities sooner rather than later could give you the edge to get ahead; anyone who doesn’t may very well be left behind.

