What it is and why it matters.
Artificial intelligence (AI) makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks. Most AI examples that you hear about today – from chess-playing computers to self-driving cars – rely heavily on deep learning and natural language processing. Using these technologies, computers can be trained to accomplish specific tasks by processing large amounts of data and recognizing patterns in the data.
Artificial Intelligence History
The term artificial intelligence was coined in 1956, but AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage.
Early AI research in the 1950s explored topics like problem solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were household names.
This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.
While Hollywood movies and science fiction novels depict AI as human-like robots that take over the world, the current evolution of AI technologies isn’t that scary – or quite that smart. Instead, AI has evolved to provide many specific benefits in every industry. Keep reading for modern examples of artificial intelligence in health care, retail and more.
A brief timeline of AI milestones:
1950s–1970s, Neural networks: Early work with neural networks stirs excitement for "thinking machines."
1980s–2010s, Machine learning: Machine learning becomes popular.
2011–2020s, Deep learning: Deep learning breakthroughs drive an AI boom.
Present day, Generative AI: Generative AI, a disruptive technology, soars in popularity.
Big questions remain: What is the role of ethics in the future of AI? How important is big data? Why is domain knowledge crucial for the success of AI? Most importantly: "It really is who has the data. That's who will be the king," says Harper Reid, Technology Pioneer.
Why is artificial intelligence important?
AI automates repetitive learning and discovery through data. Instead of automating manual tasks, AI performs frequent, high-volume, computerized tasks. And it does so reliably and without fatigue. Of course, humans are still essential to set up the system and ask the right questions.
AI adds intelligence to existing products. Many products you already use will be improved with AI capabilities, much like Siri was added as a feature to a new generation of Apple products. Automation, conversational platforms, bots and smart machines can be combined with large amounts of data to improve many technologies. Upgrades at home and in the workplace range from security intelligence and smart cams to investment analysis.
AI adapts through progressive learning algorithms to let the data do the programming. AI finds structure and regularities in data so that algorithms can acquire skills. Just as an algorithm can teach itself to play chess, it can teach itself what product to recommend next online. And the models adapt when given new data.
AI analyzes more and deeper data using neural networks that have many hidden layers. Building a fraud detection system with five hidden layers used to be impossible. All that has changed with incredible computer power and big data. You need lots of data to train deep learning models because they learn directly from the data.
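To make the idea of a many-layered network concrete, here is a minimal sketch of a classifier with five hidden layers, built with scikit-learn on synthetic, imbalanced data. The feature count, layer sizes and class balance are illustrative placeholders, not a recipe for a production fraud system.

```python
# A minimal sketch of a fraud-style classifier with five hidden layers.
# make_classification stands in for real transaction records; the layer
# sizes and class balance are illustrative, not tuned.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Imbalanced two-class data: roughly 2% "fraud", the rest legitimate.
X, y = make_classification(n_samples=20_000, n_features=30,
                           weights=[0.98, 0.02], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Five hidden layers, as mentioned above.
model = MLPClassifier(hidden_layer_sizes=(64, 64, 32, 32, 16),
                      max_iter=300, random_state=0)
model.fit(X_train, y_train)

print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")
```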
AI achieves incredible accuracy through deep neural networks. For example, your interactions with Alexa and Google are all based on deep learning. And these products keep getting more accurate the more you use them. In the medical field, AI techniques from deep learning and object recognition can now be used to pinpoint cancer on medical images with improved accuracy.
AI gets the most out of data. When algorithms are self-learning, the data itself is an asset. The answers are in the data – you just have to apply AI to find them. Since the role of the data is now more important than ever, it can create a competitive advantage. If you have the best data in a competitive industry, even if everyone is applying similar techniques, the best data will win. But using that data to innovate responsibly requires trustworthy AI. And that means your AI systems should be ethical, equitable and sustainable.
Artificial Intelligence in Today's World
How Artificial Intelligence Is Being Used
Every industry has a high demand for AI capabilities – including systems that can be used for automation, learning, legal assistance, risk notification and research. Specific uses of AI in industry include:
Health Care
AI applications can provide personalized medicine and X-ray readings. Personal health care assistants can act as life coaches, reminding you to take your pills, exercise or eat healthier.
Retail
AI provides virtual shopping capabilities that offer personalized recommendations and discuss purchase options with the consumer. Stock management and site layout technologies will also be improved with AI.
Manufacturing
AI can analyze factory IoT data as it streams from connected equipment to forecast expected load and demand using recurrent networks, a specific type of deep learning network used with sequence data.
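As a rough illustration of the idea, the sketch below trains a small recurrent (LSTM) network in Keras to predict the next reading in a sensor series. The sine-wave data, 24-step window and layer sizes are stand-ins; real streaming factory data and tuning would look quite different.

```python
# A minimal sketch of a recurrent forecaster for streaming sensor data.
# A noisy sine wave stands in for real factory readings.
import numpy as np
from tensorflow import keras

# Fake hourly load readings: a daily cycle plus noise.
hours = np.arange(2_000)
series = np.sin(2 * np.pi * hours / 24) + 0.1 * np.random.randn(len(hours))

# Turn the series into (24-hour window -> next-hour value) training pairs.
window = 24
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]  # shape: (samples, timesteps, features)

model = keras.Sequential([
    keras.layers.Input(shape=(window, 1)),
    keras.layers.LSTM(32),   # recurrent layer reads the sequence
    keras.layers.Dense(1),   # predicts the next reading
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, batch_size=64, verbose=0)

next_hour = model.predict(X[-1:], verbose=0)
print("Forecast for the next hour:", float(next_hour[0, 0]))
```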
Banking
Artificial intelligence enhances the speed, precision and effectiveness of human efforts. In financial institutions, AI techniques can be used to identify which transactions are likely to be fraudulent, provide fast and accurate credit scoring, and automate manually intensive data management tasks.
AI has been an integral part of SAS software for years.
WildTrack and SAS: Saving endangered species one footprint at a time.
Flagship species like the cheetah are disappearing. And with them, the biodiversity that supports us all. WildTrack is exploring the value of artificial intelligence in conservation – to analyze footprints the way indigenous trackers do and protect these endangered animals from extinction.
Additionally, several technologies enable and support AI:
Computer vision relies on pattern recognition and deep learning to recognize what’s in a picture or video. When machines can process, analyze and understand images, they can capture images or videos in real time and interpret their surroundings (a minimal sketch appears after this list).
Natural language processing (NLP) is the ability of computers to analyze, understand and generate human language, including speech. The next stage of NLP is natural language interaction, which allows humans to communicate with computers using normal, everyday language to perform tasks.
Graphics processing units (GPUs) are key to AI because they provide the heavy compute power that’s required for iterative processing. Training neural networks requires big data plus compute power.
The Internet of Things generates massive amounts of data from connected devices, most of it unanalyzed. Automating models with AI will allow us to use more of it.
Advanced algorithms are being developed and combined in new ways to analyze more data faster and at multiple levels. This intelligent processing is key to identifying and predicting rare events, understanding complex systems and optimizing unique scenarios.
APIs, or application programming interfaces, are portable packages of code that make it possible to add AI functionality to existing products and software packages. They can add image recognition capabilities to home security systems and Q&A capabilities that describe data, create captions and headlines, or call out interesting patterns and insights in data.
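As a small illustration of the computer vision item above, the sketch below runs a pretrained image classifier from torchvision over a single photo. The file name photo.jpg is a placeholder, and a real system would add its own labels, batching and error handling.

```python
# A minimal sketch of image classification with a pretrained network.
# Assumes a recent torchvision install; "photo.jpg" is a placeholder path.
import torch
from PIL import Image
from torchvision import models, transforms

# Standard ImageNet preprocessing: resize, crop, convert, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()  # inference mode

image = Image.open("photo.jpg").convert("RGB")  # placeholder image
batch = preprocess(image).unsqueeze(0)          # add a batch dimension

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)

top_prob, top_class = probs.max(dim=1)
print(f"Predicted ImageNet class index {top_class.item()} "
      f"with probability {top_prob.item():.2f}")
```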
The goal of AI is to provide software that can reason on input and explain on output. AI will provide human-like interactions with software and offer decision support for specific tasks, but it’s not a replacement for humans – at least not anytime soon.
AI Technology and How It's Used
Artificial intelligence or AI is a popular buzzword you’ve probably heard or read about. Articles about robots, technology, and the digital age may fill your head when you think about the term AI. But what is it really, and how is it used?
Artificial intelligence is a technological advancement that involves programming computers to solve problems. It is often discussed alongside machine learning, deep learning and big data.
What is AI?
Artificial intelligence is defined as the theory and development of computer programs that can perform tasks and solve problems that usually require human intelligence. Visual perception, speech recognition, decision-making and language translation would all normally need human intelligence, but computer programs can now use their capabilities to handle these tasks.
This type of intelligence was born in the summer of 1956, when a group of scientists and mathematicians met at Dartmouth to discuss the idea of a computer that could actually think. They didn't know what to call it or how it would work, but their conversations there created the spark that ignited artificial intelligence. Since the "Dartmouth workshop," as it is called, there have been highs and lows in the development of this intelligence. There were years when the idea of developing an intelligent computer was abandoned and little to no work was done in the field at all. In recent years, however, a flurry of work has gone into developing and integrating this exciting technology into daily life.
How does artificial intelligence differ from human intelligence?
So how is AI different from human intelligence? Artificial intelligence, and the algorithms that make it run, is designed by humans, and while a computer can learn, adapt and grow from its surroundings, at the end of the day it was created by people. Human intelligence has a far greater capacity for multitasking, memory, social interaction and self-awareness. Artificial intelligence has no I.Q. in any meaningful sense, which sets it apart from humans and human intelligence. There are many facets of thought and decision-making that AI simply can't master: computing feelings isn't something we can train a machine to do, no matter how smart it is, and you can't automate human multitasking or create genuine relationships. Cognitive learning and machine learning remain distinct. While AI applications can run quickly and can be more objective and accurate on narrow tasks, their capability stops short of replicating human intelligence. Human thought encompasses far more than a machine can be taught, no matter how intelligent it is or what formulas you use.
How does AI work?
While it’s one thing to know what AI is, it’s another to understand the underlying functions. Artificial intelligence operates by processing data through advanced algorithms, combing large data sets and learning from the patterns or features in the data. There are many theories and subfields in AI, including:
Machine learning
Machine learning automates analytical model building, using methods such as statistics and neural networks to find hidden insights in data without being explicitly programmed for what to look for or what to conclude. It is a common way for programs to find patterns and improve over time (a minimal sketch follows this list).
Deep learning
Deep learning utilizes huge neural networks with many layers, taking advantage of their size to process huge amounts of data with complex patterns. Deep learning is a subset of machine learning that uses larger data sets and more layers.
Cognitive computing
Cognitive computing aims for natural, human-like interaction with machines. Think of robots that can see and hear, and then respond as a human would.
Computer vision
In AI, computer vision utilizes pattern recognition and deep learning to understand a picture or video. This means the machine can look around and take pictures or videos in real time, and interpret the surroundings.
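Here is the minimal machine learning sketch referenced above: a decision tree that learns to separate iris flowers from labeled examples, with no hand-written rules. The bundled iris data set simply stands in for whatever business data a real project would use.

```python
# A minimal sketch of machine learning finding patterns in data.
# scikit-learn's bundled iris data set stands in for real business data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model is never told the rules that separate the species;
# it infers them from the labeled examples.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)

print(f"Accuracy on unseen flowers: {model.score(X_test, y_test):.2f}")
```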
The overall goal of AI is to make software that can reason about an input and explain a result with its output. Artificial intelligence provides human-like interactions, but it won't be replacing humans anytime soon.
How is AI used?
Artificial intelligence is being used in hundreds of ways all around us. It has changed our world and made our lives more convenient and interesting. Some of the many uses of AI you may know include:
Voice recognition
Most people know to call out for Siri when they need directions, or to ask Alexa on a smart home device to set a timer. This technology is a form of artificial intelligence. Machine learning helps Siri, Alexa and other voice recognition devices learn about you and your preferences, so they know how best to help you. These tools also use artificial intelligence to pull in answers to your questions or perform the tasks you ask.
Self-driving cars
Machine learning and visual recognition are used in autonomous vehicles to help the car understand its surroundings and react accordingly. Object detection and pedestrian recognition help self-driving cars identify people and keep them safe. These cars can learn and adapt to traffic patterns, signs and more.
Chatbots
Many companies use artificial intelligence to strengthen their customer service teams. Chatbots can interact with customers and answer generic questions without taking up a real person's time. They can learn and adapt to certain responses and pull in extra information to produce a better answer, and a certain keyword can trigger them to return a specific response. These expert systems can give customers a human level of interaction.
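A toy version of the keyword-triggered behavior described above might look like the sketch below. The keywords and canned answers are invented for illustration; real chatbots layer natural language processing and learned models on top of rules like these.

```python
# A minimal sketch of a keyword-triggered chatbot.
# The keywords and canned answers are illustrative placeholders.
RESPONSES = {
    "refund": "You can request a refund within 30 days of purchase.",
    "shipping": "Standard shipping takes 3-5 business days.",
    "hours": "Our support team is available 9am-5pm, Monday to Friday.",
}

def reply(message: str) -> str:
    """Return a canned answer when a known keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "Let me connect you with a human agent."

print(reply("How long does shipping usually take?"))
print(reply("Can I change my password?"))
```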
Online shopping
Online shopping systems use algorithms to learn more about your preferences and predict what you'll want to shop for. They can then put those items right in front of you, grabbing your attention quickly. Amazon and other retailers are constantly refining their algorithms to learn more about you and what you might buy.
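As a rough sketch of how such recommendations can work, the example below computes item-to-item cosine similarity over a made-up purchase matrix and suggests the product most often bought alongside the one a shopper is viewing. Real retail systems use far richer signals, but the core idea is similar.

```python
# A minimal sketch of "customers who bought X also bought Y" recommendation
# using item-to-item cosine similarity; the purchase matrix is toy data.
import numpy as np

# Rows are shoppers, columns are products (1 = purchased, 0 = not).
ratings = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 0, 0],
    [1, 1, 0, 1, 1],
])
products = ["laptop", "mouse", "desk lamp", "keyboard", "laptop bag"]

# Cosine similarity between product columns.
norms = np.linalg.norm(ratings, axis=0)
similarity = (ratings.T @ ratings) / np.outer(norms, norms)

# Recommend the product most similar to the one the shopper is viewing.
viewed = products.index("laptop")
scores = similarity[viewed].copy()
scores[viewed] = -1  # don't recommend the same item
print("Shoppers who bought a laptop also tend to buy:",
      products[int(scores.argmax())])
```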
Streaming services
When you sit down to watch your favorite TV show or listen to your favorite music, you may get other suggestions that seem interesting to you. That’s artificial intelligence at work! It learns about your preferences and uses algorithms to process all the TV shows, movies, or music it has and finds patterns to give you suggestions.
Healthcare technology
AI is playing a huge role in healthcare technology, where new tools for diagnosis, drug development and patient monitoring are all being put to use. The technology can learn and develop as it is used, learning more about the patient or the medicine and adapting to improve over time.
Factory and warehouse systems
The shipping and retail industries will never be the same thanks to AI-driven software. Systems that automate the entire shipping process and learn as they go are making things run more quickly and efficiently. These systems are transforming how warehouses and factories operate, making them safer and more productive.
Educational tools
Tools like plagiarism checkers and citation finders help educators and students use artificial intelligence to improve papers and research. These systems can read the words used and search their databases in the blink of an eye, checking spelling and grammar, flagging plagiarized content and more.
There are many other uses of AI all around us every day. Technology is advancing at a rapid pace and is continually changing how we live.
What is the future of AI?
AI systems are already affecting how we live, and the door is wide open for how they will shape our future. AI-driven technology will likely continue to improve efficiency and productivity and expand into even more industries over time. Experts say there will likely be more discussion of privacy, security and continued software development to help keep people and businesses safe as AI advances.
While many people worry that robots will end up taking their jobs, the truth is that many fields are fairly safe from automation. Fields like IT will continue to be needed to adopt the new technologies and security systems that make AI run. Healthcare professionals and teachers won't be replaced by robots; the work they do directly with patients and children can't be replicated. Similarly, in business, some processes can be automated, but human instinct, decision-making and relationships will always be vital.
Artificial intelligence is transforming the way the world runs, and will continue to do so as time marches on. Now is an ideal time to get involved and get a degree in IT that can help propel you to an exciting AI career. You can be a part of the world-changing revolution that is artificial intelligence.