Encore Episode: Introduction to Artificial Intelligence (AI)
Oracle University Podcast - A podcast by Oracle Corporation - Tuesdays
You probably interact with artificial intelligence (AI) more than you realize. So, there's never been a better time to start figuring out how it all works. Join Lois Houston and Nikita Abraham as they decode the fundamentals of AI so that anyone, irrespective of their technical background, can leverage the benefits of AI and tap into its infinite potential. Together with Senior Cloud Engineer Nick Commisso, they take you through key AI concepts, common AI tasks and domains, and the primary differences between AI, machine learning, and deep learning.

Oracle MyLearn: https://mylearn.oracle.com/ou/learning-path/become-an-oci-ai-foundations-associate-2023/127177
Oracle University Learning Community: https://education.oracle.com/ou-community
LinkedIn: https://www.linkedin.com/showcase/oracle-university/
X (formerly Twitter): https://twitter.com/Oracle_Edu

Special thanks to Arijit Ghosh, David Wright, Himanshu Raj, and the OU Studio Team for helping us create this episode.

--------------------------------------------------------

Episode Transcript:

00:00 The world of artificial intelligence is vast and ever-changing. And with all the buzz around it lately, we figured it was the perfect time to revisit our AI Made Easy series. Join us over the next few weeks as we chat about all things AI, helping you to discover its endless possibilities. Ready to dive in? Let's go!

00:33 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!

00:46 Nikita: Hello and welcome to the Oracle University Podcast. I'm Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Innovation Programs.

Lois: Hi there! Welcome to a new season of the Oracle University Podcast. I'm so excited about this season because we're going to delve into the world of artificial intelligence. In upcoming episodes, we'll talk about the fundamentals of artificial intelligence and machine learning. And we'll discuss neural network architectures, generative AI and large language models, the OCI AI stack, and OCI AI services.

01:27 Nikita: So, if you're an IT professional who wants to start learning about AI and ML, or even if you're a student who is familiar with OCI or similar cloud services but has no prior exposure to this field, you'll want to tune in to these episodes.

Lois: That's right, Niki. So, let's get started. Today, we'll talk about the basics of artificial intelligence with Senior Cloud Engineer Nick Commisso. Hi Nick! Thanks for joining us today. So, let's start right at the beginning. What is artificial intelligence?

01:57 Nick: Well, the ability of machines to imitate the cognitive abilities and problem-solving capabilities of human intelligence can be classified as artificial intelligence, or AI.

02:08 Nikita: Now, when you say capabilities and abilities, what are you referring to?

Nick: Human intelligence is the intellectual capability of humans that allows us to learn new skills through observation and mental digestion, to think through and understand abstract concepts and apply reasoning, and to communicate using a language and understand nonverbal cues, such as facial expressions, tone variation, and body language. You can handle objections in real time, even in a complex setting. You can plan for short- and long-term situations or projects.
And, of course, you can create music and art or invent something new, like an original idea. If you can replicate any of these human capabilities in machines, this is artificial general intelligence, or AGI. So, in other words, AGI can mimic human sensory and motor skills, performance, learning, and intelligence, and use these abilities to carry out complicated tasks without human intervention. When we apply AGI to solve problems with specific and narrow objectives, we call it artificial intelligence, or AI.

03:16 Lois: It seems like AI is everywhere, Nick. Can you give us some examples of where AI is used?

Nick: AI is all around us, and you've probably interacted with AI, even if you didn't realize it. Some examples of AI can be viewing an image or an object and identifying if that is an apple or an orange. It could be examining an email and classifying it as spam or not. It could be writing computer code or predicting the price of an older car. So let's get into some more specifics of AI tasks and the nature of related data. Machine learning, deep learning, and data science are all associated with AI, and it can be confusing to distinguish between them.

03:57 Nikita: Why do we need AI? Why's it important?

Nick: AI is vital in today's world. The amount of data that's generated far exceeds the human ability to absorb, interpret, and actually make decisions based on that data. That's where AI comes in handy, by enhancing the speed and effectiveness of human efforts. So here are two major reasons why we need AI. Number one, we want to eliminate or reduce the amount of routine tasks, and businesses have a lot of routine tasks that need to be done in large numbers. Things like approving a credit card or a bank loan, processing an insurance claim, and recommending products to customers are just some examples of routine tasks that can be handled. And second, we, as humans, need a smart friend who can create stories and poems, designs, code, and music, and have humor, just like us.

04:54 Lois: I'm on board with getting help from a smart friend! There are different domains in AI, right, Nick?

Nick: We have language, for language translation; vision, like image classification; speech, like text to speech; product recommendations that can help you cross-sell products; anomaly detection, like detecting fraudulent transactions; learning by reward, like self-driving cars; forecasting, with weather forecasting; and, of course, generating content, like creating an image from text.

05:24 Lois: There are so many applications. Nick, can you tell us more about these commonly used AI domains like language, audio, speech, and vision?

Nick: Language-related AI tasks can be text related or generative AI. Text-related AI tasks use text as input, and the output can vary depending on the task. Some examples include detecting language, extracting entities in a text, extracting key phrases, and so on. Consider the example of translating text. There are many text translation tools where you simply type or paste your text into a given text box, choose your source and target language, and then click translate. Now, let's look at the generative AI tasks. They are generative, which means the output text is generated by a model. Some examples are creating text like stories or poems, summarizing a text, answering questions, and so on. Let's take the example of ChatGPT, the most well-known generative chatbot.
These bots can create responses from their training on large language models, and they continuously grow through machine learning.

06:31 Nikita: What can you tell us about using text as data?

Nick: Text is inherently sequential, and text consists of sentences. Sentences can have multiple words, and those words need to be converted to numbers for them to be used to train language models. This is called tokenization. Now, the length of sentences can vary, and all the sentence lengths need to be made equal. This is done through padding. Words can have similarities with other words, and sentences can also be similar to other sentences. The similarity can be measured through dot product similarity or cosine similarity. We need a way to indicate that similar words or sentences may be close by. This is done through a representation called an embedding.

07:17 Nikita: And what about language AI models?

Nick: Language AI models refer to artificial intelligence models that are specifically designed to understand, process, and generate natural language. These models are trained on vast amounts of textual data and can perform various natural language processing, or NLP, tasks. The task that needs to be performed decides the type of input and output. The deep learning model architectures that are typically used to train models that perform language tasks are recurrent neural networks, which process data sequentially and store hidden states; long short-term memory networks, which process data sequentially and can retain the context better through the use of gates; and transformers, which process data in parallel and use the concept of self-attention to better understand the context.

08:09 Lois: And then there's speech-related AI, right?

Nick: Speech-related AI tasks can be either audio related or generative AI. Speech-related AI tasks use audio or speech as input, and the output can vary depending on the task. For example, speech-to-text conversion, speaker recognition, voice conversion, and so on. Generative AI tasks are generative in nature, so the output audio is generated by a model. For example, you have music composition and speech synthesis. Audio or speech is digitized as snapshots taken in time. The sample rate is the number of times in a second an audio sample is taken. Most digital audio has a sampling rate of 44.1 kilohertz, which is also the sampling rate for audio CDs. Multiple samples need to be correlated to make sense of the data. For example, listening to a song for a fraction of a second, you won't be able to infer much about the song; you'll probably need to listen to it a little bit longer. Audio and speech AI models are designed to process and understand audio data, including spoken language. The deep learning model architectures used to train models that perform audio and speech tasks are recurrent neural networks, long short-term memory networks, transformers, variational autoencoders, waveform models, and Siamese networks. All of these models take into consideration the sequential nature of audio.

09:42 Did you know that Oracle University offers free courses on Oracle Cloud Infrastructure? You'll find training on everything from cloud computing, database, and security to artificial intelligence and machine learning, all free to subscribers. So, what are you waiting for? Pick a topic, leverage the Oracle University Learning Community to ask questions, and then sit for your certification. Visit mylearn.oracle.com to get started.

10:10 Nikita: Welcome back!
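Before moving on to vision, here is a minimal Python sketch of the text-as-data steps Nick described earlier: tokenization (words to numbers), padding (equalizing sentence lengths), and cosine similarity between embedding vectors. It is purely illustrative; the toy vocabulary and the "embedding" vectors are invented for this example, and real language models use learned embeddings and far more sophisticated tokenizers.

```python
import math

sentences = ["the cat sat on the mat", "a dog barked"]

# Tokenization: build a toy vocabulary and convert each word to an integer ID.
vocab = {}
token_ids = []
for sentence in sentences:
    ids = []
    for word in sentence.split():
        if word not in vocab:
            vocab[word] = len(vocab) + 1  # reserve 0 for padding
        ids.append(vocab[word])
    token_ids.append(ids)

# Padding: make every sequence the same length by appending 0s.
max_len = max(len(ids) for ids in token_ids)
padded = [ids + [0] * (max_len - len(ids)) for ids in token_ids]
print("Padded token IDs:", padded)

# Cosine similarity between two hypothetical embedding vectors.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

cat_vec = [0.8, 0.1, 0.3]   # pretend embedding for "cat"
dog_vec = [0.7, 0.2, 0.35]  # pretend embedding for "dog"
print("Similarity:", round(cosine_similarity(cat_vec, dog_vec), 3))
```

Running this prints the padded ID sequences and a similarity score close to 1, showing that the two made-up vectors point in nearly the same direction.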
Nikita: Now that we've covered language and speech-related tasks, let's move on to vision-related tasks.

Nick: Vision-related AI tasks can be image related or generative AI. Image-related AI tasks use an image as input, and the output depends on the task. Some examples are classifying images, identifying objects in an image, and so on. Facial recognition is one of the most popular image-related tasks. It's often used for surveillance and tracking of people in real time, and it's used in a lot of different fields, including security, biometrics, law enforcement, and social media. For generative AI tasks, the output image is generated by a model. For example, creating an image from a contextual description, generating images of a specific style or high resolution, and so on. It can create extremely realistic new images and videos by generating original 3D models of objects, machine components, buildings, medication, people, and even more.

11:14 Lois: So, then, here again I need to ask, how do images work as data?

Nick: Images consist of pixels, and pixels can be either grayscale or color. And we can't really make out what an image is just by looking at one pixel. The task that needs to be performed decides the type of input needed and the output produced. Various architectures have evolved to handle this wide variety of tasks and data. The deep learning model architectures typically used to train models that perform vision tasks are convolutional neural networks, which detect patterns in images by learning hierarchical representations of visual features; YOLO, which stands for You Only Look Once and processes an image to detect the objects within it; and generative adversarial networks, which generate realistic-looking images.

12:04 Nikita: Nick, earlier you mentioned other AI tasks like anomaly detection, recommendations, and forecasting. Could you tell us more about them?

Nick: For anomaly detection, time-series data is required, and it can be univariate or multivariate, for use cases like fraud detection, machine failure, and so on. For recommendations, data about similar products or similar users is required, and you can use it to recommend products to customers. For forecasting, time-series data is required, and it can be used for things like weather forecasting and predicting stock prices.

12:43 Lois: Nick, help me understand the difference between artificial intelligence, machine learning, and deep learning. Let's start with AI.

Nick: Imagine a self-driving car that can make decisions like a human driver, such as navigating traffic or detecting pedestrians and making safe lane changes. AI refers to the broader concept of creating machines or systems that can perform tasks that typically require human intelligence. Next, we have machine learning, or ML. Visualize a spam email filter that learns to identify and move spam emails to the spam folder, based on the user's interaction and email content. ML is a subset of AI that focuses on the development of algorithms that enable machines to learn from and make predictions or decisions based on data. To understand what an algorithm is in the context of machine learning: it refers to a specific set of rules, mathematical equations, or procedures that the machine learning model follows to learn from data and make predictions. And finally, we have deep learning, or DL.
Think of image recognition software that can identify specific objects or animals within images, such as recognizing cats in photos on the internet. DL is a subfield of ML that uses neural networks with many layers, called deep neural networks, to learn and make sense of complex patterns in data.

14:12 Nikita: Are there different types of machine learning?

Nick: There are several types of machine learning, including supervised learning, unsupervised learning, and reinforcement learning. Supervised learning is where the algorithm learns from labeled data to make predictions or classifications. Unsupervised learning is where the algorithm discovers patterns and structures in unlabeled data, such as through clustering or dimensionality reduction. And then, you have reinforcement learning, where agents learn to make predictions and decisions by interacting with an environment and receiving rewards or punishments.

14:47 Lois: Can we do a deep dive into each of these types you just mentioned? We can start with the supervised machine learning algorithm.

Nick: Let's take an example of how a credit card company would approve a credit card. Once the application and documents are submitted, a verification is done, followed by a credit score check, and then another 10 to 15 days for approval. And how is this done? Sometimes purely manually, or by using a rules engine where you can build rules, give it new data, and get a decision. The drawbacks are that it's slow, you need skilled people to build and update the rules, and the rules keep changing. The good thing is that businesses had a lot of insight into how the decisions were made. So can we build rules by looking at past data? We all learn by examples, and past data is nothing but a set of examples. Maybe reviewing past credit card approval history can help. Through a process of training, a model can be built that will have a specific intelligence to do a specific task. The heart of training a model is an algorithm that incrementally updates the model by looking at the data samples one by one. And once it's built, the model can be used to predict an outcome on new data. We can train the algorithm with credit card approval history to decide whether to approve a new credit card. And this is what we call supervised machine learning: it's learning from labeled data.

16:13 Lois: Ok, I see. What about the unsupervised machine learning algorithm?

Nick: Here, the data does not have a specific outcome or label as we know it, and sometimes we want to discover the trends that the data holds for potential insights. Similar data can be grouped into clusters. For example, in retail marketing and sales, a retail company may collect information like household size, income, location, and occupation so that suitable clusters can be identified, like small families or high spenders, and that data can be used for marketing and sales purposes. Or take streaming services: a streaming service may collect information like viewing sessions, minutes per session, the number of unique shows watched, and so on, and that can be used to regulate the streaming service. Let's look at another example. We all know that fruits and vegetables have different nutritional elements. But do we know which of those fruits and vegetables are similar nutritionally? For that, we'll try to cluster fruits' and vegetables' nutritional data and try to get some insights from it. This will help us include nutritionally different fruits and vegetables in our daily diets.
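To see the supervised and unsupervised styles Nick describes side by side, here is a small illustrative Python sketch: a classifier fit on made-up, labeled "past approval" records, and k-means clustering of unlabeled customer data. The numbers are toy values invented for the example, and scikit-learn is simply one common library choice; the episode itself doesn't prescribe any particular tool.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised learning: learn from labeled examples (past credit card decisions).
# Each row is [credit_score, annual_income_in_thousands]; label 1 means approved.
past_applications = [[720, 85], [650, 40], [580, 30], [700, 75], [600, 35], [760, 95]]
past_decisions = [1, 0, 0, 1, 0, 1]

approval_model = LogisticRegression(max_iter=1000)
approval_model.fit(past_applications, past_decisions)

new_applicant = [[690, 60]]
print("Approve new application?", bool(approval_model.predict(new_applicant)[0]))

# Unsupervised learning: discover clusters in unlabeled customer data.
# Each row is [household_size, monthly_spend]; no labels are provided.
customers = [[1, 200], [2, 250], [5, 900], [4, 850], [1, 180], [5, 950]]

clustering = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print("Cluster assignments:", clustering.labels_.tolist())
```

The classifier learns a decision rule from the labeled history, while k-means simply groups similar rows together without ever being told what the groups mean.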
Nick: Exploring patterns in data and grouping similar data into clusters is what drives unsupervised machine learning.

17:34 Nikita: And then finally, we come to the reinforcement learning algorithm.

Nick: How do we learn to play a game, say, chess? We make a move or a decision, check to see if it was the right move based on the feedback we get, and keep the outcomes in memory for the next step we take. That's learning. Reinforcement learning is a machine learning approach where a computer program learns to make decisions by trying different actions and receiving feedback. It teaches agents how to solve tasks by trial and error. This approach is used in autonomous car driving and robotics as well.

18:06 Lois: We keep coming across the term "deep learning." You've spoken a bit about it a few times in this episode, but what is deep learning, really? How is it related to machine learning?

Nick: Deep learning is all about extracting features and rules from data. Can we identify if an image is a cat or a dog by looking at just one pixel? Can we write rules to identify a cat or a dog in an image? Can the features and rules be extracted from the raw data, in this case, pixels? Deep learning is really useful in this situation. It's a special kind of machine learning that trains super smart computer networks with lots of layers. And these networks can learn things all by themselves from pictures, like figuring out if a picture is a cat or a dog.

18:49 Lois: I know we're going to be covering this in detail in an upcoming episode, but before we let you go, can you briefly tell us about generative AI?

Nick: Generative AI, a subset of machine learning, creates diverse content like text, audio, images, and more. These models, often powered by neural networks, learn patterns from existing data to craft fresh and creative output. For instance, ChatGPT generates text-based responses by understanding patterns in the text data that it's been trained on. Generative AI plays a vital role in various AI tasks requiring content creation and innovation.

19:28 Nikita: Thank you, Nick, for sharing your expertise with us. To learn more about AI, go to mylearn.oracle.com and search for the Oracle Cloud Infrastructure AI Foundations course. As you complete the course, you'll find skill checks that you can attempt to solidify your learning.

Lois: And remember, the AI Foundations course on MyLearn also prepares you for the Oracle Cloud Infrastructure 2023 AI Foundations Associate certification. Both the course and the certification are free, so there's really no reason NOT to take the leap into AI, right, Niki?

Nikita: That's right, Lois!

Lois: In our next episode, we will look at the fundamentals of machine learning. Until then, this is Lois Houston…

Nikita: And Nikita Abraham, signing off!

20:13 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.