Artificial Intelligence (AI): The broad field of computer science focused on creating intelligent machines capable of performing tasks that typically require human intelligence.
Machine Learning (ML): A subset of AI that enables computers to learn and make predictions or decisions without being explicitly programmed. It involves algorithms that can learn from and analyse data.
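A minimal illustrative sketch (not part of the glossary): rather than hard-coding the rule y = 2x + 1, the program infers it from example data using ordinary least squares, the simplest form of "learning from data".

```python
def fit_line(xs, ys):
    """Ordinary least squares for a single feature, in pure Python."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # The slope w minimises the squared prediction error over the examples.
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

# The rule y = 2x + 1 is never written down; it is recovered from examples.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
w, b = fit_line(xs, ys)
```

The same idea scales up: real ML systems fit far richer models, but the principle of minimising error over training data is unchanged.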
Deep Learning: A specialised area of machine learning inspired by the structure and function of the human brain's neural networks. Deep learning models, known as deep neural networks, are built from many stacked layers of artificial neurons and learn to perform tasks by analysing vast amounts of data.
Neural Networks: Mathematical models composed of interconnected artificial neurons, designed to process information and make predictions. They are commonly used in deep learning algorithms.
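To make the definition concrete, here is a single artificial neuron sketched in pure Python: a weighted sum of its inputs plus a bias, passed through a sigmoid activation. The weights below are hand-picked for illustration (so the neuron behaves like a logical AND); in practice they would be learned from data.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum followed by a sigmoid activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))  # squashes z into the range (0, 1)

# Hand-chosen weights that make the neuron approximate logical AND.
on_on = neuron([1, 1], weights=[10, 10], bias=-15)   # close to 1
on_off = neuron([1, 0], weights=[10, 10], bias=-15)  # close to 0
```

A neural network is many such neurons wired together in layers, with the outputs of one layer feeding the inputs of the next.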
Natural Language Processing (NLP): The branch of AI concerned with the interaction between computers and human language. NLP enables computers to understand, interpret, and generate human language, facilitating tasks such as language translation and sentiment analysis.
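As a toy example of sentiment analysis, the sketch below scores text against a tiny hand-written word list. The lexicon is hypothetical and purely illustrative; real NLP systems learn such associations from large corpora rather than using fixed lists.

```python
# Hypothetical mini-lexicon for illustration only.
POSITIVE = {"great", "good", "excellent", "love"}
NEGATIVE = {"bad", "terrible", "awful", "hate"}

def sentiment(text):
    """Classify text as positive, negative, or neutral by word counting."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

For instance, `sentiment("I love this great film")` counts two positive words and no negative ones, so it returns `"positive"`.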
Computer Vision: A field of AI focused on teaching computers to interpret and understand visual information from images or videos. Computer vision enables machines to "see" and process visual data, supporting applications like object recognition and image classification.
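One of the most basic computer-vision operations is edge detection. The sketch below treats an image as a grid of brightness values and marks where neighbouring pixels differ sharply, which is where object boundaries tend to lie. This is a toy illustration; practical systems use learned filters over real images.

```python
def horizontal_edges(image):
    """Absolute brightness difference between horizontally adjacent pixels."""
    return [[abs(row[i + 1] - row[i]) for i in range(len(row) - 1)]
            for row in image]

# A tiny "image": a dark region (0) next to a bright region (9).
img = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
edges = horizontal_edges(img)
# Large values in `edges` mark the boundary between the two regions.
```

Stacks of learned filters like this one are the building blocks of modern image-classification networks.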
Robotics: The interdisciplinary field involving the design, construction, and operation of robots. AI plays a crucial role in robotics, as it enables robots to perceive and respond to their environment, perform tasks, and learn from interactions.
Data Mining: The process of discovering patterns, relationships, and insights within large datasets. Data mining techniques are used to extract valuable information and knowledge from data, which can be utilised in various AI applications.
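A classic data-mining task is finding items that frequently occur together, for example products bought in the same basket. The sketch below counts item pairs across transactions and keeps those meeting a minimum support threshold; the basket data is invented for illustration.

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(transactions, min_support):
    """Count item pairs across baskets; keep pairs seen at least min_support times."""
    counts = Counter()
    for basket in transactions:
        for pair in combinations(sorted(set(basket)), 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

# Hypothetical shopping baskets.
baskets = [
    ["bread", "milk"],
    ["bread", "milk", "eggs"],
    ["milk", "eggs"],
]
pairs = frequent_pairs(baskets, min_support=2)
```

Here `("bread", "milk")` and `("eggs", "milk")` each appear in two baskets, so both survive the threshold while the rarer `("bread", "eggs")` is dropped.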
Algorithm: A set of well-defined instructions or rules designed to solve a specific problem or accomplish a particular task. Algorithms form the core of AI systems and dictate how machines process and analyse data.
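A concrete example of a well-defined set of instructions is binary search, which finds a value in a sorted list by repeatedly halving the search range:

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if it is absent."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1  # target can only be in the upper half
        else:
            hi = mid - 1  # target can only be in the lower half
    return -1
```

Every step is unambiguous, the procedure always terminates, and it produces a definite answer, which is exactly what makes it an algorithm.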
Bias in AI: AI algorithms can inherit biases from the data they are trained on, leading to discriminatory or unfair outcomes. Understanding and addressing bias in AI systems is crucial for promoting fairness, inclusivity, and ethical use of AI technologies.
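One simple way to detect this kind of unfairness is to compare selection rates across groups, sometimes called the demographic parity difference. The sketch below uses invented toy data purely for illustration:

```python
def selection_rate(decisions, groups, group):
    """Fraction of applicants in `group` who received a positive decision."""
    relevant = [d for d, g in zip(decisions, groups) if g == group]
    return sum(relevant) / len(relevant)

# Hypothetical decisions: 1 = approved, 0 = rejected.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

# A large gap between groups is a warning sign of biased outcomes.
gap = selection_rate(decisions, groups, "A") - selection_rate(decisions, groups, "B")
```

Here group A is approved 75% of the time and group B only 25%, a gap that would prompt investigation of the training data and model.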
Explainability: The ability to understand and interpret the decisions or predictions made by AI models. Explainable AI aims to provide transparency and insights into the inner workings of AI systems, making them more understandable and trustworthy.
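For a linear model, explainability can be as direct as breaking the score into per-feature contributions. The weights and applicant data below are assumed values for illustration, not a real scoring model:

```python
# Hypothetical linear scoring model; weights are assumed for illustration.
weights = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}

def explain(applicant):
    """Split a linear model's score into per-feature contributions w_i * x_i."""
    parts = {name: weights[name] * applicant[name] for name in weights}
    return parts, sum(parts.values())

parts, score = explain({"income": 5.0, "debt": 2.0, "years_employed": 4.0})
# `parts` shows which features pushed the score up and which pulled it down.
```

Deep models are not this transparent, which is why explainable-AI techniques approximate such per-feature attributions for complex models.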
Ethical AI: The consideration and integration of ethical principles in the design, development, and deployment of AI systems. Ethical AI emphasises responsible and accountable practices, ensuring that AI technologies align with societal values and do not cause harm.