- Algorithm – A set of steps that a computer follows to carry out an operation. In AI, we use algorithms to train machine learning models and to perform tasks such as image recognition and natural language processing.
- Anomaly detection – A technique to identify outliers, or data points that deviate from the norm. We use anomaly detection in applications like fraud detection, cybersecurity, and healthcare.
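  As a minimal sketch of the idea (our own example, with made-up sensor readings and NumPy as the library choice), points far from the mean can be flagged with a z-score:

  ```python
  import numpy as np

  def zscore_outliers(values, threshold=2.0):
      """Flag points more than `threshold` standard deviations from the mean."""
      values = np.asarray(values, dtype=float)
      z = (values - values.mean()) / values.std()
      return values[np.abs(z) > threshold]

  readings = [9.8, 10.1, 10.0, 9.9, 10.2, 42.0]  # 42.0 is the anomaly
  print(zscore_outliers(readings))  # -> [42.]
  ```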
- Backpropagation – The algorithm used to train neural networks. Backpropagation computes the gradients of the loss function with respect to the weights of the network and updates the weights accordingly.
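  To make the chain rule concrete, here is a hand-rolled sketch for a single sigmoid neuron (the input, target, and learning rate are invented for illustration):

  ```python
  import math

  x, target = 1.5, 0.0          # one training example
  w, b, lr = 0.8, 0.1, 0.5      # weight, bias, learning rate

  for step in range(20):
      z = w * x + b                     # forward pass
      y = 1.0 / (1.0 + math.exp(-z))    # sigmoid activation
      loss = (y - target) ** 2

      # Backward pass: chain rule dL/dw = dL/dy * dy/dz * dz/dw
      dL_dy = 2.0 * (y - target)
      dy_dz = y * (1.0 - y)             # derivative of the sigmoid
      w -= lr * dL_dy * dy_dz * x       # update the weight...
      b -= lr * dL_dy * dy_dz           # ...and the bias

  print(f"loss after training: {loss:.4f}")
  ```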
- Bias – In machine learning, bias refers to the tendency of a model to make systematic errors in its predictions.
- Bot – A software program that can automate tasks. For example, chatbots are used in customer service applications, where they answer questions and provide support to customers.
- Clustering – A technique to group data points that are similar to each other. We use clustering for market segmentation, fraud detection, and image analysis.
- Confusion matrix – A table that displays the performance of a machine learning model. It shows the number of times the model correctly and incorrectly classified the data points.
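  A quick sketch of a binary confusion matrix, computed by counting (actual, predicted) pairs (the labels below are made up):

  ```python
  from collections import Counter

  def confusion_matrix(y_true, y_pred):
      counts = Counter(zip(y_true, y_pred))
      return {"TP": counts[(1, 1)], "FP": counts[(0, 1)],
              "FN": counts[(1, 0)], "TN": counts[(0, 0)]}

  print(confusion_matrix(y_true=[1, 0, 1, 1, 0, 0, 1],
                         y_pred=[1, 0, 0, 1, 0, 1, 1]))
  # -> {'TP': 3, 'FP': 1, 'FN': 1, 'TN': 2}
  ```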
- Convolutional neural network (CNN) – A type of neural network used for image recognition and computer vision tasks. CNNs learn the spatial relationships between pixels in images to perform tasks such as object detection and classification.
- Generative Adversarial Networks (GANs) – A type of machine learning model that consists of two neural networks that compete against each other. The first network (the generator) creates fake data intended to be indistinguishable from real data, while the second (the discriminator) tries to distinguish between real and fake data. GANs can be used to generate realistic images, text, and other data.
- Data mining – The process of extracting knowledge from datasets. We use data mining in business applications such as customer segmentation and fraud detection.
- Decision tree – A type of machine learning model used for classification and regression tasks. Decision trees work by splitting a problem into a multi-step decision-making process (hence "tree"), where each step leads to a different outcome.
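  In code, a fitted tree boils down to nested if/else checks. A toy sketch with invented (not learned) splits and thresholds:

  ```python
  # A tiny hand-written decision tree; the splits and numbers are made up.
  def predict_risk(age, income):
      if age < 30:                       # first decision step
          return "low" if income > 50_000 else "medium"
      else:                              # other branch of the split
          return "medium" if income > 80_000 else "high"

  print(predict_risk(age=25, income=60_000))  # -> low
  print(predict_risk(age=45, income=40_000))  # -> high
  ```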
- Deep learning – A type of machine learning that uses artificial neural networks to learn from data. Deep learning models can learn complex patterns in data, which allows them to perform advanced tasks such as image recognition and natural language processing.
- Dimensionality reduction – A technique used to reduce the number of features in a dataset. It improves the performance of machine learning models by making the data easier to learn from, reducing the data size, and making the data easier to visualize.
- Ensemble learning – A technique that combines multiple machine learning models to improve on the performance of a single model. Ensemble learning methods can reduce the variance and bias of machine learning models.
- Feature engineering – The process of transforming raw data into features that are more informative for machine learning models. This can be done by transforming existing features or by combining features in new ways. It improves the performance of machine learning models and makes them more interpretable.
- Gradient descent – An optimization algorithm used in machine learning. It iteratively updates the parameters of a model in the direction of the negative gradient of the loss function.
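  A minimal sketch on a one-parameter function (not a real model) shows the core loop; the step size here is the learning rate defined below:

  ```python
  # Minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3). The minimum is at w = 3.
  w = 0.0
  learning_rate = 0.1

  for step in range(50):
      gradient = 2 * (w - 3)
      w -= learning_rate * gradient   # step in the direction of the negative gradient

  print(round(w, 4))  # -> approximately 3.0
  ```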
- Hyperparameter – A parameter that controls the learning process of a machine learning model. Hyperparameters are typically set by the user, and they can have a significant impact on the performance of the model.
- Image recognition – A typical task in computer vision where machines can spot and identify objects in images. It is used in applications such as self-driving cars and facial recognition.
- Learning rate – A hyperparameter that controls the step size of the gradient descent algorithm. A higher learning rate causes the model to learn more quickly, but if it is too high, training may overshoot the minimum or fail to converge.
- Machine learning – A subfield of artificial intelligence that allows computers to learn from data without being explicitly programmed. Machine learning models are trained on a dataset of labeled data, and they can then be used to make predictions on new data points.
- Natural language processing (NLP) – The field of computer science that deals with the interaction between computers and human languages. We use NLP in applications such as machine translation, text analysis, and chatbots.
- Neural network – A type of machine learning model that is inspired by the human brain. Neural networks are made up of interconnected nodes, and they can learn complex patterns in data.
- Overfitting – A problem that occurs when a machine learning model learns the training data too well. Overfitted models cannot generalize to new data points, and they may make inaccurate predictions.
- Underfitting – A problem that occurs when an ML model is not complex enough to accurately capture the relationships between a dataset’s features and a target variable.
- Parameter – A value that controls the behavior of a machine learning model, such as a weight in a neural network. Unlike hyperparameters, parameter values are learned and updated during the training process.
- Reinforcement learning – A type of machine learning that allows an agent to learn how to behave in an environment by trial and error. Reinforcement learning agents are rewarded for taking actions that lead to positive outcomes, and they are punished for taking actions that lead to negative outcomes.
- Regression – A type of supervised learning for predicting a continuous value. We use it for forecasting in applications such as finance and healthcare.
- Rule-based system – A system that makes predictions based on a set of predefined rules. Rule-based systems are often used in applications where we need to explain the reasoning behind a decision; a good example is screening credit card applications.
- Sentiment analysis – The task of identifying the sentiment of a piece of text. We use sentiment analysis in customer feedback analysis and social media monitoring.
- Supervised learning – A type of machine learning where the model is trained on a dataset of labeled data. The model learns to make predictions by associating the labels with the features of the data.
- Support vector machine (SVM) – A type of machine learning model used for classification and regression tasks. SVMs work by finding the hyperplane that best separates the data points in the training dataset.
- Unsupervised learning – A type of machine learning where the model is trained on a dataset without labels or a target variable. The model learns by finding patterns in the data.
- Validation set – A dataset used to evaluate the performance of a machine learning model. The validation set is not used to train the model; it is used to check that the model is not overfitting the training data.
- Variance – A measure of how much a model’s predictions change when it is trained on different samples of the data. A high-variance model is overly sensitive to its training data and is likely to make inaccurate predictions on new data points.
- Accuracy – A measure of how often a model makes correct predictions. Accuracy is calculated by dividing the number of correct predictions by the total number of predictions.
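  The calculation in code (the example labels are invented):

  ```python
  def accuracy(y_true, y_pred):
      correct = sum(t == p for t, p in zip(y_true, y_pred))
      return correct / len(y_true)   # correct predictions / total predictions

  print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # -> 0.75
  ```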
- Cost function – A function that is used to measure the error of a machine learning model. We minimize the cost function during the training process.
- Cross-validation – A technique for estimating how well a model generalizes. Cross-validation works by dividing the data into a training set and a test set: the training set is used to train the model, and the test set is used to evaluate it. The most common form is k-fold cross-validation, where the data is divided into k folds; the model is trained on k-1 folds and its accuracy is evaluated on the remaining fold, repeating the process once per fold.
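  A bare-bones sketch of the k-fold split (indices only, with no model or shuffling):

  ```python
  def kfold_indices(n_samples, k):
      """Yield (train_indices, test_indices) for each of the k folds."""
      indices = list(range(n_samples))
      fold_size = n_samples // k
      for i in range(k):
          test = indices[i * fold_size:(i + 1) * fold_size]
          train = indices[:i * fold_size] + indices[(i + 1) * fold_size:]
          yield train, test

  for train, test in kfold_indices(10, k=5):
      print("train:", train, "test:", test)
  ```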
- Data cleaning – The process of correcting errors in data. Data cleaning is an important step in the machine learning process, as it improves the performance of the model and makes the data more understandable.
- Data preprocessing – The process of transforming data into a format that is suitable for machine learning. Data preprocessing involves tasks such as feature engineering, normalization, and scaling.
- Monte Carlo Methods – A class of computational algorithms that rely on repeated random sampling to obtain numerical results. One of the basic examples for getting started with Monte Carlo methods is the estimation of Pi.
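  The Pi estimate mentioned above, as a short sketch: sample random points in the unit square and count how many land inside the quarter circle.

  ```python
  import random

  def estimate_pi(n_samples=1_000_000):
      inside = sum(1 for _ in range(n_samples)
                   if random.random() ** 2 + random.random() ** 2 <= 1.0)
      return 4 * inside / n_samples   # quarter circle to square area ratio is pi/4

  print(estimate_pi())  # -> roughly 3.14
  ```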
- Transfer Learning – A machine learning method that focuses on storing knowledge gained while solving one problem and then applying it to a different but related problem.
- Boosting – A machine learning ensemble meta-algorithm for reducing bias and variance in supervised learning.
- AdaBoost – Short for Adaptive Boosting, it’s a machine learning algorithm that is used as a classifier. When used with decision tree learning, information gathered at each stage of the AdaBoost algorithm about the relative ‘hardness’ of each training sample is fed into the tree-growing algorithm, so that later trees focus on the harder-to-classify examples.
- Gradient Boosting – A machine learning technique for regression and classification problems, which produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees.
- XGBoost – Short for eXtreme Gradient Boosting, XGBoost is an optimized distributed gradient boosting library designed to be highly efficient, flexible, and portable.
- Feature Selection – The process of selecting a subset of relevant features for use in machine learning.
- Random Forest – An ensemble learning method that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes of the individual trees.
- Linear Regression – A linear approach to modeling the relationship between a dependent variable and one or more independent variables.
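  For the single-variable case, the line can be fitted in closed form (the data points below are invented):

  ```python
  # Ordinary least squares: slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x)
  def fit_line(xs, ys):
      n = len(xs)
      mean_x, mean_y = sum(xs) / n, sum(ys) / n
      slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
               / sum((x - mean_x) ** 2 for x in xs))
      return slope, mean_y - slope * mean_x

  slope, intercept = fit_line([1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 7.8, 10.1])
  print(round(slope, 2), round(intercept, 2))  # -> 1.99 0.05, close to y = 2x
  ```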
- Logistic Regression – A statistical model that in its basic form uses a logistic function to model a binary dependent variable.
- Naive Bayes – A classification technique based on Bayes’ Theorem with an assumption of independence among predictors.
- Bayes’ Theorem – A principle that describes how to update the probabilities of hypotheses when given evidence.
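  A classic worked example (the test numbers are illustrative): a condition with 1% prevalence and a test with a 99% true positive rate and a 5% false positive rate.

  ```python
  # Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
  p_h = 0.01                # prior: probability of the condition
  p_e_given_h = 0.99        # positive test given the condition
  p_e_given_not_h = 0.05    # false positive rate

  p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)   # total probability of a positive test
  p_h_given_e = p_e_given_h * p_h / p_e

  print(round(p_h_given_e, 3))  # -> 0.167: most positive tests are still false alarms
  ```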
- Predictive Modeling – A process that uses data and statistics to predict outcomes with data models.
- Text Mining – The process of deriving meaningful information from natural language text.
- Perceptron – The simplest form of a neural network: a binary classifier used in supervised learning.
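  A sketch of the classic perceptron learning rule, trained here on the AND function (our own toy setup):

  ```python
  def train_perceptron(samples, epochs=10, lr=0.1):
      w, b = [0.0, 0.0], 0.0
      for _ in range(epochs):
          for (x1, x2), target in samples:
              pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
              error = target - pred          # perceptron update rule
              w[0] += lr * error * x1
              w[1] += lr * error * x2
              b += lr * error
      return w, b

  and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
  w, b = train_perceptron(and_data)
  for (x1, x2), _ in and_data:
      print(x1, x2, "->", 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
  ```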
- Artificial Intelligence (AI) – The capability of a machine to imitate intelligent human behavior.
- Long Short-Term Memory (LSTM) – A type of recurrent neural network that can learn order dependence in sequence prediction problems.
- Transformer Models – A type of model in NLP that uses self-attention mechanisms. For example, Google’s BERT and OpenAI’s GPT are transformer models.
- Feedforward Neural Network – An artificial neural network wherein connections between the nodes do not form a cycle.
- Activation Function – A function that decides whether a neuron in a neural network should fire or not. It introduces non-linearity into the network so that it can learn complex patterns and relationships in the data.
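  Two common activation functions, sketched directly:

  ```python
  import math

  def relu(x):
      return max(0.0, x)                 # passes positives through, zeroes out negatives

  def sigmoid(x):
      return 1.0 / (1.0 + math.exp(-x))  # squashes any input into (0, 1)

  for x in (-2.0, 0.0, 2.0):
      print(f"x={x:+.1f}  relu={relu(x):.2f}  sigmoid={sigmoid(x):.3f}")
  ```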