Basics
Machine learning (ML) is a subfield of artificial intelligence (AI) focused on developing algorithms that allow computers to learn and improve from experience, imitating intelligent human behavior without being explicitly programmed. It finds applications in a wide range of domains, including image and speech recognition, natural language processing, autonomous vehicles, recommendation systems, healthcare, and finance. Its ability to analyze large datasets, detect patterns, and make predictions has revolutionized many industries, leading to improved decision-making, automation of tasks, and the development of innovative products and services. At its core, machine learning starts with data: numbers, photos, text, time series, or data from sensors.
The first phase involves gathering and preparing the data that the ML model will train on. Next, the programmer must choose a machine learning model, supply the data, and let the model train itself to find patterns or make predictions. Over time, the programmer can also tweak the model, including changing its parameters, to push it toward more accurate results.
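To make this workflow concrete, here is a minimal sketch, assuming scikit-learn and its bundled iris dataset; neither is prescribed by the text, and any model and dataset would follow the same shape:

```python
# A minimal sketch of the train/evaluate/tweak loop, assuming scikit-learn.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The programmer chooses a model, supplies the data, and lets it train.
model = RandomForestClassifier(n_estimators=10, random_state=0)
model.fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))

# Tweaking a parameter (here, the number of trees) to push the model
# toward more accurate results.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))
```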
Approaches
Machine learning has three broad subcategories:
- Supervised Learning: In supervised learning, a model is trained on labeled data, where each input example is associated with a corresponding target or output. The goal for the system is to learn a mapping from inputs to outputs, enabling the model to make predictions on new, unseen data.
- Unsupervised Learning: Unsupervised learning involves training models on unlabeled data, where the algorithm aims to find hidden patterns or structures in the data without explicit guidance. Common tasks include clustering similar data points together or dimensionality reduction for data visualization and compression.
- Reinforcement Learning: Reinforcement learning trains agents to make sequential decisions in an environment to maximize a cumulative reward. The agent learns through an iterative process, receiving feedback from the environment in the form of rewards or penalties based on its actions.
Each approach relies on different algorithms and techniques to train machine learning models, such as decision trees, neural networks, support vector machines, and clustering algorithms. Additionally, machine learning has specialized areas like deep learning, which involves training deep neural networks with many layers to learn complex patterns in data.
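As a minimal illustration of the first two approaches, assuming scikit-learn, a decision tree learns from labeled examples while k-means discovers clusters in unlabeled data:

```python
# Supervised vs. unsupervised learning in miniature, assuming scikit-learn
# and a tiny made-up dataset chosen only for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

X = np.array([[1.0, 1.1], [1.2, 0.9], [8.0, 8.2], [7.9, 8.1]])

# Supervised: each input has a label; the model learns the input-to-output mapping.
y = np.array([0, 0, 1, 1])
clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[1.1, 1.0]]))  # predicts a label for a new, unseen point

# Unsupervised: no labels; the algorithm finds structure (here, two clusters).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # cluster assignments discovered from the data alone
```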
Lifecycle
Data science and ML teams need to experiment with the data and iteratively train models to push them toward more accurate results. This experimentation is a core part of the machine learning lifecycle, which has three main stages:
- Data fetching and wrangling
- Model design and training
- Model deployment and serving
Data fetching and wrangling
At this stage, you structure the data pipeline for preprocessing and fetch data from one or more sources. These sources differ depending on the application; most commonly, the data come from a database, a streaming service, or plain files. On the operational front, the diversity of these data sources requires you to build loaders and interfaces, which you can then integrate with the various ML tools available. Feature engineering plays a crucial role here: you craft and select relevant features to enhance model performance, which is why you may need to wrangle your data in a preprocessing step. A typical preprocessing step involves one or more of the following (see the sketch after this list):
- Remove outlying data points from the data.
- Pass data through functions that map it into a different dimensionality (for example, kernel functions).
- Reshape or transpose the data so it is consistent with the model's expected input.
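Here is a minimal sketch of these three steps, assuming pandas and NumPy; the column names, valid range, and polynomial feature map are hypothetical choices for illustration:

```python
# A preprocessing sketch covering the three steps above, assuming
# pandas/NumPy and a made-up sensor dataset.
import numpy as np
import pandas as pd

df = pd.DataFrame({"reading": [0.9, 1.1, 1.0, 42.0],
                   "temp": [20.1, 19.8, 20.3, 20.0]})

# 1. Remove outlying data points (here, readings outside a hypothetical
#    valid range for the sensor).
df = df[df["reading"].between(0.0, 10.0)]

# 2. Map the data into a different dimensionality (a simple polynomial
#    feature map standing in for a kernel function).
features = np.hstack([df.values, df.values ** 2])

# 3. Reshape/transpose to match the model's expected input,
#    e.g. (n_features, n_samples) instead of (n_samples, n_features).
features = features.T
print(features.shape)
```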
Model design and training
Model selection involves experimenting with different architectures, hyperparameters, and evaluation metrics to identify the optimal configuration. Throughout this iterative process, you can use cross-validation techniques to ensure robustness and generalization. Once you have settled on a model architecture, training begins, often leveraging powerful hardware accelerators. During this iterative process, you'll perform a series of experiments (for example, different algorithms, alternative datasets, hyperparameter tuning, and so on). These experiments produce different models, which you need to track and compare. After comparing the models, you'll deploy the best one for your use case. This introduces another data management problem: storing, versioning, comparing, and generally managing ML models become significant concerns. Thus, you can think of the entire machine learning lifecycle as a data management problem.
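To ground this, here is a minimal sketch of one such experiment loop, assuming scikit-learn; the dataset and hyperparameter grid are illustrative stand-ins for a real experiment plan:

```python
# Hyperparameter search with cross-validation, assuming scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Each grid point is one "experiment"; 5-fold cross-validation guards
# against overfitting to a single train/test split.
search = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]},
    cv=5,
)
search.fit(X, y)

# GridSearchCV records every experiment's results: a small-scale version of
# the model tracking and comparison problem described above.
print(search.best_params_, search.best_score_)
```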
Model deployment and serving
In the last stage of this cycle, you need to store, test, and monitor the trained ML model, which you serve through an endpoint. Here, the focus shifts from development to deployment. Once the trained model is ready, you integrate it into a production environment to serve predictions in real time or in batches. This often involves containerizing the model for easy deployment and management. The serving infrastructure must be scalable, reliable, and performant to handle varying loads and ensure low latency. Additionally, you can use versioning and A/B testing to manage model updates and assess their impact on business objectives. Security measures, such as encryption and access control, safeguard the deployed models and sensitive data.
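As a minimal sketch of the serving step, the example below exposes a prediction endpoint with Flask; the model file, route name, and request format are hypothetical choices, not a prescribed setup:

```python
# A minimal model-serving endpoint, assuming Flask and joblib;
# "model.joblib" is a hypothetical artifact produced by the training stage.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # load the stored, versioned model

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"features": [[5.1, 3.5, 1.4, 0.2]]}.
    features = request.get_json()["features"]
    predictions = model.predict(features).tolist()
    return jsonify(predictions=predictions)

# In production, this app would be containerized and run behind a scalable,
# monitored serving infrastructure rather than Flask's development server.
if __name__ == "__main__":
    app.run()
```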
Continue reading to learn how TileDB addresses the data management problems commonly associated with the machine learning lifecycle.