Azure Blob Pipelines Quickstart
Azure Blob Pipelines requires MemSQL 5.8.5 or above.
To create and interact with an Azure Pipeline quickly, follow the instructions in this section.
Prerequisites
To complete this Quickstart, your environment must meet the following prerequisites:
- Operating System: Mac OS X or Linux
- Docker: Version 1.12 or newer. If using Mac OS X, these instructions are written for Docker for Mac. Docker Toolbox is compatible as well, but no instructions are provided. While Docker is required for this Quickstart, Pipelines and MemSQL itself have no dependency on Docker.
- Azure Account: This Quickstart uses Azure Blob Store.
Part 1: Creating an Azure Blob Container and Adding a File
The first part of this Quickstart involves creating a new container in your Azure account, and then adding a simple file into the container. You can create a new container using a few different methods, but the following steps use the browser-based Azure Console.
Note: The following steps assume that you have previous experience with Azure. If you are unfamiliar with this service, see the Azure Docs.
- On your local machine, create a text file with the following CSV contents and name it books.txt:
The Catcher in the Rye, J.D. Salinger, 1945
Pride and Prejudice, Jane Austen, 1813
Of Mice and Men, John Steinbeck, 1937
Frankenstein, Mary Shelley, 1818
- In a browser window, go to the Azure Portal and authenticate with your Azure credentials.
- After you’ve authenticated, click the “Storage Accounts” option in the left sidebar to access the Azure Storage console, select the storage account you want to use, and create a new blob container in it.
- After the container has been created, click on it in the list of containers.
- Now you will upload the books.txt file you created earlier. Click the Upload button in the top left of the page, and either drag-and-drop the books.txt file or select it using a file dialog by clicking Add Files.
Once the books.txt file has been uploaded, you can proceed to the next part of the Quickstart.
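If you prefer to script the file creation instead of writing books.txt by hand, a short Python snippet can produce the same file (the filename and contents match the steps above; nothing else is assumed):

```python
# Write the sample books.txt used throughout this Quickstart.
rows = [
    "The Catcher in the Rye, J.D. Salinger, 1945",
    "Pride and Prejudice, Jane Austen, 1813",
    "Of Mice and Men, John Steinbeck, 1937",
    "Frankenstein, Mary Shelley, 1818",
]

with open("books.txt", "w") as f:
    f.write("\n".join(rows) + "\n")
```

You would then upload the resulting file to your container as described above.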
Part 2: Creating a MemSQL Database and Azure Blob Pipeline in Docker
Now that you have an Azure container that contains an object (file), you can use MemSQL to create a new pipeline and ingest the blobs. In this part of the Quickstart, you will create a Docker container to run MemSQL and then create a new Azure pipeline.
In a new terminal window, execute the following command:
docker run --name memsql -p 3306:3306 -p 9000:9000 memsql/quickstart
This command automatically downloads the memsql/quickstart Docker image from Docker Hub, creates a new container using the image, assigns the container a user-friendly name (memsql), and finally starts the container.
You will see a number of lines of output as the container initializes and MemSQL starts. Once the initialization process is complete, open a new terminal window and execute the following command:
docker exec -it memsql memsql
This command accesses the MemSQL client within the Docker container. Now we will create a new database and a table that adheres to the schema of the books.txt file. At the MemSQL prompt, execute the following statements:
CREATE DATABASE books;
CREATE TABLE classic_books
(
title VARCHAR(255),
author VARCHAR(255),
date VARCHAR(255)
);
These statements create a new database named books and a new table named classic_books, which has three columns: title, author, and date.
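Because the pipeline will map each comma-separated line of books.txt onto the three columns of classic_books, it can be worth sanity-checking the file locally before wiring anything up. The check_schema helper below is a hypothetical sketch, and its whitespace stripping is an assumption; MemSQL applies its own CSV parsing rules:

```python
# Check that every line of the CSV splits into exactly the three
# fields expected by the classic_books table (title, author, date).
def check_schema(lines, expected_fields=3):
    for lineno, line in enumerate(lines, start=1):
        # Stripping surrounding whitespace here is an assumption,
        # not a statement about MemSQL's parsing behavior.
        fields = [f.strip() for f in line.split(",")]
        if len(fields) != expected_fields:
            raise ValueError(
                f"line {lineno}: expected {expected_fields} fields, "
                f"got {len(fields)}"
            )
    return True

sample = [
    "The Catcher in the Rye, J.D. Salinger, 1945",
    "Pride and Prejudice, Jane Austen, 1813",
]
check_schema(sample)
```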
Now that the destination database and table have been created, you can create an Azure pipeline. In Part 1 of this Quickstart, you uploaded the books.txt file to your container. To create the pipeline, you will need the following information:
- The name of the container, such as: my-container-name
- Your Azure Storage account’s name and key, such as:
  - Account Name: your_account_name
  - Account Key: your_account_key
Using these identifiers and keys, execute the following statement, replacing the placeholder values with your own:
CREATE PIPELINE library
AS LOAD DATA AZURE 'my-container-name'
CREDENTIALS '{"account_name": "your_account_name", "account_key": "your_account_key"}'
INTO TABLE `classic_books`
FIELDS TERMINATED BY ',';
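The CREDENTIALS clause embeds JSON inside a SQL string literal, which makes quoting mistakes easy. One way to sidestep them when generating the statement programmatically is to build the JSON with a serializer. The build_azure_pipeline_sql helper below is a hypothetical sketch, not a MemSQL API (note it does not escape the SQL identifiers themselves):

```python
import json

def build_azure_pipeline_sql(name, container, account_name, account_key, table):
    # json.dumps guarantees well-formed JSON for the CREDENTIALS clause.
    creds = json.dumps({"account_name": account_name, "account_key": account_key})
    return (
        f"CREATE PIPELINE {name}\n"
        f"AS LOAD DATA AZURE '{container}'\n"
        f"CREDENTIALS '{creds}'\n"
        f"INTO TABLE `{table}`\n"
        f"FIELDS TERMINATED BY ',';"
    )

sql = build_azure_pipeline_sql("library", "my-container-name",
                               "your_account_name", "your_account_key",
                               "classic_books")
print(sql)
```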
You can see what files the pipeline wants to load by running the following:
SELECT * FROM information_schema.PIPELINES_FILES;
If everything is properly configured, you should see one row in the Unloaded state, corresponding to books.txt. The CREATE PIPELINE statement creates a new pipeline named library, but the pipeline has not yet been started, and no data has been loaded. A MemSQL pipeline can either run in the background or be triggered by a foreground query. Start it in the foreground first.
START PIPELINE library FOREGROUND;
When this command returns successfully, all files from your container will be loaded. If you check information_schema.PIPELINES_FILES again, you should see all files in the Loaded state. Now query the classic_books table to make sure the data has actually loaded.
SELECT * FROM classic_books;
+------------------------+-----------------+-------+
| title | author | date |
+------------------------+-----------------+-------+
| The Catcher in the Rye | J.D. Salinger | 1945 |
| Pride and Prejudice | Jane Austen | 1813 |
| Of Mice and Men | John Steinbeck | 1937 |
| Frankenstein | Mary Shelley | 1818 |
+------------------------+-----------------+-------+
You can also have MemSQL run your pipeline in the background. In such a configuration, MemSQL will periodically poll Azure Blob Storage for new files and continuously load them as they are added to the storage container. Before running your pipeline in the background, you must reset the state of the pipeline and the table.
DELETE FROM classic_books;
ALTER PIPELINE library SET OFFSETS EARLIEST;
The first command deletes all rows from the target table. The second causes the pipeline to start from the beginning, in this case “forgetting” it already loaded books.txt so you can load it again. You can also drop and recreate the pipeline, if you prefer.
To start a pipeline in the background, run START PIPELINE.
START PIPELINE library;
This statement starts the pipeline. To see whether the pipeline is running, run SHOW PIPELINES.
SHOW PIPELINES;
+----------------------+---------+
| Pipelines_in_books | State |
+----------------------+---------+
| library | Running |
+----------------------+---------+
At this point, the pipeline is running and the contents of the books.txt file should once again be present in the classic_books table.
Foreground pipelines and background pipelines have different intended uses and behave differently. For more information, see the START PIPELINE topic.
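Conceptually, a background pipeline behaves like a polling loop over the container: list the objects, skip the ones already loaded, and ingest the rest. The toy sketch below illustrates the idea only; list_objects and load_file are hypothetical stand-ins for Azure listing and MemSQL ingest, not real APIs:

```python
# Toy model of a background pipeline's polling behavior: on each tick,
# ingest only the objects that have not been loaded yet.
def poll_once(list_objects, load_file, loaded):
    newly_loaded = []
    for name in list_objects():
        if name not in loaded:
            load_file(name)   # in a real pipeline, MemSQL performs the load
            loaded.add(name)
            newly_loaded.append(name)
    return newly_loaded

# Simulated container: starts with books.txt, then a new file appears.
container = ["books.txt"]
loaded = set()
ingested = []

poll_once(lambda: container, ingested.append, loaded)  # first tick
container.append("more_books.txt")                     # a new upload arrives
poll_once(lambda: container, ingested.append, loaded)  # second tick
print(ingested)
```

Note that each file is loaded exactly once, even though books.txt is still present on the second tick; this mirrors how PIPELINES_FILES tracks per-file load state.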
Next Steps
Now that you have a running pipeline, any new files you add to your container will be automatically ingested. To understand how an Azure pipeline ingests large amounts of objects in a container, see the Parallelized Data Loading section in the Extractors topic. You can also learn more about how to transform the ingested data by reading the Transforms topic.