Azure Blob Pipelines Quickstart


Alert

Azure Blob pipelines require MemSQL 5.8.5 or above.
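If you are not sure which engine version your cluster is running, one way to check is to query the version variable from a SQL client. This is a minimal check that assumes a MemSQL / SingleStore DB engine, where the variable is named memsql_version:

-- Returns the engine version, for example 5.8.5
SELECT @@memsql_version;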

Azure Blob Pipeline Quickstart

To create and interact with an Azure Pipeline quickly, follow the instructions in this section.

Prerequisites

To complete this Quickstart, your environment must meet the following prerequisites:

  • Azure Account: This Quickstart uses Azure Blob Store.
  • SingleStore DB installation –or– a SingleStore Managed Service cluster: You will connect to the database or cluster and create a pipeline to pull data from your Azure Blob Store.

Part 1: Creating an Azure Blob Container and Adding a File

The first part of this Quickstart involves creating a new container in your Azure account and then adding a simple file to the container. You can create a new container using a few different methods, but the following steps use the browser-based Azure Portal.

Note: The following steps assume that you have previous experience with Azure. If you are unfamiliar with this service, see the Azure Docs.

  1. On your local machine, create a text file with the following CSV contents and name it books.txt:
The Catcher in the Rye, J.D. Salinger, 1945
Pride and Prejudice, Jane Austen, 1813
Of Mice and Men, John Steinbeck, 1937
Frankenstein, Mary Shelley, 1818
  2. In a browser window, go to the Azure Portal and authenticate with your Azure credentials.
  3. After you’ve authenticated, click the “Storage Accounts” option in the left sidebar to access the Azure Storage console.
  4. Select the storage account you want to use, then create a new container inside it and give it a name, such as my-container-name.
  5. After the container has been created, click on it in the list of containers.
  6. Now you will upload the books.txt file you created earlier. Click the Upload button in the top left of the page, and either drag-and-drop the books.txt file or select it using a file dialog by clicking Add Files.

Once the books.txt file has been uploaded, you can proceed to the next part of the Quickstart.

Part 2: Creating a SingleStore Database and Azure Blob Pipeline

Now that you have an Azure container that contains an object (file), you can use SingleStore DB or SingleStore Managed Service to create a new pipeline and ingest the blobs.

We will create a new database and a table that adheres to the schema contained in the books.txt file. At the MemSQL prompt, execute the following statements:

CREATE DATABASE books;
USE books;

CREATE TABLE classic_books
(
  title VARCHAR(255),
  author VARCHAR(255),
  date VARCHAR(255)
);

These statements create a new database named books, switch to it, and create a new table named classic_books with three columns: title, author, and date.

Now that the destination database and table have been created, you can create an Azure pipeline. In Part 1 of this Quickstart, you uploaded the books.txt file to your container. To create the pipeline, you will need the following information:

  • The name of the container, such as: my-container-name
  • Your Azure Storage account’s name and key, such as:
    • Account Name: your_account_name
    • Account Key: your_account_key

Using these identifiers and keys, execute the following statement, replacing the placeholder values with your own:

CREATE PIPELINE library
AS LOAD DATA AZURE 'my-container-name'
CREDENTIALS '{"account_name": "your_account_name", "account_key": "your_account_key"}'
INTO TABLE `classic_books`
FIELDS TERMINATED BY ',';

You can see what files the pipeline wants to load by running the following:

SELECT * FROM information_schema.PIPELINES_FILES;
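If the cluster has more than one pipeline, it can help to filter this view. The following is a minimal sketch that assumes the PIPELINE_NAME, FILE_NAME, and FILE_STATE columns of the PIPELINES_FILES view:

-- Show only the files tracked by the library pipeline and their load state
SELECT FILE_NAME, FILE_STATE
FROM information_schema.PIPELINES_FILES
WHERE PIPELINE_NAME = 'library';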

If everything is properly configured, you should see one row in the Unloaded state, corresponding to books.txt. The CREATE PIPELINE statement creates a new pipeline named library, but the pipeline has not yet been started, and no data has been loaded. A SingleStore pipeline can run either in the background or be triggered by a foreground query. Start it in the foreground first.

START PIPELINE library FOREGROUND;

When this command returns successfully, all files from your container will be loaded. If you check information_schema.PIPELINES_FILES again, you should see all files in the Loaded state. Now query the classic_books table to verify that the data has actually loaded.

SELECT * FROM classic_books;
+------------------------+-----------------+-------+
| title                  | author          | date  |
+------------------------+-----------------+-------+
| The Catcher in the Rye |  J.D. Salinger  |  1945 |
| Pride and Prejudice    |  Jane Austen    |  1813 |
| Of Mice and Men        |  John Steinbeck |  1937 |
| Frankenstein           |  Mary Shelley   |  1818 |
+------------------------+-----------------+-------+

You can also have SingleStore run your pipeline in the background. In this configuration, SingleStore periodically polls Azure Blob Storage for new files and continuously loads them as they are added to the storage container. Before running your pipeline in the background, you must reset the state of the pipeline and the table.

DELETE FROM classic_books;
ALTER PIPELINE library SET OFFSETS EARLIEST;

The first command deletes all rows from the target table. The second causes the pipeline to start from the beginning, in this case “forgetting” that it already loaded books.txt so you can load it again. You can also drop and recreate the pipeline, if you prefer, as sketched below.
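If you take the drop-and-recreate route instead, it could look like the following sketch, which reuses the placeholder container name and credentials from the earlier CREATE PIPELINE statement:

-- Remove previously loaded rows, then drop and recreate the pipeline
-- so it will load the container's files from the beginning again.
DELETE FROM classic_books;
DROP PIPELINE library;

CREATE PIPELINE library
AS LOAD DATA AZURE 'my-container-name'
CREDENTIALS '{"account_name": "your_account_name", "account_key": "your_account_key"}'
INTO TABLE `classic_books`
FIELDS TERMINATED BY ',';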

To start a pipeline in the background, run START PIPELINE.

START PIPELINE library;

This statement starts the pipeline. To see whether the pipeline is running, run SHOW PIPELINES.

SHOW PIPELINES;
+----------------------+---------+
| Pipelines_in_books   | State   |
+----------------------+---------+
| library              | Running |
+----------------------+---------+

At this point, the pipeline is running and the contents of the books.txt file should once again be present in the classic_books table.
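If you want to double-check the reload from the SQL prompt, a quick row count is enough; the sample books.txt file should produce four rows:

-- Expect a count of 4 after books.txt has been reloaded
SELECT COUNT(*) FROM classic_books;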

Info

Foreground pipelines and background pipelines have different intended uses and behave differently. For more information, see the START PIPELINE topic.

Next Steps

Now that you have a running pipeline, any new files you add to your container will be automatically ingested. To understand how an Azure pipeline ingests large numbers of objects in a container, see the Parallelized Data Loading section in the Extractors topic. You can also learn more about how to transform the ingested data by reading the Transforms topic.
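As a small preview of that topic, a transform is attached by adding a WITH TRANSFORM clause to the CREATE PIPELINE statement. The sketch below is illustrative only; the URL and script name are placeholders for a transform you would write and host yourself:

-- Hypothetical pipeline that pipes each blob through a transform script
-- before the rows are written to classic_books.
CREATE PIPELINE library_with_transform
AS LOAD DATA AZURE 'my-container-name'
CREDENTIALS '{"account_name": "your_account_name", "account_key": "your_account_key"}'
WITH TRANSFORM ('http://example.com/my-transform.tar.gz', 'my-transform.py', '')
INTO TABLE `classic_books`
FIELDS TERMINATED BY ',';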