Fetching data from OPC UA
For fetching data from OPC UA, two approaches were ultimately selected. The first, periodic fetch, is intended for experimentation and testing, as it allows easy consumption of historical, replayed data and produces CSV files ready for further processing. The second, subscription-based approach, is the one selected for the final, real-time version of the system. In this approach, subscriptions are used to reduce unnecessary communication with the OPC UA server, and data is sent for further processing only at predefined intervals.
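As a sketch of the subscription-based approach, a minimal OPC UA client using the Python asyncua library could look as follows. The server URL, node identifier, and publishing interval are illustrative assumptions, not values taken from the implemented system.

import asyncio
from asyncua import Client

class ChangeHandler:
    """Receives data-change notifications for subscribed nodes."""
    def datachange_notification(self, node, val, data):
        # In the real pipeline, this is where a sample would be queued
        # for forwarding to the cloud.
        print(f"{node}: {val}")

async def main():
    # Hypothetical endpoint and node id of an AGV signal.
    async with Client("opc.tcp://agv-plc.local:4840") as client:
        node = client.get_node("ns=2;s=AGV1.MomentaryCurrent")
        # A 500 ms publishing interval keeps traffic low; the server only
        # reports values at this predefined rate.
        subscription = await client.create_subscription(500, ChangeHandler())
        await subscription.subscribe_data_change(node)
        await asyncio.sleep(60)  # keep the subscription alive

if __name__ == "__main__":
    asyncio.run(main())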
Placement of OPC UA Client
Co-located with AGV and OPC UA Server
The OPC UA Client is co-located with the OPC UA Servers in the same LAN, e.g. within a single factory. This provides a centralized local processing hub that connects to all OPC UA Servers, offers resilience to Internet connection outages, and lowers the latency between the OPC UA Server and Client. The OPC UA Client is the only component that sends data to the cloud for further processing, which allows for buffering and local persistence.
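As a purely illustrative sketch of this buffering behaviour (the queue bound, retry interval, and send_to_cloud function are assumptions, not part of the implemented client), the forwarding loop could keep samples locally until the uplink succeeds:

import collections
import time

buffer = collections.deque(maxlen=10_000)  # assumed local buffer bound

def send_to_cloud(sample) -> bool:
    """Hypothetical uplink call; returns False while the Internet link is down."""
    raise NotImplementedError

def forward_loop():
    while True:
        while buffer:
            sample = buffer[0]
            if not send_to_cloud(sample):
                break            # keep the sample buffered and retry later
            buffer.popleft()     # discard only after a successful upload
        time.sleep(5)            # assumed retry/poll interval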
High-level diagram - cloud part
Azure IoT Hub
Service that allows for secure and reliable communication between the cloud and IoT devices. It supports management of individual devices, authentication and authorization, and integrates with services such as Azure Stream Analytics or Azure Data Lake Storage. Additionally, it can be extended with Azure IoT Edge to deploy services directly on edge devices.
In the presented architecture, Azure IoT Hub is the entry point to the system: it handles AGV definition and registration on the cloud side, ingestion of all data incoming to the cloud, as well as authentication and authorization of the OPC UA Client connecting to the cloud.
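For reference, a telemetry message could be sent from the OPC UA Client using the azure-iot-device Python SDK roughly as sketched below; the environment variable name and payload fields are assumptions for illustration.

import json
import os

from azure.iot.device import IoTHubDeviceClient, Message

# Connection string issued once the AGV's device identity is registered in IoT Hub
# (assumed to be supplied via an environment variable).
client = IoTHubDeviceClient.create_from_connection_string(
    os.environ["IOTHUB_DEVICE_CONNECTION_STRING"]
)
client.connect()

sample = {"agv_id": "agv-01", "momentary_current": 3.2}  # illustrative payload
message = Message(json.dumps(sample))
message.content_type = "application/json"
message.content_encoding = "utf-8"
client.send_message(message)

client.shutdown()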
Azure Stream Analytics
Service that provides a fully managed stream analytics engine designed to process large volumes of streaming data. It can be used to enrich the data, preprocess it, or discard invalid events. It can also be integrated with Azure Functions or Azure Machine Learning to enable, for example, anomaly detection on incoming data streams. Azure Stream Analytics jobs can also be executed on edge devices.
In the implemented architecture, Azure Stream Analytics jobs form the core of the data processing pipeline.
The implemented jobs are responsible for the following:
- Integration with Azure Machine Learning Endpoints which perform wheel anomaly detection as well as momentary power consumption predictions
- Passing data to PostgreSQL for visualisations
- Passing data to Event Hub / Data Lake for long-term storage
Azure ML Endpoints
Azure Machine Learning Endpoints enable seamless deployment and management of machine learning models in production environments. They also offer convenient integration with Azure Stream Analytics, allowing inference to be run directly from within Azure Stream Analytics jobs. Currently, two ML models are deployed to Azure Machine Learning Endpoints: a wheel anomaly classification model and a momentary power consumption prediction model, both integrated with Azure Stream Analytics jobs.
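Outside of Stream Analytics, such an online endpoint is exposed as a REST scoring URI, so it could also be invoked directly, as in the sketch below; the URI, key, and feature layout are placeholders rather than the actual models' schema.

import json
import os
import requests

# Placeholder scoring URI and key of the wheel-anomaly online endpoint.
scoring_uri = os.environ["AML_WHEEL_ANOMALY_URI"]
api_key = os.environ["AML_WHEEL_ANOMALY_KEY"]

payload = {"data": [[0.12, 0.31, 7.8, 0.02]]}  # feature layout is an assumption
response = requests.post(
    scoring_uri,
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
    data=json.dumps(payload),
    timeout=10,
)
response.raise_for_status()
print(response.json())  # e.g. predicted anomaly class / probability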
Azure PostgreSQL
Azure PostgreSQL is a fully managed relational database service provided by Microsoft Azure, offering PostgreSQL as the database engine. In the implemented architecture, it is used as temporary storage for raw AGV data as well as for inference results, to enable visualizations and alerting based on these metrics.
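As an illustration of how these metrics can be read back for dashboards or alert rules (the server, table, and column names below are assumptions):

import psycopg2

# Azure Database for PostgreSQL enforces TLS, hence sslmode="require".
conn = psycopg2.connect(
    host="example.postgres.database.azure.com",  # placeholder server name
    dbname="agv",
    user="reader",
    password="...",  # placeholder credential
    sslmode="require",
)
with conn, conn.cursor() as cur:
    # Hypothetical table populated by the Azure Stream Analytics job.
    cur.execute(
        "SELECT agv_id, ts, momentary_current, anomaly_score "
        "FROM agv_readings ORDER BY ts DESC LIMIT 10"
    )
    for row in cur.fetchall():
        print(row)
conn.close()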
Grafana
Grafana is an open-source analytics and visualization platform that allows users to create, explore, and share dashboards from data stored in various sources such as databases, cloud services, and monitoring systems. In the implemented architecture, it is used for visualizations and alerting based on raw data received from AGVs as well as the results of running inference with the deployed machine learning models. The implemented solution uses the Azure Managed Grafana offering.
Azure Event Hub
Azure Event Hub is a generic event ingestion service. It supports multiple sources and outputs and natively integrates with services such as Azure Functions. It supports three protocols for consumers and producers: AMQP, Kafka, and HTTPS. It also supports data capture, which saves data to Azure Data Lake Storage for long-term retention.
In the implemented architecture, Azure Event Hub is used for persisting data to Azure Data Lake for long-term use.
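A minimal producer sketch using the azure-eventhub Python SDK is shown below; the connection string variable and hub name are placeholders, and in the implemented architecture the events actually arrive from the Stream Analytics output rather than from custom code.

import json
import os

from azure.eventhub import EventHubProducerClient, EventData

producer = EventHubProducerClient.from_connection_string(
    os.environ["EVENTHUB_CONNECTION_STRING"],  # placeholder secret
    eventhub_name="agv-telemetry",             # placeholder hub name
)

# Events sent here are persisted to Data Lake by the capture feature,
# without any extra consumer code.
batch = producer.create_batch()
batch.add(EventData(json.dumps({"agv_id": "agv-01", "momentary_current": 3.2})))
producer.send_batch(batch)
producer.close()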
Azure Data Lake Storage Gen2
A centralized, single-storage platform for data ingestion, processing, and visualisation. It is massively scalable; according to the documentation it can handle exabytes of data with throughput measured in gigabits per second. It supports a hierarchical namespace that allows for efficient data access. It can be integrated with multiple analytical frameworks and offers Hadoop-compatible access.
In the implemented architecture, it is used as long-term storage for data collected from AGVs.
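Reading an archived capture file back from the lake could be sketched with the azure-storage-file-datalake SDK; the account URL, filesystem name, and file path below merely assume the Event Hub capture naming convention.

import os
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://examplestorage.dfs.core.windows.net",  # placeholder
    credential=os.environ["STORAGE_ACCOUNT_KEY"],               # placeholder
)
filesystem = service.get_file_system_client("capture")          # placeholder container

# Path shape produced by Event Hub capture (Avro files per partition/time window).
path = "eventhub-namespace/agv-telemetry/0/2024/01/01/12/00/00.avro"
data = filesystem.get_file_client(path).download_file().readall()
print(len(data), "bytes downloaded")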
Infrastructure as Code
Infrastructure as Code (IaC) is the practice of defining and managing cloud infrastructure through configuration rather than manual interaction with a GUI or CLI. It allows repeatable cloud infrastructure to be defined and deployed, while at the same time providing a single definition and overview of all services.
Most of the implemented architecture has been defined using the Infrastructure as Code approach.
Terraform
Terraform is an IaC tool for managing infrastructure across multiple cloud providers such as Microsoft Azure, Amazon Web Services, or Google Cloud Platform. It uses a human-readable language for resource definitions and records state to track changes across deployments. Its configuration can be committed to a version control system to provide an audit trail of changes to the infrastructure.
Partial terraform config
resource "azurerm_eventhub" "eventhub" {
name = var.eventhub_name
namespace_name = azurerm_eventhub_namespace.eventhub_namespace.name
resource_group_name = azurerm_resource_group.rg.name
partition_count = 1
message_retention = 1
capture_description {
enabled = true
encoding = "Avro"
interval_in_seconds = 300
destination {
name = "EventHubArchive.AzureBlockBlob"
blob_container_name = azurerm_storage_container.storage_container.name
storage_account_id = azurerm_storage_account.storage_account.id
archive_name_format = "{Namespace}/{EventHub}/{PartitionId}/{Year}/{Month}/{Day}/{Hour}/{Minute}"
}
}
}