
Application layer attacks – Examining Security and Privacy in IoT

Application layer attacks target the software and services that run on IoT devices or the cloud services that manage them. Attackers could exploit vulnerabilities in the software or firmware running on the device to gain control over it or access sensitive data. Attackers could also launch attacks such as SQL injection or cross-site scripting (XSS) attacks on the web applications used to manage the devices.

IoT networks face a wide range of attacks, and each layer of the network presents different vulnerabilities. IoT security must be implemented at each layer of the network to mitigate the risks associated with these attacks. The use of encryption, authentication, and access controls can help to secure physical devices and the data transmitted between them. Regular updates and patches should be applied to the software and firmware running on the devices to address any known vulnerabilities. Overall, a layered security approach that considers the entire IoT ecosystem can provide a more robust defense against attacks.

We can see different forms of attacks on embedded IoT systems in Figure 11.2:

Figure 11.2 – Different attacks on embedded systems

The diagram provides a structured view of potential vulnerabilities an embedded system may face, categorizing them based on the method or perspective of the attack. It categorizes the different attacks into three main types: Software-based, Network-based, and Side-based (that is, side-channel) attacks, described as follows:

Software-based attacks:

  • Malware: Malicious software intended to damage or exploit an embedded system
  • Brute-forcing access: A method of trial and error whereby an attacker attempts to guess the correct access credentials
  • Memory-buffer overflow: A situation where a program writes data outside the bounds of pre-allocated fixed-length buffers, leading to potential code execution or system crashes
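In Python, out-of-bounds writes raise errors automatically, but the bounds check that safe firmware must perform explicitly can be sketched as follows (a minimal illustration, not production code):

```python
def safe_write(buffer: bytearray, offset: int, data: bytes) -> None:
    """Write data into a fixed-length buffer, rejecting out-of-bounds writes.

    This mirrors the bounds check that vulnerable C firmware often omits:
    without it, data past the end of the buffer would overwrite adjacent
    memory (the classic buffer overflow).
    """
    if offset < 0 or offset + len(data) > len(buffer):
        raise ValueError("write would overflow the buffer")
    buffer[offset:offset + len(data)] = data

buf = bytearray(8)             # fixed-length, pre-allocated buffer
safe_write(buf, 0, b"OK")      # in bounds: accepted
try:
    safe_write(buf, 6, b"overflow")  # 8 bytes at offset 6 exceeds length 8
except ValueError as exc:
    print(exc)                 # the overflow attempt is rejected
```
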

Network-based attacks:

  • Man-in-the-middle (MITM): An attack where the attacker secretly relays and possibly alters the communication between two parties who believe they are communicating directly with each other
  • Domain Name System (DNS) poisoning: An attack where the attacker redirects DNS entries to a malicious site
  • Distributed Denial of Service (DDoS): An attempt to disrupt the regular functioning of a network by flooding it with excessive traffic
  • Session hijacking: When an attacker takes over a user’s session to gain unauthorized access to a system
  • Signal jamming: An interference with the signal frequencies that an embedded system might use, rendering it inoperable or reducing its efficiency
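One common defense against MITM tampering is message authentication. The following minimal Python sketch (the shared key and payload are hypothetical) shows how an HMAC tag lets the receiver detect an altered message:

```python
import hashlib
import hmac

# Shared secret provisioned on both the device and the server (hypothetical value).
SECRET = b"device-provisioning-key"

def sign(payload: bytes) -> bytes:
    """Attach an HMAC-SHA256 tag so the receiver can detect in-transit tampering."""
    return hmac.new(SECRET, payload, hashlib.sha256).digest()

def verify(payload: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(payload), tag)

msg = b'{"temp": 21.5}'
tag = sign(msg)
print(verify(msg, tag))                  # True: message arrived intact
print(verify(b'{"temp": 99.9}', tag))    # False: an MITM altered the payload
```
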

Side-based attacks:

  • Power analysis: Observing the power consumption of a device to extract information
  • Timing attacks: Analyzing the time taken to execute cryptographic algorithms to find vulnerabilities
  • Electromagnetic analysis: Using the electromagnetic emissions of a device to infer data or operations
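A minimal sketch of why timing attacks work: a naive comparison exits as soon as bytes differ, so its run time leaks how much of a guess is correct, while a constant-time comparison does not (the token value below is illustrative):

```python
import hmac

def leaky_compare(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: run time depends on how many leading bytes
    match, which a timing attack can measure to recover a secret byte by byte."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False   # exits earlier the sooner a mismatch occurs
    return True

def safe_compare(a: bytes, b: bytes) -> bool:
    """Constant-time comparison: run time is independent of where bytes differ."""
    return hmac.compare_digest(a, b)

secret = b"s3cr3t-token"
print(leaky_compare(secret, b"s3cr3t-token"))  # True, but timing-leaky
print(safe_compare(secret, b"guess-token!"))   # False, with no timing leak
```
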

With that understanding, we can now look at how cloud providers such as Amazon Web Services (AWS) provide powerful tools to manage security on the platform.

Technical requirements – Examining Security and Privacy in IoT

From smart homes to connected cars, IoT devices have become ubiquitous in our daily lives. However, with increased connectivity and data exchange comes the risk of security and privacy breaches. As more and more sensitive information is transmitted through these devices, it is essential to examine security and privacy measures in place to protect both the users and the devices themselves.

In this chapter, we will explore various security and privacy concerns in IoT, including risks associated with data breaches and strategies used to mitigate them. We will also discuss the importance of privacy in IoT and how it is protected, as well as the challenges of implementing security measures in a rapidly evolving technological landscape. By examining these critical issues, we can gain a better understanding of the measures needed to ensure the security and privacy of IoT devices and networks.

In this chapter, we’re going to cover the following main topics:

The current state of risk and security within IoT

Security and privacy controls within the cloud management landscape

Risk management within the IoT landscape

Privacy and compliance within IoT networks

Cryptography controls for the IoT landscape

Practical – Creating a secure smart lock system

Technical requirements

This chapter will require you to have the following hardware and software installed:

Hardware:

  • Raspberry Pi
  • Single-channel relay
  • 1 × 1 kΩ resistor
  • Push button switch
  • Jumper cables
  • Breadboard
  • Mobile charger 5V/1A to use as power supply

Software:

  • Blynk app
  • Arduino IDE

You can access the GitHub folder for the code that is used in this chapter at https://github.com/PacktPublishing/IoT-Made-Easy-for-Beginners/tree/main/Chapter11/.

The current state of risk and security within IoT

As IoT technology continues to evolve and expand into new areas of our lives, it is critical that we understand the current state of risk and security within IoT networks. In this section, we will explore the current landscape of IoT security, including the most common types of IoT security threats and the current state of IoT security standards and regulations. We will also discuss best practices for securing IoT networks and devices, as well as challenges and opportunities for improving IoT security in the future. We can start off by taking a look at how security encompasses IoT in Figure 11.1:

Figure 11.1 – Overview of how security encompasses IoT

Figure 11.1 presents a structured overview of the current state of risk and security within IoT. The diagram is segmented into four main columns, representing distinct aspects of IoT: Device, Communications, Cloud platform and services, and Use Cases.

The diagram emphasizes the diverse facets of IoT, spanning from device-level hardware to broad use cases. It shows expansive areas where security is paramount in the IoT ecosystem, from individual devices and their communication pathways to the cloud platforms that store and process data, and finally, the real-world applications and sectors that implement IoT solutions.

We can continue the discussion by taking a look at challenges within security on IoT networks.

A case study for data analytics – Working with Data and Analytics

Now that we have seen use cases and have learned about how we can evaluate IoT deployments that leverage data analytics services on AWS, let’s take a look at how one industrial environment can utilize the AWS environment to perform data analytics workloads and the workflow behind it. We can see this case represented in Figure 10.4:

Figure 10.4 – AWS data analysis within an industrial environment

In this workflow, the industrial environment pushes data to AWS IoT Greengrass, which uses the MQTT protocol to deliver it to AWS IoT Core. From there, the data passes to AWS IoT Analytics and is visualized via QuickSight. If an IoT rule is triggered, the data is instead fed to Amazon SNS, which notifies the operations team through an alert. Data can also be brought in by migrating the on-premises database to the cloud with Database Migration Service (DMS), a service for migrating databases onto AWS; it can then be ingested using Amazon Kinesis Data Streams, processed with AWS Lambda, and fed into AWS IoT Analytics.
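The branching logic of this workflow can be sketched in a few lines of Python; the threshold and field name are hypothetical stand-ins for whatever condition your IoT rule actually checks:

```python
def route_message(message: dict, alert_threshold: float = 90.0) -> str:
    """Mimic the Figure 10.4 branch: messages normally flow to IoT Analytics,
    but if the (hypothetical) IoT rule condition fires, they go to SNS instead."""
    if message.get("temperature", 0.0) > alert_threshold:
        return "sns-alert"       # operations team is notified via Amazon SNS
    return "iot-analytics"       # normal path: IoT Analytics, then QuickSight

print(route_message({"temperature": 72.0}))   # iot-analytics
print(route_message({"temperature": 95.5}))   # sns-alert
```
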

Now that we’ve become more familiar with these workflows for data analytics, let’s get on to our practical.

Practical – creating a data pipeline for end-to-end data ingestion and analysis

In this practical, we will look to create a data pipeline based on the AWS console. This will follow the architecture shown in the following diagram:

Figure 10.5 – Data pipeline workflow for data ingestion

We will have a device send data to a channel. The channel will receive the data and send it through the pipeline, which will pipe the data through to the data store. From the data store, we can then make SQL queries to create a dataset from which we will read the data.

We can now go ahead and start off by creating a channel.

Practical – smart home insights with AWS IoT Analytics – Working with Data and Analytics

In this practical exercise, we will explore IoT data analytics using AWS. Specifically, we will use AWS services such as S3, Glue, Athena, and QuickSight to analyze a dataset of IoT sensor readings collected from a smart home over a period of 1 month.

You will need the following software components as part of the practical:

  • An AWS account (you can create one for free if you don’t have one already)
  • A dataset of IoT sensor readings (you can create a sample dataset or use a publicly available dataset)

Let’s move to the various steps of the practical, as follows:

Download the occupancy detection dataset:

  1. We can obtain a dataset from https://github.com/PacktPublishing/IoT-Made-Easy-for-Beginners/tree/main/Chapter10/analyzing_smart_home_sensor_readings/datatest.csv.
  2. Open the dataset and take note of the fields inside it.

To start off, we will have to load our dataset into an Amazon S3 bucket:

  1. Sign in to your AWS Management Console.
  2. Navigate to the Amazon S3 service.
  3. Click on the Create bucket button. Name the bucket and choose a region. Click Next.
  4. Keep all the default settings in the Configure options page and click Next.
  5. Ensure public access is blocked for security reasons and click Next.
  6. Review your settings and click Create bucket.
  7. Navigate inside your newly created bucket, click on Upload, and drag and drop (or browse to) your datatest.csv file. Once uploaded, click Next.
  8. Keep the default permissions and click Next.
  9. Review the properties and click Upload.
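If you prefer to script the console steps above, a hedged boto3 sketch might look like the following (the bucket name and region are assumptions, and running it requires AWS credentials plus the third-party boto3 package):

```python
BUCKET = "smart-home-iot-data-demo"   # hypothetical name; bucket names are global
REGION = "us-east-1"                  # assumption: use the region you chose above

def bucket_config(region: str) -> dict:
    """create_bucket needs a LocationConstraint everywhere except us-east-1."""
    if region == "us-east-1":
        return {}
    return {"CreateBucketConfiguration": {"LocationConstraint": region}}

def upload_dataset(path: str = "datatest.csv") -> None:
    import boto3  # third-party AWS SDK: pip install boto3
    s3 = boto3.client("s3", region_name=REGION)
    s3.create_bucket(Bucket=BUCKET, **bucket_config(REGION))
    # Block public access, matching step 5 of the console walkthrough.
    s3.put_public_access_block(
        Bucket=BUCKET,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True, "IgnorePublicAcls": True,
            "BlockPublicPolicy": True, "RestrictPublicBuckets": True,
        },
    )
    s3.upload_file(path, BUCKET, "datatest.csv")

# To run for real (requires AWS credentials with S3 permissions):
# upload_dataset()
```
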

We now will look to create an AWS Glue crawler to traverse our data and create a table in the AWS Glue Data Catalog:

  1. Navigate to the AWS Glue service.
  2. Click on the Crawlers tab under Data Catalog and then click Create crawler.
  3. Name your crawler and click Next.
  4. Select Not yet for the question Is your data already mapped to Glue tables.
  5. Click on Add a data source and choose S3 as the data source. Click Browse S3 and select the bucket you have just created. Click Next.
  6. Choose or create an Identity and Access Management (IAM) role that gives AWS Glue permissions to access your S3 data. Click Next.
  7. For the frequency, you can choose Run on demand. Click Next.
  8. Choose Add database, then name your database (for example, SmartHomeData). Navigate to your newly created database and click on Add table. Name your table (for example, SensorReadings) and select your database. Leave all other settings as they are. Click Next in the current window along with the subsequent ones, up to the window where you click Create to create the table.
  9. Review the configuration and click Create crawler.
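The same crawler can be created programmatically. The following boto3 sketch mirrors the console steps; the crawler, IAM role, database, and bucket names are all illustrative assumptions:

```python
def crawler_targets(bucket: str) -> dict:
    """Point the crawler at the S3 bucket uploaded earlier (hypothetical name)."""
    return {"S3Targets": [{"Path": f"s3://{bucket}/"}]}

def create_and_run_crawler() -> None:
    import boto3  # third-party AWS SDK: pip install boto3
    glue = boto3.client("glue")
    glue.create_crawler(
        Name="smart-home-crawler",             # hypothetical crawler name
        Role="AWSGlueServiceRole-SmartHome",   # IAM role from step 6 (assumed)
        DatabaseName="smarthomedata",          # database from step 8 (assumed)
        Targets=crawler_targets("smart-home-iot-data-demo"),
    )
    glue.start_crawler(Name="smart-home-crawler")  # equivalent of Run on demand

# To run for real (requires AWS credentials with Glue permissions):
# create_and_run_crawler()
```
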

With that, we have created an AWS Glue crawler to traverse our data. Now, we can look at transforming our data:

Use AWS Glue to transform the data and create a new table with additional columns:

  1. Navigate to ETL Jobs in the AWS Glue sidebar.
  2. Select Visual with a blank canvas and click on Create.
  3. Name your job on the top left and select or create an IAM role that has the right permissions.
  4. An Add nodes window should pop up. In the Sources tab, click on Amazon S3 to add an Amazon S3 node. Afterward, click on the Transforms tab and click on the Select Fields node. Finally, click on Target and click on Amazon S3.
  5. You should now have three nodes on your canvas. Connect the data source to the Transform – SelectFields node by dragging the black dot at the bottom of the Data source – S3 bucket node to the Select Fields node. Do the same to connect the Select Fields node to the Data target – S3 bucket node:

Figure 10.2 – Visualization of the three nodes on the canvas

Click on the Data Source – S3 bucket node. For the S3 source type, click on the Data Catalog table. Afterward, choose the database that you created. Choose the table that was created.

Afterward, click on Select Fields. Here, choose the Temperature and Humidity fields.

We now need to create another S3 bucket for the output. Create a new S3 bucket with whatever name you want for it.

AWS IoT Analytics – Working with Data and Analytics

AWS IoT Analytics is a service that is used to collect, process, and analyze data that is obtained from IoT devices. You can process and analyze large datasets from IoT devices with the help of IoT Analytics without the need for complex infrastructure or programming. You can apply mathematical and statistical models to your data to make sense of it and make better decisions accordingly. You can also integrate it with many other services from AWS, such as Amazon S3 or Amazon QuickSight, to perform further analytical and visualization workloads.

The following are components of IoT Analytics that are crucial for you to know that we will be using as we go through our exercise within the next subsection:

Channel: A channel is used to collect data from a select Message Queuing Telemetry Transport (MQTT) topic and archive unprocessed messages before the data is published to the pipeline. You can either use this or send messages to the channel directly through the BatchPutMessage API. Messages that are unprocessed will be stored within an S3 bucket that will be managed either by you or AWS IoT Analytics.

Pipeline: Pipelines consume messages that come from a channel and allow you to process the messages before then storing them within a data store. The pipeline activities then perform the necessary transformations on the messages that you have, such as renaming, adding message attributes, or filtering messages based on attribute values.
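The activities a pipeline performs (renaming, adding attributes, filtering) can be illustrated with a small pure-Python simulation; the attribute names and filter condition are hypothetical:

```python
def rename_attribute(message: dict, old: str, new: str) -> dict:
    """Pipeline activity: rename one attribute on a message."""
    out = dict(message)
    out[new] = out.pop(old)
    return out

def add_attribute(message: dict, name: str, value) -> dict:
    """Pipeline activity: add an attribute to a message."""
    out = dict(message)
    out[name] = value
    return out

def keep_message(message: dict) -> bool:
    """Pipeline activity: filter out implausible temperature readings."""
    return -40.0 <= message.get("temp_c", 0.0) <= 85.0

raw = [{"t": 21.5, "device": "a"}, {"t": 400.0, "device": "b"}]
processed = []
for msg in raw:
    msg = rename_attribute(msg, "t", "temp_c")
    msg = add_attribute(msg, "unit", "celsius")
    if keep_message(msg):        # drop out-of-range readings
        processed.append(msg)    # these would land in the data store
print(processed)                 # only device "a" survives the filter
```
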

Data store: Pipelines then store the processed messages within a data store, which is a repository of messages that can be queried. It is important to make the distinction between this and a database as it is not a database but is more like a temporary repository. Multiple data stores can be provisioned for messages that come from different devices or locations, or you can have them filtered by their message attributes depending on how you configure your pipeline along with its requirements. The data store’s processed messages will also be stored within an S3 bucket that can be managed either by you or AWS IoT Analytics.

Dataset: Data is retrieved from a data store and made into a dataset. IoT Analytics allows you to create a SQL dataset or a container dataset. You can further explore insights in your dataset through integration with Amazon QuickSight or Jupyter Notebook. Jupyter Notebook is an open source web application that allows you to create and share documents containing live code, equations, visualizations, and narrative text, and is often used for data cleaning and transformation, numerical simulation, statistical modeling, data visualization, and ML. You can also send the contents of a dataset to an S3 bucket, allowing you to then enable integration with existing data lakes or in-house applications that you may have to perform further analysis and visualization. You can also send the contents to AWS IoT Events to trigger certain actions if there are failures or changes in operation.

SQL dataset: A SQL dataset is analogous to a materialized view in a SQL database. You can create SQL datasets by applying a SQL action.

Trigger: A trigger is a component you can specify to create a dataset automatically. It can be a time interval or based on when the content of another dataset has been created.
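Putting the dataset and trigger concepts together, a hedged boto3 sketch of creating a scheduled SQL dataset might look like this (the dataset name, data store name, and cron expression are assumptions):

```python
def dataset_request(name: str, datastore: str, cron: str) -> dict:
    """Build a create_dataset request: one SQL action plus a time-based trigger."""
    return {
        "datasetName": name,
        "actions": [{
            "actionName": "sql_action",
            "queryAction": {"sqlQuery": f"SELECT * FROM {datastore}"},
        }],
        "triggers": [{"schedule": {"expression": cron}}],
    }

def create_scheduled_dataset() -> None:
    import boto3  # third-party AWS SDK: pip install boto3
    iota = boto3.client("iotanalytics")
    iota.create_dataset(**dataset_request(
        "sensor_dataset", "sensor_datastore", "cron(0 * * * ? *)"))  # hourly

# To run for real (requires AWS credentials with IoT Analytics permissions):
# create_scheduled_dataset()
```
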

With an understanding of these components, we can look at other services that we will also come across in our practical exercises.

Introduction to data analysis at scale – Working with Data and Analytics

Data analysis is often done at scale to analyze large sets of data using the capabilities of cloud computing services such as AWS. Designing a workflow for the data analysis to follow is the pivotal starting point for this to be performed. This will follow five main categories: collection, storage, processing, visualization, and data security.

In this section, we will be introducing you to data analysis on AWS, discussing which services we can use as part of AWS to perform the data analytics workloads we need it to, and walking through the best practices that are part of this. We will understand how to design and incorporate workflows into the IoT network that we currently have and work with it to better power our capabilities.

Data analysis on AWS

Data analysis on AWS can be summarized in five main steps. These steps can be seen in the following diagram:

Figure 10.1 – Data analysis workflow on AWS

Let’s look at the steps in more detail:

Collect: In this phase, data is collected from the devices within the environment. Services that are usually in charge of this include AWS IoT Core and AWS IoT Greengrass, which collect the data and ingest it into the cloud.

Process: Data can then be processed according to how the configuration is set up for it. Services such as AWS IoT Analytics are made for this purpose.

Store: Data can then be stored, either temporarily or for long-term storage. This can be done on services such as Amazon Simple Storage Service (S3), Amazon Redshift, and Amazon DocumentDB.

Analyze: Data will then be analyzed. Services such as AWS Glue and Amazon Elastic MapReduce (EMR) can be used for this purpose, while also potentially performing more complex analytics and ML tasks as necessary.

Build: Finally, we can build datasets from the processed data, surfacing patterns from the workloads that were run.

With that, we have understood the different steps of how a typical data analysis workflow would go at a high level. Now, we can look at the different services in AWS that help facilitate this.

AWS services

Several important services can be used for data processing workloads. The services covered in this section are just a few of them, and there are definitely more that can be mentioned and that we encourage you to have a look at. For more information, you can refer to the documentation that is linked in the Further reading section at the end of the chapter.

Monitoring the EC2 Thing when publishing messages – Operating and Monitoring IoT Networks

Now, we can start monitoring how the Thing is doing in publishing messages through Amazon CloudWatch:

  1. Navigate to Services, search for CloudWatch, and click on it.
  2. Click on All Metrics under the Metrics menu in the left pane.
  3. Navigate to IoT –> Protocol Metrics and click on the checkbox for the PublishIn.Success metric. You will see the metrics that have been published successfully reflected on the graph shown on the page.
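The same metric can be pulled programmatically with boto3's get_metric_statistics; the query window and period below are illustrative choices:

```python
import datetime

def metric_query(hours: int = 1) -> dict:
    """Build the get_metric_statistics parameters for PublishIn.Success."""
    now = datetime.datetime.now(datetime.timezone.utc)
    return {
        "Namespace": "AWS/IoT",
        "MetricName": "PublishIn.Success",
        "Dimensions": [{"Name": "Protocol", "Value": "MQTT"}],
        "StartTime": now - datetime.timedelta(hours=hours),
        "EndTime": now,
        "Period": 300,            # one datapoint per 5 minutes
        "Statistics": ["Sum"],
    }

def fetch_publish_successes() -> list:
    import boto3  # third-party AWS SDK: pip install boto3
    cloudwatch = boto3.client("cloudwatch")
    return cloudwatch.get_metric_statistics(**metric_query())["Datapoints"]

# To run for real (requires AWS credentials with CloudWatch permissions):
# print(fetch_publish_successes())
```
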

Hence, you’ve created your first Greengrass solution with monitoring based on it!

Creating an AWS IoT Greengrass group for edge computing is a useful exercise to test and validate different edge computing scenarios. By using Greengrass core components such as Lambda functions, connectors, and machine learning models, you can gain practical experience in developing and deploying edge computing solutions that process and analyze IoT data locally, without the need for cloud connectivity. You can also use the AWS IoT Greengrass dashboard to monitor and manage the Greengrass group and its components, set up alerts and notifications, and troubleshoot issues as they arise.

Now, upload the code to GitHub and see whether you can also answer the following questions, based on your hardware/code for further understanding and practice on the concepts that you have learned through this practical:

Can you also try to connect the data to Prometheus?

Can you recreate a similar setup but with EC2s as the devices?

Important note

When working with different kinds of monitoring tools, concepts will often be similar between one program and the next. This is the reason why we ask you to try out different monitoring software on your own as well. Within industrial cases, you will also find that many types of monitoring tools are used, depending on the preferences of the firm and its use cases.

Summary

In this chapter, we explored the best practices for operating and monitoring IoT networks. We discussed the importance of continuous operation, setting KPIs and metrics for success, and monitoring capabilities both on-premises and in the cloud using AWS IoT services. We also looked at several practical exercises that can be used to gain hands-on experience in operating and monitoring IoT networks. These included simulating IoT networks using virtualization, developing AWS Lambda functions to process and analyze IoT data, creating AWS CloudWatch dashboards for IoT metrics, setting up AWS IoT Greengrass groups for edge computing, and using the AWS IoT simulator to test different operating and monitoring strategies.

By learning and applying these best practices and practical exercises, students can develop the skills and knowledge necessary to design, deploy, and manage robust and reliable IoT networks. They will gain experience in using AWS IoT services and tools to monitor and analyze IoT data, set up alerts and notifications, and troubleshoot issues as they arise. Ultimately, they will be well-equipped to meet the challenges of operating and monitoring IoT networks in a variety of real-world scenarios.

In the next chapter, we will be looking at working with data and analytics within IoT with services on AWS.

Further reading

For more information about what was covered in this chapter, please refer to the following links:

Learn more about data lakes and analytics relating to managing big data on AWS: https://aws.amazon.com/big-data/datalakes-and-analytics/

Understand more on how to use Grafana through its official documentation: https://grafana.com/docs/grafana/latest/

Explore further on AWS IoT Greengrass through its official documentation: https://docs.aws.amazon.com/greengrass/index.html

Learn more about different analytics-based deployments through AWS’ official whitepapers: https://docs.aws.amazon.com/whitepapers/latest/aws-overview/analytics.html

Learn more on different analytics solutions provided by AWS: https://aws.amazon.com/solutions/analytics/

Creating a unified monitoring solution – Operating and Monitoring IoT Networks

Creating a unified monitoring solution for an IoT network that includes both on-premises and cloud-based resources can be challenging, but is essential for ensuring comprehensive visibility and control over the entire network. Fortunately, AWS provides a range of tools and services that can be used to create unified monitoring solutions for IoT networks.

One key tool for creating a unified monitoring solution is AWS IoT SiteWise, which can collect, structure, and search IoT data from industrial equipment and processes across on-premises and cloud-based resources. SiteWise enables organizations to standardize and normalize data from disparate sources, making it easier to analyze and monitor the health and performance of entire networks.

Another important tool for creating a unified monitoring solution is AWS Systems Manager, a management service that enables an organization to automate operational tasks and manage on-premises and cloud-based resources from a single console. Systems Manager can be used to monitor system health and performance, track compliance with security and regulatory requirements, and automate responses to common issues.

To create a unified monitoring solution for IoT networks, organizations should first define their monitoring requirements and establish clear objectives and goals for the monitoring solution. They should also identify the KPIs that will be used to measure system performance and health and develop plans for monitoring those KPIs across all on-premises and cloud-based resources.

Organizations should leverage AWS CloudFormation or AWS Control Tower to automate the deployment and management of monitoring resources across both on-premises and cloud-based environments. CloudFormation enables organizations to create and manage a collection of related AWS resources, while Control Tower provides a pre-configured environment that includes best practices for security and compliance.

There are many tools that are used in conjunction with AWS to achieve such a centralized way of monitoring our resources. SIEM platforms, such as Splunk or IBM QRadar, offer this kind of integrated approach. They aggregate and analyze log data from various sources within the IoT ecosystem, helping in detecting, analyzing, and responding to security incidents and threats.

Finally, organizations should leverage dashboards and visualization tools, such as AWS QuickSight, to provide real-time visibility into system performance and health across on-premises and cloud-based resources. Dashboards can be customized to display relevant metrics and KPIs and can be shared with relevant stakeholders to ensure that everyone has a comprehensive view of system performance. By creating a unified monitoring solution for IoT networks, an organization can gain comprehensive visibility and control over its entire network, making it easier to detect and respond to potential issues before they become critical problems.

An example of how such a solution has been utilized is Philips, which has effectively harnessed the power of AWS to revolutionize its approach to healthcare technology. It has established a notable presence in the digital healthcare sphere with its HealthSuite Digital Platform, developed on AWS. This platform has been transformative in streamlining remote patient monitoring, efficiently managing a range of devices through cloud connectivity, and unifying diverse healthcare data for more coherent analysis. This strategic move has not only sped up the introduction of new healthcare technologies but has also ensured adherence to critical security regulations.

Additionally, Philips has made significant strides in the field of medical imaging diagnostics by leveraging artificial intelligence (AI) and machine learning through AWS services. The HealthSuite specifically targets the complexities involved in medical imaging data analysis. It serves as a comprehensive platform that aggregates various forms of data, including patient records and readings from wearable technology. This integration, facilitated by AWS IoT Core and Amazon SageMaker, empowers Philips to handle a vast network of IoT devices and extract valuable insights for clinical use. These innovations by Philips are a testament to the importance of the power of having a unified monitoring solution in organizations and how much it contributes to growth.

Setting KPIs and the metrics for success – Operating and Monitoring IoT Networks

It is important to understand why you are conducting the monitoring that you are doing, and the appropriate milestones for managing its progress. In this section, we will look into how we can set clear objectives and appropriately define KPIs to measure how well we are progressing.

Setting clear objectives and goals for monitoring

Setting clear objectives and goals is an important step in implementing a successful continuous monitoring strategy for IoT networks. Organizations should identify the specific metrics and KPIs they want to track and establish thresholds for acceptable performance levels. This will allow them to quickly identify any issues that may arise and take corrective action before they cause significant disruptions to their networks.

Some common objectives and goals for continuous monitoring in IoT networks include the following:

Improving network reliability: Organizations may set objectives to reduce downtime and improve overall network uptime. This could include monitoring key network components and identifying potential issues before they cause disruptions.

Enhancing security: Security is a critical concern for IoT networks, and organizations may set goals to ensure that their networks are protected from potential cyber threats. This could include monitoring network traffic and identifying anomalous behavior that may indicate a security breach.

Optimizing network performance: Organizations may set objectives to improve the overall performance of their IoT networks, such as reducing latency or improving throughput. This could involve monitoring network traffic and identifying areas where improvements could be made.

Minimizing operational costs: Organizations may set goals to reduce operational costs associated with managing their IoT networks. This could involve identifying inefficiencies in their networks and automating processes to reduce the need for manual intervention.

Once objectives and goals have been established, organizations should identify the specific metrics and KPIs that will be used to measure performance. For example, if the goal is to improve network reliability, organizations may track metrics such as network uptime, response time, and error rates. These metrics should be tracked continuously and compared against predefined thresholds to identify any potential issues.
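Comparing observed metrics against predefined thresholds can be sketched as a small helper; the metric names and limits below are illustrative:

```python
def check_kpis(metrics: dict, thresholds: dict) -> list:
    """Return the names of any KPIs whose observed value exceeds its threshold."""
    return [name for name, limit in thresholds.items()
            if metrics.get(name, 0.0) > limit]

observed = {"error_rate": 0.07, "response_ms": 180.0, "downtime_min": 3.0}
limits   = {"error_rate": 0.05, "response_ms": 250.0, "downtime_min": 5.0}
print(check_kpis(observed, limits))   # only error_rate breaches its threshold
```
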

In addition to defining metrics and KPIs, organizations should also establish processes for reviewing and analyzing monitoring data to identify opportunities for optimization and improvement. This may involve using visualization tools such as dashboards and reports to gain insights into network performance and identify areas where improvements can be made.

Different types of KPIs

There are different types of KPIs that can be used to monitor IoT networks. These fall into five categories: device-level, network-level, user-level, security, and business-level.

Device-level KPIs

These KPIs measure the performance and health of individual IoT devices and include their availability, response time, and error rates. By monitoring these KPIs, organizations can identify devices that are not functioning properly and take corrective actions to prevent downtime.
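As a minimal illustration, device availability can be derived from heartbeat counts (the one-minute reporting interval is a hypothetical choice):

```python
def availability(expected_heartbeats: int, received_heartbeats: int) -> float:
    """Availability as the fraction of expected heartbeats actually received."""
    if expected_heartbeats == 0:
        return 0.0
    return received_heartbeats / expected_heartbeats

# A device expected to report every minute over 24 hours (1,440 heartbeats)
print(round(availability(1440, 1425) * 100, 2))   # 98.96 percent available
```
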

Automation and machine learning in monitoring – Operating and Monitoring IoT Networks

Automation and machine learning are important aspects of keeping IoT networks running smoothly and securely. With the help of AWS tools and services, organizations can implement these capabilities to identify and predict issues before they happen and take necessary actions automatically to prevent downtime and performance issues.

One useful tool for automation and machine learning on AWS is Amazon SageMaker. This is a service that allows developers and data scientists to build, train, and deploy machine learning models quickly and easily. By analyzing and predicting IoT devices and network behavior, SageMaker can automatically identify potential issues and trigger necessary actions.

AWS IoT Events is another helpful tool for automation and machine learning on AWS. It is a service that allows organizations to detect and respond to events from multiple IoT devices and applications in real time. This service can automate the detection and resolution of common IoT devices and network issues, improving the overall reliability of the system and reducing the need for manual intervention.

AWS also provides a range of data analytics and processing tools, such as AWS Glue, AWS Lambda, and AWS Data Pipeline. These tools can be used to automate the collection, processing, and analysis of IoT data. By identifying patterns and trends in IoT data, these tools can trigger automated responses when specific conditions are met. To implement automation and machine learning capabilities on AWS for IoT networks, organizations should first define their monitoring requirements and establish KPIs to measure system performance. They should also develop machine learning models and algorithms to analyze and predict IoT devices and network behavior and automate the detection and resolution of common issues.

Organizations can use dashboards and visualization tools, such as AWS QuickSight, to provide real-time visibility into system performance and health. These dashboards can be customized to show relevant metrics and KPIs and can be shared with relevant stakeholders to ensure everyone has a comprehensive view of system performance.

By continually reviewing and analyzing monitoring data, organizations can identify opportunities for optimization and enhancement. This process of continuous improvement ensures that their automation and machine learning strategies remain effective over time, keeping their IoT networks reliable and secure.

Exercise on simulating monitoring networks

In this exercise, we will be looking at simulating an IoT network with AWS IoT Core and monitoring it through the tools provided by the service. Here are the steps to follow along:

  1. Log in to the AWS Management Console and navigate to the AWS IoT Core dashboard.
  2. Click on the Test menu and select Simulator to access the AWS IoT Simulator.
  3. Click on Create a new simulation to create a new simulation model.
  4. Enter a name for the simulation model and click on Create to create the model.
  5. Click on Add a device to add a new virtual device to the simulation model.
  6. Enter a name for the device and select a device type from the drop-down list.
  7. Enter the device’s metadata, including the device ID, device attributes, and device shadow state.
  8. Click on Add a behavior to add a behavior to the device. A behavior is a script that simulates the device’s behavior and generates messages that are sent to AWS IoT Core.
  9. Enter the behavior’s name, type, and script code. The script can be written in JavaScript or Python.
  10. Click on Add a topic to add a topic that the device will publish messages to.
  11. Enter the topic name and click on Add to add the topic.
  12. Click on Run to start the simulation.
  13. Monitor the simulation metrics and logs in the Simulation tab. You can view the number of messages sent and received, the message throughput, and the behavior logs for each device.
  14. Add additional devices, behaviors, and topics to simulate a more complex IoT network.
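As a rough illustration of what a device behavior script might generate, the following Python snippet produces simulated JSON sensor messages (the field names and value ranges are invented for the example):

```python
import json
import random

def behavior(device_id: str, n: int = 5, seed: int = 42) -> list:
    """Generate the kind of JSON messages a simulated device behavior would
    publish to its topic (field names are illustrative)."""
    rng = random.Random(seed)   # seeded so runs are reproducible
    return [json.dumps({
        "deviceId": device_id,
        "sequence": i,
        "temperature": round(rng.uniform(18.0, 28.0), 2),
    }) for i in range(n)]

for payload in behavior("virtual-sensor-1", n=2):
    print(payload)
```
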

With the knowledge of how to simulate the monitoring of networks, we can forge ahead to understand the metrics that can affect how we configure them.