The Internet of Things has changed the dimensions of traditional business IT. To tap this potential, companies need a highly scalable and reliable IT infrastructure that is built on standardized components and open protocols and spans three layers: devices, controllers, and the data center or cloud.
The Internet of Things connects all kinds of intelligent devices, such as mobile devices, sensors, machines or vehicles, with each other and with the cloud. Analysis of IoT data offers many opportunities for companies: they can make faster decisions, optimize business processes, develop new applications and even restructure business models. This enormous potential makes the Internet of Things relevant to virtually every industry, including energy, retail, healthcare, financial services, transportation and manufacturing.
The range of potential new applications is vast, from smart infrastructure with automated lighting and energy management, through optimized solutions for inventory, logistics and supply chain management, to intelligent manufacturing systems.
However, IoT puts completely new challenges on the table when it comes to scalability. Gartner, the market research firm, expects 6.4 billion devices to be connected via IoT in 2016, a massive increase of 30% compared to 2015. Want to be more surprised? That amounts to 5.5 million devices getting connected every day in 2016.
By 2020, the number is predicted to reach around 20.8 billion. In the future, a single intelligent system could collect and analyze billions of data objects from millions of different endpoints. This places exceptional demands on processing power, storage and the connecting networks.
A coin has two sides
This growth is a positive sign for the whole industry, but we cannot ignore the sheer size and public nature of the Internet of Things, which will bring great challenges. Network and system architects need to optimize the IT infrastructure to meet the higher requirements of IoT in terms of scalability, reliability and security. IoT-based applications and automated business processes place high demands on system availability; many intelligent systems run mission-critical applications where downtime leads directly to lost productivity.
Distributed IoT solutions add further major security challenges, since the systems are connected via the Internet and use computing power or storage resources from the data center or the cloud. It is therefore very important for companies to improve their security architecture to protect themselves effectively against data loss, theft and increasingly sophisticated DDoS attacks. This includes comprehensive authentication, authorization and auditing capabilities, which create trust, control access to resources, and ensure compliance with legal requirements and corporate guidelines. It is also essential for companies to use resilient encryption methods to protect their intellectual property and customer data from theft.
A layer model addresses the technical requirements
Intelligent IT solutions, such as those built on Red Hat technologies, are designed to meet the requirements of IoT-based systems for scalability, reliability and security. They follow a hierarchical model with a device layer (edge nodes), a control layer (controller gateways) and a data center or cloud layer, using standardized protocols and components throughout.
The device layer comprises a variety of intelligent endpoints, including mobile devices, wearable gadgets, sensors, control and regulation devices, autonomous machines and appliances. Communication between the devices and the control points uses standard network protocols, either wired or wireless, while the forwarding of raw data and the exchange of control information rely on open messaging standards.
For communication between edge and controller, MQTT (Message Queuing Telemetry Transport) comes into the picture. It was originally developed by two companies, IBM and Arcom Control Systems, as part of a project to monitor an oil pipeline. The protocol is characterized by high reliability, is well suited for mobile networks and networks with low bandwidth, and requires little code to implement. MQTT was released under a royalty-free license in 2010, has been standardized at OASIS since 2013, and has in the meantime reached version 3.1.1.
MQTT follows a publish/subscribe pattern based on a hub-and-spoke architecture. The transmitter (producer) of a message does not communicate directly with a receiver (consumer), but through a broker. Thanks to this decoupling, a producer, for example a sensor, can transmit data even when no receiver, such as a gateway, is online. Moreover, the producer has no way of knowing whether its data reaches one consumer or many. The broker acts as a server for producers and consumers, both of which connect to it as clients. MQTT operates completely independently of the actual content of the messages.
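The decoupling described above can be sketched in a few lines of Python. This is an in-process toy, not a real MQTT broker; the `Broker` class and its method names are purely illustrative:

```python
from collections import defaultdict

class Broker:
    """Minimal in-process sketch of the hub-and-spoke pattern:
    producers and consumers never talk to each other directly."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        # A consumer registers interest in a topic.
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # The producer hands the message to the broker, which fans it out
        # to every subscriber; the producer never learns how many
        # consumers (if any) received it.
        for callback in self._subscribers[topic]:
            callback(topic, payload)

received = []
broker = Broker()
broker.subscribe("sensors/temperature", lambda t, p: received.append((t, p)))
broker.publish("sensors/temperature", 21.5)   # delivered to the subscriber
broker.publish("sensors/humidity", 40)        # no subscriber, silently dropped
```

Note that the second publish simply disappears: without a subscriber and without a delivery guarantee, the broker has nowhere to send it, which already hints at the quality-of-service levels discussed below.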
Unlike a client/server protocol such as HTTP, MQTT is event-oriented. With HTTP, the client must ask the server whether there are new messages; in MQTT's event-driven model, the broker informs the consumer as soon as information on a particular topic arrives. However, there are no 1:1 queues as in the Java Message Service (JMS).
Consumers register for one or more topics and are therefore referred to as subscribers. A topic groups related messages together, for example the temperatures measured by a sensor.
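Topics are hierarchical, and MQTT topic filters additionally support wildcards: `+` matches exactly one level, `#` matches all remaining levels. A simplified matcher, sketched in Python (the function name is ours, not part of any MQTT library):

```python
def topic_matches(filter_str, topic):
    """Check a simplified MQTT-style topic filter against a topic name.
    '+' matches exactly one level, '#' matches all remaining levels."""
    f_parts = filter_str.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True          # multi-level wildcard swallows the rest
        if i >= len(t_parts):
            return False         # filter is longer than the topic
        if f != "+" and f != t_parts[i]:
            return False         # literal level must match exactly
    return len(f_parts) == len(t_parts)

topic_matches("sensors/+/temperature", "sensors/pump1/temperature")  # True
topic_matches("sensors/#", "sensors/pump1/pressure")                 # True
topic_matches("sensors/+", "sensors/pump1/pressure")                 # False
```

A subscriber to `sensors/#` would thus receive the readings of every sensor at once, while `sensors/+/temperature` selects only the temperature values.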
The MQTT protocol provides three quality-of-service (QoS) levels. At the lowest level, QoS 0, a message is delivered at most once: the producer sends a publish message and takes no further care of it. This fast and resource-efficient form of message exchange is also called "fire and forget". There is no guarantee that the message actually arrives, so messages can be lost, but that carries little weight when the next value is transmitted shortly after a lost one.
The next QoS level ensures that a message is received by the consumer at least once, which also means that a message can be delivered several times. The producer stores the message locally and keeps it available for retransmission if needed. It sends a publish message with a packet identifier to the consumer, which acknowledges receipt with a packet containing the same packet identifier. If the producer does not receive this acknowledgment, it can resend the original, including packet identifier and content, as often as necessary. When the consumer receives a retransmission with an already-seen packet identifier, it still treats it as a new message.
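The at-least-once bookkeeping can be illustrated with a small simulation. The classes below are hypothetical and ignore the real MQTT packet format; they only show why duplicates can occur:

```python
class Qos1Producer:
    """Sketch of QoS 1 (at least once): the producer keeps every
    unacknowledged message and may resend it under the same packet id."""

    def __init__(self):
        self._next_id = 0
        self._inflight = {}   # packet id -> payload, kept until acknowledged

    def publish(self, payload):
        self._next_id += 1
        self._inflight[self._next_id] = payload
        return (self._next_id, payload)   # the PUBLISH packet

    def resend(self, packet_id):
        # No acknowledgment seen yet: resend the same id and content.
        return (packet_id, self._inflight[packet_id])

    def on_puback(self, packet_id):
        # Acknowledged: the stored copy can be discarded.
        self._inflight.pop(packet_id, None)

producer = Qos1Producer()
delivered = []
consume = lambda pkt: delivered.append(pkt)  # consumer treats every arrival as new

pkt = producer.publish("temp=21.5")
consume(pkt)                       # the first delivery arrives...
consume(producer.resend(pkt[0]))   # ...but its acknowledgment was lost, so a duplicate follows
producer.on_puback(pkt[0])         # the acknowledgment finally arrives
# delivered now contains the same message twice: at-least-once semantics
```

The duplicate in `delivered` is exactly the behavior QoS 1 permits and QoS 2, described next, is designed to prevent.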
The highest QoS level ensures that duplicates are excluded and each message is delivered exactly once. To uphold this guarantee, there is a two-step confirmation: the consumer responds to the receipt of a message with a reception packet, after which the producer sends another packet, to which the consumer in turn responds.
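The consumer side of this two-step confirmation can be sketched as a small state machine. This is a simplified model, not the real wire protocol; the packet names PUBREC, PUBREL and PUBCOMP come from the MQTT specification:

```python
class Qos2Consumer:
    """Sketch of the QoS 2 handshake (exactly once): the consumer records
    the packet id on PUBLISH and drops retransmissions until the producer
    releases the id."""

    def __init__(self):
        self._pending = set()   # packet ids received but not yet released
        self.messages = []

    def on_publish(self, packet_id, payload):
        if packet_id not in self._pending:
            self._pending.add(packet_id)
            self.messages.append(payload)   # deliver exactly once
        return ("PUBREC", packet_id)        # step 1: acknowledge receipt

    def on_pubrel(self, packet_id):
        self._pending.discard(packet_id)    # step 2: producer releases the id
        return ("PUBCOMP", packet_id)       # handshake complete

consumer = Qos2Consumer()
consumer.on_publish(1, "temp=21.5")   # original PUBLISH
consumer.on_publish(1, "temp=21.5")   # retransmission is detected and dropped
consumer.on_pubrel(1)                 # producer's PUBREL finishes the exchange
# consumer.messages holds the payload exactly once
```

Compared with QoS 1, the extra round trip costs bandwidth and latency, which is why exactly-once delivery is reserved for messages that must not be duplicated, such as billing events.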
Controllers route data to the data center or the cloud
The controller layer acts as a link between the devices and the data center or the cloud. It collects and stores device data and forwards it, for example via the Java Message Service, to the data center; conversely, it conveys control information to the equipment, all based on open messaging standards. In addition, it serves as a temporary store for data needed for tactical analysis or regulatory requirements.
An alternative to the three-layer model described above is a two-layer model in which the devices connect directly to the data center or the cloud. This model is well suited for consumer applications that require little bandwidth and no gateway layer for distributing workloads.