IoT: A Comprehensive Introduction To Connectivity 2.0

Written by hackernoon-archives | Published 2017/07/05
Tech Story Tags: connectivity | computer-science | iot | networking | internet-of-things


So you’ve heard the term ‘IoT’. You know it has something to do with technology. Unless you are a DIY builder or an active tech practitioner, chances are you don’t know much more than that.

The proliferation of connectivity in our lives today is quite simply astonishing, and it continues to grow at an ever increasing pace as multinationals and governments pour capital into connecting ‘things’.

Imagine this. You finish a hard day of work at 6 pm. You step out of your office to see your car waiting to take you home. You step in, switch on the auto-drive function, and then take a nap as the connected car plans the path to your connected home. It then autonomously works its way through a network of connected roads, and lets you know when you finally arrive. You then step into the glow of your connected home lights, to find a connected kettle that was perhaps programmed for a hot drink at 7 pm. As you drink and make a meal, the television sifts through your viewing history to see if there is something relevant to show you. As you relax and eat, you are informed by your connected speakers that the weather might be bad tomorrow. But that’s fine, because your connected car already knows this, and programs the human-centric lighting in your bedroom to wake you up earlier tomorrow. After the meal, you ready yourself for the night. The connected blanket sets the heat for a comfortable sleep, and as you put your head on the pillow, you drift into deep sleep, knowing the difference that technology has made to your life that day.

Okay, that was perhaps too idealistic. But this seems to be where we are heading today. Everything is connecting. Everyone is also connecting. Human comfort, maximized via a flow of information and conditional logic across devices and everyday items.

This article will provide a broad overview of the IoT revolution, its common architecture, and latest technologies. We start off with a brief overview of the core concept before delving into infrastructures, networks and finally the major protocols in existence today. To understand what it means to be ‘connected’, let’s start by looking at an omnipresent connectivity solution, one that exists all around us in the modern world: the internet.

THE INTERNET (OF COMPUTERS)

The internet is merely a network of computers. Everything you do ‘online’ refers to active manipulation of data from another machine somewhere in the world. In layman’s terms, this means:

a) When you access your email, you request a centralized computer — also called a ‘server’ (Gmail, Yahoo etc.) — to send email data to your machine.

b) When you make a payment online, an intermediary program (PayPal, Stripe etc.) initiates the process of deducting the amount from your bank’s server and crediting it to the recipient’s. Note that this process can be fairly long depending on the nature of the counterparties involved. A footprint of this process is then sent to your email server as a receipt of the transaction.

c) When you stream on YouTube, you are requesting YouTube’s servers to send across video data in a timely fashion (incidentally, that’s why the buffer bar loads a bit, then stays still, then starts loading again).

So whatever you do online, you are either accessing data or uploading data for others to access (think of your Facebook profile picture that your friend can access) via machine-to-machine (M2M) communication. If one wanted to be precise about it, today’s ‘internet’ could just as well be called an ‘internet of computers’.

Replace the ‘C’ with ‘T’, and you get to the Internet of Things.

THE INTERNET (OF THINGS)

The genius here is that the set of ‘things’ includes computers and servers as well as everyday items. So rather than two computers talking to each other, a computer can now talk to a thermostat (Nest) or a light bulb (Philips Hue) directly. Two traditional ‘things’ can be made to talk: a door sensor can now talk to a camera when someone enters the house (IoT-based surveillance). The fact is that any combination of computers and tangible objects can interact with each other as many times as needed, and these devices can interact with further devices down the chain.

Combine this with global economic trends and you get a feel for the scale of this operation. With each passing year, the emerging ‘third’ world continues to lift in excess of 25 million people above the poverty line. Fishermen, farmers, shoe repairers, toddy tappers, the traditionally underprivileged and the disadvantaged, are all emerging consumers of a pivotal 21st century technology: the cellphone. In fact, connectivity is spreading so rapidly that analysts forecast as many as 50 billion connected devices by 2020. That’s 3 years from now. But take it a step further: who is to say that these systems are to act alone? If you visit the streets of Trivandrum (my home city in India) today, you will find a raft of cars, carts, trucks, rickshaws, lorries and people moving every which way in a haphazard attempt to reach their destinations. The only rule is that there are no rules. If you visit the same street in 2070, it’s more than likely that you will find an integrated system of transport — one that regulates itself, one that sources information from surveillance, weather systems, underground subway lines, airports and other local systems, to feed a diet of pertinent and timely information to the future consumer class. Intelligent systems are already paving the way for intelligent systems of systems, and everything is connecting to everything else. That seems to be the likely outcome of the information age we all live in today.

Who knows, maybe we will soon be talking about an internet of ‘X’ where the X is literally everything imaginable — computers, things, human thoughts, behaviors, and so on… The internet of literally everything imaginable sounds rather odd though, so the next Kevin Ashton is going to have to be more creative with words :).

But that’s all in the future. For now, let’s go deeper into the architectural elements of such a system.

THE IOT ARCHITECTURE

Every IoT system is characterized by a combination of edge devices (called nodes), a cloud (back-end servers) and, possibly, a gateway.

My (rather horrendous) drawing above captures the conceptual framework behind most IoT systems. Edge devices generally fall into two categories, namely actuators (bulbs, radiators etc.), which act on a command, and sensors (door sensors, thermostats, sensor cameras etc.), which sense their environment. Information about the state of each device is relayed to a central cloud server via one of two duplex communication methods — push (a camera pushes an image of an intruder into a cloud database), or pull (the cloud server periodically pulls house temperature data, say every half hour). In any case, the user maintains full control of all devices via a network interface on their phone, tablet or another device.
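To make the push and pull patterns a little more concrete, here is a minimal Python sketch. Everything specific in it (the endpoint URL, the device ID, the JSON field names) is hypothetical and purely for illustration; a real deployment would use a proper cloud SDK, authentication and retries.

```python
import json
import time
import urllib.request

CLOUD_URL = "https://example-iot-cloud.invalid/api/readings"  # hypothetical endpoint

def push_reading(device_id: str, temperature_c: float) -> None:
    """Push model: the edge device reports its state to the cloud on its own initiative."""
    body = json.dumps({
        "device": device_id,
        "temperature_c": temperature_c,
        "timestamp": time.time(),
    }).encode("utf-8")
    request = urllib.request.Request(
        CLOUD_URL, data=body,
        headers={"Content-Type": "application/json"}, method="POST",
    )
    urllib.request.urlopen(request)  # the cloud stores the reading

def pull_reading(device_id: str) -> dict:
    """Pull model: the cloud (or a poller acting on its behalf) asks for the latest state."""
    with urllib.request.urlopen(f"{CLOUD_URL}/{device_id}") as response:
        return json.loads(response.read())

# push_reading("thermostat-livingroom", 21.5)   # a device pushing its temperature
# pull_reading("thermostat-livingroom")         # the cloud pulling it every half hour
```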

Note that all devices depicted above use gateways to communicate with the Cloud. This happens when the communication standard native to the device network is different to that of the cloud server. We will return to this concept later on, but the primary purpose of a gateway is to act as a ‘translator’ of sorts.
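As a rough sketch of the ‘translator’ role, the snippet below receives a message in a made-up local-radio format and re-emits it in a made-up cloud format. Both formats are invented for illustration; real Z-Wave frames and cloud APIs are nothing like this simple.

```python
import json

def translate_to_cloud(local_frame: dict) -> bytes:
    """Toy gateway translation: repackage a made-up local-radio frame
    into the JSON a made-up cloud API expects."""
    return json.dumps({
        "device_id": local_frame["node_id"],
        "capability": local_frame["command_class"],
        "value": local_frame["value"],
    }).encode("utf-8")

# A door-sensor report arriving over the local radio network...
frame = {"node_id": 12, "command_class": "SENSOR_BINARY", "value": 1}
print(translate_to_cloud(frame))  # ...leaves the gateway as a cloud-friendly payload
```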

Having looked at the architectural elements of IoT, let’s now try to understand how these things ‘talk’ with one another. We will first review the OSI model — the holy grail of all connectivity solutions — and then move on to look at the major protocols and standards of today.

THE OSI MODEL

The Open Systems Interconnection (OSI) model is a framework for sending data across devices. See the layered structure below:

The OSI Stack

(Note that the application, transport, network and physical layers are the most important ones to know; hence the variation in color)

To understand what the figure means, let’s consider an event we are all familiar with: the transfer of a file from one phone to another. When you choose your file and hit ‘Send’, your phone has to first figure out the core payload. The payload is a string of bits (ones and zeroes), whose length depends on the type of file (image, video etc.) and its respective features (pixel color, brightness, frame rate, compression etc.). Similar to how humans exchange information in the form of letters and words, machines exchange information via bits and bytes (combinations of bits that mean something). The simplest IoT use case for this concept is 1 = ‘Turn light bulb on’ and 0 = ‘Turn light bulb off’.
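As a toy illustration of ‘payload as bits’, the snippet below shows how both a command and a short piece of text reduce to ones and zeroes. The single-bit bulb command is this article’s running example, not any real standard.

```python
BULB_ON, BULB_OFF = 1, 0   # the entire payload is one bit in this toy example

def to_bits(data: bytes) -> str:
    """Render a payload as the string of ones and zeroes that actually travels."""
    return " ".join(f"{byte:08b}" for byte in data)

print(to_bits(bytes([BULB_ON])))      # 00000001 -> 'turn light bulb on'
print(to_bits("Hi".encode("utf-8")))  # 01001000 01101001 -> the letters 'H' and 'i'
```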

The payload data then works its way through each layer of the OSI model, after which it is routed to its destination via telecommunications networks. You might ask what ‘working its way through each layer’ means. Two words: encapsulation and decapsulation.

Encapsulation is the first phase, and it can be compared to a production process. Take your laptop as an example. It started life as a generic circuit board/PCB at the start of a production chain. A whole host of electronic components — sockets, slots, power connectors, clock generators etc. — were added to transform the PCB into your alpha motherboard. We then had memory components (ROM, RAM modules), processing units (microcontrollers, chipsets) and other control structures added to produce the core of a modern computer. After this, we added the main HDD (hard disk) and video card interfaces. On top of all this, we added a screen, a surrounding casing and a few ports so that the final structure — your laptop — looks appealing to use. Now think of our payload as the initial PCB. The job of each layer, then, is to ‘add’ bits to the payload to make it ready for transmission. In other words, the payload is ‘encapsulated’ at each layer by layer-specific information, as seen below.

Payload Encapsulation

The output from the last layer becomes the ‘payload’ for the next layer, which proceeds to add its own overhead (TH = Transport header, NH = Network header, DLH = Data Link header) to whatever it is provided with.
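Here is a hedged sketch of encapsulation using the header names from the figure. The header contents are placeholder strings; real headers carry structured binary fields (ports, addresses, checksums), not labels.

```python
def encapsulate(payload: bytes) -> bytes:
    """Wrap the payload layer by layer; each layer treats whatever it
    receives from above as its own 'payload'."""
    transport_pdu = b"TH|" + payload         # transport header (e.g. ports, sequencing)
    network_pdu = b"NH|" + transport_pdu     # network header (e.g. source/destination addresses)
    frame = b"DLH|" + network_pdu            # data link header (e.g. MAC addressing)
    return frame

print(encapsulate(b"photo-bits"))
# b'DLH|NH|TH|photo-bits' -- now ready for the physical layer to transmit
```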

The benefit of such an approach is that each layer can be made to serve very specific functions. In an IoT context, the most important layer is the application layer, as it is key to interoperability (the extent to which devices can talk to a variety of other devices). If device A has a different application layer to device B, they cannot directly communicate with each other. As we shall soon see, this is the reason why consumers of ZigBee devices, for example, often experience difficulty in inter-vendor device communication. Moving on, the network and transport headers include message source/destination addresses, forwarding and routing information etc. Data link headers include MAC addresses, LLC multiplexing, frame synchronization and other information related to local delivery. A full description of functions is beyond the scope of this article, but the general idea is that each layer is independent of the one above and below it, making its functions exclusive and unique within the overall OSI operation.

The physical layer (PHY) defines how the data is to be transmitted. Several methods exist, from varying a signal’s frequency, phase or amplitude (F/P/A shift keying) in a multitude of ways (binary, quadrature etc.). We don’t need to know about these in detail, but note that the difference between well-known transfer protocols (WiFi, Bluetooth) can come down to the formal definitions at this layer.

Now enter the final stage: modern telecommunications. Beyond the physical layer, the message (payload plus all overheads) is routed towards its destination via a combination of hops (over other devices) and telecommunication channels (wires, phone lines, cables etc.). There is, of course, always a chance that the message or parts of it will be lost during transmission. However, given low interference and today’s ubiquitous telecom networks, your message should arrive at the other phone’s physical layer. What happens now?

Decapsulation.

Each OSI layer at the receiver end removes its respective overhead, until the core payload is exposed. This payload is then stored as a file (e.g. ‘transferredphoto.jpg’) on the phone’s memory, which can be accessed via a respective user application.
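Continuing the toy sketch from the encapsulation example above, decapsulation simply strips those placeholder headers in reverse order until the original payload is exposed:

```python
def decapsulate(frame: bytes) -> bytes:
    """Peel off the toy headers one layer at a time to expose the payload."""
    for header in (b"DLH|", b"NH|", b"TH|"):
        assert frame.startswith(header), "malformed frame"
        frame = frame[len(header):]
    return frame

print(decapsulate(b"DLH|NH|TH|photo-bits"))  # b'photo-bits', saved as e.g. 'transferredphoto.jpg'
```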

That’s Telco 101 finished; let’s now relate this to IoT and its protocols.

PROTOCOLS

A ‘protocol’ is just a permutation of the OSI stack we met earlier. Some IoT protocols define all the layers, and others define only a few. Regardless, the underlying OSI structure is the bedrock for all M2M (machine-to-machine) communication methods.

Like the early days of the internet, the IoT space at the moment is a warzone between various protocols. The most significant contenders in the connectivity race are WiFi, Z-Wave, ZigBee, Thread and Bluetooth, along with a few trailing proprietary solutions. We do not know which one will win, but we can be fairly certain that one of them will emerge victorious. Let’s see which one we should bet on…

WiFi

WiFi is the classic connectivity solution. According to the WiFi Alliance, it carries half of all global internet traffic through a ubiquitous network of phones, tablets, laptops and other electronic devices, so it should, by all means, have outstripped its competition by every yardstick in the world. Unfortunately (for WiFi), this has not been the case.

From an OSI standpoint, WiFi technology is defined by the IEEE 802.11 standard for the physical and data link layers. It then uses well known TCP/UDP and IP standards for transport and network layers, and leaves the application layer undefined.

WiFi OSI Stack

What this implies is that WiFi devices cannot communicate with each other, at least not directly. You might then wonder how WhatsApp calls or email exchanges work. They work because you (or another human) act as the de facto application layer. You know that to initiate a call, you have to download the app, create an account, add the contact, tell them to be online at the time of the call, and then finally hit the call button. Remember, a protocol is just an OSI permutation defined by layer functions. In WiFi’s case, you perform the duties of the application layer. ‘Things’ in the Internet of Things require hard-coded logic, and are (fortunately) not as clever as us. As such, WiFi is non-interoperable.

WiFi infrastructures are also based on a star topology. This means that device-to-device communication is achieved only via a mediating router. Thus, if the router fails, all further communication attempts fail. This single point of failure might not matter in home environments, but can cause a problem in enterprise-grade IoT (e.g. commercial lighting, smart street lights etc.).

Star Networks
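A toy model of that single point of failure (purely illustrative, not how a real WiFi stack is structured):

```python
class StarNetwork:
    """Toy star topology: every message must pass through the central router."""

    def __init__(self, devices):
        self.devices = set(devices)
        self.router_alive = True

    def can_communicate(self, sender: str, receiver: str) -> bool:
        # No direct device-to-device path exists; the router mediates everything.
        return self.router_alive and {sender, receiver} <= self.devices

home = StarNetwork({"bulb", "thermostat", "camera"})
print(home.can_communicate("bulb", "camera"))   # True
home.router_alive = False                       # the single point of failure
print(home.can_communicate("bulb", "camera"))   # False: the whole network goes quiet
```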

Moreover, WiFi was invented for devices with high data rate and bandwidth requirements. Depending on the PHY type (b/g/n), this equates to an impressive 10 to several hundred megabits per second. So using good ol’ WiFi allows us to stream a YouTube movie involving the transfer of millions of bits. To facilitate such transfers, WiFi is extremely power-hungry, and this capacity is actually overkill in resource-constrained IoT environments. To take an earlier example, we just want to tell our bulb to ‘turn on’ (represented by a 1) and ‘turn off’ (represented by a 0) — the payload can be fully represented by a single bit. The IoT vision involves a world where globally deployed sensors and nodes feed us data, and in this world, we cannot use solutions that gorge on electricity, or demand frequent battery changes for that matter.

WiFi does, however, have one major strength to offer: its existing network infrastructure. Every mainstream consumer electronic device today comes with a WiFi module built in; the only other protocol with this benefit is Bluetooth. If we require every device to communicate with every other device on Connected Earth 2.0, wouldn’t it be cost effective to start with something that is already built into as many existing devices as possible? Perhaps. But when you imagine powering billions of sensors and items with considerable amounts of electricity, while those devices cannot talk to each other directly, you might think twice before adjudicating in WiFi’s favor. Another issue is that the growing global consumer market is causing WiFi’s 2.4 GHz ISM band to become somewhat overcrowded. I certainly wouldn’t be impressed if I trigger a command to turn my bulb off, and the signal gets ‘delayed’ because the band is prone to interference.

What we need are low power, interoperable solutions. And sadly, WiFi’s current state cannot achieve this.

Let’s now look at the next standard on our race, Z-Wave.

Z-Wave

Z-Wave was introduced as an official standard by Zensys in 2003; the company was later acquired by Sigma Designs.

While WiFi’s strength is its strong network, Z-Wave’s strength is its IoT penetration. The United States is leading the world in consumer home automation, and Z-Wave is the most prominent standard in terms of market position. Abundant anecdotal evidence points to over 1700 existing devices in the market and several million compatible units in active circulation. For manufacturers and vendors in home automation, these figures equate to a potential goldmine. Take a look at the Z-Wave OSI stack:

Z-Wave OSI Stack

There are certain interesting contrasts when you compare Z-Wave with WiFi. As you probably figured out from the figure above, Z-Wave covers the full technology stack and is therefore an interoperable solution. Any Z-Wave device can communicate with any other Z-Wave device, even if they are from separate vendors. This cross-vendor interoperability is perhaps a major reason why Z-Wave has garnered the positive attention that it has. Secondly, Z-Wave’s electronics are more in line with what is expected in a constrained environment. Depending on the particular module, you can choose between 9.6, 40 and 100 kilobits per second, which certainly covers most smart home applications and perhaps even enterprise-grade IoT solutions. Energy consumption and transmission band tick the box too, as the module uses very little power in a relatively unused 868–910 MHz band. Lastly, Z-Wave’s topology is different to the WiFi star topology we met earlier. Two devices in a Z-Wave network can communicate with each other without the help of a mediator, whereas in WiFi’s case, this can only happen via a router. Let’s take a more detailed look at how this works.

The key point is as follows: each device in a ‘Z-network’ acts as a sender, receiver and repeater of messages. This comes in handy if, say, the destination is far away and not directly accessible to the source. Think of the message starting at point A and hopping across several other points until it reaches point B. Even before initiating any message transfer, Z-Wave works out a path to the destination via a set of other Z-Wave devices. The protocol therefore uses source-based routing to facilitate a mesh network. Other standards like ZigBee also form mesh networks, but as we shall soon see, they rely on different routing techniques.

Mesh Networks

There are pros and cons to this approach. The obvious benefit of the arrangement above is the improvement in communication distance. The mesh network increases the protocol’s maximum range from 100 m LOS (line of sight) to easily around twice that. Assuming your environment can be contained within the area of a football pitch, Z-Wave should be able to do all the device talking for you. However, there is a central problem. Due to the source-based routing procedure, the transmission will fail if any node on the planned path breaks down (e.g. a broken fuse). While the source has mechanisms to attempt other routes (explorer frames) in this case, the healing procedure can cause unacceptable delays in transmission. You know how you are used to pressing a switch and your appliance turning off instantly? How weird would it be if you had to wait 10 seconds after ‘pressing’ off before the thing actually turns off?! Latency like that makes for very odd UX, right?
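A rough sketch of the source-routing idea, as an analogy rather than the actual Z-Wave algorithm: the sender fixes the full hop list before transmission, and a single dead node on that pre-planned path breaks delivery.

```python
def source_route(path: list, alive_nodes: set) -> bool:
    """Source-based routing, toy version: the path A -> ... -> B is decided
    up front; forwarding fails if any planned hop is down."""
    for hop in path:
        if hop not in alive_nodes:
            print(f"hop '{hop}' is down -> delivery fails; fall back to explorer frames")
            return False
        print(f"message relayed by '{hop}'")
    return True

alive = {"A", "lamp", "wall-plug", "B"}
source_route(["A", "lamp", "wall-plug", "B"], alive)                  # delivered
source_route(["A", "lamp", "wall-plug", "B"], alive - {"wall-plug"})  # fails mid-path
```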

The last thing worth mentioning about this protocol is the requirement of a gateway for communication with devices outside the network. Say you wanted your phone to command your Z-kettle to boil some water in the morning. Assuming your phone is connected to either local WiFi or a 3G/4G network, the command has to be routed through a gateway before reaching the kettle. Why? Because a WiFi stack is different to a Z-Wave stack (see earlier). Your phone sends a ‘WiFi data packet’, which is different to a ‘Z-Wave data packet’, and as such, someone needs to translate for the kettle to understand the message. Internet-based control of any Z-device can only be granted through a gateway that is paired with a local home router. As such, every Z-Wave network has to include one of these things.

Overall, a pretty good standard with the right idea and definitely a potential race winner.

ZigBee

ZigBee started life during the 90s, but it wasn’t until 2004 that the ZigBee Alliance first published an official standard.

Ever since the early 2000s, ZigBee and Z-Wave have been battling it out for domination in the smart space. They both facilitate low-power, low-bandwidth, mesh-based networks. ZigBee’s data rate tops out at around 250 kilobits per second, and power consumption is less than 1 W in typical applications; certainly favorable numbers in the IoT scene. Let’s look at ZigBee in relation to the OSI model.

ZigBee OSI Stack

This protocol only defines the upper layers of the OSI stack, and delegates the lower-layer functions to the well-known IEEE 802.15.4 radio. Note that unlike 802.11, which is used only by WiFi, the 802.15.4 radio can be used by other solutions too, including proprietary ones.

Unlike any protocol so far, ZigBee has several standard application layers defined — Light Link, Home Automation, Smart Energy etc. Additionally, a manufacturer could, if they wished, adopt a proprietary application layer of their own. This seems reasonable at first glance: vendors can choose between delivering products that (a) work within standardized ZigBee ecosystems, or (b) only work within their own ecosystems. So depending on whether their manufacturers implemented the same application profiles, two ZigBee devices may or may not be able to communicate with each other. The incoming ZigBee 3.0 standard is meant to provide complete backward compatibility with both the Light Link and Home Automation profiles, but it remains to be seen whether this will make the protocol fully interoperable. As it stands, the protocol sits somewhere between interoperable and non-interoperable.

Some features are worth comparing between the two ‘Z’ protocols. Both require gateways for global device control, for the reasons outlined earlier. However, while Z-Wave has a limit of 232 devices per network, ZigBee, as per the ZigBee Alliance, can supposedly deal with over 60,000 devices. No conclusive evidence exists for this claim, but if true, ZigBee is perhaps the best-placed protocol to support a large-scale IoT application (e.g. a smart city). However, caution still has to be exercised: 60,000 devices sending messages will undoubtedly clog the network, and this is made worse by the fact that ZigBee operates on the same interference-prone 2.4 GHz ISM band as WiFi. Communication gurus among you will also recognize that the higher frequency band implies lower penetration through common obstacles (walls, furniture etc.). In contrast to Z-Wave, ZigBee uses destination-based routing. Should any node on a given transmission path fail, the self-healing process reroutes the message towards its destination fairly fast. ZigBee also supports multicasting, the ability to send a message to several devices in a single transmission. As of now, Z-Wave does not support this feature.
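To contrast with the source-routing sketch in the Z-Wave section, here is a toy ‘self-healing’ reroute: when a node dies, the network simply searches for another surviving path to the destination. The breadth-first search below is only an analogy; real ZigBee route discovery is rather more involved.

```python
from collections import deque

def find_route(links: dict, source: str, destination: str) -> list:
    """Breadth-first search for any surviving path between two nodes --
    a stand-in for the mesh's self-healing route discovery."""
    queue, seen = deque([[source]]), {source}
    while queue:
        path = queue.popleft()
        if path[-1] == destination:
            return path
        for neighbour in links.get(path[-1], []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return []  # no route left at all

mesh = {"A": ["lamp", "plug"], "lamp": ["B"], "plug": ["B"], "B": []}
print(find_route(mesh, "A", "B"))                  # e.g. ['A', 'lamp', 'B']
damaged = {"A": ["plug"], "plug": ["B"], "B": []}  # 'lamp' has failed
print(find_route(damaged, "A", "B"))               # rerouted: ['A', 'plug', 'B']
```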

ZigBee is certainly ahead of WiFi in our race, but it’s unclear where it stands in comparison to Z-Wave. ZigBee has perhaps had slightly more success in enterprise IoT as it gives manufacturers a greater ability to customize, but who is to say that consumers won’t trump enterprises?

On to the baby contender in the race: Thread.

Thread

You’ve probably figured out by now that the connectivity market is highly competitive. With a myriad of established players and proprietary solutions, it would be fair to rank this market near the bottom of any list of attractive business ventures. But Thread thinks different.

The Thread Group, formed in 2014, is the newest contender in our race. Launched with the sole aim of focusing on the smart home segment and capable of supporting only around 250 nodes at maximal usage, this standard promised to specifically address the consumer space. But has it delivered?

Not yet. 2016 came to a close without a single Thread-certified mass-market product, and so far, 2017 has witnessed only minor adoption (NXP software stacks, Samsung ARTIK & IoTivity etc.). Contrast this with the number of ZigBee-certified products and you get a feel for what Thread is up against. One could say that it takes a while before any new technology establishes itself. This is certainly true if you look at Z-Wave and ZigBee, which have together accumulated over 15 years since birth. So it might not be Thread’s fault that it’s behind as of now, but we do expect a lot from it in future.

Thread OSI Stack

Take a look at the software stack above. Given that its radio is the same as ZigBee’s, the protocol delivers a similar data rate of 250 kilobits per second on 2.4 GHz. It should also be clear by now that Thread, on its own, is not interoperable, since the application layer is left undefined.

Thread’s major strength, perhaps, is the way it defines what it does define. Thread, like several others, is a mesh network. Moreover, as their names suggest, the network and transport layers deal with accurate message routing, and Thread specifies these with the largest scale of devices in mind. Using the 6LoWPAN framework underneath UDP (a transport-layer technology), Thread is currently the only protocol in our race with the ability to carry IPv6 data packets over the 802.15.4 radio. That makes Thread networks IP addressable. Similar to how your laptop has an IP address in a network setting, Thread products can be reached directly if an authorized router sends information straight to the Thread product. If you wanted to do that in a ZigBee setting, you would need a secondary gateway. However, one thing to bear in mind is that, as we saw in WiFi’s case, IP connectivity still requires a human to mastermind the OSI procedure. Thread also claims to have no single point of failure, and this is partially true. Certain nodes, such as routers and REEDs (router-eligible end devices), have the capability to downgrade and upgrade their functionalities under specific instances of network disruption. If, for example, the assigned leader fails, another node takes its place autonomously. However, as you can probably imagine, in networks with one router and several ‘sleepy end devices’, the failure of the router would still equate to a failure of the network. So this argument also has to be taken with a pinch of salt.
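The practical upshot of IP addressability is that, in principle, a Thread node can be reached with an ordinary IPv6 socket. The address, port and one-byte payload below are entirely hypothetical; in practice the traffic would flow via a Thread border router and typically use CoAP or a vendor API rather than raw UDP.

```python
import socket

THREAD_NODE_ADDR = "fd11:22::abcd"   # hypothetical mesh-local IPv6 address of a Thread node
THREAD_NODE_PORT = 5683              # CoAP's default UDP port, used here only as an example

def send_command(payload: bytes) -> None:
    """Send a UDP datagram straight to the node's IPv6 address --
    no protocol-translating gateway needed on the IP side."""
    with socket.socket(socket.AF_INET6, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (THREAD_NODE_ADDR, THREAD_NODE_PORT))

# send_command(b"\x01")  # e.g. the single-bit 'turn on' payload from earlier
```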

Having said that, the group does have support from a variety of major corporations. CES 2017 in Las Vegas showcased several major firms — Google, Nest, Samsung, OSRAM and Texas Instruments, to name a few — developing proprietary solutions using aspects derived from the Thread protocol. We have an evolutionary tendency to do what others do, so who knows; maybe Thread will gain massive adoption by piggybacking on the success of existing multinationals.

To me, what makes this standard interesting is its striking technical resemblance to ZigBee. The radio, bandwidth, topology and frequency band are all roughly the same. Even power consumption is pretty similar. The Thread stack is more complex than ZigBee’s with its incorporation of 6LoWPAN, and therefore might consume more power (and need bigger microcontrollers), but it remains to be seen whether this difference is anything more than marginal. In many ways, Thread looks almost like a successor to ZigBee — its suave, polished offspring — and it should not come as a surprise that the two groups have agreed to collaborate on running ZigBee’s application-layer clusters over Thread. Parents and children often quibble, but probably don’t mean any real harm to one another, right? Well…maybe :)

Let’s move on to our final major protocol, the cosmopolitan Bluetooth.

Bluetooth

Cosmopolitan in the sense that, like WiFi, Bluetooth is found in every major electronic device of today. Used mainly for direct, user-initiated duplex communication between two devices, Bluetooth is familiar to most people. Staunch supporters include the SIG (Bluetooth’s Special Interest Group), as well as major platform vendors such as Microsoft (Windows) and Apple (iOS).

This ubiquity is a major strength for Bluetooth. The concept of ‘just connecting and sending data between any two devices’ is alien to ZigBee, Z-Wave and Thread, but not to Bluetooth. Any modern phone can connect to any other phone, laptop or Bluetooth-enabled peripheral and start exchanging data packets. Excluding WiFi, this is the only protocol with that feature. However, it is important to realize that the strength of this argument rests only partially on Bluetooth’s competence as a standard, because as you might know, it was invented all the way back in 1994 by Ericsson. With a 23-year history and a timing that coincided almost perfectly with the 90s spike in mass technology adoption, Bluetooth was literally in the right place at the right time to free ride. So, has it really got the legs to lead IoT now?

Its model is depicted below:

Bluetooth OSI Stack

Interoperability and a full stack? Check. To be more precise, the stack shown above is Bluetooth 4.0, also known as Bluetooth Smart. The original classic Bluetooth was meant to be a data-heavy solution, clocking in at an impressive data rate of 3 megabits per second, with a range of 10–100 m and a maximum of 7 devices per piconet (the local Bluetooth network).

Bluetooth Smart has modified these to be more ‘IoT relevant’. Its theoretical range is greater than the classic version’s, but its data rate has been reduced to a power-saving 1 megabit per second. Why is this important? Because full-scale IoT dictates that devices have low duty cycles, meaning they wake up only to perform their functions and go to sleep immediately after. This saves power. When expressed as a ratio between wake and sleep times, Bluetooth, thanks to its high data rate, records a lower duty cycle than other protocols. Think of sending a lot of information in a very small time (so you don’t have to be ‘awake’ very long), rather than sending smaller data components across a longer duration. But bear in mind that nearly all protocols satisfy this criterion reasonably well — Z-Wave, for instance, has a duty cycle of around 1%, even though its PHY transmission mechanism is different to Bluetooth’s. Moreover, unless your IoT use case is large scale, there probably isn’t much data to be transferred anyway. Even then, devices with lower duty cycles should theoretically last longer on small coin-cell batteries.
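A back-of-the-envelope version of the duty-cycle argument (all numbers below are made up for the example, not measured figures): the faster the radio, the smaller the slice of each reporting period it spends awake.

```python
def duty_cycle(payload_bits: int, data_rate_bps: float, period_s: float) -> float:
    """Fraction of each reporting period the radio must stay awake to send the payload."""
    time_awake_s = payload_bits / data_rate_bps
    return time_awake_s / period_s

PAYLOAD_BITS = 2_000   # hypothetical 2,000-bit sensor report
PERIOD_S = 60.0        # one report per minute

print(f"1 Mbit/s radio:   {duty_cycle(PAYLOAD_BITS, 1_000_000, PERIOD_S):.5%}")
print(f"100 kbit/s radio: {duty_cycle(PAYLOAD_BITS, 100_000, PERIOD_S):.5%}")
# The faster radio is awake for a tenth of the time, all else being equal.
```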

Both versions also feature adaptive frequency hopping. Whereas all the IoT standards so far transmit on fixed channels, Bluetooth dynamically hops between specified channels to ensure fast message delivery and to avoid noisy, interference-heavy spots in the band.
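A loose sketch of the hopping idea, just to show the intuition of cycling through channels while skipping the ones flagged as noisy. This is nothing like the real Bluetooth hop-selection algorithm, which derives its sequence from the connection’s parameters.

```python
import itertools

DATA_CHANNELS = list(range(37))   # Bluetooth LE defines 37 data channels
noisy = {1, 2, 3, 17}             # channels flagged as interference-prone (e.g. near busy WiFi)

def channel_hopper(channels, blocked):
    """Yield the next usable channel forever, skipping any blocked ones."""
    for channel in itertools.cycle(channels):
        if channel not in blocked:
            yield channel

hopper = channel_hopper(DATA_CHANNELS, noisy)
print([next(hopper) for _ in range(8)])   # [0, 4, 5, 6, 7, 8, 9, 10]
```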

Bluetooth’s integration capabilities are also worth accentuating. We have already seen that the technology is built into most electronic products today. In addition to this large ecosystem, the availability of iBeacons (Apple) and Eddystone beacons (Google) paves the way for an enormous range of value-added services. Think of location-based services, for instance. Your car can quite literally be ‘led’ to an open parking space if a beacon is attached to the space. Another example: take well-being. Trains in Busan, South Korea use iBeacons to help pregnant women find empty seats. Several digital food brands are embracing similar technology to provide customized meal plans to shoppers. All this is on top of extensive beacon use in streamlined inventory management and supply-chain tracking. You can see how distinct transmitters capable of passing identifier information to other devices have the effect of multiplying the potential use cases by a staggering amount. Bluetooth is the only technology with this advantage.

It is a bit of a shame that Bluetooth currently does not have mesh networking capability. Devices still have to be within range of each other to communicate via Bluetooth. In 2016, the SIG’s working group announced plans to add mesh capabilities to the existing Bluetooth Smart protocol; we have yet to see how this will pan out following the recent release.

That roughly covers everything we set out to do. Let’s now wrap this gig up.

FINAL THOUGHTS

As IoT continues to penetrate our daily lives, the journey towards standardization will prove to be a major challenge. As we saw above, each protocol has its pros and cons, and we can’t predict the eventual victor easily. Interestingly, most manufacturers avoid having to deal with this question by leaving it to their customers, and as such, every vendor and reseller is betting on what they think will win. The truth is this: no one knows who will win.

One other thing we have not covered is business-related concerns. It’s not enough for a product to be ‘cool’; it also has to sell. Additionally, what about the roadmap? How do individual products fit into the wider IoT vision? These are all questions that fundamentally relate to the value proposition of any IoT system. Unfortunately, most exhibitions so far, including the most recent HKES in Hong Kong, featured plenty of gimmicky technology without any compelling use cases whatsoever. Finding IoT tech with adequate value potential is another major challenge today, and one that has to be dealt with.

In any case, I am quite convinced that connected devices are the future. The vision for IoT is very real, and we are likely to attain the level of comfort described at the start of this article. Time will tell, but for now, the uncertainty is what makes this space extremely fun to observe. To borrow a quote from my man Pliny the Elder, ‘The only certainty is that everything is uncertain’.

Thank you for reading.

