Everything You Need to Know About the Internet of Things

Written by Ingenu | Published 2016/12/01


by Michael Vedomske, Principal Data Scientist at Ingenu, and Dr. Ted Myers, CTO at Ingenu

Yup, just like the title says, this post discusses what you should know about the IoT. We provide the must-know material for sussing out the details when developing an application or considering wireless technologies for connecting a device. That’s a tall order, so we will update this post as new developments occur, as we receive requests and feedback, and as more is revealed about these technologies. In short, this is a living source of IoT information.

We break it down into three basic sections:

  1. A brief history of IoT wireless and how low-power, wide-area (LPWA) technology became the wide-area connectivity of choice.
  2. Technical review and first principles for assessing wireless protocols including topics such as battery life vs transmit power, range vs coverage vs link budget, capacity vs data rate, IoT security, and others.
  3. The business perspective and how ultimate profitability is tightly integrated with a wireless technology’s capabilities.

We try to quickly discuss issues and provide links to other materials that discuss them in more detail. As you’ll see, some things just need to be talked through. We hope you benefit, and if you do, please click the heart at the bottom to help others find it too! Enjoy!

Contents

  • The Path to Here: Legacy IoT Wireless
    ◦ Enter Low Power Wide Area Connectivity
    ◦ 2015: The Year LPWA Grew Up
  • Comparing IoT Wireless Protocols
    ◦ Will my app have coverage? Range ≠ Coverage
    ◦ One metric to rule them all: Link budget
    ◦ You don’t know a protocol’s battery life until it’s fully developed
    ◦ IoT security, don’t connect without it
    ◦ Data rate ≠ capacity = link capabilities
  • Cellular LPWA: NB-IOT and LTE-M
    ◦ Part 1: Introduction
    ◦ Part 2: Cellular LPWA Availability
    ◦ Part 3: 3GPP/GSMA is NOT Providing a Graceful Evolution Path for Machines
    ◦ Part 4: Cellular LPWA Complexity
    ◦ Part 5: Uplink Capacity
    ◦ Part 6: Downlink Capacity
    ◦ Part 7: Firmware Download
    ◦ Part 8: Robustness
    ◦ Part 9: Power Consumption
  • A Deeper Technical Dive: Categories of Low-Power, Wide-Area Modulation Schemes
    ◦ Back to Fundamentals
    ◦ Time to Get Down and Nerdy
  • The Business Side of the IoT
    ◦ Without Device Longevity, the Internet of Things Will Never Be
    ◦ The L in LPWA
    ◦ Simple IoT ROI Reality Check
  • The Economics of Receiver Sensitivity and Spectral Efficiency, or How to Run an IoT Business
    ◦ Setting the Stage: The Public Network Business
    ◦ The LPWA business begins with the carrier
    ◦ Coverage
    ◦ Capacity
    ◦ Comparing Specific Technologies
    ◦ Carrier Growth and Future Profitability Relies on Capacity Scaling, aka Cell Splitting
  • How Carriers Can Have Their Cake and Serve the IoT Too, or, Why Cellular LPWA Will Never Serve the IoT as It Should
    ◦ It just makes good business sense
    ◦ Economics and misaligned incentives
    ◦ Always second tier

The Path to Here: Legacy IoT Wireless

Wireless sensor networks have historically been served by some combination of traditional cellular or local-area solutions like WiFi, mesh, and local RF (Bluetooth, NFC, etc.). These solutions have failed to provide the catalyst needed to push the IoT into mainstream adoption for a few basic reasons. First, these traditional approaches (WiFi and mesh) require a wired power source or changing/charging batteries every 1–2 days. This limits IoT applications to scenarios where there is already a power line, or requires installing one, so only the most obvious and strongest cost-savings applications are served. Second, they have limited area and depth of coverage per access point. Applications were thus required to stay within a very limited area around the wireless source, preventing many applications from being possible. Third, they were costly to use. Even after several years and over a billion modules shipped, LTE modules still cost over $40 apiece. Mesh requires an entire network to be built out before it can be used. Local RF solutions require each business to build, manage, and maintain its own wireless infrastructure, preventing economies of scale.

Enter Low Power Wide Area Connectivity

Publicly available Low Power Wide Area (LPWA) connectivity uniquely solves each of the aforementioned problems. Low-power wide-area connectivity is pretty much what it says: wireless connectivity that covers a wide area using low power. In addition, LPWA can do so with low-cost endpoints. LPWA stands in contrast to the data- and battery-intensive 2G, 3G, or LTE cellular wireless technologies. It also contrasts with traditional cellular because it is low bandwidth. The vast majority of devices on the IoT will not need the kind of data throughput that traditional cellular is designed to provide. In fact, according to James Brehm & Associates, 86% of IoT devices consume less than 3 MB a month.

Of course there will be IoT devices that will need more bandwidth, and those will be served well by higher bandwidth solutions; but the sensors that give us the efficiencies discussed need only to periodically send a few hundred bytes to justify their value.

It should be clear that in order to achieve the grand vision of the IoT, we will need publicly available, out-of-the-box connectivity for machines and devices.

In other words, the IoT described in the press and in this article must be connected by a ubiquitous wireless service dedicated to machines (much like cellular networks are used today for human-driven voice/data connections). Both traditional cellular and LPWA are proposed public network solutions to IoT connectivity. 2G has been used for years to provide the publicly available connectivity that IoT devices need. But with AT&T finishing up their 2G shutdown at the end of this year (2016) and others following close behind, it’s clear cellular 2G isn’t the path forward.

2015: The Year LPWA Grew Up

2015 was, in many ways, the year of LPWA. Three major players have emerged as potential low power wide area connectivity providers: Ingenu, Sigfox, and LoRa. Each provides a different technology with long-reaching implications for its viability to serve the vision of the IoT. We’ll discuss these in upcoming posts.

Cellular providers are also beginning to join the LPWA movement through 3GPP’s latest work toward creating a standard that matches the LPWA criteria. (Despite the press releases, cellular LPWA isn’t quite there yet. Power usage is notoriously tricky to gauge because it involves so many interactions, so whether cellular’s latest attempts will be low power remains to be seen.) By beginning to develop cellular LPWA, cellular providers have essentially admitted that traditional cellular is not the appropriate technology to connect the IoT.

What is clear is that LPWA uniquely serves the vision of the IoT. Analysts and wireless carriers agree that LPWA will take the lion’s share of the IoT’s connectivity.

The exact numbers of LPWA connections aren’t important; we know they’ll number in the many billions. What’s important is the nature of the applications these connections enable: truly useful, efficiency enabling applications that are simple, scalable, and improve our lives directly and indirectly.

Comparing IoT Wireless Protocols

Will My App Have Coverage? Range ≠ Coverage

Often people ask, “What’s the range of your protocol?” That’s actually the wrong question. Range is not directly relevant to choosing or building a wireless technology that must have deep, reliable coverage. Here’s why. It is very easy to cherry-pick a range using ideal conditions, but coverage must account for actual real-world conditions over an entire area. Coverage tells you the probability of getting a message through anywhere in an area. Range tells you the maximum possible distance at which a message can get through.

Ingenu’s RPMA, for example, has closed links over 88 miles. But cherry-picking this range is not directly relevant to choosing a wireless protocol that must have deep, reliable coverage. We cannot draw an 88-mile-radius circle around one of our tower-based Access Points and make the credible claim that, by πr², we can cover 24,000 square miles with a single tower. There is an indirect relevance, however: the same aspect that allows RPMA to be used to build deep, reliable coverage (i.e., link budget) is the aspect that allows for some truly amazing cherry-picked results. Other technologies advertise their cherry picks, and this is one of ours, but it is irrelevant to choosing a protocol. If you want to truly compare two wireless technologies’ coverage, look at link budget.

To read more about the pretty awesome 88-mile link closure on one of our customer’s sites, check it out here. Another cool story comes from our customer in Chile, which closed a 30-mile link after a local construction accident (caught on video; the YouTube link is in the article) knocked out the closest access point.

One Metric to Rule Them All: Link Budget

Link budget is a single metric, a number in decibel units, that is the simplest way to compare any two wireless technologies. The bigger the link budget, the better coverage a wireless technology will have, period. It accounts for all the other stuff like path loss, propagation loss due to frequency choices (like 900 MHz vs 2.4 GHz), cable loss, modulation choices, receiver sensitivity, and all the rest. Science FTW! Just as subtracting all expenses from your income leaves your disposable income, link budget is what is left after all propagation losses are accounted for. Link budget can be “spent” on various tradeoffs between wider coverage, deeper coverage, and more reliable coverage.

tl;dr To instantly know which wireless technology has better coverage, compare their link budgets. That single metric accounts for everything. To read more, check out this blog post.
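To make the budget analogy concrete, here’s a minimal sketch. All the numbers are invented for illustration; real transmit powers, antenna gains, and receiver sensitivities come from each protocol’s spec and deployment.

```python
# Link budget: everything the link can "spend" on propagation losses.

def link_budget_db(tx_power_dbm, tx_gain_dbi, rx_gain_dbi,
                   rx_sensitivity_dbm, cable_loss_db=0.0):
    """Maximum tolerable path loss between transmitter and receiver, in dB."""
    return (tx_power_dbm + tx_gain_dbi + rx_gain_dbi
            - cable_loss_db - rx_sensitivity_dbm)

# Hypothetical protocol A: very good receiver sensitivity.
protocol_a = link_budget_db(21, tx_gain_dbi=0, rx_gain_dbi=6,
                            rx_sensitivity_dbm=-142)
# Hypothetical protocol B: same radio front end, weaker sensitivity.
protocol_b = link_budget_db(21, tx_gain_dbi=0, rx_gain_dbi=6,
                            rx_sensitivity_dbm=-126)

print(protocol_a, protocol_b)  # 169.0 vs 153.0 dB
```

In this made-up example, the 16 dB difference comes entirely from receiver sensitivity, and every one of those dB can be spent on wider, deeper, or more reliable coverage.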

You Don’t Know a Protocol’s Battery Life Until It’s Fully Developed

Long battery life is a key driver behind the savings and efficiencies of the IoT.

Steps to Knowing an IoT Protocol’s Battery Life

  1. Finalize the design or standard on paper. (For standards bodies this means a 100% finalized and adopted written standard.)
  2. Build a chip to the completed design or standard. (Chipmakers will often try to get a marketing head start by building a chip to an early version of a standard, but these are never the same as the finalized version.)
  3. Build an actual commercial product with the chip integrated.
  4. Assess the chip’s performance in the device under lab conditions.
  5. Deploy in real-world conditions and confirm battery-life performance.

It isn’t until after Step 5 that you can actually claim to have met (or not met) the specification and know the true battery life of a wireless protocol. It is important not to draw conclusions from anything earlier than Step 5, as performance in real-world conditions is often very different from the specs. For example, in the book The Qualcomm Equation, Dave Mock discusses how CDMA was supposed to have 10x the capacity of GSM but ended up with only 3x (still a great advantage) once the tech hit the real world. In terms of battery life, that’s the difference between, say, 10 years of battery life and 3 years—not a trivial difference.

Don’t fall for the trap of a single line-item comparison between two technologies. Just as you wouldn’t do that for a mobile phone or laptop, don’t do it for your wireless. Here’s an example, using battery life, of why. Many try to use transmit power as a single metric for comparing battery life. But battery life is one of those complicated beasts (unlike coverage, which can be summed up using link budget… and that’s just science, baby!).

When it comes to battery life, it is better to transmit quickly at a higher power than to transmit slowly at a lower power. Why? Well, that’s calculus, my dear fellow! Battery usage is the area under the power-vs-time curve, and you want to minimize that area. So sending one acknowledged message very quickly at high transmit power (e.g., RPMA) uses far less battery than sending the same message three times at lower transmit power because it isn’t acknowledged (e.g., Sigfox & LoRa technologies).
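Here’s a minimal sketch of that area-under-the-curve arithmetic, with purely hypothetical power levels and on-air times (not measured figures for RPMA, Sigfox, or LoRa):

```python
# Energy is the area under the power-vs-time curve: E = P * t.

def transmit_energy_mj(tx_power_mw, seconds_on_air, transmissions):
    """Total radio energy spent delivering one message, in millijoules."""
    return tx_power_mw * seconds_on_air * transmissions

# One acknowledged message sent quickly at high power.
acked_once = transmit_energy_mj(tx_power_mw=500, seconds_on_air=0.1, transmissions=1)
# The same payload blindly repeated three times at lower power but longer on-air time.
blind_x3 = transmit_energy_mj(tx_power_mw=25, seconds_on_air=2.0, transmissions=3)

print(f"{acked_once:.0f} mJ vs {blind_x3:.0f} mJ")  # 50 mJ vs 150 mJ
```

Even though the second radio transmits at one-twentieth the power, it burns three times the energy per delivered message in this made-up example.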

IoT Security: Don’t Connect Without It

Industry-grade security on the IoT is essential, yet most protocols used for LPWA connectivity are very light on security. Think 16- and 32-bit authentication rather than standard 128-bit AES. That’s a serious problem, because those shorter keys can simply be brute-forced (see the quick arithmetic sketch after the list below). Most don’t support compliance with national standards like NERC CIP 002–009, NIST SP 800–5, FIPS 140–2 Level 2, and NISTIR-7628. Some in the industry would like to put IP addresses on everything. In other words, they would like everything to be vulnerable to the two decades of IP-based attacks that any script kiddie can use. Any wireless protocol should be secure by design, not rely on bolted-on approaches. Security is more than encryption. It needs these security guarantees:

  1. Message confidentiality
  2. Message integrity and replay protection
  3. Mutual authentication
  4. Device anonymity
  5. Authenticated firmware upgrades
  6. Secure multicasts
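To see why 16- or 32-bit authentication is a serious problem, here’s the quick brute-force arithmetic. The guessing rate below is an arbitrary assumption for illustration, not a measured attack on any particular protocol.

```python
# How long it takes to exhaust a keyspace by brute force.
GUESSES_PER_SECOND = 1e9  # assumed: one modest GPU-class attacker

for bits in (16, 32, 128):
    keyspace = 2 ** bits
    years = keyspace / GUESSES_PER_SECOND / (3600 * 24 * 365)
    print(f"{bits:3d}-bit: {keyspace:.2e} keys, ~{years:.2e} years to exhaust")

# 16-bit falls in microseconds, 32-bit in seconds;
# 128-bit AES would take on the order of 1e22 years.
```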


Data Rate ≠ Capacity = Link Capabilities

Coverage is important because it assures that you can actually connect your application. But once you are connected, what can you do with that link? That is determined by a wireless technology’s capacity. Capacity is simply the usable throughput that a link has after all the reductions in data rate from putting a MAC on top of the PHY layer, and after overhead, security, interference, and other real-world stuff are accounted for. In other words, capacity is the amount of data you as an app developer actually get to play with and use for your users.

A data rate is a PHY-layer metric which, as anyone connecting to their 300 Mbps WiFi router knows, is not the actual throughput you experience. Why? Because there’s a lot more going on than just the physical layer. Capacity is the usable throughput, and that’s how you compare two wireless protocols. Typically this is best done by picking a single data model—like the number of 32-byte messages per hour a protocol can send—and seeing how all the protocols stack up.
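As a sketch of that kind of apples-to-apples comparison, here’s a toy data-model calculation. The PHY rates and overhead fractions are invented placeholders, not measurements of any real protocol:

```python
# Compare protocols by a common data model (32-byte messages per hour)
# rather than by headline PHY data rates.

def messages_per_hour(phy_rate_bps, usable_fraction, payload_bytes=32):
    """Payload messages per hour after MAC, security, and interference overhead."""
    usable_bps = phy_rate_bps * usable_fraction
    return usable_bps * 3600 / (payload_bytes * 8)

# A "fast" PHY that loses almost all its rate to overhead and duty-cycle limits...
fast_but_thin = messages_per_hour(phy_rate_bps=50_000, usable_fraction=0.01)
# ...versus a slower PHY that keeps most of its rate usable.
slow_but_fat = messages_per_hour(phy_rate_bps=20_000, usable_fraction=0.30)

print(f"{fast_but_thin:,.0f} vs {slow_but_fat:,.0f} messages/hour")  # 7,031 vs 84,375
```

The headline data rate and the usable capacity can point in opposite directions, which is exactly why a single data model is the fairer comparison.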

Capacity is also different for uplink versus downlink. Some LPWA protocols have almost no downlink. For example, Sigfox allows only four 8-byte downlink messages a day, and only on the most expensive platinum package. That is pretty meager. LoRa, due to duty-cycle limitations, can only support about 10% downlink and so acknowledges messages very selectively.

Play the simple game here to understand capacity’s role in a network technology’s profitability. Remember, if a network technology can’t sustain itself financially, it will go bankrupt, and any business built on it will also suffer: at minimum, the costs of redesigning its app; at worst, failure as well.


Cellular LPWA: NB-IOT and LTE-M

The following series of posts addresses the cellular standards roadmap’s (3GPP/GSMA) answer to Low Power Wide Area (LPWA) connectivity.

A Deeper Technical Dive: Categories of Low-Power, Wide-Area Modulation Schemes

At first glance, the number of communication technologies being discussed for Low-Power, Wide-Area (LPWA) networks may be a bit overwhelming. What may be helpful is to look at the underlying technology from a fundamental perspective and tune out the marketing component.

Back to Fundamentals

Many of you know that Communication Theory is a very mature field going back many decades, with a vast wealth of accumulated knowledge. Tens of thousands of books, articles, and papers have been published over this time. There are giants in the field: Claude Shannon, Harry Nyquist, Ralph Hartley, Alan Turing, and Andrew Viterbi (who has been an Ingenu strategic advisor from the beginning), whose work we can turn to for clarity. This great body of work gives us frameworks and vocabularies for comparison. It’s often a drier and less interesting world once the marketing innovation is subtracted out — but please, bear with me.

The table below shows four categorizations of the various modulation schemes being discussed for LPWA. Bold denotes those technologies being branded as applicable to Low-Power, Wide-Area networking. Since this is a technology treatment, I am defining Local Area Network (LAN) and Wide-Area Network (WAN) by the underlying technology as opposed to how these approaches are being marketed. Note that the most well-known approaches in each category are the Sigfox® technology, LoRa™ (also known as Chirp Spread Spectrum, or CSS), Narrow-Band IOT (NB-IOT), and Random Phase Multiple Access (RPMA®). These tend to be the technologies with the best marketing (yes, marketing is very important).

Four categorizations of the various modulation schemes that are being discussed for LPWA. Bold denotes those technologies being branded as applicable to Low-Power, Wide-Area networking.

From a technology perspective, the definition of being appropriate as a WAN is whether the multiple-access considerations of coverage and capacity are taken into account:

  • Coverage. If you want to build a WAN, you would like a single piece of network infrastructure (often on a tower or rooftop) to cover as much area as possible.
  • Capacity. There’s not a lot of good in covering a massive area if you cannot support the data needs of all the devices in that footprint (which, again, is why we discuss coverage rather than range: we’re concerned with serving all of the devices in an area, not just one cherry-picked one).

Giving a bit more color on the categories:

  • Ultra-Narrow Band (UNB). The reason many companies have elected this approach is its low barrier to entry. Companies in this category can leverage commodity radios and skip any technology development. These companies tend to argue that no new technology is required. We disagree, for many reasons, including the inability to make the economics of LPWA work, as discussed in Blog 5: The Economics of Receiver Sensitivity and Spectral Efficiency.
  • Non-Coherent M-ary Modulation (NC-MM). This is a commonly used modulation in both LAN and WAN applications. Cellular 2G technology was based on GSM/GPRS, which uses a modulation approach called Minimum Shift Keying (MSK) and is also being repurposed as Extended Coverage GSM (EC-GSM), which is cellular LPWA in 2G spectrum. The LoRa modulation (CSS) is a member of this category (as justified in Blog 3: Chirp Spread Spectrum: The Jell-O of Non-Coherent M-ary Modulation). The “spreading” of CSS has no discernible advantage and indeed, as discussed in Blog 4: “Spreading” — A Shannon-Hartley Loophole?, has some significant drawbacks in terms of spectral efficiency.
  • Direct Sequence Spread Spectrum (DSSS). We described LoRa as “spreading” for no discernible reason. Well, it turns out NC-MM does not have a monopoly on this. DSSS includes a couple of technologies that also spread for no discernible reason — IEEE 802.11 (the original 1 and 2 Mbps data rates) and Zigbee (based on IEEE 802.15.4). This is just one example showing that standards bodies are less about technology and more about politics. I will discuss this in more depth in a future blog.
  • Orthogonal Frequency-Division Multiplexing (OFDM). This is the way you get extreme spectral efficiency. It’s great for voice and high-speed data, and it has enabled LTE (4G) to become the dominant cellular standard. When you try to point this approach at LPWA (as NB-IOT does), significant problems emerge. I will discuss this in more depth in a future blog.

Time to Get Down and Nerdy

In these resources, we visit the building blocks of communication and translate the perfectly good and intuitively understandable terms of coverage and capacity to nerd in Blog 2: Back to Basics — The Shannon-Hartley Theorem.

  • Coverage translates to receive sensitivity, which is a function of something called Eb/No (energy per bit relative to thermal noise spectral density).
  • Capacity translates into something called spectral efficiency, and we need to go one level deeper into nerd and assign it a Greek letter (of course); that Greek letter is… η. (A quick Shannon-limit sketch follows this list.)
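For those who want to see that coverage-capacity tension in numbers, here’s a minimal sketch of the Shannon-limit relationship between spectral efficiency η and the minimum required Eb/No. These are purely theoretical bounds derived from the Shannon-Hartley theorem; real modems need implementation margin on top of them.

```python
import math

# Shannon-Hartley: C = B * log2(1 + SNR), so spectral efficiency eta = C / B.
# With SNR = (Eb/No) * eta, the Shannon limit gives a minimum Eb/No of
# (2**eta - 1) / eta for reliable communication at a given eta.

for eta in (0.01, 0.1, 1.0, 4.0):
    ebno_min = (2 ** eta - 1) / eta
    print(f"eta = {eta:>5}: minimum Eb/No = {10 * math.log10(ebno_min):6.2f} dB")

# eta = 0.01 -> about -1.58 dB; eta = 0.1 -> about -1.44 dB;
# eta = 1.0 -> 0 dB; eta = 4.0 -> about +5.74 dB.
```

Note that pushing η from 1 all the way down toward 0 buys only about 1.6 dB of theoretical sensitivity, which is essentially the point Blog 4 makes about “spreading” and very low spectral efficiency.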

Using this fundamental framework, in Blog 3: Chirp Spread Spectrum: The Jell-O of Non-Coherent M-ary Modulation, I’ll talk about a category of approaches that is very similar from a technology point of view and one very successfully marketed approach in this category called Chirp Spread Spectrum (also known as LoRa).

And then in Blog 4: “Spreading” — A Shannon-Hartley Loophole?, we discuss in more detail technologies that use links with very low spectral efficiency (η << 0.1) and the capacity implications with three concrete examples: the Sigfox® technology, LoRa™ technology, and Ingenu’s RPMA®.

The Business Side of the IoT

Without Device Longevity, the Internet of Things Will Never Be

The L in LPWA

Long means something different to IoT devices than it does to typical consumer handheld devices. For most IoT applications the device life cycle is 10 to 20 years or more.

‘Low power’ is really a manifestation of a much broader and more universal problem that wireless connectivity needs to solve for the IoT: longevity. Longevity means long life expectancy, or in terms of technology, a long device life cycle. As applied to wireless technology, longevity means the wireless tech should allow for, and indeed enable, a long device life cycle. And ‘long’ means something different for IoT devices than it does for typical consumer handheld devices. A smartphone older than two years, or even one year, is often seen as “old” technology. But for most IoT applications, like those enabling the smart city, the device life cycle is 10 to 20 years or more. For instance, it doesn’t make sense to replace the wireless module in smart streetlights every few years. Once placed, devices supporting infrastructure and enterprise assets need to be left alone in order to deliver the cost efficiencies they offer. It simply costs too much to send someone out in a truck every couple of years to tinker with the things.

Simple IoT ROI Reality Check

To help us ground our thinking, let’s check out a simple example calculating the ROI on a device investment. The numbers used in this example are completely hypothetical but represent the notional ideas common in the LPWA space. We’ll assume the investment can bring us a return (or savings through greater efficiency) of $5 a year after all costs are accounted for.

At first blush, this seems like a no-brainer investment. However, let’s take into account the longevity of the device’s technology, which we will vary in this example from two to twenty years. Device life encompasses any change in condition that would require a truck roll. Truck rolls are expensive and often show up as hidden costs; they include replacing a battery or replacing the wireless module because the tech is sunsetting (as happens about every decade with cellular technology). Expected years in service represents how long the device will be in service, including any truck rolls needed to keep it working. Truck rolls will most likely be outsourced to a service provider. A truck roll is assumed to cost roughly $350 per device, which, frankly, is conservative; a more realistic cost may be upwards of $500.

The second table shows the annual savings from owning a device for a given device life cycle and expected years in service. For example, a device with truck rolls every 5 years (say, to replace a battery) and an expected service life of 15 years would have two truck rolls (one at 5 years and another at 10 years). The cells highlighted in black show the annual savings of $5 with no truck rolls. The cells in gray show the loss (negative savings) that results from needing truck rolls (because the device life cycle is shorter than the expected years in service).

What becomes clear is that truck rolls are an IoT investment killer. In each case, the device life needs to be at least as long as the expected years in service for the investment to be profitable. As soon as a truck roll is required, the expected savings disappear; in this example, the savings would have to increase 28-fold for the investment even to be considered. This result is logical, but the example really drives it home. Here’s the beautiful flip side: with the longevity LPWA provides, it makes sense to invest in areas with much lower estimated savings per device. This is precisely how the IoT brings about the efficiencies it is proclaimed to bring.
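Here’s the same truck-roll arithmetic as a quick sketch, using the hypothetical $5-a-year savings and $350 truck roll from above:

```python
import math

ANNUAL_SAVINGS = 5.0     # hypothetical dollars saved per device per year
TRUCK_ROLL_COST = 350.0  # assumed cost per site visit

def net_annual_savings(device_life_years, service_years):
    """Average yearly savings once truck rolls are spread over the service life."""
    truck_rolls = max(0, math.ceil(service_years / device_life_years) - 1)
    total = ANNUAL_SAVINGS * service_years - TRUCK_ROLL_COST * truck_rolls
    return total / service_years

print(net_annual_savings(device_life_years=15, service_years=15))  # 5.0 (no rolls)
print(net_annual_savings(device_life_years=5, service_years=15))   # about -41.7
```

A single $350 truck roll wipes out 70 years’ worth of $5 annual savings, which is the whole point.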

There is a subtle and often overlooked point here that should be emphasized: longevity isn’t just a nice side benefit for the IoT or interesting side effect of LPWA technology; longevity is the very foundation of the IoT’s entire value proposition. In short, without the longevity that LPWA enables, the IoT would be unable to realize its purpose.

Longevity is necessary to garner any demand for IoT devices as it enables the ROI that justifies investing in them. Return on investment greatly hinges on the usable lifetime of IoT devices. The longer they can function as intended, the better the ROI and lower the total cost of ownership (TCO). Because demand for IoT devices has thus far proven to be extremely price sensitive, it will be essential for device manufacturers to attain economies of scale so that their devices’ prices can be at the level businesses and consumers are willing to pay.

The Economics of Receiver Sensitivity and Spectral Efficiency

Or, How to Run an IoT Business

Any company needs a lot of green to stay in the black. In the wireless world, green comes from receiver sensitivity and spectral efficiency.

Setting the Stage: The Public Network Business

You should know how a public wireless network (like a cellular network or, in the IoT’s case, a public LPWA network) makes its money. The reason is simple: some technologies simply cannot support the device count needed to build a profitable business. The companies built on those technologies will therefore ultimately fail, which in turn means any business that relies on those networks for connectivity will incur, at minimum, heavy costs for product redesign and, at worst, go bankrupt as well. So, here goes.

The public network business requires two general components: a carrier and applications that may benefit from that network. Let’s define these roles and look at a public LPWA network from these points of view:

  • The carrier owns and operates the LPWA network. The carrier invests in building this network and charges applications for its use. The successful carrier business profits when revenue (connectivity fees from the applications) exceeds the expenses of running the LPWA network (tower rental, backhaul expenses, construction costs, human resources, etc.).
  • Applications benefit from the connectivity that the LPWA network provides. For an application to participate in an LPWA network, there must be a positive return on investment (ROI) from this connectivity. In other words, the value provided by the LPWA connectivity must exceed the connectivity fees paid to the carrier. Moreover, given a choice, the application will want to maximize its ROI by selecting the LPWA network that minimizes its connectivity fees.

The LPWA business begins with the carrier.

Typically, the carrier must invest in building the network prior to revenue being realized. Let’s introduce a few key terms to bridge the LPWA economics to the LPWA technology:

  • Coverage — a metric of how much network infrastructure is required to reliably cover a region. Most of the expenses associated with running a carrier are proportional to the amount of network infrastructure required (e.g. tower rental, backhaul expenses, construction costs) particularly as the geographic extent of the network becomes large. The number of square miles (or square kilometers) covered, on average, by a piece of network infrastructure (e.g. tower) represents the initial investment a carrier must make to build this network.
  • Capacity — a metric of how many devices, on average, may be supported by a piece of network infrastructure. The capacity metric is relevant to the revenue side of a successful carrier business. The amount of revenue per tower to the carrier will typically be directly proportional to the number of endpoints served by that tower.

From a carrier’s point of view, the investment in coverage is a slight “dip in the road” in terms of outward cash flow; capacity represents the “road up the mountain” in terms of how profitable the network can be, based on the number of devices supported.
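As a toy illustration of that cash-flow picture, here’s a sketch in which coverage sets the cost side and capacity sets the revenue side. Every input is invented; only the shape of the tradeoff matters.

```python
# Toy carrier P&L: coverage drives tower count (cost), capacity drives revenue.
REGION_SQ_MILES = 10_000
ANNUAL_COST_PER_TOWER = 20_000.0  # assumed rent, backhaul, maintenance
ANNUAL_FEE_PER_DEVICE = 1.0       # assumed connectivity revenue per endpoint

def annual_profit(sq_miles_per_tower, devices_per_tower):
    towers = REGION_SQ_MILES / sq_miles_per_tower
    return towers * (devices_per_tower * ANNUAL_FEE_PER_DEVICE
                     - ANNUAL_COST_PER_TOWER)

# Weak coverage and weak capacity: many towers, little revenue each.
print(f"{annual_profit(100, 10_000):,.0f}")   # -1,000,000
# Strong coverage and strong capacity: few towers, lots of revenue each.
print(f"{annual_profit(300, 100_000):,.0f}")  # 2,666,667
```

Better receiver sensitivity shrinks the tower count, and higher capacity multiplies the revenue per tower; both land directly on the carrier’s bottom line.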

Coverage

To build a network economically, each piece of infrastructure must cover a large area with high probability. This means that the range of the link must be as high as possible, which, in turn, means the receiver sensitivity must be as good as possible.

In the figure below, we show the amount of typical reliable coverage that can be expected with the three approaches based on the analysis in Section 2.3 of How RPMA Works: The Making of RPMA. Naturally, the cost of a carrier covering a large region is far lower if each tower is capable of covering the large area shown for RPMA.

Capacity

It is important for a technology to allow for connection of a sufficient number of endpoints per piece of networking equipment to make long-term economic sense.

There are costs to delivering LPWA connectivity, including infrastructure, deployment, and maintenance costs. These costs need to be shared among a tremendous number of devices so that each device’s share of the burden is very low. Keep in mind, these are typically low-value devices that can bear only minimal networking expense. If you take a look at the figure below, you will see that RPMA supports a factor of 60x to 1300x more data per piece of networking infrastructure (which equates to 60x to 1300x more devices) relative to LoRa (exactly where in that range depends on whether we are talking uplink or downlink and on the particular regulatory domain). The Sigfox technology numbers are similar. Not surprisingly, a 60x to 1300x reduction in per-device connectivity cost is the difference between barely making the economics of the network work at all and making them work easily.

Let’s Shine a Light on That

Sigfox technology is argued to serve only very low-bandwidth devices, and thus, to those who believe this, capacity is not an important attribute for the low-end devices it serves. There are two main problems with this argument:

  • Low-usage devices tend to justify only low connectivity costs. If the Sigfox technology is constrained to low-usage devices only, the carrier requires more of these devices to build a business. Whether the endpoint distribution is fewer high-usage devices or more numerous low-usage devices, capacity is being consumed, and as such it is an important figure of merit for understanding whether there is any economic value to be split between carrier and application.
  • Even if there is some minuscule amount of economic value to be split between application and a carrier employing Sigfox technology, an RPMA carrier will always be able to undercut the connectivity costs due to the tremendously unfair capacity advantage. An RPMA carrier will be in a position to offer more link capability at a small fraction of the cost, and that RPMA carrier will remain massively profitable.

Note the number of devices supported on average per piece of network infrastructure, as shown below, and imagine the revenue per endpoint for the Sigfox technology and LoRa. In our opinion, these numbers will not support a profitable carrier business model if you assume a reasonable connectivity cost per endpoint as revenue.

An additional headwind LoRa in particular faces is its lack of selectivity (discussed in more depth in Section 2.10 of How RPMA Works: The Making of RPMA), whereby endpoints deployed on private LoRa networks actually consume capacity on overlapping public networks — far more capacity, as a matter of fact, than if they had been connected to the public LoRa network.

Carrier Growth and Future Profitability Relies on Capacity Scaling, aka Cell Splitting

What happens when a tower is at capacity based on the number of devices shown above? As a carrier, you would probably like to add more tower locations to continue offering robust connectivity. The cellular industry has a term for this, “cell splitting,” and it can be done effectively.

However, not all technologies allow for offloading capacity by adding towers. None of the technologies in the LAN category can add capacity with more towers, including the Sigfox technology and LoRa. Due to the lack of support for transmit power control (among other things), once a critical density of endpoints is reached, the system simply ceases to work robustly.

By contrast, RPMA was built from the ground up as an LPWAN solution. This was neither easy nor fast; it took considerable investment, because a new technology had to be developed from scratch, along with a whole new type of chip to implement that technology at minimal cost and power.

How Carriers Can Have Their Cake and Serve the IoT Too, or Why Cellular LPWA Will Never Serve the IoT As It Should


We know that longevity is a core part of the Internet of Things’ (IoT) value proposition that can be inhibited by inadequate technology. But technological limitations are not the only factors that may thwart the IoT’s success. Here we’ll discuss why good business decisions made by traditional wireless carriers, based on economic and cultural forces, also counteract the longevity needed by devices on the IoT. We’ll also propose a couple of key changes to the current cellular ecosystem that will enable them to use these forces to powerfully serve the IoT.

The traditional wireless industry comprises many players. The cellular carriers themselves are technology integrators. They take the pieces of technology, integrate them into a single system, market it like crazy, and make it easy for consumers to access. We go to a store, buy a phone, pay for the service, and enjoy.

The Ericssons, Qualcomms, and other technology providers of the wireless world develop (and patent) the technology that is integrated into the towers (Ericsson), the handset (Qualcomm), or both. They make their money by getting their intellectual property (IP) into as many components of that system as they can. That IP is what the carriers integrate, and it ultimately becomes a large part of their costs. This integration process happens in standards bodies like the Third Generation Partnership Project, or 3GPP, which standardized 3G and LTE.

‘It Just Makes Good Business Sense’

Infrastructure costs are mostly driven by the technology providers, the Ericssons, Nokias, and Huaweis of the world.

Two costs lay heaviest on the balance sheets of traditional wireless carriers: infrastructure and spectrum. Infrastructure involves the cellular radios and other hardware, permitting, tower space leases, backhaul, and many other costs. Infrastructure costs are mostly driven by the technology providers, the Ericssons and Huaweis of the world. They provide the hardware used in base stations to send and receive cellular signals, transmit backhaul to the operating centers, and route traffic. In recent years, software-defined radios have allowed base stations to upgrade their technology not with hardware swaps but with software upgrades. While these technology upgrades are simpler, the technology providers still want to cash in and thus charge enormously for them.


Cost pressure comes from licensed spectrum as well. Licensed spectrum is an extremely expensive resource: in 2015, traditional wireless providers spent $45 billion on spectrum in the United States, an amount larger than the GDP of more than 100 individual countries. Spectrum is a valuable resource for a reason: it is the lifeblood of wireless voice and data connectivity. Consumers and businesses are willing to pay good money for high data throughput, and carriers need that licensed spectrum to provide it.

To remain profitable, carriers must use licensed spectrum for voice and data connections rather than other uses like machine connectivity. Voice/data connectivity brings carriers the most revenue per Hz (Hz being the unit used to measure spectrum). In the industry, this logic is captured by average revenue per user, or ARPU. It just makes good business sense for carriers to maximize ARPU, especially under the enormous weight of spectrum costs. It is for this very reason that carriers are shutting down their 2G networks.

Carriers must use precious spectrum for the highest average revenue per user (ARPU).

Carriers must use that precious spectrum for the highest ARPU. Two factors will put additional pressure on carriers: the overall market of voice and data users will grow, as will the amount of data each user requires. This only exacerbates the importance of using spectrum for the highest-ARPU purposes. It’s basic economics. Any deviation from that strategy will result in lost profit and punishment in terms of market share and on Wall Street. So maximizing ARPU ripples throughout all of their business decisions regarding spectrum usage. And that’s as it should be: businesses that do well serve their best customers well.

Economics and Misaligned Incentives

But what makes good business sense for traditional wireless carriers doesn’t make sense for IoT devices. At least not with the cellular industry’s current dynamics. Anybody, or any _thing_, that isn’t high ARPU will naturally and rightfully be relegated to lower priority. And the lowest-ARPU customers are the same devices that LPWA is fit to serve. According to James Brehm & Associates, 86% of current IoT devices use less than 3 MB of data per month — those are hardly power users. And the devices that have yet to be developed, the “greenfield applications” as industry insiders would say, are projected by the 3rd Generation Partnership Project (3GPP standards development body) to average 32 KB a month of data. What’s more, the same 3GPP has built IoT traffic de-prioritization into its LPWA candidate standards, including LTE-M and others.

Carriers can turn down or turn off machine traffic whenever their expensive spectrum gets clogged with higher ARPU traffic, like during sports events.

In other words, carriers can turn down or turn off machine traffic whenever their expensive spectrum gets clogged with higher ARPU traffic. And it doesn’t take much for that to happen. If you’ve ever been at a sporting event, you’ve probably experienced delays in receiving even a text message because so many people have cell phones connected to the cellular towers in the area.

How would this type of prioritization impact businesses whose device messages are blocked out by the carriers? Naturally, some proportion of the delayed messages will have minimal impact on business. But some will be majorly impacted by these unpredictable interruptions. The more important point is that your business would be subject to the whims of the carriers. And those whims are based on the carriers’ sound business reasoning.

Always Second Tier

The conclusion to be drawn from this is that connected machines will always be second tier to voice/data connections using the same spectrum. Carriers’ current business models depend on this. Their cost structure dictates it. These economic forces will not just go away, and will continue to relegate machine connectivity to the bottom tier.

Voice/data needs are what have pushed the cellular generations from 1G to 2G to 3G to 4G, soon 5G and inevitably to 6G and beyond.

The misaligned incentives between IoT-connected businesses and traditional cellular carriers go beyond lower-priority machine connectivity. Because human consumers of voice/data are the highest-ARPU customers, their needs will continue to be the primary driver of cellular technology’s development in the years to come. Voice/data needs are what have pushed the cellular generations from 1G to 2G to 3G to 4G and, in the coming few years, 5G. A new cellular generation arrives about every nine years.

Sunsets are fine for voice/data customers using smartphones, as those devices are upgraded every couple of years. But the incessant cellular sunsets are completely anathema to the longevity needs of IoT devices. The current cellular ecosystem will be unable to provide adequate IoT device longevity because of this sunsetting cycle, which is itself driven by sound business decisions.

tl;dr Two things keep cellular carriers from serving IoT device needs. 1) They make the most money from voice/data customers, so IoT devices will always be second tier. 2) The IoT needs long technology life cycles, but 3GPP, the cellular standards body, is incentivized to change protocols periodically so that participating companies can grab more of the IP pie and the resulting licensing fees. This IP grab is what pushes standards to change so quickly, leading to network sunsets and short IoT life cycles. The result is an ROI lower than what’s needed to get to tens of billions of IoT devices.

Want to know more?

Download our free eBook (fair warning, this is behind a form) that does an extensive technical review and comparison of the leading LPWA protocols.

Want to integrate RPMA into your IoT application? Contact us at info@ingenu.com

If this helped you in even the tiniest way, please show some love and click the heart to recommend it to others. Thank you!

