If I were asked to describe technology in a single word, I would say, without hesitation, that it evolves. There’s always a better, faster, easier, and more efficient way to do things, and there are always creative solutions to complicated problems.
Telecommunications, as part of the marvelous world of technology, has had its fair share of evolution, growth, and development. Since every aspect of the industry has been evolving, let’s narrow it down to three main pillars: infrastructure, deployment, and protocols.
Infrastructure: Hardware to Software
In earlier years, businesses across sectors relied on traditional IT infrastructure to run their operations, store and secure their data, and meet their customers’ demands. That traditional approach, however capable and still adopted by a share of businesses today, had enough drawbacks to make cloud computing a shiny promise, and switching to the cloud a necessity, for the majority of companies.
But let’s give traditional infrastructure its due: some companies still prefer it because it gives IT departments full control of their environments, full responsibility for their security, and ownership of their data. As I said, though, on-premise infrastructure comes with limitations: the cost of installing and maintaining hardware, restricted server capacity and performance, inflexible access to data, and the long lead time to provision additional storage. Some companies can live with those downsides, but for most, a more flexible approach was necessary.
We are talking about cloud computing.
One of the many features that makes cloud computing an attractive alternative is its resilience and flexibility. Unlike on-premise infrastructure, the cloud doesn’t require you to keep an entire team busy installing and maintaining hardware. And because data is replicated across multiple servers, if one of them fails or gets damaged, you don’t lose any data or service quality.
Then there is, of course, the well-known cost-effectiveness of cloud computing. Pay-as-you-go pricing means you pay only for the services you need and expand flexibly as your business grows. It also saves on staffing costs, since you no longer need the maintenance work you would otherwise require.
Having said that, it’s only natural that businesses are switching to the cloud, and if not entirely, they’re opting for a hybrid cloud solution.
Deployment: Manual to Automated
The most common way of evolving is going from manual to automated, and that is what happened to deployment.
According to many industry leaders, automating the deployment process is indispensable; its necessity is not even up for debate. Yet many companies still stick to their old ways, most obviously because people prefer what they’re familiar with, but also because they’re not ready for the chaos of switching and would rather live with the occasional disaster of doing things manually.
What’s the worst that can happen? You may ask.
Well, like everything manual, manual deployments are far more prone to human error, and a small overlooked mistake (like a typo) can lead to a major malfunction. Each error, in turn, feeds a time-consuming, draining, stressful process for every team involved.
(If I scared you and you want to learn more, head over to this article here.)
Now, that sounds like the opposite of efficient, right? That’s why we needed the blessing of automation.
Automating the process makes it, by definition, faster, more efficient, and less error-prone. Add to that the fact that with automation, anyone on the team can deploy software; manually, not so much. With an automated deployment approach, companies drastically reduce the risk of network outages and boost the productivity of their production teams.
And if what I said earlier about the “switching chaos” stuck with you, let me put your mind at ease. First of all, the word “chaos” is an overstatement, and the process isn’t as scary as you think. Second, you can rely on automated deployment tools, integrated into your infrastructure, to manage and reduce the complexity of the operation.
Protocols: Documents to APIs
As we all know, the telco industry, like any heavily regulated industry (healthcare, banking, insurance...), relies on standards. The Internet Engineering Task Force (IETF) defined one of the most widely used standards of our time: the Internet Protocol (RFC 791). The IETF is also the standards development organization (SDO) responsible for defining other IP-based protocols used by the 3rd Generation Partnership Project (3GPP) and the GSM Association (GSMA) in their IMS, 4G, 5G, and other reference architectures deployed to billions of users worldwide. The Session Initiation Protocol (SIP), the Stream Control Transmission Protocol (SCTP), and Hypertext Transfer Protocol version 2 (HTTP/2) are all illustrations of this major contribution.
The major evolution in this regard, in the context of 5G, is the recent use of HTTP REST Application Programming Interfaces (APIs) over HTTP/2 in the core network for the control-plane interfaces. Through these APIs, 5G core entities interact to register a 5G user or deploy a slice on the network. This revolution stems from the need for openness in the development of 5G services and the avoidance of vendor lock-in on one side, and the ambition to deliver new 5G services with a shorter time to market on the other, inspired by the recent advent of agile methodologies and DevOps organizations in IT.
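To make this concrete, here is a minimal Python sketch of what one such interaction looks like: building the URI and JSON body for registering a network function with the NRF, in the style of the NFRegister operation of 3GPP TS 29.510. The `nrf.example.com` endpoint is a placeholder, the profile is reduced to a handful of fields, and a real deployment would send the request over HTTP/2 with an actual client library; treat this as an illustration of the REST style, not a complete implementation.

```python
import json
import uuid

def build_nf_registration(api_root: str, nf_type: str) -> tuple[str, str]:
    """Build the URI and JSON body to register a Network Function with
    the NRF (NFRegister-style operation). Field names follow the style
    of the 3GPP OpenAPI specs; the profile here is deliberately minimal."""
    nf_instance_id = str(uuid.uuid4())
    uri = f"{api_root}/nnrf-nfm/v1/nf-instances/{nf_instance_id}"
    profile = {
        "nfInstanceId": nf_instance_id,
        "nfType": nf_type,          # e.g. "SMF", "AMF", "UDM"
        "nfStatus": "REGISTERED",
    }
    return uri, json.dumps(profile)

# A hypothetical SMF registering itself with a placeholder NRF endpoint:
uri, body = build_nf_registration("https://nrf.example.com", "SMF")
print(uri)
print(body)
```

The point is that this is just HTTP plus JSON: any team that can build a web service can, in principle, talk to a 5G core entity.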
The adoption of HTTP REST APIs in the 5G core is made explicit in the Service Based Architecture (SBA) design that the 3GPP follows in defining the 5G Standalone (5G SA) architecture. Digging deeper into the 3GPP deliverables, we find a 5G API repository that provides every specification from Release 15 onward in OpenAPI v3 (formerly known as Swagger) format, viewable in any Swagger editor, in addition to the usual PDFs and Docs. If you are familiar with the OpenAPI tool chain, you know it is possible to export, in an automated fashion, a basic skeleton of a 5G service (client or server) in a few clicks, given this precious specification file in YAML or JSON format, or even to include the generation in a Continuous Integration pipeline that detects any regression in the 5G service against the standard spec. Unthinkable a few years back.
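What makes this tooling possible is that the spec itself is machine-readable. The sketch below walks a toy OpenAPI v3 fragment, written in the style of the 3GPP 5G API repository but not a real excerpt, and enumerates its operations exactly the way a code generator or a CI regression check would:

```python
import json

# A toy OpenAPI v3 fragment in the style of the 3GPP 5G API repository
# (illustrative, not a real excerpt); real specs ship as YAML or JSON.
SPEC = json.loads("""
{
  "openapi": "3.0.0",
  "info": {"title": "Nnrf_NFManagement", "version": "1.0.0"},
  "paths": {
    "/nf-instances/{nfInstanceID}": {
      "put": {"operationId": "RegisterNFInstance"},
      "delete": {"operationId": "DeregisterNFInstance"}
    }
  }
}
""")

# Enumerate every (method, path, operationId) triple in the spec, as a
# generator would before emitting client or server stubs for each one.
operations = [
    (method.upper(), path, op["operationId"])
    for path, methods in SPEC["paths"].items()
    for method, op in methods.items()
]
for method, path, op_id in operations:
    print(f"{method:6} {path} -> {op_id}")
```

Because the operations are plain data, a pipeline can diff this list between two spec releases and flag any removed or renamed operation before it breaks a deployed service.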
The Linux Foundation, through organized plugfests, helps network operators and vendors work on the interoperability of the NFV ecosystem, leveraging these open standards to achieve it.
Telcos have also heavily adopted a new, even more efficient format for exchanging data than REST over HTTP/2: gRPC (Google Remote Procedure Call). Notice how, as an additional sign of the times changing, Google is now an indirect player in the telco specification workflow, shaping a low-layer protocol needed for things like xApp or rApp development in Open RAN and affecting how Software Development Kits (SDKs) are delivered to service and standard developers. Using gRPC to specify the data model and version it departs from the usual Type-Length-Value (TLV) model the IETF uses for protocol options, allowing a faster and more agile way of developing standards.
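To see the contrast on the wire, here is a small Python sketch that hand-encodes the same integer twice: once as a classic TLV option, and once as a Protocol Buffers field (the serialization format underneath gRPC). The type and field numbers are made up for illustration; the point is where the meaning lives, in the bytes themselves for TLV versus in a versioned schema for protobuf.

```python
def encode_tlv(t: int, value: bytes) -> bytes:
    """Classic IETF-style Type-Length-Value option: one type byte,
    one length byte, then the raw value."""
    return bytes([t, len(value)]) + value

def encode_varint(n: int) -> bytes:
    """Protocol Buffers base-128 varint (7 bits per byte, LSB first;
    the high bit marks a continuation byte)."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | 0x80 if n else byte)
        if not n:
            return bytes(out)

def encode_proto_field(field_number: int, n: int) -> bytes:
    """Protobuf key-value pair for an integer field: the key packs the
    field number with wire type 0 (varint); what the field *means* is
    defined and versioned in the .proto schema, not on the wire."""
    key = (field_number << 3) | 0
    return encode_varint(key) + encode_varint(n)

# The same value, 300, carried as a TLV option vs a protobuf field
# (type 0x01 and field number 1 are hypothetical):
tlv = encode_tlv(0x01, (300).to_bytes(2, "big"))
pb = encode_proto_field(1, 300)
print(tlv.hex())  # 0102012c
print(pb.hex())   # 08ac02
```

With TLV, adding or reinterpreting an option means amending the protocol document; with protobuf, it means adding a numbered field to the schema, which old and new implementations can tolerate, which is exactly the agility the gRPC-based workflow buys.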
Major change is in the air.
As you have seen, constant improvements and innovations have blurred the lines between industries and allowed for productive intersections. And now that 2022 is almost over, what do you think the coming years have in store for us?
And by the way, if you’re interested in learning about the trending technologies that you should be looking out for next year, check out this blog post, I’m sure it will give you some insight.
And as usual, talk to you very soon!