The secret behind the Internet Protocol is that it has no idea what it’s carrying; it’s just a bag of bits going from point A to point B. So said Vint Cerf, vice president and chief internet evangelist at Google, speaking at the recent Open Networking Summit.
Cerf, who is generally acknowledged as a “Father of the Internet,” said that one of the objectives of this project, which was turned on in 1983, was to explore the implications of open networking, including “open source, open standards and the process by which the standards were developed, open protocol architectures, which allowed for new protocols to be invented and inserted into this layered architecture.” This was important, he said, because people who wanted to do new things with the network were not constrained by its original design but could add functionality.
Open Access
When he and Bob Kahn (co-creator of the TCP/IP protocol) were doing the original design, Cerf said, they hoped that this approach would lead to a kind of organic growth of the Internet, which is exactly what has been seen.
They also envisioned another kind of openness: open access to the resources of the network, where people were free both to access information and services and to inject their own information into the system. Cerf said they hoped that, by lowering the barriers to accessing this technology, they would open the floodgates for the sharing of content, and, again, that is exactly what happened.
There is, however, a side effect of reducing these barriers, one we are living through today, Cerf said: the proliferation of fake news, malware, and other malicious content. It has also created a set of interesting socioeconomic problems, one of which is dealing with content in a way that allows you to decide which content to accept and which to reject, Cerf said. “This practice is called critical thinking, and we don’t do enough of it. It’s hard work, and it’s the price we pay for the open environment that we have collectively created.”
Internet Architecture
Cerf then shifted gears to talk about the properties of Internet design. “One of the most interesting things about the Internet architecture is the layering structure and the tremendous amount of attention being paid to interfaces between the layers,” he noted. There are two kinds: the vertical interfaces between the layers and the end-to-end interactions that take place across the network. Adoption of standardized protocols essentially creates a kind of interoperability among the various components in the system, he said.
“One interesting factor in the early Internet design is that each of the networks that made up the Internet, the mobile packet radio net, the packet satellite net, and the ARPANET, were very different inside,” with different addressing structures, data rates and latencies. Cerf said when he and Bob Kahn were trying to figure out how to make this look uniform, they concluded that “we should not try to change the networks themselves to know anything about the Internet.”
Instead, Cerf said, they decided the hosts would create Internet packets to say where things were supposed to go. They had the hosts take the Internet packets (which Cerf likened to postcards) and put them inside an envelope, which the network would understand how to route. The postcard inside the envelope would be routed through the networks and would eventually reach a gateway or destination host; there, the envelope would be opened and the postcard would be sent up a layer of protocol to the recipient or put into a new envelope and sent on.
“This encapsulation and decapsulation isolated the networks from each other, but the standard, the IP layer in particular, created compatibility, and it made these networks effectively interoperable, even though you couldn’t directly connect them together,” Cerf explained. Every time an interface or a boundary was created, the byproduct was “an opportunity for standardization, for the possibility of creating compatibility and interoperability among the components.”
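A minimal sketch of the postcard-in-an-envelope idea helps make this concrete. The packet and frame formats below are invented for illustration; they are not real IP or any particular link-layer format, just a toy model of encapsulation, decapsulation, and re-wrapping at a gateway.

```python
# Toy model: an Internet packet (the postcard) is wrapped in a
# network-specific frame (the envelope), carried across one network,
# then opened and re-wrapped at a gateway for the next network.
from dataclasses import dataclass

@dataclass
class InternetPacket:          # the "postcard": only hosts and gateways read it
    src: str
    dst: str
    payload: bytes

@dataclass
class Frame:                   # the "envelope": all a single network understands
    network: str               # e.g. "packet-radio", "satellite", "ARPANET"
    next_hop: str
    inner: InternetPacket

def encapsulate(pkt: InternetPacket, network: str, next_hop: str) -> Frame:
    return Frame(network=network, next_hop=next_hop, inner=pkt)

def gateway_forward(frame: Frame, next_network: str, next_hop: str) -> Frame:
    pkt = frame.inner                                 # decapsulate: open the envelope
    return encapsulate(pkt, next_network, next_hop)   # re-wrap for the next network

pkt = InternetPacket(src="host-a", dst="host-b", payload=b"hello")
f1 = encapsulate(pkt, "packet-radio", "gw-1")
f2 = gateway_forward(f1, "ARPANET", "host-b")
print(f2.network, f2.inner.dst)   # the postcard is unchanged across networks
```

The networks never need to understand the postcard; only the envelope changes as it crosses each boundary, which is exactly the isolation Cerf describes.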
Routers, for example, can now be disaggregated by separating the data plane and the control plane into distinct functions and then creating interfaces to each of them. Once those interfaces are standardized, Cerf said, devices that exhibit the same interfaces can be used in a mix. He said we should “be looking now to other ways in which disaggregation and interface creation creates opportunities for us to build equipment” that can be deployed in a variety of ways.
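The sketch below illustrates that split under stated assumptions: the data plane is reduced to a forwarding table behind a small programming interface, and a separate control plane installs routes through it. The interface and class names are hypothetical, not any specific SDN API.

```python
# Control plane / data plane separation with a standardized interface.
from typing import Protocol, Dict, List, Optional

class DataPlane(Protocol):
    """The standardized interface any forwarding device can expose."""
    def install_route(self, prefix: str, next_hop: str) -> None: ...
    def lookup(self, dst: str) -> Optional[str]: ...

class SimpleSwitch:                      # one vendor's data plane implementation
    def __init__(self) -> None:
        self.table: Dict[str, str] = {}
    def install_route(self, prefix: str, next_hop: str) -> None:
        self.table[prefix] = next_hop
    def lookup(self, dst: str) -> Optional[str]:
        # toy longest-prefix match on dotted strings
        matches = [p for p in self.table if dst.startswith(p)]
        return self.table[max(matches, key=len)] if matches else None

class ControlPlane:                      # runs anywhere; needs only the interface
    def __init__(self, devices: List[DataPlane]) -> None:
        self.devices = devices
    def program(self, routes: Dict[str, str]) -> None:
        for dev in self.devices:
            for prefix, next_hop in routes.items():
                dev.install_route(prefix, next_hop)

sw = SimpleSwitch()
ControlPlane([sw]).program({"10.0.": "port-1", "10.0.1.": "port-2"})
print(sw.lookup("10.0.1.7"))   # -> port-2, via the longer prefix
```

Any device that exposes the same `install_route`/`lookup` interface could be dropped into the mix without changing the control plane, which is the opportunity standardization creates.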
Cerf said he likes the types of switches being built today, bare hardware with switching capabilities inside, that don’t do anything until they are told what to do. “I have to admit to you that when I heard the term ‘software-defined network,’ my first reaction was ‘It’s a buzzword, it’s marketing; it’s always been about software.’”
But, he continued, “I think that was an unfair and too shallow assessment.” His main interest in basic switching engines is that “they don’t do anything until we tell them what to do with the packets.”
Adopting Standards
A standardized way to describe the functionality of a switching system and how it should treat packets creates an opportunity to mix different switching systems in a common network, he said. As a result, “I think as you explore the possibilities of open networking and switching platforms, basic hardware switching platforms, you are creating some new opportunities for standardization.”
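As a rough sketch of that opportunity, the treatment of packets can be written once as vendor-neutral match-action rules, and any switch that accepts the description can join the same network. The rule format and both backends below are invented for illustration only.

```python
# One standardized description of packet treatment, consumed by two
# different (hypothetical) switching platforms.
from dataclasses import dataclass
from typing import List

@dataclass
class Rule:                    # standardized description: "if match, do action"
    match_dst: str             # destination prefix to match
    action: str                # e.g. "forward:port-1" or "drop"

RULES: List[Rule] = [          # one description, shared by every platform
    Rule(match_dst="10.1.", action="forward:port-1"),
    Rule(match_dst="192.168.", action="drop"),
]

class VendorASwitch:
    """Applies rules by scanning a list (first match wins)."""
    def __init__(self, rules: List[Rule]) -> None:
        self.rules = rules
    def treat(self, dst: str) -> str:
        for r in self.rules:
            if dst.startswith(r.match_dst):
                return r.action
        return "forward:default"

class VendorBSwitch:
    """Same external behavior, different internals (a precomputed dict)."""
    def __init__(self, rules: List[Rule]) -> None:
        self.table = {r.match_dst: r.action for r in rules}
    def treat(self, dst: str) -> str:
        for prefix, action in self.table.items():
            if dst.startswith(prefix):
                return action
        return "forward:default"

# Both platforms, loaded from the same description, treat packets identically.
for sw in (VendorASwitch(RULES), VendorBSwitch(RULES)):
    print(type(sw).__name__, sw.treat("10.1.2.3"), sw.treat("192.168.0.5"))
```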
Some people feel that standards are stifling and rigid, Cerf noted. He said he could imagine situations where an over-dependence on standards creates an inability to move on, but standards also create commonality. “In some sense, by adopting standards, you avoid the need for hundreds, if not thousands, of bilateral agreements of how you will make things work.”
In the early days, as the Internet Engineering Task Force (IETF) was formed, Cerf said one of the philosophies they tried to adopt “was not to do the same thing” two or three different ways.
Deep Knowledge
Openness of design allows for deep knowledge of how things work, Cerf said, which creates a lot of well-educated engineers and will be very helpful going forward. The ability to describe the functionality of a switching device, for example, “removes ambiguity from the functionality of the system. If you can literally compile the same program to run on multiple platforms, then you will have unambiguously described the functionality of each of those devices.”
This creates a uniformity that is very helpful when you’re trying to build a large, growing, and complex system, Cerf said.
“There’s lots of competition in this field right now, and I think that’s healthy, but I hope that those of you who are feeling these competitive juices also keep in mind that by finding standards that create this commonality, you will actually enrich the environment you’re selling into. You’ll be able to make products and services that will scale better than they might otherwise.”