IETF and Internet Hall of Fame 2013: Henning Schulzrinne


Henning Schulzrinne
I’ve been involved in the Internet technical community since the early ’90s, primarily
in my academic role as faculty at Columbia and previously as a researcher at Bell Labs
and in a German research lab here in Berlin, actually. And secondly, more recently, as
a staff member for the Federal Communications Commission. In those roles, I have participated in traditional academic research, primarily in the networking realm, but I have also worked within the Internet Engineering Task Force on standards development for Internet applications, chiefly real-time applications. The topics I have worked on the most are, as I said, real-time Internet applications: Voice over IP and real-time streaming.
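The workhorse behind these real-time applications is RTP, the Real-time Transport Protocol (RFC 3550), which carries media in packets with a 12-byte fixed header. As an illustrative aside, a minimal sketch of parsing that header, assuming the RFC 3550 field layout (the function name and example values are the editor's own, not from the interview):

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the 12-byte fixed RTP header (RFC 3550, section 5.1)."""
    if len(packet) < 12:
        raise ValueError("packet shorter than fixed RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,          # always 2 for current RTP
        "padding": bool(b0 & 0x20),
        "extension": bool(b0 & 0x10),
        "csrc_count": b0 & 0x0F,
        "marker": bool(b1 & 0x80),
        "payload_type": b1 & 0x7F,   # e.g. 0 = PCMU (G.711 mu-law) audio
        "sequence_number": seq,      # for loss detection and reordering
        "timestamp": ts,             # media clock, used for playout timing
        "ssrc": ssrc,                # synchronization source identifier
    }

# A synthetic packet: version 2, payload type 0, sequence number 1
pkt = bytes([0x80, 0x00, 0x00, 0x01]) \
    + (1234).to_bytes(4, "big") + (0xDEADBEEF).to_bytes(4, "big")
hdr = parse_rtp_header(pkt)
```

The sequence number and timestamp fields are what let receivers detect loss and reconstruct playout timing over a network that gives no delivery guarantees, which is the core design idea behind RTP.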
Voice over IP is the delivery of phone calls over the Internet, and that led to a number of protocol developments that are now fairly commonly used in the industry. One is the Real-time Transport Protocol (RTP), which transports audio and video content across networks and is often used for audio and video telephony within enterprises, but also increasingly in the wide area: a number of Voice over IP providers, as well as what are known as 4G or Voice over LTE systems, use that type of technology. A corresponding signaling protocol, the Session Initiation Protocol (SIP), is commonly used, again, in the enterprise space: many of the new IP PBXs that serve as desktop phones in offices typically use it, and mobile phone carriers use it as part of the IP Multimedia Subsystem, IMS. I've also worked on a number of applications in public safety: how do you support emergency calls such as 112 or 911 in the new all-IP environment?

It's really hard to answer that in generalities
because the Internet has become such a diverse ecosystem. It's probably much more productive to think of it not as a single entity but as an ecosystem, where some parts are quite healthy and others not so much. Let me give you just a few examples. When we talk about the Internet, we're really talking about two somewhat separate things: the technology and the global infrastructure.
The technology involves protocols and software artifacts that use Internet protocols but may not actually be used on the Internet; they may be used in private networks, in data centers, in enterprises, and in homes, without necessarily touching the Internet. I think
that development has been robust and continues to progress pretty rapidly, where the major
problems are probably in terms of robustness and reliability, and security-related problems
as well, but the technology seems able to keep pace with demand. The other one is the Internet as a network that you connect to, exchange data on, and communicate with other people on. There, in many countries and many regions, things are moving along quite nicely: speeds are improving, availability on mobile devices is increasing dramatically, but we also face simultaneous challenges. Just to name a few: the security challenges
increasingly make it difficult, particularly for individuals and small businesses, to know
what information is truly secure and private – what of their bank accounts, private data, and medical data is at risk. Also, at a larger scale, for enterprises, being exposed
to theft of their intellectual property – and I’m not talking about music here and videos
primarily – I’m talking here about blueprints and chemical formulas and customer lists and
all the other things that companies maintain privately in order to maintain their competitive
position. That I think is a major challenge simply because it doesn’t seem possible
for ordinary individuals to keep up with deficiencies in both protocol design and implementation
to have a reasonable certainty that the tools they use won’t be used against them. There
are also other, larger-scale challenges, namely the suppression of freedoms in a number of countries, and issues of privacy: how do we balance free access to information and services on mobile devices with the desire to keep private information private?

Let me talk about security as one. First of
all I think it’s important that I don’t want to fall into the trap of saying that
the Internet is insecure because that’s not really a helpful statement. It doesn’t
differentiate enough between the various components. I would look at that in three pieces. One
piece is the underlying technology. The second piece is the implementation software, primarily,
and hardware to some limited extent. And thirdly the operational practices. And there are problems
in all areas, but they are very different problems. For at least a decade, there has been a fairly profound awareness on the design and engineering side that you need to design protocols for hostile environments, and we have reasonable ideas on how to do that; I would say most protocols that have been designed or enhanced recently have good to acceptable security mechanisms built in. So it is not so much that our protocols are insecure, though some could certainly use strengthening, particularly on the routing side and, on the access side, with the LAN protocols. But the other areas are far less encouraging. On the implementation
side we seem to have difficulty on two counts. First, routinely designing reliable systems – a software engineering problem – partly because it is not immediately obvious when something is insecure: it works just fine until somebody attacks it. And secondly, knowing how to test systems, and how to incentivize people to build secure systems and disincentivize building insecure ones.
Currently, many software developers – particularly smaller ones, but certainly not limited to those – seem to have difficulty, whether it's an engineering problem or a management problem, putting enough resources into creating secure systems: designing by good engineering practices, testing, and in particular relying not just on internal testing but also on external testing. We are used to that in other
areas where safety and security are at stake. Think of vehicles or electric toasters. We
have certifying bodies because we don't want to rely on the manufacturers themselves – as diligent as they may be – to know whether they did a good job. So we have entities like Underwriters Laboratories for electrical equipment, or the TÜV in Germany and similar bodies in other countries, for safety on just about anything – elevators, cars, umbrellas – that has any type of even remote security or safety implication. We don't do that for software, and it is fairly obvious that this isn't really working.
To give you one example from my current line of work: in the United States we have a system called the Emergency Alert System, EAS, which is used to alert TV viewers to imminent threats to life and property – think storms, flash floods, tsunamis. Every TV station and cable system is obligated to have a device that allows a public safety authority to submit a request to send out a broadcast telling people to take cover or take other appropriate actions. So it is obviously very important that this be a reliable system. Until maybe five years ago these systems were not connected to the Internet at all; there were master stations that would broadcast the alert, and others would retransmit it down the line. In the past five years, for convenience and operational efficiency's sake, these TV stations have installed boxes that connect on one side to the Internet and on the other side intercept the TV signal, so that they can inject a text crawl at the bottom of the screen, and audio, into that TV signal – because emergencies can happen at any time, even when there is no engineer on staff.
Well, unfortunately, these are fairly specialized devices, and whoever designed them didn't do a whole lot of testing. They violated just about every known guideline for designing secure systems. So what happened was that someone discovered you could find them on the Internet – you just Googled the login string – and then use a default password, which you could also easily find just by looking at the manual. They then injected, at about a dozen TV stations – primarily smaller ones – a fake emergency alert about zombies emanating from the ground, warning the population to take cover. It was obviously kind of funny the first time around, but it could easily be misused. In our case, it happened just before the State of the Union address by the President of the United States, so there was grave concern that somebody would use that to start a panic – say, by reporting a false terrorist attack. That was an example where somebody
had designed a system not thinking that it would be connected to the Internet, that people would not change the default password, and that there would be no other security protections in place. And there are many of these smaller systems – home routers, electric meters, car systems – where there doesn't seem to be a true appreciation of the dangers if somebody gets access to them, and we don't seem to have a good way of dealing with that.

I'll briefly talk about the operational
aspect as the third consideration. It used to be that in many computing systems, probably
most of them, they were operated by trained system administrators who at least had some professional awareness. Skill levels probably varied, but many who worked in that field had a computer science education, maybe even some security training. But nowadays, many if not most computers are operated by individuals who have no technical training whatsoever – and they shouldn't have to. This is true for home networks; it's true for
small business networks – I mean your dentist, your baker type of thing – everybody has a
computer, generally connected to the Internet. Think of your doctor’s office – it probably
has one for electronic medical records. And none of those are operated by trained system
administrators. So it is very easy for these amateurs to make mistakes in operating those
types of systems. Again, we've designed systems without really anticipating the kinds of users who would actually use them – thinking, or maybe not even thinking, that they would be used the same way they were in the 1980s and 1990s. That doesn't mean we should train everybody to be a system administrator; that just doesn't work. We need to design systems that are secure out of the box, where you just can't make them insecure without a lot of effort – and we haven't really succeeded at that; it has been far too difficult. The technologies people use, like passwords, are becoming increasingly user-unfriendly and unmanageable, and that's what I see as one of the challenges: to make it easy to build secure systems and to operate secure systems.

One particular change is that the barrier to entry
to creating new businesses, new content has dropped dramatically. In the last decade or
so it is now possible for a much wider variety of individuals to not just consume content
– you could always do that with radio and TV and all that have existed for a century
– but you have a new possibility that ordinary individuals without a large budget, maybe
even without deep technical skill sets, can create interesting content of all kinds. Examples: Khan Academy for training materials; small local groups that can distribute videos; websites and web applications; apps on smartphones. All of those
are now accessible to many more individuals than there were even a relatively short while
ago. And that I think has probably been the greatest enabling capacity of the Internet,
not so much as a distributor of high-cost, highly produced content – that's always been available – but as a means of distributing low-cost, low-effort, much more democratic content, for cultural, business, and educational uses alike.

One of the things I've been involved with at the Federal Communications Commission is ensuring an open Internet, which matters almost by physical design: while almost everybody can create content and applications, it is very difficult for most people to operate their
own network. You just can’t string your own fiber or run your own cell towers and
so the number of operators in almost every country, in any particular region, tends to be very small – a handful even counting wireless operators. Typically you have your copper-based provider, your fiber or coax-based provider, and then maybe three or four wireless operators and satellite operators. Because it costs billions of dollars
to build a network, we can't rely purely on competition to ensure that users can access the legal content they want and create the content they want, because in some cases – on both counts – they may well compete with ventures of the network provider's own. Most network providers – at least in the U.S., for example – also
distribute their own video content, they may have applications of their own and they’ve
certainly had voice applications, for example, and that’s very common for almost every
network operator. And so they have incentives to give themselves an advantage in order to
compete with other providers of content and applications. So I believe it continues to be important to have rules and mechanisms in place so that network providers cannot discriminate against providers of applications and content, because in many cases the network is essentially our primary means of accessing information of all kinds. It remains a long-term challenge to do that in ways that do not unduly interfere with the expansion of the network and do not unduly increase costs. In the U.S., one current mechanism is the FCC Open Internet Order, which spells out, at a high level, some of the conditions for how that should work. Other regions and countries, in Europe for instance, are still trying to find that balance.

One of the other challenges that I see is
that the network has become, in both good and bad ways, a commodity: we all rely on it. It's something we notice mainly when it's not around – "I can't get Internet access. What's going on here?" We expect it in every hotel, in every airport, certainly in most homes and schools, wherever. One of the things that I think is in some ways in danger is a robust research infrastructure. If you look at many of the major providers
of hardware and software and services, they used to all have significant-sized research
labs. Just to give you one example that I heard recently: Nokia – they do both network infrastructure and handsets – used to have 600 researchers in their lab. They are now down to 60. Verizon, in its previous incarnations, used to have large research labs and multiple facilities that did not just short-term but also long-term research. Telcordia, the same thing. They all used to have long-term research; they have largely discontinued it. There is only a relatively small number of companies that still do network-related research, and it more or less stays within a six-month time horizon.
Universities continue to do that. There is a vibrant research community, but it can’t
be universities by themselves, particularly because for a variety of reasons funding is
no longer nearly as available as it used to be – both funding through governments, as
well as – because of the downsizing of corporate research activities – funding available through
corporate sponsorship. If we don't have a vibrant research community, the problems I alluded to earlier – security, accessibility, the use of the network for content creation – will all suffer. We won't notice it directly; we won't see what we're missing. But without it, I think it will be much harder to solve those problems, because in many ways those research efforts created artifacts that were widely distributed and cheap to acquire, which meant lots of people could use and adopt them; they tended to be non-proprietary, with an emphasis on making sure they were available. If all we have is small-scale, venture-capital-style research, we're missing out on something.

I think it's partially the competitive
pressures: research, almost by definition, doesn't accrue benefits only to whoever does it. It's really hard to keep research secret so that nobody else benefits. You can do that in some areas, such as pharmaceuticals, where the output is a single drug that is easily patented, you have a 20-year protection horizon, and it's very difficult for somebody else to replicate exactly that prescription drug. But in networking, or computer science research in general, most of the ideas you generate are hard to contain. They distribute themselves, so to speak – through students, through publications, and all the normal mechanisms. That's a good thing; we want it to happen. But it's bad from a purely local economic-optimization standpoint, where it's easy to say, "Hey, somebody else should do the research; I'll just get the benefit." If everybody does that, you don't get any research done anymore. In the old days, we always had – and this
was more an accident than any planning – very strong government funding that wasn't concerned about those issues. We don't really care, except maybe at a national level, about who captures the benefits of research, which in itself is a problem, and you have some people who say, "Well, let the other countries – mainly the U.S. – do the research, and we'll just build the stuff, or we'll just do shorter-term development work." The other
issue is that in this environment you don't really have the set of people who can continue to do that research, because other areas have become the go-to fields – big data, graphics in some cases – so we don't have quite the same student population available. That's partially because there aren't as many research jobs in industry for people to go to. When people start a master's or PhD program, they want some assurance they will find a job afterwards, and industrial research was often a very attractive destination, because people recognized that only a very small fraction could become faculty – what else are you going to do? Industrial research offered an opportunity for a creative outlet. So there's a kind of feedback loop that's not working very well right now, and it's not clear how we can get out of this, given that government funding for research in Europe and the U.S. isn't increasing, to put it very politely; we have a decrease, which diminishes the supply of talented students who want to participate in that research.
