Documentary coverage of IGF-USA by the Imagining the Internet Center


IGF-USA 2012: Critical Internet Resources (CIRs) – Evolution of the Internet’s Technical Foundations


Brief session description:

Thursday, July 26, 2012 – Since the initiation of the Internet Governance Forum (IGF), Critical Internet Resources (CIRs) and the evolution of the Internet’s technical foundations have been a central focus of ongoing Internet governance debates. Varied views can engender misunderstandings that influence the opinions of global stakeholders, and stakeholders disagree about how CIRs should be advanced. Some propose intergovernmental approaches, while others strongly support the present bottom-up, consensus-driven models. Three foundational technological changes – IPv6, the secure Domain Name System (DNSSEC) and secure routing – framed the discussion in this workshop. Deployment of these new technical and organizational approaches raises significant challenges for stakeholders, operations and governance arrangements.

Details of the session:

The moderator for the session was Walda Roseman, chief operating officer of the Internet Society. Panelists included:

  • Steve Crocker, chair of the board of the Internet Corporation for Assigned Names and Numbers
  • John Curran, president and CEO of the American Registry for Internet Numbers
  • Richard Jimmerson, director for deployment and operationalization, Internet Society
  • Vernita Harris, deputy associate administrator in the Office of International Affairs of NTIA, US Department of Commerce

Thursday’s IGF-USA conference at Georgetown Law Center featured a panel of government and industry experts who addressed controversial issues concerning the control of critical Internet resources.

Walda Roseman, chief operating officer of the Internet Society (ISOC), chaired the discussion on the implementation and security of CIRs.

CIRs include IP addresses, domain names, routing tables and telecommunications, or what Steve Crocker, CEO and co-founder of Shinkuro Inc., Internet Hall of Fame member and chair of the board of ICANN, called the base of Internet architecture upon which everything else is built.

Moving from Internet Protocol Version 4 to IPv6

One of the most pressing concerns regarding CIRs is the transition of the Internet Protocol (commonly referred to as IP) from version 4, still the dominant protocol for Internet traffic, to version 6.

IPv4 uses 32-bit addresses, allowing for approximately 4.3 billion unique IP addresses, but the growth of the Internet has exceeded those limits. IPv6 uses 128-bit addresses, allowing for about 3.4×10^38 unique addresses. That is roughly 4.8×10^28 addresses for each of the seven billion people alive in 2012.
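
As a quick back-of-the-envelope check, the short Python sketch below simply recomputes the address-space figures quoted above; it is an illustration, not material from the session.

```python
# Back-of-the-envelope check of the address-space figures quoted above.
ipv4_addresses = 2 ** 32            # 32-bit address space
ipv6_addresses = 2 ** 128           # 128-bit address space
world_population_2012 = 7_000_000_000

print(f"IPv4 addresses: {ipv4_addresses:,}")        # ~4.3 billion
print(f"IPv6 addresses: {ipv6_addresses:.2e}")      # ~3.40e+38
print(f"IPv6 addresses per person: {ipv6_addresses / world_population_2012:.2e}")  # ~4.86e+28
```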

John Curran speaks about critical internet resources during IGF-USA conference in Washington, D.C. on July 26, 2012.

Because headers on IPv4 packets and IPv6 packets are quite different, the two protocols are not interoperable, so networks and hosts run them side by side in what is called a “dual stack.”

However, IPv6 is, in general, seen to be a conservative extension of IPv4. Most transport and application-layer protocols need little or no change to operate over IPv6. The exceptions to this are the application protocols that embed internet-layer addresses, such as FTP and NTPv3. In these, the new address format may cause conflicts with existing protocol syntax.
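
In practice, one common way a server participates in the dual stack is to open a single IPv6 listening socket and clear the IPV6_V6ONLY option so that IPv4 clients arrive as IPv4-mapped addresses. The sketch below is only an illustration of that pattern, not anything presented at the workshop; the port number is arbitrary and operating-system defaults for IPV6_V6ONLY vary.

```python
import socket

# A minimal dual-stack TCP listener: a single IPv6 socket that also accepts
# IPv4 clients as IPv4-mapped addresses (::ffff:a.b.c.d). Whether IPV6_V6ONLY
# can be cleared like this depends on operating-system defaults; the port
# number here is arbitrary.
server = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
server.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("::", 8080))          # "::" is the IPv6 wildcard address
server.listen(5)

conn, addr = server.accept()       # IPv4 clients show up as ::ffff:x.x.x.x
print("client connected from", addr)
conn.close()
server.close()
```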

Internet service providers, the Internet Society and many large Internet-based enterprises worked to support a World IPv6 Launch on June 6 this year to help accelerate the adoption of IPv6.

John Curran, president and CEO of the American Registry for Internet Numbers, said upgrading to IPv6 is a necessary step for “any enterprise that wants to still be in business in five years,” because it enables them to continue to reach new customers and grow.

When asked about the costs or burdens of upgrading to IPv6 for small businesses, Curran explained that in most cases the burden would fall on the hosting company through which they run their website.

Chris Griffiths, director of high-speed Internet and new business engineering for Comcast, confirmed this, stating his company would have to upgrade to continue to attract new clients.

Security issues always loom large in Internet evolution

The development of the Internet has led to a need for the Domain Name System Security Extensions, or DNSSEC. Curran explained that DNSSEC helps maintain the integrity of the Internet by ensuring the information users obtain actually comes from the source they believe they are corresponding with, essentially preventing redirection to fraudulent websites.
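
For readers curious about what DNSSEC looks like on the wire, the sketch below, which relies on the third-party dnspython package and treats Google Public DNS and icann.org as illustrative choices of validating resolver and signed zone, sends a query with the DNSSEC-OK bit set and checks whether signature records came back and whether the resolver set the Authenticated Data (AD) flag.

```python
import dns.flags
import dns.message
import dns.query
import dns.rdatatype

# Query a validating resolver with the DNSSEC-OK (DO) bit set, then check for
# RRSIG records in the answer and for the Authenticated Data (AD) flag, which
# the resolver sets only when the chain of signatures validates. Large signed
# answers may be truncated over UDP; dns.query.tcp works the same way.
query = dns.message.make_query("icann.org", dns.rdatatype.A, want_dnssec=True)
response = dns.query.udp(query, "8.8.8.8", timeout=5)

has_rrsig = any(rrset.rdtype == dns.rdatatype.RRSIG for rrset in response.answer)
validated = bool(response.flags & dns.flags.AD)

print("RRSIG records returned:", has_rrsig)
print("Resolver validated the answer (AD flag):", validated)
```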

Redirection could come from hackers, hijackers and phishers, but also from the US government, should initiatives such as SOPA or PIPA pass.

“My primary interest is keeping the open Internet alive,” said Richard Jimmerson, director of deployment and operationalization for ISOC. “Somebody in this room will want to invent the next Facebook or Yahoo! Today, that is possible, but if we do not pay attention to certain things, that may not be possible anymore.”

Griffiths said Comcast and other Internet technology companies work together through governance processes now in place to address security vulnerabilities and avoid future risk, to make adjustments in infrastructure and to deal with other emerging challenges.

Steve Crocker speaks about critical internet resources during IGF-USA conference in Washington, D.C. on July 26, 2012.

Conflicts arise over the management of CIRs

The US government currently maintains the most control globally over CIRs. Some critics around the world are uneasy with this arrangement, fearing that the United States may abuse its power. Some have also proposed laying out a roadmap for the Internet covering the next 20 years.

Curran addressed these concerns by stating that the US government has a positive track record of respectful and neutral administration of its responsibility for CIRs, leaving most operational details to multistakeholder global governance bodies such as the Internet Engineering Task Force and ICANN. He added that such a roadmap would not likely be effective because there are too many unknowns moving forward.

Vernita Harris, deputy associate administrator of the National Telecommunications and Information Administration, explained that the newest Internet Assigned Numbers Authority (IANA) contract indicates it expects that ICANN and aspects of control over the Internet architecture “will be multi-stakeholder driven, addressing the concerns of all users both domestic and international.”

— Brennan McGovern

Internet Governance Forum-USA, 2011: New Challenges to Critical Internet Resources – Blocking and Tackling, New Risks and Solutions


Brief description:

The security, stability and resiliency of the Internet are recognized as vital to the continued successful growth of the Internet as a platform for worldwide communication, commerce and innovation. This panel focused on Domain Name System (DNS) blocking and filtering, the implementation of Internet Protocol version 6 (IPv6) and critical Internet resources in general. The panel addressed some of the implications of blocking and filtering; the implementation of DNSSEC and IPv6 on a national basis; and future challenges to the Internet in a mobile age.

Details of the session:

The Internet’s success well into the future may be largely dependent on how it responds and reacts to new challenges, according to panelists in a session on critical Internet resources at the IGF-USA conference July 18 in Washington, D.C.

The Internet continues to evolve. It is also growing, as it becomes accessible to billions more people. A major challenge now and in years to come is to make the Internet more secure while continuing to promote openness, accessibility, transparency, bottom-up decision-making, cooperation and multistakeholder engagement. It is important that organizations continue to retain these values as much as possible as they react to cybersecurity and cybertrust issues.

This workshop was conducted in two parts, both moderated by Sally Wentworth, senior manager of public policy for the Internet Society. Panelists included:

  • John Curran, president and CEO of the American Registry for Internet Numbers (ARIN)
  • Steve Crocker, CEO and co-founder of Shinkuro, and vice chair of the board for ICANN, the Internet Corporation for Assigned Names and Numbers
  • George Ou, expert analyst and blogger for High Tech Forum (http://www.hightechforum.org/)
  • Rex Bullinger, senior director for broadband technology for the National Cable and Telecommunications Association (NCTA)
  • Paul Brigner, chief technology policy officer, Motion Picture Association of America (MPAA)
  • Bobby Flaim, special agent in the Technical Liaison Unit of the Operational Technology Division of the FBI
  • David Sohn, senior policy counsel for the Center for Democracy and Technology (CDT)
  • Don Blumenthal, senior policy adviser for the Public Interest Registry (PIR)
  • Jim Galvin, director of strategic relationships and technical standards for Afilias

July 18, 2011 – Sally Wentworth moderated the discussion at the New Challenges to Critical Internet Resources workshop at the IGF-USA conference.

Wentworth began the session by referring to it by an alternate title, “Blocking and Tackling.” The alternate title proved appropriate, as the two sets of panelists became more and more passionate in their assertions during the discussions on two topics that affect the future health of the Internet: the implementations of IP version 6, or IPv6, and especially Domain Name System (DNS) blocking and filtering.

IPv6

The first panel consisted of John Curran, Steve Crocker, Rex Bullinger, Jim Galvin and Bobby Flaim. It centered on a discussion of IPv6 and the difficulty of implementing a system that is viewed as the “broccoli of the Internet” – something that is technologically necessary but is less of a priority for consumers because they have little incentive to switch.

The technological necessity is an inevitability. IPv4 provides about 4.3 billion unique IP addresses, and the central pool of addresses ran dry Feb. 3, 2011, when the last five blocks were handed out to the five Regional Internet Registries. Depending on the rate of growth, Curran explained, those addresses may not last very long. In fact, the Asia-Pacific region’s registry has already handed out all of its addresses. The 7 billion people in the world cannot fit into 4.3 billion addresses, especially when most users have more than one address to their names.

“We have to move, the hard part is getting them to move,” Curran said, referring to consumers.

The biggest problem is that IPv6 and IPv4 are two languages that don’t talk to each other.

“You have to enable IPv6 in order to speak to IPv6 Internet users. It’s literally possible for us to break the global Internet if we don’t transition to IPv6,” Curran said.

The dual-stack model was identified by the Internet Engineering Task Force (IETF) as the most effective and efficient way to begin running IPv6 alongside IPv4.

“What we have is a coexistence process rather than a transition,” Crocker explained. “We’re going to have these two existing in parallel. You’ve got pure IPv4 exchanges and conversations, pure IPv6 exchanges, then you’ll have somewhat more complex interchanges between IPv4 users and IPv6 systems, and those will require some sort of translation.”

“The ISPs,” Curran said, “are stuck in the trap as to whether there’s enough demand to make the switch.”

Currently, most devices, such as laptops, are IPv6-enabled and have been for some time, so they are coexisting on both protocols rather than cutting over from IPv4 all at once, Curran said.
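
That coexistence is easy to observe from the client side by asking the resolver for both address families of a dual-stacked name. The sketch below is an illustration only; the hostname is an assumption, and any name that publishes both A and AAAA records will do.

```python
import socket

# List the IPv4 (A) and IPv6 (AAAA) addresses a dual-stacked hostname resolves
# to; a host reachable over both protocols typically returns both families.
host = "www.google.com"   # assumed dual-stacked; any name with A and AAAA records works

for family, _type, _proto, _canonname, sockaddr in socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(f"{label}: {sockaddr[0]}")
```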

“Things are not going to be replaced overnight,” Galvin said. “The providers are ready, but the customers aren’t. The laptops are ready, but the interaction is not ready.”

There are other problematic implications of enacting the new IP version, particularly around knowing how IP addresses are being allocated and to whom, and around logging network address translations (NATs), according to Flaim. Another element was addressed by an audience member – the possible advantages held by countries with less infrastructure than the United States. Is the United States being left behind because its Internet is too big? Crocker contended that it is a possible scenario that some developing regions could leapfrog over IPv4 and go directly to IPv6.

DNS Blocking and Filtering

The second panel of the afternoon consisted of Crocker, George Ou, Paul Brigner, Galvin, David Sohn and Don Blumenthal and centered on a lively discussion about the merits of DNS blocking and filtering as an answer to copyright infringement.

The panel was divided on its views – some felt that DNS filtering and blocking represented the perfect answer to copyright infringement and media piracy, while others felt that a solution based on technical adjustment was the wrong way to approach what they deemed a human problem, not a technical problem. While a consensus was not achieved, the discussion addressed serious concerns about the damaging effects of illegal downloading on the content production industry, as well as the greater constitutional, technical and political implications of the use of DNS filtering and blocking.

Panelists referenced the Protect IP legislation that is currently in the U.S. Senate. The legislation is aimed at off-shore websites that infringe copyright laws by hosting pirated media. One of the ways the bill works is to undercut the sites by going after their advertising and funding sources.

July 18, 2011 – The New Challenges to Critical Internet Resources workshop engages the audience at the IGF-USA conference.

The trouble, Crocker explained, is that the blocks or filters are not only “trivial to get around,” but the motivation to get around them is there. Sohn agreed that DNS filtering is not beneficial, especially in this case, because it is easy to evade. Using DNS filtering to prevent users from accessing entire domains that may contain illegal content, rather than addressing the specific infringing content itself, is too broad, Sohn suggested.

Galvin agreed: “While content providers have a legitimate need to protect their assets, there seems to be the automatic assumption that DNS filtering is the right way to do that. You’re concerned about content, so you want to do content filtering.”

Galvin cautioned the panel and the audience to be aware of consequential damages.

Sohn also raised concerns about prior restraint. “Under prior restraint law, you cannot restrict speech unless it’s actually proven illegal,” he said. “DNS filtering orders would often occur on a preliminary basis, as part of a restraining order, rather than after a full legal determination by a court.”

There was further discussion that, on a technical level, the use of these tactics would be problematic because it would break DNSSEC and destabilize the Internet, especially since DNSSEC was designed to detect exactly that kind of redirection, Crocker maintained.
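
Crocker’s point can be seen directly: when a validating resolver is handed an answer whose signatures do not check out, which is what a DNS-level redirect produces for a signed zone, it refuses to return the data at all. The sketch below is an illustration rather than anything shown at the workshop; it again assumes the third-party dnspython package and uses dnssec-failed.org, a deliberately mis-signed test zone, to trigger that behavior.

```python
import dns.message
import dns.query
import dns.rcode

# Ask a validating resolver for a zone whose DNSSEC signatures are deliberately
# broken. Because the answer cannot be validated, the resolver returns SERVFAIL
# rather than handing back data it cannot verify -- the same failure a signed
# zone would produce if a DNS filter rewrote its responses.
query = dns.message.make_query("dnssec-failed.org", "A", want_dnssec=True)
response = dns.query.udp(query, "8.8.8.8", timeout=5)

print("Response code:", dns.rcode.to_text(response.rcode()))   # expected: SERVFAIL
```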

On the other side of the issue, Brigner explained that DNS filtering would remove criminal sites from the “global phonebook,” preventing individuals from accessing them and slowing the spread of pirated media.

“We’re not asking for a new row of cannons,” he said in reference to Crocker’s earlier metaphor about the Vasa, a Swedish warship that sank because its poor design was based on the king’s desire for more firepower in spite of architectural faults. “Let’s use sound engineering principles.”

An audience member’s suggestion of an industry seal program was met with varying levels of support and dissent. Ou said an industry seal could be easily counterfeited, while others felt that a “human” solution, rather than a technical one, was the more appropriate answer to the problem.

In the end, the dialogue raised many new questions about the real world implications of IPv6 and DNS blocking technologies.

“It’s important that these various communities find a way to address the issues in a way that’s respectful of the technology, respectful of the global Internet and people’s need to carry out their business, and the freedom of expression,” Wentworth said in closing.

“There’s a lot of competing interests, but they don’t have to be mutually exclusive.”

– Bethany Swanson