Documentary coverage of IGF-USA by the Imagining the Internet Center

Posts Tagged ‘IPv6’

IGF-USA 2012: Critical Internet Resources (CIRs) – Evolution of the Internet’s Technical Foundations


Brief session description:

Thursday, July 26, 2012 – Since the initiation of the Internet Governance Forum (IGF), Critical Internet Resources (CIR) and the evolution of the Internet’s technical foundations have been a central focus of ongoing Internet governance debates. Varied views can engender misunderstandings that influence the opinions of global stakeholders, and different views exist about how to advance CIRs. International governmental approaches are proposed by some, while others strongly support the present bottom-up, consensus-driven models. Three foundational technological changes – IPv6, secure Domain Name System (DNSsec) and secure routing – framed the discussion in this workshop. Deployment of these new technical and organizational approaches raises significant challenges to stakeholders, operations and governance arrangements.

Details of the session:

The moderator for the session was Walda Roseman, chief operating officer of the Internet Society. Panelists included:

  • Steve Crocker, chair of the board of the Internet Corporation for Assigned Names and Numbers
  • John Curran, president and CEO of the American Registry for Internet Numbers
  • Richard Jimmerson, director for deployment and operationalization, Internet Society
  • Vernita Harris, deputy associate administrator in the Office of International Affairs of NTIA, US Department of Commerce

Thursday’s IGF-USA conference at Georgetown Law Center featured a panel of government and corporate experts who addressed controversial issues concerning the control of critical Internet resources.

Walda Roseman, chief operating officer of the Internet Society (ISOC), chaired the discussion on the implementation and security of CIRs.

CIRs include IP addresses, domain names, routing tables and telecommunications, or what Steve Crocker, CEO and co-founder of Shinkuro Inc., Internet Hall of Fame member and chair of the board of ICANN, called the base of Internet architecture upon which everything else is built.

Moving from Internet Protocol Version 4 to IPv6

One of the most pressing concerns regarding CIRs is the transition of the Internet Protocol (commonly referred to as IP) from version 4, still the dominant protocol for Internet traffic, to version 6.

IPv4 uses 32-bit addresses, allowing for approximately 4.3 billion unique IP addresses, but the growth of the Internet has exceeded those limits. IPv6 uses 128-bit addresses, allowing for about 3.4×10^38 unique addresses. This number is equal to approximately 4.8×10^28 addresses for each of the seven billion people alive in 2012.
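
As a back-of-the-envelope check, those figures follow directly from the address widths; a few lines of Python reproduce the arithmetic:

    # Back-of-the-envelope check of the address-space figures cited above.
    ipv4_addresses = 2 ** 32                 # about 4.3 billion
    ipv6_addresses = 2 ** 128                # about 3.4 x 10^38
    world_population_2012 = 7_000_000_000

    print(f"IPv4 addresses: {ipv4_addresses:,}")
    print(f"IPv6 addresses: {ipv6_addresses:.2e}")
    print(f"IPv6 addresses per person: {ipv6_addresses / world_population_2012:.2e}")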

John Curran speaks about critical Internet resources during the IGF-USA conference in Washington, D.C., on July 26, 2012.

Because the headers on IPv4 packets and IPv6 packets are quite different, the two protocols are not interoperable, so during the transition both are being run side by side in what is called a “dual stack.”
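
In practical terms, a dual-stack service listens over both protocols at once. A minimal illustrative sketch in Python, assuming an operating system that lets a single IPv6 socket also accept IPv4 clients when the IPV6_V6ONLY option is disabled:

    import socket

    # Minimal dual-stack TCP listener: a single IPv6 socket that also accepts IPv4
    # clients as IPv4-mapped addresses (::ffff:a.b.c.d), on systems that allow
    # IPV6_V6ONLY to be turned off.
    sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)  # accept IPv4 too
    sock.bind(("::", 8080))  # "::" means all interfaces, v4 and v6
    sock.listen(5)

    conn, addr = sock.accept()      # blocks until a client connects
    print("connection from", addr)  # an IPv4 peer shows up as ::ffff:x.x.x.x
    conn.close()
    sock.close()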

However, IPv6 is, in general, seen to be a conservative extension of IPv4. Most transport and application-layer protocols need little or no change to operate over IPv6. The exceptions to this are the application protocols that embed internet-layer addresses, such as FTP and NTPv3. In these, the new address format may cause conflicts with existing protocol syntax.

Internet service providers, the Internet Society and many large Internet-based enterprises supported the World IPv6 Launch on June 6, 2012, to help accelerate the adoption of IPv6.

John Curran, president and CEO of the American Registry for Internet Numbers, said upgrading to IPv6 is a necessary step for “any enterprise that wants to still be in business in five years,” because it enables them to continue to reach new customers and grow.

When asked about the costs or burdens of upgrading to IPv6 for small businesses, Curran explained that in most cases the burden would fall on the hosting company through which they run their website.

Chris Griffiths, director of high-speed Internet and new business engineering for Comcast, confirmed this, stating his company would have to upgrade to continue to attract new clients.

Security issues always loom large in Internet evolution

The development of the Internet has led to a need for Domain Name System Security Extensions, or DNSSEC. Curran explained that DNSSEC maintains the integrity of the Internet by ensuring that the information users obtain comes from the source they believe they are corresponding with, essentially preventing redirection to fraudulent websites.
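
From a client’s point of view, that integrity check can be observed by asking a validating resolver to include DNSSEC data and then looking at the Authenticated Data (AD) flag in its answer. The sketch below is illustrative only and assumes the third-party dnspython package (version 2.x) and an upstream resolver that performs DNSSEC validation:

    import dns.flags
    import dns.resolver

    # Request DNSSEC records for a signed zone and check whether the resolver
    # set the AD (Authenticated Data) flag on its answer.
    resolver = dns.resolver.Resolver()
    resolver.use_edns(0, dns.flags.DO, 4096)  # DO bit asks for DNSSEC signatures

    answer = resolver.resolve("example.com", "A")  # example.com is a signed zone
    validated = bool(answer.response.flags & dns.flags.AD)

    print("addresses:", [rr.address for rr in answer])
    print("resolver reports DNSSEC-validated data:", validated)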

Redirection could come from hackers, hijackers and phishers, but also from the US government, should initiatives such as SOPA or PIPA pass.

“My primary interest is keeping the open Internet alive,” said Richard Jimmerson, director of deployment and operationalization for ISOC. “Somebody in this room will want to invent the next Facebook or Yahoo! Today, that is possible, but if we do not pay attention to certain things, that may not be possible anymore.”

Griffiths said Comcast and other Internet technology companies work together through governance processes now in place to address, for example, security vulnerabilities that demand action to avoid future risk, to make adjustments to infrastructure and to deal with other emerging challenges.

Steve Crocker speaks about critical Internet resources during the IGF-USA conference in Washington, D.C., on July 26, 2012.

Conflicts arise over the management of CIRs

The US government currently maintains the most control globally over CIRs. This is not well received by some critics around the world, who fear that the United States may abuse its power. Some have also said they would like to see a roadmap for the Internet covering the next 20 years.

Curran addressed these concerns by stating that the US government has a positive track record of respectful and neutral administration of its responsibility for CIRs, leaving most operational details to multistakeholder global governance bodies such as the Internet Engineering Task Force and ICANN. He added that such a roadmap would not likely be effective, as there are too many unknowns moving forward.

Vernita Harris, deputy associate administrator of the National Telecommunications and Information Administration, explained that the newest Internet Assigned Numbers Authority (IANA) contract indicates it expects that ICANN and aspects of control over the Internet architecture “will be multi-stakeholder driven, addressing the concerns of all users both domestic and international.”

— Brennan McGovern

Internet Governance Forum-USA 2011: New Challenges to Critical Internet Resources – Blocking and Tackling, New Risks and Solutions


Brief description:

The security, stability and resiliency of the Internet are recognized as vital to the continued successful growth of the Internet as a platform for worldwide communication, commerce and innovation. This panel focused on domain name service blocking and filtering and the implementation of Internet Protocol version 6 (IPv6) and critical Internet resources in general. The panel addressed some of the implications of blocking and filtering; the implementation of DNSSEC and IPv6 on a national basis; and future challenges to the Internet in a mobile age.

Details of the session:

The Internet’s success well into the future may be largely dependent on how it responds and reacts to new challenges, according to panelists in a session on critical Internet resources at the IGF-USA conference July 18 in Washington, D.C.

The Internet continues to evolve. It is also growing, as it becomes accessible to billions more people. A major challenge now and in years to come is to make the Internet more secure while continuing to promote openness, accessibility, transparency, bottom-up decision-making, cooperation and multistakeholder engagement. It is important that organizations continue to retain these values as much as possible as they react to cybersecurity and cybertrust issues.

This workshop was conducted in two parts, both moderated by Sally Wentworth, senior manager of public policy for the Internet Society. Panelists included:

  • John Curran, president and CEO of the American Registry for Internet Numbers (ARIN)
  • Steve Crocker, CEO and co-founder of Shinkuro, and vice chair of the board for ICANN, the Internet Corporation for Assigned Names and Numbers
  • George Ou, expert analyst and blogger for High Tech Forum (http://www.hightechforum.org/)
  • Rex Bullinger, senior director for broadband technology for the National Cable and Telecommunications Association (NCTA)
  • Paul Brigner, chief technology policy officer, Motion Picture Association of America (MPAA)
  • Bobby Flaim, special agent in the Technical Liaison Unit of the Operational Technology Division of the FBI
  • David Sohn, senior policy counsel for the Center for Democracy and Technology (CDT)
  • Don Blumenthal, senior policy adviser for the Public Interest Registry (PIR)
  • Jim Galvin, director of strategic relationships and technical standards for Afilias

JULY 18, 2011 – Sally Wentworth moderated the discussion at the New Challenges to Critical Internet Resources workshop at the IGF-USA conference.

Wentworth began the session by referring to it by an alternate title, “Blocking and Tackling.” The alternate title proved appropriate, as the two sets of panelists became more and more passionate in their assertions during the discussions on two topics that affect the future health of the Internet: the implementations of IP version 6, or IPv6, and especially Domain Name System (DNS) blocking and filtering.

IPv6

The first panel consisted of John Curran, Steve Crocker, Rex Bullinger, Jim Galvin and Bobby Flaim. It centered on a discussion of IPv6 and the difficulty of implementing a system that is viewed as the “broccoli of the Internet” – something that is technologically necessary but, for consumers, a low priority because there is little incentive to switch.

The technological necessity is an inevitability. IPv4 has about 4.3 billion independent IP addresses, and the central pool of those addresses ran dry Feb. 3, 2011, when the last five blocks were handed out to the five regional Internet registries. Depending on the rate of growth, Curran explained, those addresses may not last very long; in fact, the Asia-Pacific region has already handed out all of its addresses. The 7 billion people in the world cannot fit into 4.3 billion addresses, especially when most have more than one address to their names.
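
For a sense of scale, each of those final five blocks was a /8, roughly 16.8 million addresses apiece. Python’s standard ipaddress module makes the arithmetic concrete; the 10.0.0.0/8 prefix below is only a stand-in to show the size of such a block:

    import ipaddress

    # Each of the last five IANA blocks handed to the regional registries was a /8.
    block = ipaddress.ip_network("10.0.0.0/8")
    print(f"{block.num_addresses:,} addresses in one /8")       # 16,777,216

    total_ipv4 = ipaddress.ip_network("0.0.0.0/0").num_addresses
    print(f"{total_ipv4:,} addresses in all of IPv4")           # 4,294,967,296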

“We have to move, the hard part is getting them to move,” Curran said, referring to consumers.

The biggest problem is that IPv6 and IPv4 are two languages that don’t talk to each other.

“You have to enable IPv6 in order to speak to IPv6 Internet users. It’s literally possible for us to break the global Internet if we don’t transition to IPv6,” Curran said.

The dual-stack model was identified by the Internet Engineering Task Force (IETF) as the most effective and efficient way to begin running IPv6 alongside IPv4.

“What we have is a coexistence process rather than a transition,” Crocker explained. “We’re going to have these two existing in parallel. You’ve got pure IPv4 exchanges and conversations, pure IPv6 exchanges, then you’ll have somewhat more complex interchanges between IPv4 users and IPv6 systems, and those will require some sort of translation.”
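
The “sort of translation” Crocker describes takes concrete forms in practice. One example is NAT64 address synthesis (RFC 6052), in which an IPv4 address is embedded in the well-known IPv6 prefix 64:ff9b::/96; another is the IPv4-mapped form a dual-stack host uses to represent an IPv4 peer. A short illustrative sketch using Python’s standard ipaddress module:

    import ipaddress

    # 1) NAT64 synthesis (RFC 6052): embed an IPv4 address in the well-known
    #    prefix 64:ff9b::/96 so an IPv6-only client can reach an IPv4-only
    #    server through a translator.
    v4 = ipaddress.IPv4Address("192.0.2.1")  # documentation address
    nat64 = ipaddress.IPv6Address(int(ipaddress.IPv6Address("64:ff9b::")) | int(v4))
    print(nat64)  # 64:ff9b::c000:201

    # 2) IPv4-mapped addresses: how a dual-stack host represents an IPv4 peer.
    mapped = ipaddress.IPv6Address("::ffff:192.0.2.1")
    print(mapped.ipv4_mapped)  # 192.0.2.1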

“The ISPs,” Curran said, “are stuck in the trap as to whether there’s enough demand to make the switch.”

Currently, most devices, such as laptops, are IPv6 enabled and have been for some time, so they’re coexisting, rather than transitioning directly from IPv4, Curran said.

“Things are not going to be replaced overnight,” Galvin said. “The providers are ready, but the customers aren’t. The laptops are ready, but the interaction is not ready.”

There are other problematic implications of enacting the new IP version, particularly around ensuring it is known how IP addresses are being allocated and to whom, and around logging network address translations (NATs), according to Flaim. Another element was addressed by an audience member: the possible advantages held by countries with less infrastructure than the United States. Is the United States being left behind because its Internet is too big? Crocker contended that it is a possible scenario that some developing regions could leapfrog over IPv4 and go directly to IPv6.

DNS Blocking and Filtering

The second panel of the afternoon consisted of Crocker, George Ou, Paul Brigner, Galvin, David Sohn and Don Blumenthal and centered on a lively discussion about the merits of DNS blocking and filtering as an answer to copyright infringement.

The panel was divided in its views – some felt that DNS filtering and blocking offered an effective answer to copyright infringement and media piracy, while others felt that a solution based on technical adjustment was the wrong way to approach what they deemed a human problem, not a technical problem. While a consensus was not reached, the discussion addressed serious concerns about the damaging effects of illegal downloading on the content production industry, as well as the greater constitutional, technical and political implications of the use of DNS filtering and blocking.

Panelists referenced the Protect IP legislation that is currently in the U.S. Senate. The legislation is aimed at off-shore websites that infringe copyright laws by hosting pirated media. One of the ways the bill works is to undercut the sites by going after their advertising and funding sources.

JULY 18, 2011 – New Challenges to Critical Internet Resources workshop engages audience at the IGF-USA conference.

The trouble, Crocker explained, is that the blockages or filters are not only “trivial to get around,” but users also have ample motivation to do so. Sohn agreed that DNS filtering is not beneficial, especially in this case, because it is easy to evade. Using DNS filtering to prevent users from accessing entire domains that may contain illegal content, rather than addressing the specific infringing content itself, is too broad, Sohn suggested.
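
The “trivial to get around” point is easy to see in a sketch: DNS filtering is typically applied at a particular resolver, so a user who switches to a different resolver simply gets an unfiltered answer. The snippet below is purely illustrative and assumes the third-party dnspython package (version 2.x); the resolver addresses and the filtered name are placeholders, not real examples of blocking:

    import dns.exception
    import dns.resolver

    # 192.0.2.1 stands in for a filtering ISP resolver, 8.8.8.8 is a public
    # resolver, and example.com stands in for a filtered name.
    blocked_name = "example.com"

    for server in ["192.0.2.1", "8.8.8.8"]:
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [server]
        r.lifetime = 3.0  # keep the sketch from hanging on an unreachable resolver
        try:
            answer = r.resolve(blocked_name, "A")
            print(server, [rr.address for rr in answer])
        except dns.resolver.NXDOMAIN:
            print(server, "name filtered at this resolver (NXDOMAIN)")
        except dns.exception.DNSException as exc:
            print(server, "lookup failed:", exc)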

Galvin agreed: “While content providers have a legitimate need to protect their assets, there seems to be the automatic assumption that DNS filtering is the right way to do that. You’re concerned about content, so you want to do content filtering.”

Galvin cautioned the panel and the audience to be aware of consequential damages.

Sohn also raised concerns about prior restraint. “Under prior restraint law, you cannot restrict speech unless it’s actually proven illegal,” he said. “DNS filtering orders would often occur on a preliminary basis, as part of a restraining order, rather than after a full legal determination by a court.”

There was further discussion that, on a technical level, the use of these tactics would be problematic because it would break DNSSEC and destabilize the Internet, especially since DNSSEC was designed to detect exactly that kind of redirection, Crocker maintained.

On the other side of the issue, Brigner explained that DNS filtering would remove criminal sites from the “global phonebook,” preventing individuals from accessing them and thereby curbing the consumption of pirated media.

“We’re not asking for a new row of cannons,” he said, in reference to Crocker’s earlier metaphor about the Vasa, a Swedish warship that sank because its poor design was based on the king’s desire for more firepower in spite of architectural faults. “Let’s use sound engineering principles.”

An audience member’s suggestion of an industry seal program was met with varying levels of support and dissent. Ou argued that an industry seal could easily be counterfeited, while others felt that a “human” solution, rather than a technical one, was the more appropriate answer to the problem.

In the end, the dialogue raised many new questions about the real world implications of IPv6 and DNS blocking technologies.

“It’s important that these various communities find a way to address the issues in a way that’s respectful of the technology, respectful of the global Internet and people’s need to carry out their business, and the freedom of expression,” Wentworth said in closing.

“There’s a lot of competing interests, but they don’t have to be mutually exclusive.”

– Bethany Swanson

Internet Governance Forum-USA 2011: Potential-future scenario discussion – Regionalization of the Internet


Brief description:

IGF participants broke into three different rooms to discuss three possible potential-future scenarios for the Internet in 2025. In this session, the brief description given to the discussants asked them to respond to the idea of the “Regionalization of the Internet” – a future in which the mostly global Internet we know today becomes more divided, with certain aspects isolated from others based on geographic or economic similarities. The description noted that “natural and man-made disasters could easily accelerate this process, leading to an alternate future where the differences between these islands is more pronounced and e-conflict between regions becomes a significant national security and economic development issue.”

Details of the session:

Garland McCoy of the Technology Education Institute and Andrew Mack and Alessandra Carozza of AMGlobal were at the front of the room to facilitate a wide-ranging discussion of the Regionalization of the Internet potential-future scenario at the Internet Governance Forum-USA 2011 at Georgetown University Law Center July 18.

This scenario sets up a divisive future for the Internet. You can read the full description used to launch this discussion in PDF format at the following link: http://www.elon.edu/docs/e-web/predictions/igf_usa/Regionalization_Internet_Scenario.pdf

The key drivers to consider as causes for regionalization of the Internet were:

  • National and corporate security concerns and increased pressure from non-state actors based in “failed state” regions of the world.
  • Global economic weakness, budget crises and significant, systemic unemployment.
  • Shortages of food and raw materials leading to rises in the prices for commodities, food and energy and supply chain/trade disruptions.
  • A rising “black market” dominated by narco/political/religious groups with increasing technical sophistication.
  • Expansion of IPv6 and the “Internet of things” creates an environment where citizens can be easily tracked within a region and where a market in false identities flourishes.

While participants and moderators considered it a “bleak scenario,” most of the discussants in this session indicated that the outcomes outlined are not only plausible but that some are already occurring, and at a faster rate than previously anticipated.

JULY 18, 2011 - Members of the audience participated in Regionalization of the Internet, a session held during the Internet Governance Forum USA 2011. Conference attendees were encouraged to enter into discussion during the day's events.

Scenario facilitator Andrew Mack described the regionalization scenario as unique among the other scenarios presented today in that it is “the only scenario that is actually coming to pass.”

“A good chunk is plausible,” said Leslie Martinkovics, an IGF participant from Verizon Communications. “When we’re looking at what’s happening today, there are a series of pressures, some economic, some security related. These are all real. There is a growing feeling that change is coming.”

Security is seen as the paramount concern in many areas of the world, prompting some governments to block certain domains, as with the “Great Firewall of China.” The problem is that this blocking process is easily circumvented; George Ou of Digital Society maintained that the “Great Wall” is often considered “porous.” China was mentioned as a key player in the rising challenges to the case against regionalization. Other governments listed as key “players” in the conversation included Brazil, Iran and India.

“Any attempts to isolate, to protect, fail,” said Bill Smith, a participant from PayPal. Attempts at blocking, he said, “are doomed to fail as well.”

The prominence of the hacking group Anonymous during the Arab Spring was a catalyst for discussion about the viability of regulating such isolated webs, or “islands,” and about whether a more unitary Internet is more desirable.

“In order to dissuade users from building up isolated Webs, it’s important to build up the single, unitary net and make it better,” Smith said.

“The Internet,” said Sally Wentworth of the Internet Society, “is a tool. It is not the cause, it’s an enabler. People want to communicate, people want to create. It’s very difficult to put that genie back in the bottle and carve it up.”

Because there is a fundamental need for communication across islands, it was asserted by a number of participants that regionalization may not even be possible. The Arab Spring, Wentworth and others explained, is an example of an inability to maintain separate communities within the greater Web.

The existence of dark nets was referenced as a refutation of the inherently unitary nature of the Web. Scott McCormick explained that dark nets, which are essentially intranets, have existed for quite some time. North Korea, he contended, is a dark net and has been for a while, with very few people having access to it. Governments like North Korea’s have opted out of a global, unitary Web, but the moderators and panelists questioned whether that action is truly possible.


“Can you really opt out?” Mack asked. He noted that existing within the metaphorical “castle,” or isolated intranet, does not necessarily mean there is isolation within the castle itself, and living in the castle does not necessarily guarantee protection.

“If all your people don’t live in the castle, you can’t protect them,” Mack said.

There are technical hindrances to fragmenting the Web. When countries try, they do so at the DNS level, not at the IP level, according to McCormick, which is what makes it easy for users with means and motivation to work around the blockages. The introduction of IPv6 will greatly affect users’ ability to navigate around those blockades because it will make IP addresses much harder to memorize, which is the way most users avoid the blockages, McCormick said.
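
McCormick’s memorization point can be sketched with Python’s standard socket module: bypassing a DNS-level block means knowing a site’s numeric address in advance, and IPv6 addresses are far longer than their IPv4 counterparts. The addresses in the comments are examples of the two formats, not guaranteed current values:

    import socket

    # Look up a name both ways; a user who had noted the numeric address could
    # reach the site even if a resolver refused to answer for the name.
    name = "example.com"
    print(socket.gethostbyname(name))  # a short IPv4 address, e.g. 93.184.216.34
    print(socket.getaddrinfo(name, 443, socket.AF_INET6)[0][4][0])  # a much longer IPv6 address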

Those in the group in favor of regionalization felt that isolation might make security more plausible and more manageable. Tom Lowenhaupt, who advocates for the development of a .nyc TLD, explained that top-level domains (TLDs) are the way to enable regionalization, and that applying security to those TLDs enables a more private, more secure, more manageable and more intuitive Internet. Those against regionalization countered that it may open the door to a host of other, more problematic issues; the goal is the minimum amount of regulation for the greatest effectiveness, Smith said.

The future governance of the Internet will be determined by three major players: general users, who may not feel a personal stake in Internet governance; the criminal element, such as Anonymous, which has a major stake in Internet governance but whose influence may be undesirable; and a disaffected group that may not feel it has a stake until circumstances start to change. What will come to pass remains to be seen, but the timeline, everyone agreed, is moving far faster than originally anticipated.

– Bethany Swanson

IGF-USA 2010 Workshop – Web security will define the future of the Internet


Panelists discuss the different options, perspectives and issues surrounding web security.

Brief description:

This panel, moderated by Robert Guerra of Freedom House, focused on critical Internet resources and how to ensure that the underlying principles that have led to the Internet’s success persist in the face of security challenges. These principles include openness (open standards, open technologies), accessibility, transparency, bottom-up decision-making, cooperation and multistakeholder engagement. Key to implementing these principles is also a broadened understanding of the role of infrastructure providers, such as the global and national Internet service and connectivity providers who build and operate the backbones and edge networks. The panel was also expected to address some of the implications of implementing DNSSEC and IPv6 on a national basis for the security and resiliency of CIRs on a global basis.

Details of the session:

The Internet’s success well into the future may be largely dependent on how it responds and reacts to increasing security challenges, according to panelists in a critical Internet resources workshop at the IGF-USA conference July 21 in Washington, D.C.

The Internet continues to evolve. It is also growing, as it becomes accessible to billions more people. The major challenge of our generation is to make the Internet more secure while continuing to promote openness, accessibility, transparency, bottom-up decision-making, cooperation and multistakeholder engagement. It is important that organizations continue to retain these values as much as possible as they react to cybersecurity and cybertrust issues.

Panelists at this workshop included:

  • Moderator Robert Guerra, Freedom House
  • Trent Adams, outreach specialist for the Internet Society
  • Matt Larson, vice president of DNS research for VeriSign
  • Steve Ryan, counsel to the American Registry for Internet Numbers
  • Patrick Jones, senior manager of continuity and risk management for ICANN
  • Jeff Brueggeman, vice president for public policy for AT&T

Panelists all expressed a desire to continue to engage in multifaceted talks because a single governmental entity is not the solution; it takes many people working together. As Brueggeman put it, there’s no “silver bullet” for the issue of Internet security.

“What we do on a day-to-day basis is ensure that those conversations take place,” Adams said. “The (critical Internet) resource is not a thing you can touch. You have this mesh of interconnected components that is the critical resource. You can’t pull one of those components out. Everyone must be around that table.”

So what’s the solution? The answer to that question is still a little unclear because Internet service providers and other organizations are often reactive to issues. Brueggeman said it’s time to embrace a forward-thinking approach.

“Things can get complicated when you’re reacting to an attack,” he said. “The best way to deal with these things is to try to think about them up front. How do we detect and prevent rather than react after the fact? How can we have more cooperative information sharing before attacks to try to prevent them and have the best information we can?”

Ryan stressed, though, that not all government is bad. He said citizens and organizations need to think “carefully about what the role of the government is.” But still, there should be a symbiotic relationship.

“There’s become a sense in government policy circles, including in the most sophisticated, that somehow (the Internet) runs on its own and you can’t break it,” he said. “I have news for you: You can break it. We look at government as something that has an increasingly important role because the Internet has an increasingly important role in economies.”

Ryan continued by saying non-governmental organizations have a responsibility to work with governments and to educate the people who work in them. He and the other panelists agreed that an international governmental organization wouldn’t work, though, unless core Internet values are embraced and upheld. They said a set-up that would allow countries around the world to vote on how the Internet is governed would not be a favorable solution.

“Until we get it right,” Ryan said, “I think we’re muddling along rather well.”

Panelists weigh in on the debate surrounding web security.

DNS issues and DNSSEC

Larson spoke specifically about the security of the Domain Name System because he views the DNS as an absolutely critical Internet resource. “If you don’t have the DNS, you don’t have the Internet,” he noted. He said users can’t truly trust the DNS, though, which is a bit disconcerting because of its necessity.

He supports DNSSEC—Domain Name System Security Extensions—which provide origin authentication (through digital signatures) and data integrity. “Once you have that, you can validate data and have a higher level of confidence that the data you’re getting back is valid,” Larson said.

(You can read more about DNSSEC here: http://en.wikipedia.org/wiki/Dnssec.)

He also said that DNSSEC makes DNS more trustworthy and critical to users as more applications—not just host names—depend on it. “We’re going to look back and realize it enabled a whole new class of applications to put information in the DNS,” Larson said. “Now you can trust the information coming out of the DNS.”

Going from IPv4 to a combination with IPv6

Ryan emphasized the importance of Internet Protocol version 6 (IPv6), a new Internet-layer protocol for packet switching that will allow a “gazillion numbers,” vastly expanding the address space online. There is a rapidly decreasing pool of numbers left under IPv4. Ryan said the increased flexibility of IPv6 will allow for the continued growth of the Internet, but it won’t be a free-for-all.

“The numbers we have issued are not property,” he said. “We have a legal theory that’s embodied in every contract we’ve issued. They belong to the community. If you’re not using them, you have to give them back. They are, in essence, an intangible, non-property interest, so over the next couple of years there will be some very interesting legal issues.”

ICANN in action

Jones said ICANN, which recently passed its 10-year milestone, has continued to work collaboratively with the community to take on major initiatives, such as the introduction of internationalized domain names in the root.

“We have taken requests from countries for internationalized country codes and approved 15,” Jones said.

“There’s a huge development in those regions of the world where you can now have domain names and an Internet that reflects their own languages and scripts. That will have an impact as discussion around critical Internet resources continues, especially in the IGF space.”

Physical critical resources

Brueggeman said AT&T has a broader perspective of critical Internet resources because the company is responsible for carrying Web traffic and for the underlying infrastructure, not just involved in issues tied to the DNS. He said the transition to IPv6 is daunting because it’s not backward-compatible. His main challenge has been in outreach efforts to customers.

“We have to deal with a lot of traffic that’s generated as we’re making changes to DNSSEC and IPv6,” he said. “In some cases, you might create some new security concerns, but overall both are important essential transitions.”

Brueggeman emphasized that multistakeholder discussions will be important in the coming years.

“We really need all of the parties who have the direct stake at the table to be part of the solution,” he said. “We need to have the resources handled in a way that promotes openness and promotes interoperability. There’s a huge policy risk of not managing these resources in a multistakeholder way.”

– Colin Donohue, http://imaginingtheinternet.org