Documentary coverage of IGF-USA by the Imagining the Internet Center


Internet Governance Forum-USA, 2011 Closing Remarks


Details of the Session

Marilyn Cade, catalyst of IGF-USA, provided closing remarks to participants at this year’s conference in Washington, D.C., that mirrored, in some ways, her remarks from the previous year, as she described her view of the state of the Internet.

“We were at the very beginnings of the earthquake way out in the middle of the ocean,” Cade said, referring to one of her discussions from last year. “We were just beginning to detect some seismic activity that eventually, if not dealt with, could lead to a tsunami.”

This seismic activity, as Cade described it, was the need to deepen the involvement of people around the world in issues related to Internet governance.

“It’s possible we’re on the threshold of some bad outcomes and we need to deal with those now,” Cade said.

In light of potential threats to the use and access of the Internet as we know it, Cade encouraged people who were not previously a part of IGF to stay in touch and remain involved in discussions surrounding Internet governance.

Cade, along with Chengetai Masango, representative of the United Nations Secretariat for the IGF, urged conference participants to attend the 2011 global IGF conference this fall.

The conference, which will be held in Nairobi, Kenya from Sept. 27-30, will be the sixth global IGF meeting. Masango said the main theme of the meeting will be “Internet as a catalyst for change: access, development, freedoms and innovation,” and will include more than 90 workshops, best practices, open forums, dynamic coalitions and an IGF Village—a space where organizations can display their Internet governance activities.

Masango stressed that not only are the event meetings themselves important to attendees, but that there is “value at the edges”—a benefit from meeting and dialoging with others who have concerns about Internet governance.

Various remote participation options will be available for those interested in being a part of the 2011 global conference but are unable to travel to Nairobi. Among the options are Webex, realtime transcription, a webcast, email, Twitter and HUBS—gatherings of interested people or national IGFs that can connect together through Webex to take part in the meeting.

More information on the 2011 global conference can be found at http://www.intgovforum.org.

– Natalie Allison


Internet Governance Forum-USA, 2011 Review: Implications of Internet 2025 Scenarios


Brief description:

Earlier in the day at IGF-USA, participants divided into three groups to discuss potential-future scenarios for the Internet in 2025. At this session, moderators briefed the plenary crowd on the discussions, and they and other IGF-USA participants in the audience weighed in with their observations.

Details of the session:

Building on an experiment that succeeded at the previous year’s meeting, the Internet Governance Forum-USA presented a set of hypothetical scenarios ranging from idyllic to dystopic, depending on the perspective of those in attendance. Splitting into three groups, panelists and members of the audience discussed the pros and cons of the end results of each imagined timeline, then moved on to figure out how best either to ensure or prevent that timeline.

As part of the concluding ceremony of the IGF-USA, the lead moderator of each group presented its scenario to the plenary audience, including those caught unaware by that possible destiny, and pointed out what the Internet community, along with government and business leaders, can do in response to the potential future.

The first, Regionalization of the Internet, revolved around a prospective world in which individuals are either in or out on the Web, blocked off from those deemed to be outside of their own government’s sphere of influence. (You can find details from the earlier session that fed into this session here: http://www.elon.edu/e-web/predictions/igf_usa/igf_usa_2011_scenario_Internet_islands.xhtml.)

Andrew Mack, of AMGlobal Consulting and the session’s lead moderator, described it as “interesting, but a bit depressing. We took the Angel of Death view on this.”

The idea of the Internet as an open highway, in this world, is replaced by one replete with tolls, as cross-regional access is limited, or in the worst cases, cut off entirely. Because of national security concerns, economic weakness, pressure from climate change and the massive new adoption rates of the “next billion” Internet users found in emerging markets, the Internet becomes a series of castles.

Some in the session thought the scenario fit the present more than an illusory future, and that the more dire descriptions could become the status quo within five years. To prevent it, governments were urged to be flexible and to follow their own advice, industries were urged to increase their focus on the next billion users, who as yet have no champion to advance their causes, and IGF was urged to resist the advance of the ITU, the United Nations’ telecommunications agency.

The second session, led by Pablo Molina of the Georgetown Law Center, presented a more positive picture. “Youth Rising and Reigning” projected a world in which the youth-led revolutions in the Middle East spread to Europe and other disaffected masses take up the call to use new Internet-based technologies to assert their independence amid continued economic and civil strife. (You can find details from the earlier session that fed into this session here: http://www.elon.edu/e-web/predictions/igf_usa/igf_usa_2011_scenario_youth_rise.xhtml.) And though many agreed that “Youth Rising …” is strongly plausible, a key distinction that strikes at its heart was made.

“The defining factor is digital literacy and mastery, not age,” Molina told the audience, bringing down to earth the notion that everyone younger than 30 is an Internet messiah and bringing to light the fact that, with the right competencies and skills, even the most elderly can have an influence on the Web. And despite the positive outlook of the scenario, an important caveat was added: Bad actors will always find ways to capitalize on new advances, and inadvertently, some innocents will be inconvenienced or, at worst, suffer as a result of those ill intentions.

JULY 18, 2011 - During an afternoon session of the Internet Governance Forum USA 2011, Steve DelBianco, executive director of NetChoice, shares what was discussed during the morning "Government Prevails" scenario.

To encourage the political and societal awareness of the youth, if not the revolutionary subtext of the hypothetical situation, participants advocated all means of promoting participation in political discourse, be it industry continuing its innovative advances, governments empowering instead of reining in their citizens, or IGF supporting the participation of more and more stakeholders to ensure all voices are accounted for. And, of course, education in the form of digital literacy is a must if the youth are to have the tools to, at most, incite a just revolution and, at the least, fight for their own causes in an effective way as the Internet further integrates itself within society.

The talkback that was perhaps the most pessimistic and grimly reminiscent of the bleakest science fiction was “Government Prevails,” led by Steve DelBianco of NetChoice. (You can find details from the earlier session that fed into this session here: http://www.elon.edu/e-web/predictions/igf_usa/igf_usa_2011_scenario_government_prev.xhtml.) It depicts not victorious and noble governments deservedly beloved by their populaces, but ones that, through a series of calamities, find themselves with both the responsibility for and the power of surveillance over their entire citizenry.

In this scenario, natural disasters of unimaginable magnitude and hacking sprees running rampant across the globe, coupled with rapid advancements in mobile and surveillance technologies, give the world’s governments both the mandate (since it is presumed that they win the public trust after being the only entities capable of responding to such horrendous occurrences) and the means to fulfill a vision reminiscent of, albeit not quite as menacing as, that of George Orwell’s “1984.”

“I woke up this morning feeling fine, and now I’m afraid,” one member of the session said after hearing about the timeline.

Each of the elements of the prevailing government could, taken separately, be seen as a positive. Many responded warmly to the possibility of a more advanced version of London’s CCTV, scanning entire cities in the hopes of preventing crime, or smartphones that are not only mandated to keep tabs on your location at all times but could also be used to turn in violators of strict anti-pollution legislation. But at the end of the day, it is still a world in which the government is given sole proprietorship over its people, with a seemingly omniscient awareness of their every little move.

To keep it from happening, the workshop decided, industries should obey the laws to avoid losing public trust and should work together with government to move past the current philosophy of “government vs. private business interests.” Governments, obviously, shouldn’t grab the chance at such power and should instead opt for a more open and decentralized Internet.

As for IGF? It should stick to its current duties and engage with all stakeholders, though such a future, while seemingly horrendous to Western minds, DelBianco mused, could be equally as appealing to those in countries such as Iran and China. This, in the end, illustrated one of the most evocative elements of the hypothetical exercise. Just as one man’s trash can be another man’s treasure, one man’s dystopia can be another man’s utopia.

– Morgan Little

Internet Governance Forum-USA, 2011 The Changing Landscape of the Domain Name System


Brief description:

The Internet and the Web are continuing to expand at exponential rates. When the board of the Internet Corporation for Assigned Names and Numbers opened up a whole new world of names for Internet addresses with its historic vote in June 2011, new generic top-level domains (gTLDs) and their implications for users became extremely important. This session explored the experiences Internet users might expect as the Domain Name System (DNS) prepares to undergo a massive expansion, adding hundreds or even a thousand new gTLDs to “allow for a greater degree of innovation and choice.”

Details of the session:

Every time an individual pulls up a webpage, the Domain Name System is used. Moderators and industry leaders who met at an IGF-USA 2011 workshop say changes announced by ICANN this summer will bring new challenges and opportunities. Generic top-level domains, also known as gTLDs, were previously quite limited. They included .com, .info, .net and .org. On June 20, 2011, the board of the Internet Corporation for Assigned Names and Numbers (ICANN) voted to allow companies and organizations to choose any fitting suffix for their domain names. The new gTLDs will be operational in 2013. Among the likely names are .sport, .bank and .app.

The moderator of the event was Frederick Felman, chief marketing officer for MarkMonitor, a major domain management company based in the United States. Panelists included:

  • Suzanne Radell, senior policy adviser in the office of international affairs at the U.S. National Telecommunications and Information Administration
  • Amber Sterling, senior intellectual property specialist for the Association of American Medical Colleges
  • Pat Kane, senior vice president for naming services for Verisign
  • Jon Nevett, co-founder and executive vice president of Donuts Inc. and president of Domain Dimensions, LLC, a consultancy on domain name issues
  • Brian Winterfeldt, partner at the Washington, D.C., law firm Steptoe & Johnson, where he is a member of the intellectual property group
  • Ron Andruff, president and CEO of DotSport, managing the new top-level domain .sport – http://www.dotsportllc.com/about


The panelists speculated that as few as 500 and as many as 2,000 new top-level domains could be added in the near future as ICANN opens up its application pool in January 2012. These new names can range from generic terms like .pizza to brand names like .apple to geographic names like .London.

Andruff said the most important thing from any applicant’s point of view is to serve users and connect them across languages and countries.

“Sports is one of those unique things,” he said. “Like music, [it] transcends borders, transcends languages, transcends cultures. It is relevant.”

It is important that we allow multiscript applications so we can reach all people of all languages, he said.

ICANN’s decision to open up the applicant pool is still relatively new to the general public, which could lead to confusion, said Felman.

The general population is beginning to join in the conversations, Kane explained. But Radell cautioned that the government is very concerned about the potential for fraud and general user confusion. When something goes wrong, people are going to turn to their government to ask why this was allowed to happen, she said.

Members of the Governmental Advisory Committee (GAC) worked very closely with ICANN to make sure safeguards were put into place to protect the users, Radell added.

One audience member asked how something like .bank would affect his ability to access his bank’s website. He questioned how the URL would be structured, and how Google Chrome users, who don’t use a URL at all, only a search bar, would access the sites. The panelists agreed that expectations for end users are still being developed.
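The structural question has a simple answer at the parsing level. As a rough illustration (this example is not from the session, and the bank and domain names in it are hypothetical), a short Python sketch shows that an address under a new gTLD such as .bank is built exactly like a .com address, with only the final label changing:

    # Hypothetical illustration: URLs under a new gTLD parse like any other URL.
    # The domain names below are made up for the example.
    from urllib.parse import urlparse

    for url in ("https://www.examplebank.com/login",
                "https://login.examplebank.bank/"):
        parts = urlparse(url)
        labels = parts.hostname.split(".")
        print(parts.hostname, "-> top-level domain:", labels[-1])
        # Once a gTLD such as .bank is delegated in the DNS root, name resolution
        # (e.g., via socket.getaddrinfo) works the same way it does for .com names.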

Non-profits are another group that could have some trouble with the new domain names, said Sterling. In the past 15 years, non-profits have seen more donations come in through the Internet, but they have also seen the Internet abused in the process.

Just a week after the 8.9 magnitude earthquake that rocked Japan, the Red Cross had to combat phishing attacks from people trying to steal donations, she added.

Brand owners are concerned about the fraud that could occur with an increased number of domain names, and about what happens if multiple groups apply for the same domain name, said Winterfeldt. ICANN offers mediation, and brand owners will be notified if their domain is being sought by another company.

Another concern is whether the increase in domain names would lead to another .com bubble that fizzles out. “In essence, whether they survived was not the point,” said Hedlund. “It’s about adding competition and how the market responds.”

– Anna Johnson

Internet Governance Forum-USA, 2011 NTIA’s Larry Strickling’s afternoon remarks


Brief description:

Larry Strickling, administrator of the National Telecommunications and Information Administration and assistant secretary for communications and information at the U.S. Department of Commerce, gave a mini-keynote talk at IGF-USA 2011. NTIA is the executive branch agency that is principally responsible for advising the U.S. president on communications and information policies. Prior to his work for the Obama Administration, Strickling worked as a policy coordinator for Obama for America, a regulatory officer at Broadwing Communications, a department head at the FCC, a VP for Ameritech and a litigation partner at the Chicago law firm Kirkland & Ellis.

Details of the Session:

To begin the final plenary session of the day, Larry Strickling, administrator of the National Telecommunications and Information Administration (NTIA), took to the podium to discuss recent activity in the world of Internet governance, particularly the recent Internet Corporation for Assigned Names and Numbers conference in Singapore and the Organisation for Economic Cooperation and Development meeting in Paris.

“We are at a very critical time in the history of the Internet,” he said, mentioning disputes among international organizations, including some governments that have recently called for increased regulation of Internet activity.

Strickling said he attributes the success of the current Internet, and the way it is, or isn’t, governed, to the multi-stakeholder approach, which can only be sustained and advanced when there is participation.

Last December, Strickling said, he helped complete a review of ICANN and submitted 27 recommendations to its board, all of which have been adopted.

“Now the focus turns to ICANN’s management and staff,” he said.

He also applauded ICANN’s acceptance of proposals made by the Governmental Advisory Committee regarding generic top-level domain names.

“The fact that not all the proposals were adopted does not represent a failure of the process or a setback in progress but reflects the reality of the multi-stakeholder model,” he said.

At the OECD’s meeting in June, representatives from government, the private sector, civil society and the technical community met to discuss and develop the “Internet economy.”

“Participants at the meeting agreed to a communiqué on policy making principles and will create the conditions for an open, interoperable, secure and continually innovating Internet,” he said.

Strickling added that the intent was not to harmonize global law, but was to provide a global framework.

He then moved on to where the world could go next after the advancements of the past few months.

“More importantly, what’s the call to action for all of you?” he said, later concluding that the audience’s job was to advocate for a multi-stakeholder approach, not a treaty-based approach, to developing policy.

Strickling reminded participants about the approaching July 29 deadline for comments on NTIA’s IANA Functions Contract, the first time that NTIA has sought public input.

He then concluded that the U.S. government is committed to multi-stakeholder solutions, and he reiterated the need for international cooperation, a focus on the process rather than any particular outcome, and adherence to the commitments already made, while taking questions from Cade and Michael Nelson of Georgetown University.

“If all that happens with the OECD principles and people file them away in a filing cabinet, then we’ve failed,” Strickling said. “These are only useful if they become a tool that we can now use as an advocacy basis for the rest of the world.”

In 2009, Strickling was nominated by President Obama and confirmed by the Senate to serve as assistant secretary for communications and information at the U.S. Department of Commerce.

During her introduction, Marilyn Cade said that Strickling’s reach went far and wide.

“The scope of his responsibility extends to impact on global decisions and global actions,” she said.

As administrator of NTIA, Strickling is responsible for advising President Barack Obama on matters related to communications and information. He has extensive experience in technology policy and telecommunications, both in government and in the private sector.

– Rachel Southmayd


Internet Governance Forum-USA, 2011 Workshop: A Plethora of Policy Principles


Brief description:

This session delved into recently announced policy statements with future implications, including those made by the Organisation for Economic Cooperation and Development, the U.S. International Strategy for Cyberspace and the G8, among others. Are principles a feasible approach to underpin Internet governance? If so, which ones? Should principles be applied through codification in law, memoranda of understanding or treaties? The workshop consisted of a mini analysis of currently proposed sets of principles. Because the Internet and online services are global, the workshop took a global perspective.

Details of the session:

Whether a growing collection of Internet policy principles can be turned into actionable commitments was the central question raised by panelists in a workshop on Internet principles at the IGF-USA conference July 18 in Washington, D.C.

The co-moderators for the session were Fiona Alexander of the National Telecommunications and Information Administration (NTIA) and Shane Tews of Verisign. They hosted a session in which the following people first presented briefings on recently announced sets of principles.

Heather Shaw, vice president for ICT policy for the United States Council for International Business (USCIB), shared details of the OECD Communique on Principles for Internet Policy-Making: http://www.oecd.org/dataoecd/40/21/48289796.pdf.

Chris Hemmerlein, a telecommunications policy analyst for NTIA, spoke about the sections of the May 2011 G8 Declaration that focus on the Internet: http://www.g20-g8.com/g8-g20/g8/english/live/news/renewed-commitment-for-freedom-and-democracy.1314.html.

Sheila Flynn, of the cyber policy office of the U.S. State Department, briefed participants on the U.S. International Strategy for Cyberspace: http://www.whitehouse.gov/sites/default/files/rss_viewer/internationalstrategy_cyberspace.pdf.

Leslie Martinkovics, director of international public policy and regulatory affairs for Verizon, introduced the concepts of the Brazilian Principles for the Internet: http://einclusion.hu/2010-04-17/internet-principles-in-brazil/.

Sarah Labowitz, U.S. State Department, shared details of the Council of Europe’s Internet Governance Principles: http://www.coe.int/t/dghl/standardsetting/media-dataprotection/conf-internet-freedom/Internet%20Governance%20Principles.pdf.

The introduction of the principles was followed by a roundtable discussion moderated by Iren Borissova of Verisign. Participants were:

  • Jackie Ruff, vice president for international public policy and regulatory affairs for Verizon Communications
  • Milton Mueller, Syracuse University (participating over the Internet from a remote location)
  • Jeff Brueggeman, vice president for public policy at AT&T
  • Cynthia Wong, director of the Project on Global Internet Freedom at the Center for Democracy & Technology
  • Liesyl Franz, vice president for security and global public policy for TechAmerica
  • Mike Nelson, research associate for CSC Leading Edge Forum and visiting professor at Georgetown University
  • Robert Guerra, director of the Internet Freedom program at Freedom House
  • Susan Morgan, executive director of the Global Network Initiative

For all of the Internet-focused principles laid out by the OECD, the G8, the U.S. State Department and the Brazilian government, and all the lists of tenets and guidelines they contain, the debate at the IGF-USA 2011 workshop on “A Plethora of Policy Principles” boiled down to one question: Can the principles be successfully converted into actionable concepts?

Governmental parties, whether sanctioned by presidential administrations or the product of a multistakeholder process, are seeking to set out the boundaries within which they wish to act when the next contentious issue hits the Web. The problem is that these lists, which by themselves could perhaps work effectively within a single cultural, regional or governmental context, must stretch across all boundaries in a way similar to the Internet itself.

The policy principles included in the discussion, which in no way represent the entirety of idealized lists, were as follows:

-The OECD Communique on Principles for Internet Policy-Making, the most recent set, agreed upon by 34 member states, seeks to promote the free flow of information, the open nature of the Internet, investment and the cross-border delivery of services, and multistakeholder cooperation, along with a litany of other aims ranging from security concerns to liability for violations of any of the contained principles.

-The G8 Renewed Commitment to Freedom and Democracy, which isn’t solely focused on Internet rights issues, but nonetheless deals heavily with digital issues. The list segments Internet users into three groups: citizens, who seek to use the Internet as a resource and as a means to exercise human rights; businesses, which use it to increase efficiency and reach consumers; and governments seeking to improve their services and better reach their citizens. The G8 list also considers the Internet as the “public forum” of our time, with all of the associated assembly rights applied.

-President Barack Obama’s U.S. International Strategy for Cyberspace focused on the concepts of prosperity, transparency and openness. It represents an effort on the part of the U.S. government to approach Internet issues with a singular vision and seeks to establish an international framework to deal with these issues in the future. Interestingly, it was also the only list of principles discussed during the session that asserts a sort of “digital right to self-defense” in the instance of an attack on the United States’ own digital resources.

-The Brazilian Internet Steering Committee’s Principles for the Governance and Use of the Internet in Brazil differed from the other lists in that it was created after a series of discussions among interested governmental, NGO, private and scientific parties. The committee’s principles also stood for greater universality of the Internet, particularly a breakdown of linguistic barriers and a strict adherence to maintaining diversity in the digital domain. For those questioning why Brazil, given the sheer number of countries with vested interests in Internet issues, Leslie Martinkovics, the director of international public policy and regulatory affairs for Verizon, said, “Brazil is seen as an opinion leader in the Americas. … They would like to take the high ground and lead the discussions going forward.”

-The Council of Europe’s Internet Governance Principles are the product of 47 member states, with an expressed focus on “affirming the applicability of existing human rights on the Internet,” according to Sarah Labowitz of the U.S. State Department. In addition to those concerns, the principles call for a clear series of planning, notification and coping mechanisms to be in place in the event of a cyber disaster.

Once the particulars and intricacies of the various plans had been laid out, the critiques began to fly in. Mike Nelson, research associate for CSC Leading Edge Forum and visiting professor at Georgetown University, played the self-admitted role of the skeptic.

“The first thing you do is hold a meeting, and we’ve been doing that for five years,” Nelson said, describing how meetings lead to research, research leads to a lengthy span of time, during which the public becomes discontented, after which a list of principles emerges to placate the masses.

Nelson did not want the topic of discussion to be “do you or do you not stand for freedom,” but instead a fundamental debate on so-called “flashpoints”: actual, specific points of policy whose debate could result in legitimate action, as opposed to simply more principles.

Rebecca MacKinnon soon followed Nelson in critiquing the concept to which the entire panel was devoted, noting a tendency for the principles and conclusions reached by disenfranchised groups, including those who aren’t in the post-industrial West or in the increasingly powerful emerging economies, to be at best given lip service and at worst outright ignored, both by interested parties and by IGF itself.

“What’s changed between 2004 and now?” MacKinnon asked. “How do people interpret these principles that have been, more or less, set in some degree of stone for quite some time?”

For the Chinese or Iranian dissident, she posited, rogue groups such as Anonymous and WikiLeaks do more for their cause than institutional bodies like IGF simply because they rely entirely upon action instead of dialogue, action that is particularly focused on powerful entities.

For all of the critiques piled on the notion of principles and the efficacy of IGF, there was an equal measure of support.

“The role of the IGF is exactly what it was set out to do. There has been discussion, and it has encouraged awareness,” said Heather Shaw, vice president for ICT policy for the United States Council for International Business.

She added that many of the principles outlined in the State Department strategy published by the Obama administration contain the same concepts that were actively discussed at the previous year’s IGF meetings.

“The fact this discussion is happening everywhere points to the success of the Internet Governance Forum,” said Fiona Alexander of the National Telecommunications and Information Administration. “IGF is spurring these kinds of conversations.”

But the unanswered question lingering at the end of the session’s discussion was whether those conversations, those discussions and that awareness are enough in this day and age, with the Internet’s rapid advancement now being met by an equally rapid growth in governmental interest in its inner workings.

– Morgan Little

Internet Governance Forum-USA, 2011 New Challenges to Critical Internet Resources: Blocking and Tackling – New Risks and Solutions


Brief description:

The security, stability and resiliency of the Internet are recognized as vital to the continued successful growth of the Internet as a platform for worldwide communication, commerce and innovation. This panel focused on domain name service blocking and filtering and the implementation of Internet Protocol version 6 (IPv6) and critical Internet resources in general. The panel addressed some of the implications of blocking and filtering; the implementation of DNSSEC and IPv6 on a national basis; and future challenges to the Internet in a mobile age.

Details of the session:

The Internet’s success well into the future may be largely dependent on how it responds and reacts to new challenges, according to panelists in a session on critical Internet resources at the IGF-USA conference July 18 in Washington, D.C.

The Internet continues to evolve. It is also growing, as it becomes accessible to billions more people. A major challenge now and in years to come is to make the Internet more secure while continuing to promote openness, accessibility, transparency, bottom-up decision-making, cooperation and multistakeholder engagement. It is important that organizations continue to retain these values as much as possible as they react to cybersecurity and cybertrust issues.

This workshop was conducted in two parts, both moderated by Sally Wentworth, senior manager of public policy for the Internet Society. Panelists included:

  • John Curran, president and CEO of the American Registry for Internet Numbers (ARIN)
  • Steve Crocker, CEO and co-founder of Shinkuro, and vice chair of the board for ICANN, the Internet Corporation for Assigned Names and Numbers
  • George Ou, expert analyst and blogger for High Tech Forum http://www.hightechforum.org/
  • Rex Bullinger, senior director for broadband technology for the National Cable and Telecommunications Association (NCTA)
  • Paul Brigner, chief technology policy officer, Motion Picture Association of America (MPAA)
  • Bobby Flaim, special agent in the Technical Liaison Unit of the Operational Technology Division of the FBI
  • David Sohn, senior policy counsel for the Center for Democracy and Technology (CDT)
  • Don Blumenthal, senior policy adviser for the Public Interest Registry (PIR)
  • Jim Galvin, director of strategic relationships and technical standards for Afilias

JULY 18, 2011- Sally Wentworth moderated the discussion at the New Challenges to Critical Internet Resources workshop at the IGF-USA conference.

Wentworth began the session by referring to it by an alternate title, “Blocking and Tackling.” The alternate title proved appropriate, as the two sets of panelists became more and more passionate in their assertions during the discussions on two topics that affect the future health of the Internet: the implementations of IP version 6, or IPv6, and especially Domain Name System (DNS) blocking and filtering.

IPv6

The first panel consisted of John Curran, Steve Crocker, Rex Bullinger, Jim Galvin and Bobby Flaim. It centered on a discussion of IPv6 and the difficulty in implementing a system that is viewed as the “broccoli of the Internet” – something that is technologically necessary, but for consumers is less of a priority because a switch is not incentivized.

The technological necessity is an inevitability. IPv4 has 4.3 billion independent IP addresses. The central pool of addresses ran dry Feb. 3, 2011, when the last five blocks were passed out to the five regional Internet registries. Depending on the rate of growth, Curran explained, those addresses may not last very long. In fact, the Asia-Pacific region has already handed out all its addresses. The 7 billion people in the world can’t fit into 4.3 billion addresses, especially when most use more than one address.
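The arithmetic behind the shortfall is straightforward. As a quick back-of-the-envelope check (not from the session), the 32-bit IPv4 space and the 128-bit IPv6 space compare as follows:

    # Back-of-the-envelope comparison of the IPv4 and IPv6 address spaces.
    ipv4_addresses = 2 ** 32          # about 4.3 billion
    ipv6_addresses = 2 ** 128         # about 3.4 x 10**38
    world_population = 7_000_000_000  # rough 2011 figure cited in the session

    print(f"IPv4: {ipv4_addresses:,} addresses "
          f"({ipv4_addresses / world_population:.2f} per person)")
    print(f"IPv6: {ipv6_addresses:.3e} addresses")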

“We have to move, the hard part is getting them to move,” Curran said, referring to consumers.

The biggest problem is that IPv6 and IPv4 are two languages that don’t talk to each other.

“You have to enable IPv6 in order to speak to IPv6 Internet users. It’s literally possible for us to break the global Internet if we don’t transition to IPv6,” Curran said.

The dual stack model was identified by the Internet Engineering Task Force (IETF) as the most effective and efficient way to begin integrating IPv6 alongside IPv4.

“What we have is a coexistence process rather than a transition,” Crocker explained. “We’re going to have these two existing parallels. You’ve got pure IPv4 exchanges and conversations, pure IPv6 exchanges, then you’ll have somewhat more complex interchanges between IPv4 users and IPv6 systems, and those will require some sort of translation.”
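At the application layer, the coexistence Crocker describes typically looks like a dual-stack client that asks the resolver for every address family and tries each returned endpoint in turn. The sketch below is an illustration of that pattern, not code discussed in the session:

    # Minimal dual-stack client sketch: try every address family the resolver
    # returns (typically IPv6 first where available), falling back until one connects.
    import socket

    def connect_dual_stack(host, port):
        last_error = None
        for family, socktype, proto, _canon, sockaddr in socket.getaddrinfo(
                host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
            try:
                sock = socket.socket(family, socktype, proto)
                sock.settimeout(5)
                sock.connect(sockaddr)
                return sock  # first reachable endpoint wins, IPv6 or IPv4
            except OSError as err:
                last_error = err
        raise last_error or OSError(f"no usable addresses for {host}")

    # Example: connect over whichever protocol the local network supports.
    sock = connect_dual_stack("www.example.com", 80)
    print("connected via", sock.family.name)
    sock.close()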

“The ISPs,” Curran said, “are stuck in the trap as to whether there’s enough demand to make the switch.”

Currently, most devices, such as laptops, are IPv6 enabled and have been for some time, so they’re coexisting, rather than transitioning directly from IPv4, Curran said.

“Things are not going to be replaced overnight,” Galvin said. “The providers are ready, but the customers aren’t. The laptops are ready, but the interaction is not ready.”

There are other problematic implications of enacting the new IP version, particularly around ensuring it is known how IP addresses are being allocated and to whom, and around logging NATs, according to Flaim. Another element was addressed by an audience member: the possible advantages held by countries with less infrastructure than the United States. Is the United States being left behind because its Internet is too big? Crocker contended that it is a possible scenario that some developing regions could leapfrog over IPv4 and go directly to IPv6.

DNS Blocking and Filtering

The second panel of the afternoon consisted of Crocker, George Ou, Paul Brigner, Galvin, David Sohn and Don Blumenthal and centered on a lively discussion about the merits of DNS blocking and filtering as an answer to copyright infringement.

The panel was divided on its views – some felt that DNS filtering and blocking represented the perfect answer to copyright infringement and media piracy, while others felt that a solution based on technical adjustment was the wrong way to approach what they deemed a human problem, not a technical problem. While a consensus was not achieved, the discussion addressed serious concerns about the damaging effects of illegal downloading on the content production industry, as well as the greater constitutional, technical and political implications of the use of DNS filtering and blocking.

Panelists referenced the Protect IP legislation that is currently in the U.S. Senate. The legislation is aimed at off-shore websites that infringe copyright laws by hosting pirated media. One of the ways the bill works is to undercut the sites by going after their advertising and funding sources.

JULY 18, 2011- New Challenges to Critical Internet Resources workshop engages audience at the IGF-USA conference.

The trouble, Crocker explained, is not only that the blocks or filters are “trivial to get around,” but that the motivation to get around them is there. Sohn agreed that DNS filtering is not beneficial, especially in this case, because it is easy to evade. Using DNS filtering to prevent users from accessing entire domains that may contain illegal content, rather than addressing the specific content on specific websites, is too broad, Sohn suggested.
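The circumvention Crocker and Sohn describe requires no special tooling; a user can simply point at a resolver that does not filter. A minimal sketch of the idea follows (not from the session; it assumes the dnspython package and uses a public resolver address purely as an example):

    # Why resolver-level DNS blocking is easy to evade: the same name can be
    # looked up through any resolver the user chooses. Assumes dnspython >= 2.0.
    import dns.resolver

    def lookup(name, nameserver=None):
        resolver = dns.resolver.Resolver()
        if nameserver:
            resolver.nameservers = [nameserver]  # bypass the default (e.g., ISP) resolver
        try:
            return [rr.address for rr in resolver.resolve(name, "A")]
        except dns.resolver.NXDOMAIN:
            return []  # a filtering resolver can simply claim the name does not exist

    print(lookup("example.com"))                        # system default resolver
    print(lookup("example.com", nameserver="8.8.8.8"))  # any unfiltered public resolver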

Galvin agreed: “While content providers have a legitimate need to protect their assets, there seems to be the automatic assumption that DNS filtering is the right way to do that. You’re concerned about content, so you want to do content filtering.”

Galvin cautioned the panel and the audience to be aware of consequential damages.

Sohn also raised concerns about prior restraint. “Under prior restraint law, you cannot restrict speech unless it’s actually proven illegal,” he said. “DNS filtering orders would often occur on a preliminary basis, as part of a restraining order, rather than after a full legal determination by a court.”

There was further discussion that, on a technical level, the use of these tactics would be problematic because it would break DNSSEC and destabilize the Internet, especially when DNSSEC was designed to detect just that kind of tampering with DNS answers, Crocker maintained.

On the other side of the issue, Brigner explained that in using DNS filtering, criminal sites would be removed from the “global phonebook,” preventing individuals from accessing them and propagating the consumption of illegal media.

“We’re not asking for a new row of cannons,” he said in reference to Crocker’s earlier metaphor about the Vasa, a Swedish warship that sank because its poor design was based on the king’s desire for more firepower in spite of architectural faults. “Let’s use sound engineering principles.”

An audience member’s suggestion of an industry seal program was met with varying levels of support and dissent. Ou noted that an industry seal could be easily counterfeited, while others felt that a “human” solution rather than a technical solution was a more appropriate answer to the problem.

In the end, the dialogue raised many new questions about the real world implications of IPv6 and DNS blocking technologies.

“It’s important that these various communities find a way to address the issues in a way that’s respectful of the technology, respectful of the global Internet and people’s need to carry out their business, and the freedom of expression,” Wentworth said in closing.

“There’s a lot of competing interests, but they don’t have to be mutually exclusive.”

– Bethany Swanson

Internet Governance Forum-USA, 2011 Best Practices Forum ICTs for Disaster Response: Transforming Emergency Management


Brief description:

Responders are driving innovative uses of ICTs to transform emergency planning, intermediation and management. The Internet and social networking are being harnessed by search and rescue teams to locate and bring vital support to victims. ICTs are reassuring loved ones, bringing help to the stranded, raising financial aid, managing communications for responders and supporting rebuilding. This workshop explored the role communications, Internet and Internet-based applications play in disaster response and recovery operations and steps that can be taken to ensure continuity of operations following a disaster. It also considered the connection between disaster preparedness and Internet governance.

Details of the session:

Information and communication technologies are connecting public safety officials, allowing the efficient coordination of response operations and keeping citizens informed in new ways every day. Responders are driving innovative uses of ICTs to transform emergency planning, intermediation and management.

The Internet and social networking are being harnessed by search and rescue teams to locate and bring vital support to victims. The new Internet-based tools, mobile applications and social media that are transforming disaster relief efforts and empowering citizens were the focus of this workshop at the IGF-USA conference July 18 in Washington, D.C.

This session was moderated by Kelly O’Keefe, director of the Washington office of Access Partnership, a consultancy in international telecommunications trade, regulation and licensing. O’Keefe has a global knowledge base in the topic as she is also a rapporteur for an International Telecommunication Union study group on emergency communications.

The session’s panelists included:

  • Joe Burton, counselor for technology and security policy, Communications and Information Policy, U.S. State Department
  • Jim Bugel, assistant vice president for public safety and homeland security for AT&T
  • Corbin Fields, Sparkrelief, a non-profit Internet-based organization empowering communities to provide disaster relief, http://sparkrelief.org/#
  • Roland A. LaPlante, senior vice president and chief marketing officer, Afilias
  • Keith Robertory, manager, disaster services technology, American Red Cross
  • Tim Woods, technical leader, Cisco Systems

Kelly O’Keefe started the discussion by referring to recent global disasters, from the earthquake in Haiti to the earthquake and resulting tsunami in Japan. These events have demonstrated the importance not only of disaster response but of relief communication, especially for developing countries, she said.

The biggest trend in disaster communication has been the migration toward Internet-based communications, said Tim Woods of Cisco.  The influence and increased use of technology has become more widespread, and increasingly people turn to the Internet, particularly social media, to receive updates on events.  Social media, in particular, allow users to send updates to followers immediately in real time.

But despite the widespread prevalence of technology and response services across the globe, the United States does not have the authority to simply step in and start setting up an information system in any country experiencing a disaster.  There are differences when responding to a disaster in another country that aren’t problems in the United States.

When the Red Cross responded to the earthquake in Haiti, Keith Robertory said, “We didn’t just say, ‘Hey, let’s get our suntan lotion and see what’s happening.’”

In addition to disseminating information to the public, the Red Cross had a responsibility to talk to the Haitian government and coordinate their needs among Red Cross organizations from other nations.

During such coordination efforts, U.S. organizations cannot make the same kinds of assumptions that they would usually make at home.  There are differences in technology cultures that must be taken into account when setting up a communications network during a disaster, said Robertory, of the American Red Cross.

Communicating during an emergency

There should be an interest in the swift restoration of communications infrastructure to save lives in a country experiencing a natural disaster, Joe Burton said. There is a global trend toward catastrophic disasters, and in recent history, with the rise of the Internet and social networks, the Internet and text messaging have become efficient uses of communications networks.

In terms of the big picture, there are more people with a basic phone, even a lower-end version, than there are who own PCs and TVs combined, said Vance Hedderel, of Afilias. This bigger picture helps disaster responders understand how to reach people. At this point in time, the phone is more effective than the Internet. SMS data reach larger numbers of people.

Additionally, the goal of disaster communications should be to inform the people experiencing the disaster first-hand.

“A major gap currently exists where those people aren’t getting the necessary information and the outside world seems to know much more,” Hedderel said. “Those issues become so paramount when there is little infrastructure in place.”

When sending out information over the Internet, Robertory said it is critical to hit all social media sites.  Since the emphasis is on getting information to the largest number of people possible, the disaster response teams have to reach their audiences across many platforms.

Establishing a network

From the service provider’s perspective, there is an emphasis on critical infrastructure during and after a catastrophic event, Woods said. The networks to be used for information sharing should be reliable and resilient to disruption, and a capacity plan needs to be in place to handle an emergency. What often happens is that networks become oversaturated immediately after a disaster, with users attempting to assure others of their safety or provide updates on the state of those affected.

Robertory likened establishing network capacity to a gym membership: “You hope that not everyone comes in to use the treadmills on the same day at the same time,” he said.

Although it is important to be able to handle the surge in traffic that follows a disaster, a network is not sustainable if preparing for that overcapacity makes it slow and expensive. The goal is a balance of capitalism and altruism, of life-saving and economy: to make money with the most efficient use of resources possible.
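The oversaturation problem the panelists describe can be put in numbers with a standard traffic-engineering calculation. The sketch below is not from the session; it uses the classic Erlang B formula as one assumed way to show how sharply call blocking rises when offered load outstrips provisioned capacity:

    from math import factorial

    def erlang_b(offered_load_erlangs, channels):
        """Probability that a new call or session is blocked (Erlang B formula)."""
        top = offered_load_erlangs ** channels / factorial(channels)
        bottom = sum(offered_load_erlangs ** k / factorial(k)
                     for k in range(channels + 1))
        return top / bottom

    # Everyday load: 80 erlangs offered to 100 channels -> blocking is negligible.
    print(round(erlang_b(80, 100), 4))
    # Post-disaster surge: 300 erlangs on the same 100 channels -> most attempts fail.
    print(round(erlang_b(300, 100), 4))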

Despite the importance of developing technology systems that work effectively, these will be largely useless if the various agencies involved cannot work together. Part of preparation is building relationships between agencies and determining who will communicate with whom.

“If you can build those relationships ahead of time, you have a better chance of getting through when disaster strikes,” Burton said.

Another side to preparedness involves having technology that works even in smaller situations, Robertory said. Attempting to prepare a system for a big event from the start leaves too much room for error when such a situation actually occurs. If the system works for everyday emergencies, there is time to test it and improve it on smaller events before a big one occurs.

“It’s about being proactive, not reactive,” said Corbin Fields, of Sparkrelief.

– Carolyn VanBrocklin

Internet Governance Forum-USA, 2011 Youth Roundtable: Digital Natives? Mythbusting Assumptions


Brief description:

According to the Pew Research Center’s Internet & American Life Project, more than 93 percent of teens (ages 12-17) and young adults (18-29) are currently online. Many Internet governance debates are held in the name of youth, and many Internet policy decisions are made to guard or guide the young. But what do we really know about how young people use the Internet and what impacts it may have on them? What are the common claims about the influence of the Internet on children and young adults that fuel the Internet governance debate? How do young people really use new communications technologies, and what issues do they see as most important? This roundtable explored some of the common myths about young people and the Internet, bringing together a group of college-aged participants from several U.S. universities to engage in a peer-moderated discussion.

Details of the session:

What do you know about how young people use the technology tools available online? Moderators Colin Donohue, a journalism instructor and student media adviser from Elon University, and Ali Hamed, from Cornell University, led a guided discussion with a roundtable of panelists and forum attendees about this point and more at IGF-USA at Georgetown University July 18.

The young people who participated in the roundtable were:

  • Ronda Ataalla, 19, rising junior at Elon University
  • Kellye Coleman, 21, rising senior at Elon University
  • William O’Connor, rising senior at Georgetown University
  • Chelsea Rowe, rising sophomore at Cornell University
  • Jeff Stern, 19, rising sophomore at Elon University
  • Kristen Steves, Cornell University student, blogger for End Slavery Now
  • Nick Troiano, rising senior at Georgetown University
  • William Vogt, rising senior at Georgetown University

Their discussion and the title of the workshop stem from a number of characteristics often assumed by the public about youth online that have been contradicted by research, including:

– All young people are highly active users of the Internet.
– Young people don’t care about their privacy.
– The Internet is a dangerous, dangerous place.
– All teens are naturally tech-savvy and adept at creating online content – “digital natives.”
– The virtual world of online communications is isolating young people.
– Social media leads kids to be deceptive.
– Social media is addictive to everyone who uses it regularly.
– The Internet is the great equalizer.

The Youth Roundtable discussed privacy, which generated conversations about the youth’s behavior on the Internet, how the youth online define friendships and to what extent privacy issues should be incorporated into education.

The panelists agreed that the youth value privacy, but have different views concerning what content is private or deserving of privacy.

“It’s not about knowing (about privacy),” O’Connor said. “It’s about younger generations’ values about what’s private and what’s public is different.”

JULY 18, 2011 - William O'Connor from Georgetown University (right) discusses the way online choices are affected by personal values during the Digital Natives Panel at the Internet Governance Forum USA 2011.

He discussed how he criticized his younger sibling’s activity online but acknowledged his parents’ similar sentiments about his own use of the Internet.

Vogt, Ataalla and O’Connor agreed that they are aware of, and content with, public access to content that is willingly posted.

“I think privacy is the wrong word for things that are public on Facebook,” O’Connor said.

The students argued against the myth that young people are not aware of their own privacy. Those on the panel explained the benefits of sharing information, and Ataalla said her professors at Elon University encourage students to keep Twitter accounts public in order to attract employers.

Private social media accounts indicate you have something to hide, she said.

When youth expect privacy

Hamed asked the panel if there is a different level of thinking regarding something willingly posted on a social network site compared to information protected by a password.

Although the panelists agreed that they value technical privacy, they also agreed that “digital natives” are more likely to trust that corporations will protect their information.

“I guess maybe I’m a little too trusting,” Steves said. “I’m skeptical, but Google, for instance, I would look at the ratings and assume that maybe because everyone uses it I’ll be safe, but that may not necessarily be the case.”

Troiano said he believed it was in the company’s interest to protect the consumers’ information, which makes him assume a successful company is trustworthy.

On the other hand, Stern explained that when Firefox stores a user’s password, anyone who uses that computer has access to that information.

“It’s just something to think about,” he said.

The conversation suggested it is not that the youth do not value privacy, but rather that they distinguish between value-based privacy and technical privacy.

“We don’t understand this idea of privacy, the whole idea of data protection never crosses many of our minds,” Coleman said.

Nevertheless, Troiano said that he was not concerned with information released to advertisement companies because that transaction improves the lives of the users.

JULY 18, 2011 -- Will Vogt participates in "Digital Natives: Myth-busting about Youth in the Online World"

“I don’t think that’s an invasion of our privacy. I think that’s the efficiency of the Web,” Troiano said.

Cautionary relationships on the Web

While those on the panel admitted to trusting seemingly popular companies on the Internet, they expressed more skepticism concerning Facebook friend requests, Twitter followers and other more personal interactions online.

When receiving a request from a Twitter follower, Ataalla referenced Facebook to confirm the individual’s existence and questioned mutual friends to verify the person’s intentions.

Mutual friends and photographs help determine whether it is safe to accept someone’s request, Troiano said.

“We don’t get credit for thinking these things through,” Rowe said.

Although the panelists’ caution regarding relationships online counters the myth that youth are susceptible to dangers on the Internet, O’Connor identified meeting new people as one of the benefits of the Web. Coleman uses social media as a way to contact those interested in similar topics. She uses Twitter to find experts in journalism, her field of study.

“I have learned so much from the people I follow on Twitter, and reading articles and blogs and even having conversations with them,” she said. “It’s an opportunity to kind of learn from, not only experts, but also peers and have discussions with them about different things through those social networks.”

Hamed asked the panelists if social networks invite new connections or solidify the network users already belong to.

“It’s both isolating and opening,” Vogt said.

O’Connor first used social networks to maintain relationships with classmates he met at boarding school, he said. But he also said those same networks can create homogenous lifestyles and choices.

“There can be a discussion about some of the things that are lost when surrounding yourself with people only like you,” he said.

How to communicate proper online behavior

Hamed asked the panelists if they believe the youth should be educated about proper online behavior.

Although some advocated formal education, others thought the Internet is simply life expressed in a new platform. In other words, the same values you learn that guide you in your everyday life are applicable, in some ways, to how you interact online.

“You can apply the same values you have in life (to the Internet),” O’Connor said.

For example, children are taught not to talk to strangers. Well, that concept is applicable to the Web, too. New social networks continue to enable the Internet to mirror real life.

“Our parents had the luxury of having a life where they could separate friend life and church life and family life, but information we’re putting out all has one shared life,” Troiano said.

Facebook categorized parents and friends in the same network, so all had access to the same content and information. Security controls were buried in the network, causing young users to hesitate to become “friends” with their parents.

But now, circles on Google+ divide social groups online similarly to how social groups are divided in real life. These fragmentations also help youth control who sees what content, giving users greater control over privacy.

Who should control Internet security?

Hamed posed another question about who should bear the responsibility for controlling privacy online.

Troiano said he doubted whether the government would have the right answers concerning privacy, but believed that the market would regulate itself.

“There are bad things that happen, but the Internet, in its free form, can counteract those things,” Troiano said.

Filters communicate what is appropriate to access, Stern said, but Rowe argued there should be formal education to teach young people how to search and find credible resources on the Internet.

Nevertheless, education does not need to come from a formal setting. The youth online have the ability to standardize online behavior, Coleman said.

“We have an opportunity, as younger people, to be a part of educating, not only younger people, but our peers about these things,” she said.

– Melissa Kansky

Internet Governance Forum-USA 2011 Potential-future scenario discussion: Regionalization of the Internet


Brief description:

IGF participants broke into three different rooms to discuss three different, possible potential-future scenarios for the Internet in 2025. In this session, the brief description given to the discussants asked them to respond to the idea of the “Regionalization of the Internet” – a future in which the mostly global Internet we know today becomes more divided, with certain aspects isolated from others based on their geographic or economic similarities. The description noted that “natural and man-made disasters could easily accelerate this process, leading to an alternate future where the differences between these islands is more pronounced and e-conflict between regions becomes a significant national security and economic development issue.”

Details of the session:

Garland McCoy of the Technology Education Institute and Andrew Mack and Alessandra Carozza of AMGlobal were at the front of the room to facilitate a wide-ranging discussion of the Regionalization of the Internet potential-future scenario at the Internet Governance Forum-USA 2011 at Georgetown University Law Center July 18.

This scenario sets up a divisive future for the Internet. You can read the full description used to launch this discussion in PDF format at the following link: http://www.elon.edu/docs/e-web/predictions/igf_usa/Regionalization_Internet_Scenario.pdf

The key drivers to consider as causes for regionalization of the Internet were:

  • National and corporate security concerns and increased pressure from non-state actors based in “failed state” regions of the world.
  • Global economic weakness, budget crises and significant, systemic unemployment.
  • Shortages of food and raw materials leading to rises in the prices for commodities, food and energy and supply chain/trade disruptions.
  • A rising “black market” dominated by narco/political/religious groups with increasing technical sophistication.
  • Expansion of IPv6 and the “Internet of things” creates an environment where citizens can be easily tracked within a region and where a market in false identities flourishes.

While participants and moderators considered it a “bleak scenario,” most discussants in this session indicated that the outcomes outlined are not only plausible but that some are already occurring, and at a faster rate than previously anticipated.

JULY 18, 2011 - Members of the audience participated in Regionalization of the Internet, a session held during the Internet Governance Forum USA 2011. Conference attendees were encouraged to enter into discussion during the day's events.

Scenario facilitator Andrew Mack described the regionalization scenario as unique among the other scenarios presented today in that it is “the only scenario that is actually coming to pass.”

“A good chunk is plausible,” said Leslie Martinkovics, an IGF participant from Verizon Communications. “When we’re looking at what’s happening today, there are a series of pressures, some economic, some security related. These are all real. There is a growing feeling that change is coming.”

Security is seen as the paramount concern in many areas of the world, prompting some governments to block certain domains, as China does with its “Great Firewall.” The problem is that this blocking process is easily circumvented; George Ou of Digital Society maintained that the “Great Firewall” is often considered “porous.” China was cited as a key player in the trend toward regionalization, and other governments listed as key “players” in the conversation included Brazil, Iran and India.

“Any attempts to isolate, to protect, fail,” said Bill Smith, a participant from PayPal. Attempts at blocking, he said, “are doomed to fail as well.”

The prominence of the hacking group Anonymous during the Arab Spring catalyzed discussion of whether such isolated Webs, or “islands,” can realistically be regulated, or whether a more unitary Internet is more desirable.

“In order to dissuade users from building up isolated Webs, it’s important to build up the single, unitary net and make it better,” Smith said.

“The Internet,” said Sally Wentworth of the Internet Society, “is a tool. It is not the cause, it’s an enabler. People want to communicate, people want to create. It’s very difficult to put that genie back in the bottle and carve it up.”

A number of participants asserted that, because there is a fundamental need for communication across islands, regionalization may not even be possible. The Arab Spring, Wentworth and others explained, is an example of the inability to maintain separate communities within the greater Web.

The existence of dark nets was cited as a counterexample to the idea that the Web is inherently unitary. Scott McCormick explained that dark nets, which are essentially intranets, have existed for quite some time. North Korea, he contended, has operated as a dark net for years, with very few people having access to it. Governments like North Korea’s have opted out of a global, unitary Web, but the moderators and panelists questioned whether that action is truly possible.


“Can you really opt out?” Mack asked. He noted that retreating into the metaphorical “castle,” the isolated intranet, does not necessarily mean one is isolated from threats that arise inside the castle itself. And living in the castle does not necessarily guarantee protection.

“If all your people don’t live in the castle, you can’t protect them,” Mack said.

There are technical hindrances to fragmenting the Web. When countries try, they do so at the DNS level, not at the IP level, according to McCormick, which is what makes it easy for users with the means and motivation to work around the blockages. The introduction of IPv6 will greatly affect users’ ability to navigate around those blockades, he said, because IPv6 addresses are much harder to memorize, and memorizing IP addresses is the way most users avoid the blockages.
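
To illustrate the distinction McCormick drew, here is a minimal sketch, assuming Python and purely illustrative addresses (the hostname and IP below are placeholders, not real blocked sites, and nothing here was shown at the session). A DNS-level block only interferes with turning a name into an address; a connection made directly to a memorized IP address never consults DNS at all.

    import socket

    # Placeholder examples for illustration only; not real blocked sites.
    blocked_name = "blocked.example.org"   # a name a DNS-level filter might refuse to resolve
    memorized_ip = "203.0.113.10"          # an address from the IPv4 documentation range

    # Step 1: name resolution is the point where DNS-level blocking operates.
    try:
        print("Resolved to:", socket.gethostbyname(blocked_name))
    except socket.gaierror:
        print("Name lookup failed - this is all a DNS-level block can reach.")

    # Step 2: a direct connection to a known address skips DNS entirely,
    # which is why such blocks are easy to work around if the address is known.
    try:
        with socket.create_connection((memorized_ip, 80), timeout=5):
            print("Connected directly by IP, without any DNS lookup.")
    except OSError:
        print("Direct connection failed (the placeholder address is not a live server).")

IPv6 complicates this workaround only in the way McCormick described: an address such as 2001:db8::10 is far harder to memorize than a short IPv4 address; the technique itself still works.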

Those in the group in favor of regionalization felt that isolation might make security more plausible and more manageable. Tom Lowenhaupt, who advocates for the development of a .nyc TLD, explained that top-level domains (TLDs) are the way to enable regionalization; applying security at the TLD level, he said, could make for a more private, more secure, more manageable and more intuitive Internet. Those against regionalization countered that it could open the door to a host of more problematic issues; the goal, Smith said, is the minimum amount of regulation for the greatest effectiveness.

Participants suggested the future governance of the Internet will be determined by three major groups: general users, who may not feel a personal stake in Internet governance; the criminal element, like Anonymous, which has a major, if unwelcome, stake in Internet governance; and a disaffected group that may not feel it has a stake until circumstances start to change. What will come to pass remains to be seen, but the timeline, everyone agreed, is moving far faster than originally anticipated.

– Bethany Swanson

Internet Governance Forum-USA 2011 Potential-future scenario discussion: Government Prevails


Brief description:

IGF participants broke into three rooms to discuss three potential-future scenarios for the Internet in 2025. In this session, the brief description given to the discussants noted that the “Government Prevails” scenario imagines a future affected by man-made and natural challenges and disasters – wars, civil strife, an aging world and interventionist governments. The scenario assumes that while “the ICT industry, media companies and NGOs” are the leading players on the Internet stage today [some people might disagree with this assumption], by 2025 governments and inter-governmental organizations will have come to rule the Internet as a result of actions taken to protect particular interests from negative exposure.

Details of the session:

A small group of Internet stakeholders from various sectors met to discuss the Government Prevails potential-future scenario at the Internet Governance Forum-USA 2011 at Georgetown University Law Center.

The audience-participation session was led by facilitators Pam Covington of Verisign, Walda Roseman of the Internet Society, Steve DelBianco of NetChoice and ex officio leader Marilyn Cade of ICT Strategies.

This scenario sets up a closed-off future for the Internet. You can read the one-page PDF used to launch this discussion here:

http://www.elon.edu/docs/e-web/predictions/igf_usa/Government_Prevails_Scenario.pdf

The potential drivers of change that people were asked to consider included:

  • Manmade and natural disasters push governments to exert more control over Internet resources.
  • Changes in the Domain Name System force intergovernmental organizations to impose new global regulatory regimes.
  • Networked image sensing through devices such as Kinect and GPS is used to identify and track people, with positive and negative effects, but the net result is a global surveillance culture.
  • Governments limit bandwidth for video conferencing when they find revenues for hotels, airlines and other travel-related economic entities in sharp decline.
  • Lawsuits and other developments cause governments to create blacklists of websites prohibited from Internet access.
  • Anonymity on the Internet is brought to an end as a response to viruses, worms and credit card fraud, and user authentication is required.
  • Governments take every opportunity to coordinate and consolidate power under various mandates for global solutions, and by 2025 governments and law enforcement are deeply embedded in all aspects of the Internet.

JULY 18, 2011 - Steve DelBianco of NetChoice was a participant during Government Prevails, one of the morning breakout sessions of the Internet Governance Forum USA 2011.

NetChoice Executive Director Steve DelBianco began the session by sharing the drivers of this future and what the Internet might look like in 2025.

“The scenario at its key is an attempt to be provocative about a potential future,” said DelBianco, who emphasized that the session was meant to explore what could be plausible, to weigh the possible benefits and disadvantages of such a future, and to consider what could be done to mitigate its impact.

“Is this the George Orwell scenario where it is a question of not whether but when?” Roseman said.

Although there was a list of questions the leaders intended to discuss, the session quickly turned into a running debate, bouncing from topic to topic as the participants introduced them. Two main themes quickly emerged.

The first was the tension between security and privacy.

Carl Szabo cited the situation in London, where hundreds of security cameras were added to city streets with the intention of reducing crime. The result, he said, was that criminals adapted to the increased surveillance by wearing hooded sweatshirts.

“As we give away these rights and privileges for alleged increased security, it’s not necessarily going to return with security,” he said.

Slava Cherkasov, with the United Nations, brought up the recent case of Brooklyn boy Leiby Kletzky, who was allegedly abducted, murdered and dismembered by a stranger, Levi Aron. In that case, it was a security camera outside a dentist’s office that led to Aron’s arrest and confession, and to the recovery of the boy’s body within an hour of investigators viewing the footage.

Judith Hellerstein, with the D.C. Internet Society, said that government use of data is acceptable when there is an understanding about privacy and intent.

“You also have to sort of figure out how governments are going to use that technology in hand,” she said.

The scenario introduced an issue, based on real events, in which pictures of protesting crowds were tagged, allowing people at the scene of a potential crime to be identified.

Elon University student Ronda Ataalla expressed concern over limiting the tagging of photographs, arguing that doing so would be a limit on expression.

But David McGuire of 463 Communications reminded the room that civil liberties traditionally don’t poll well.

“Free speech isn’t there to protect the speech we all like,” he said.

JULY 18, 2011 - Walda Roseman of the Internet Society shares her knowledge during Government Prevails, one of the morning breakout sessions of the Internet Governance Forum USA 2011.

DelBianco broadened the tagging discussion to raise the issue of “vigilante justice,” in which people use debatably privacy-violating practices to identify those they consider wrongdoers, and brought up Senate Bill 242 in California, which would alter the way social networks create default privacy settings for users. The bill was narrowly defeated, 19-17, on June 2.

Chris Martin, with the USCIB, noted that not all companies are interested in using their technology for ill or for personal gain, citing Google’s decision to withhold facial recognition technology in order to protect people’s privacy.

That subject tied into the second main discussion topic: government versus industry and the private sector.

Covington questioned Martin about whether he saw governments developing that same facial recognition technology, as described in the scenario, and using it to monitor citizens.

“Some,” he replied, before adding that all Internet governance is about maximizing good and minimizing evil.

There was then a brief discussion about the Patriot Act and the relinquishing of civil liberties online in the event of a national emergency. Who decides when the emergency has passed?

Szabo and others questioned whether the government was even the right institution to take over in the event of a disaster.

“It’s much easier to say, ‘Let them deal with it so I don’t have to,’ but the question is, ‘Will they do it better?’” he said.

Cherkasov said not necessarily. He noted that when Haiti was struck by a severe earthquake in January 2010, it took government organizations two weeks to develop a database for finding missing people, while in Japan in March 2011 it took Google only 90 minutes to launch comparable technology. He then returned to the security camera example, concluding that citizens are the first line of response and information in a disaster.

“There will always be maybe an ebb and a flow but it’s the power of the people that will ultimately be able to create that balance,” Roseman said. “But it’s going to have to be a proactive effort to get and keep that balance.”

Roseman also said one of the advantages of industry and the private sector is the ability to use funds more freely than the government, which operates on a more limited budget.

“When you have governments and the private sector and industry working together, you generate a lot more money and opportunity to drive change,” she said.

McGuire, though, expressed concern that industry and the private sector hold some misconceptions about the power of the Internet, believing it is too powerful for any law or government to rein in. He said many, including those in Silicon Valley, think the Internet will always be able to circumvent policy.

Most session participants seemed to agree that the potential scenario was troubling.

“It makes me want to move to somewhere where there are more sheep than humans,” joked Covington.

But Brett Berlin, of George Mason University, said the Internet, and the choices made about governing it, are ultimately people-driven, reminding the room that technology works for people and not the other way around.

“If we are foolish enough to think that open Internet will fundamentally allow us to be better, we are making a mistake.”

– Rachel Southmayd