
Augmented Reality and the Future

Overview of Augmented Reality

Augmented Reality (AR) is an important technological application that can be deployed across different mediated interfaces, e.g. cell phones, video games, and television. The technology functions by enhancing one’s current perception of reality. We are beginning to see augmented reality introduced across a variety of media platforms. The topic’s relevance to new/digital media is that it pushes the envelope of our current paradigms for interacting with media and technology. In the following articles, several writers help to explain this growing phenomenon and its possible impact on our future.

How Augmented Reality Works by Kevin Bonsor

In this article Bonsor outlines five key points about augmented reality: its role in different interfaces like cell phones, video games, and the military, as well as its limitations and its future. He mentions, “Augmented reality adds graphics, sounds, haptic feedback and smell to the natural world as it exists. Both video games and cell phones are driving the development of augmented reality…Augmented reality is changing the way we view the world — or at least the way its users see the world.” A rather simple definition, he explains, is the superimposition of audio-visual and other sensory graphics over our real-world environment in real time.

One example that he references, called “SixthSense,” utilizes some basic components: a camera, a small projector, a smartphone, and a mirror, worn on a lanyard that hangs from the user’s neck. The user then has the ability to manipulate his reality with the help of this device. “If he wants to know more about that can of soup than is projected on it, he can use his fingers to interact with the projected image and learn about, say, competing brands. SixthSense can also recognize complex gestures — draw a circle on your wrist and SixthSense projects a watch with the current time.” Bonsor goes on to offer some striking examples of cell phone apps, downloadable on the iPhone or Android, that can perform remarkable functions. One example, Layar, uses the phone’s camera and GPS capabilities to gather information about the surrounding area. Another, Yelp’s Monocle, provides the user with information about surrounding restaurants. Next, Bonsor discusses the uses of AR in military technology and video games.
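At its core, an app like Layar or Monocle filters points of interest by their distance from the phone’s GPS fix before drawing them over the camera view. A minimal sketch of that lookup step (the point-of-interest list, coordinates, and radius below are hypothetical, not Layar’s actual API):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearby(pois, lat, lon, radius_km=0.5):
    """Return the points of interest within radius_km of the user's position."""
    return [p for p in pois if haversine_km(lat, lon, p["lat"], p["lon"]) <= radius_km]

# Hypothetical data: two restaurants near Times Square, one far away.
pois = [
    {"name": "Cafe A", "lat": 40.7580, "lon": -73.9855},
    {"name": "Diner B", "lat": 40.7590, "lon": -73.9845},
    {"name": "Far Bar", "lat": 40.6892, "lon": -74.0445},
]
print([p["name"] for p in nearby(pois, 40.7580, -73.9855)])  # → ['Cafe A', 'Diner B']
```

The real apps layer the surviving results onto the camera feed using the phone’s compass and accelerometer, but the distance filter above is the part the GPS makes possible.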

Total Immersion is AR software that lets baseball cards interact in a unique way, turning the player on the card into a 3D model that performs a specific action like throwing the ball. In military technology, a squad doing reconnaissance in enemy territory can wear an “AR-enabled head-mounted display that could overlay blueprints or a view from a satellite or overhead drone directly onto the soldiers’ field of vision.”

Lastly, Bonsor concludes with some of AR’s limitations and challenges that must be overcome: GPS accuracy, the reliance on cell phones, the concern about information overload, and, of course, issues of privacy and security. He states, “The future of augmented reality is clearly bright, even as it already has found its way into our cell phones and video game systems.”

Video: Bruce Sterling’s Keynote – At the Dawn of the Augmented Reality Industry

Bruce Sterling is as excited as a ‘kid in a candy store’ as he goes through some tips, predictions, and advice for the industry. He describes three features of augmented reality: 1) it combines the real and the virtual, 2) it’s interactive in real time, and 3) it registers in 3D. People think they know what it is. There are so many companies, games, ads, applications, webcams, projected video technologies, head-mounted displays, and more in development, and with them come the many design skills that are required. It’s a profitable business and AR looks “cool.” It’s not too hard to understand, not too geeky or remote. It’s the most exciting thing happening in the tech industry.

  1. There’s a lot of hype, both current and still to come.
  2. You insult the term’s pioneers when you try to change or neglect the term.
  3. It’s a tag, a hashtag you can look up on Google. Where are people interested? 1) Seoul, South Korea 2) Singapore 3) Munich 4) Kuala Lumpur 5) Auckland, etc. Augmented Reality is magic. It works like magic. Yet magic can be ‘cheesy’ and deceitful.
  4. It’s sleazy and involved in pop culture. It’s involved in porn; it sells tampons, sci-fi, comic books, politics, medicine, museum culture.
  5. Security advice: criminals are going to come, so build security first. You are going to have trouble, and you are going to get publicity panics; you will face the ‘four horsemen of the infopocalypse.’ How do you deal with the political implications of AR? You’re going to need an industry journal and a code of ethics to help.
  6. Be prepared for the other guy to buy you out. The major companies will buy you out.
  7. A host of problems: batteries will fail, screens are too small, environmental problems, roaming fees, walled gardens, opacity in pricing, etc.
  8. You need to have a look, an image.
  9. Either everything changes for the better, or everything is abandoned for the worse. In either case, you are in for a wild ride.

Can Augmented Reality be a Commercial Success for E-Commerce by James Gurd

Despite its buzzword appeal and social media’s increasing relationship with commercial planning, Gurd boldly asks whether there is a commercial model that could make AR a practical tool in the e-commerce armoury.

Gurd answers his own question with an emphatic YES.

He begins by briefly and simply explaining what augmented reality is. Then Gurd examines the current landscape of businesses and interface applications using AR, with examples from retail, publishing, and automotive. Gurd asks another question, “What will drive the uptake of AR?”, and answers that the increased usage of smart mobile devices like the iPhone, iPad, Kindle, Blackberry, Android, etc. will be the driving force behind the uptake of AR technology.

Lastly, he proposes some plans where AR can be applied to in retail and asks if it can add value to consumers and drive commercial value.  Here are some of his suggestions:

“The savvy marketers will deliver content and solutions that people didn’t even know they wanted but subconsciously always desired. I think retail can tap into this latent demand in several ways:

  • High street retailers can develop a Store Finder mobile app that overlays local store information on interactive maps – perhaps an aggregation of all major brands would provide cost efficiency.
  • Dynamic contextual advertising that displays offers and promotions based on the location and profile of the mobile user (e.g. iPhone user gets different message than Blackberry user) – next step on from voucher code sites.
  • Serving customer reviews to mobile devices to facilitate decision making on the move.
  • Dynamically generating cross and up-sell recommendations based on scanning a barcode in-store on your mobile phone.
  • For the fashion industry, improving modelling of clothes from home to help make purchase decisions – increased accuracy should also help reduce returns.”

If You’re Not Seeing Data, You’re Not Seeing by Brian X. Chen

Quotes taken from this article:

  • “Augmented reality is the ultimate interface to a computer because our lives are becoming more mobile,” said Tobias Höllerer, an associate professor of computer science at UC Santa Barbara, who is leading the university’s augmented reality program. “We’re getting more and more away from a desktop, but the information the computer possesses is applicable in the physical world.”
  • “Augmented reality is stifled by limitations in software and hardware.” Examples include battery life and hardware prices.
  • “The smartphone is bringing AR into the masses right now,” Selzer said. “In 2010 every blockbuster movie is going to have a mobile AR campaign tied to it.”
  • “This is the first time media, internet and digital information is being combined with reality,” said Martin Lens-FitzGerald, co-founder of Layar. “You know more, you find more, or you see something you haven’t seen before. Some people are even saying that it might be even bigger than the web.”
  • “This industry is just getting started, and as processing speeds speed up, and as more creative individuals get involved, our belief is this is going to become a platform that becomes massively adopted and immersed in the next few years.”

Cultural Institutions and Participation / Reading Summary

I have chosen this topic to combine a major ongoing topic of this class – participation – with my interest in cultural institutions. The Web is a challenge to institutions. This book demonstrates how social media could be the interface that turns museums into platforms dedicated to fruitful interactions.



In the preface of her book Nina Simon explains the reasons that pushed her to focus on the development of a new strategy for museums.

She starts by making an objective statement: “Over the last twenty years, audiences for museums, galleries, and performing arts institutions have decreased, and the audiences that remain are older and whiter than the overall population.” In other words, it now seems pretty clear that cultural institutions are no longer very good at fulfilling their educational mission. They would do better to question their strategy and redesign it to attract a broader and more diverse audience.

If cultural institutions do not adapt their strategy they put themselves at risk to be supplanted by the Web: “increasingly people have turned to other sources for entertainment, learning, and dialogue. They share their artwork, music, and stories with each other on the Web.”

Museums have evidently lost their connection with the public. How can they retrieve it? In Nina Simon’s perspective, the Web is not the enemy of cultural institutions; on the contrary, she sees it as a great opportunity to “enhance cultural institutions.” Museums should recognize that people are no longer willing to be a passive audience: they expect to have their say in the learning process provided by museums. They want to actively participate.

Nina Simon strongly emphasizes the change in the visitor’s status: “Visitors expect access to a broad spectrum of information sources and cultural perspectives. They expect the ability to respond and be taken seriously. They expect the ability to discuss, share, and remix what they consume.” This point seems particularly interesting to me, as I believe it is the most challenging requirement, the one that is going to give cultural institutions the hardest time. Cultural institutions are used to providing people with a discourse full of information and resources, but they are not used to being open to question. In other words, they are used to the one-to-many type of communication. They only work with experts and do not consider people’s insight. But this is not working anymore.

Museums have to change to become a place to SHARE. According to Simon, this requires three changes in museums’ attitude:

  1. To be audience-centered, that is to say, to provide a place designed to meet visitors’ expectations
  2. To let visitors construct their own experience and respect their freedom
  3. To take users’ voices into account and allow them to provide information and to invigorate the place

As we can see, the main change lies in the role attributed to the visitors. To attract visitors, museums should include them in their activities.

So far so good, but how can this major change practically be achieved in cultural institutions that are used to traditional practices?

Simon stands for a participatory strategy and argues that museums should rely on the Web to take on the challenge of redefining the role of their visitors. Implementing a participatory approach could help solve five forms of public dissatisfaction with cultural institutions:

  1. Museums are often said to be irrelevant in people’s daily lives.
  2. They are said to never change, to be kind of frozen
  3. A place where you only get one authoritative discourse
  4. Not a creative place
  5. Not a comfortable place to interact with people

Nina Simon explains that her goal with this book is to provide museums with practical tips that will enable them to organize this change.


I have chosen to provide you with an abstract of this chapter because it brings us back to the ongoing tensions in the relationship between institutions and networks. The participatory Web has resulted in an increase in the development of diverse networks. Institutions used to be the only authority, but the situation has completely changed: the emergence of networks has generated a power of resistance. The knowledge that cultural institutions offer is now likely not only to be analyzed but also to be questioned.

Nina Simon starts by establishing that a participatory strategy can only be successful if the institution stops rejecting visitors’ input and accepts being open to a partnership. She stresses three required principles:

  1. “Desire for the input and involvement of outside participants
  2. Trust in participants’ abilities
  3. Responsiveness to participants’ actions and contributions”

In other words, the institution has to be in the right mindset. Once these three principles are secured within the institution, there are many ways of implementing participation.

The question is: how do you choose the best kind of participation for your institution?

  • Models for participation

To address the question Nina Simon aims at creating a typology of the different models of participation.

She draws a comparison with science labs and refers to the scientist Rick Bonney. “In 1983 Bonney joined the staff at the Cornell Lab of Ornithology and co-founded its Citizen Science program, the first program to professionalize the growing participatory practice. Over the course of several projects at the Lab, Bonney noted that different kinds of participation led to different outcomes for participants.” In 2008 Bonney and his team defined three models of participation. In Simon’s perspective these models are applicable to museums because, “like science labs, cultural institutions produce public-facing content under the guidance of authoritative experts.” Here are the different levels of participation established by Bonney, plus one added by Nina Simon:

  1. Contributory projects = Visitors collect data that are processed by the experts
  2. Collaborative projects = Visitors collect and analyze data together with experts in a kind of partnership
  3. Co-creative projects = Visitors are included in the development of the project from the very beginning. Visitors’ concerns are seriously taken into account.
  4. Hosted projects = The institution provides a portion of its facilities to support projects developed by visitors
  • Finding the right model for your institution

Which model of participation suits your institution best?

The answer comes down to the culture of the institution. Is its staff actually likely to involve participants in the development of the museum? “Institutional culture helps determine how much trust and responsibility the staff will grant to community members, and forcing an organization into an uncomfortable model rarely succeeds.” It is key to understand the institution’s culture and to adapt the participation model to it. To determine which model will suit you best, Nina Simon recommends a set of questions:

× What kind of commitment does your institution have to community engagement?

× How much control do you want over the participatory process and product?

× How do you see the institution’s relationship with participants during the project?

× Who do you want to participate and what kind of commitment will you seek from participants?

× How much staff time will you commit to managing the project and working with participants?

× What kinds of skills do you want participants to gain from their activities during the project?

× What goals do you have for how non-participating visitors will perceive the project?

  • Participation and mission

Constantly refer to the mission of your institution and propose projects according to it. “Speaking the language of the institutional mission helps staff members and stakeholders understand the value of participatory projects and paves the way for experiments and innovation.” Be careful to design projects that remain consistent with your institution’s culture and identity.

  • The Unique educational value of participation

Education is the cornerstone of museums. In this specific area, participatory techniques have proven to be the most effective “to help visitors develop specific skills related to creativity, collaboration, and innovation.”

Nina Simon states that “participatory projects are uniquely suited to help visitors cultivate these skills when they encourage visitors to:

  1. Create their own stories, objects, or media products
  2. Adapt and reuse institutional content to create new products and meaning
  3. Engage in community projects with other visitors from different backgrounds
  4. Take on responsibilities as volunteers, whether during a single visit or for a longer duration”

  • The value of giving participants real work

While visitors develop their skills, museums can also benefit directly from participatory strategies if they entrust visitors with real projects.

  • The strategic value of participation

Participation can enhance the value of your institution in its community. It can improve the institution’s image and gain it credibility in society. “Participatory projects can change an institution’s image in the eyes of local communities, increase involvement in fundraising, and make new partnership opportunities possible.” Nina Simon encourages cultural institutions to focus on local communities and to become more relevant in people’s everyday lives.

New Media and The Digital Natives – Reading Summary

Born Digital – John Palfrey

If you have any interest in digital natives, this one-hour talk is very informative about what a digital native is; John Palfrey, the godfather of this topic, goes into great detail on his definition and on how this generation will change how we see the Internet in the future. It is a population of young people who will impact the way we think, work, and function on a day-to-day basis.

The Digital Natives are a group of people who are comfortable sharing their daily lives on the net (e.g. Flickr, Twitter, Facebook) and were exposed to these technologies at a very young age. This population was typically born after 1980, has never known life without a computer or TV without a remote control, and has never dialed a rotary phone (not true, since I was born after 1980!).

Presentation by John Palfrey – “As part of the Google D.C. Talks series, and in partnership with Harvard’s Berkman Center for Internet & Society, Professor John Palfrey offers a sociological portrait of “digital natives” — children who were born into and raised in the digital world — with a particular focus on their conceptions of online privacy.”

There are a few points he clarifies in this video:

  • This is a POPULATION, not a GENERATION
  • Born after 1980 – because this is when the advent of technology began
  • They have access to these technologies
  • 1 billion who have access (number is low due to digital divide)
  • This is not a DUMMY generation – they are very tech savvy.
  • Young people are INTERACTING, but in a different way – remixed, made in a different way.
  • We must teach digital media literacy

We are Digital Natives – Barrett Lyon

“A new class of person has emerged in the online world: Digital Natives. While living in San Francisco, I also live on the Internet. The Internet is now a place: a two dimensional world that has transcended the web; there is no government, and the citizens are Digital Natives.”

Lyon’s main point is that people are no longer only citizens of the United States or France, but also citizens of the Internet. There are specialized groups within these digital natives, such as gamers, hackers, and developers, and the social etiquette involved is much different from that of the physical reality we live in.

Some people choose to define themselves by the activities they take part in on the web, such as online social movements – e.g. the Green Movement or the Tea/Coffee Party – which branch off from physical political movements but started on the net.

“This scares the crap out of Governments all over the world, because they are ill prepared to deal with these situations. To government regimes that are comfortable asserting their control, this concept is terrifying. How do they counteract the changes online and the movements? Do they need to change their politics, defense, propaganda, and warfare?”

This statement shows that some of these online movements do have an effect on how governments think about the web. Many countries place harsh restrictions on what their citizens can view on the net, e.g. China, Iran, etc.

The Future of The Internet and How to Stop it – Jonathan Zittrain – Short Summary

This title is actually a book that JZ has written, which is available on Amazon if anyone would like to purchase it. His main point is that collaboration is key to the survival of a productive Internet, and he cites Wikipedia as the main example. The first generation of products that have spearheaded the Internet – TiVo, iPods, and Xboxes – are tethered appliances, meaning they use the net as their connection to their content/databases.

“The Internet’s current trajectory is one of lost opportunity. Its salvation, Zittrain argues, lies in the hands of its millions of users. Drawing on generative technologies like Wikipedia that have so far survived their own successes, this book shows how to develop new technologies and social structures that allow users to work creatively and collaboratively, participate in solutions, and become true ‘netizens.’”

New Media in (Outer) Space: Additional Reading Summary

A Better Network for Outer Space – By Brittany Sauser

Astronauts and robotic spacecraft presently stay connected to Earth via point-to-point radio links made specifically for each new mission. Vint Cerf, the Google vice president who designed the networking protocols that launched the Internet, is looking to change this: he wants to put this same type of network in outer space. In hopes of making this a reality, he is currently working with NASA and the MITRE Corporation on the Interplanetary Internet project. The project was set to be tested in 2009 aboard the International Space Station.

In an interview with Technology Review, Cerf further explains the project, noting that it began 10 years before the interview (10/27/2008). One problem with space communication, he notes, has been the “limited use of standards”: new communication software tends to have to be written every time a new spacecraft is launched, which is inefficient. Thus, the project was created to help develop a set of communication standards for space, much like the ones already used on the Internet.

One of the main challenges Cerf found in building this network is delay. Because of the vast distances between planets, it can take long periods of time for information to travel. Another major problem is that planets are constantly moving and rotating, so communication can be not only delayed but also disrupted. Because of these dilemmas, part of the project involved designing a “delay- and disruption-tolerant networking system (DTN).” So far, no new equipment has had to be launched into space to facilitate this new network; only new software has had to be uploaded to already existing spacecraft.
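The scale of the delay is easy to work out from the speed of light. A quick back-of-the-envelope check (Earth–Mars separation varies between roughly 55 and 400 million km, so these figures are approximate):

```python
C_KM_PER_S = 299_792.458  # speed of light in km/s

def one_way_delay_minutes(distance_km: float) -> float:
    """One-way signal delay = distance / speed of light, in minutes."""
    return distance_km / C_KM_PER_S / 60

# Earth-Mars at closest approach (~55 million km) and near maximum (~400 million km)
print(round(one_way_delay_minutes(55e6), 1))   # ~3.1 minutes
print(round(one_way_delay_minutes(400e6), 1))  # ~22.2 minutes
```

Minutes of one-way latency, before any disruption from planetary rotation, is far outside what Internet protocols built around sub-second round trips can tolerate, which is why a new networking system was needed.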

These new standardized protocols could enable better communication between spacecraft launched by all nations in space. Over time, as new missions are launched, a better backbone for the system will start to be created. Cerf notes that, “every time you put up a new mission, you basically are putting up another potential node in the network.”

The Origins and basics of the Interplanetary Internet Project – By Vint Cerf

1. Node

In this video, Cerf notes that the Internet’s utility is in part a consequence of the standardization of communication protocols, which makes it easy for anyone, anywhere, to instantly connect to the Internet. Because of this, Cerf and his team asked themselves what type of standardization would be beneficial in the context of space. He explains that in 1964 the Deep Space Network was built, consisting of three antennas in varying locations (one in California, one in Australia, and one in Spain). As the Earth rotates, at any one time one of the antennas should be able to see a large portion of the solar system and interact with spacecraft. But each time a new spacecraft is launched, the communications system must be tailored to it. Thus, Cerf and his team are looking for a more efficient way of communicating with spacecraft.

2. Frequency

The data rate at which information can currently be moved from spacecraft to antennas on Earth is very low, because spacecraft have little power and small antennas. To help boost capacity for new spacecraft, the project is looking into whether spacecraft already launched can be used to help relay communication between Earth and space. The common answer has been “no,” since there is no standard set of communication protocols between spacecraft. But over the past 20 years there have been small attempts at standardizing parts of spacecraft communication systems. This can be done at many different levels, the three typical ones being: the bottom level of actual transmission over the radio link, the second layer of link management, and the third level up, the network level, which handles routing traffic. The first layer, radio transmission, has been standardized, and the second layer, link management, is beginning to be standardized as well. But not much above this second level has been standardized.

3. Standardization

Here he discusses the theory that with more standardization comes the ability to more easily reuse previous spacecraft within the scope of a new space mission. He uses the example of two rovers sent to Mars, which had radios attached in order to send information between the rovers and the Deep Space Network’s antennas. But these radios had to be shut down after 20 minutes of use or they would overheat. Three orbiters were circling Mars, though, and because of standardization the rovers could send information to the orbiters, which could then relay it to the DSN antennas at higher speeds and for longer periods of time.

Vint Cerf Mods Android for Interplanetary Interwebs – By Cade Metz

This article discusses Cerf’s work in bringing his Interplanetary Interwebs protocol to mobile networks on Earth. At first, Cerf and his team had tried to make the protocol work using the Internet’s TCP/IP, but it did not work because of “a little problem called the speed of light” and the rotation of planets. Instead, they created and launched the Delay-Tolerant Networking (DTN) protocol. A main difference between TCP/IP and DTN is that, “unlike TCP/IP, DTN does not assume a continuous connection.” With DTN, if there are delays in transmission, nodes will not send out information until there is a safe connection.

Now Cerf and his team are looking to bring DTN to Earth. It has been tested in Sweden using laptops in moving vehicles. Furthermore, the protocol has already been added to “Google’s Android open source mobile stack as an application platform – ie it sits on top of the OS.” Cerf sees DTN helping out with mobile connections, since the mobile environment is “dense and hostile,” as a way to increase coverage.
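The store-and-forward idea at DTN’s core can be sketched in a few lines: a node buffers bundles of data whenever its outbound link is down and forwards them only once contact resumes. This toy model illustrates the principle only; it is not the actual Bundle Protocol implementation, and the node names are made up:

```python
from collections import deque

class DTNNode:
    """Toy delay-tolerant node: buffers bundles while the link is down."""
    def __init__(self, name):
        self.name = name
        self.buffer = deque()   # bundles awaiting a contact window
        self.delivered = []     # bundles received at this node
        self.link_up = False    # is a contact window currently open?

    def send(self, bundle, peer):
        # Queue the bundle; it is held (not dropped) if the link is down.
        self.buffer.append((bundle, peer))
        self.flush()

    def flush(self):
        # Forward buffered bundles only while a contact window is open.
        while self.link_up and self.buffer:
            bundle, peer = self.buffer.popleft()
            peer.delivered.append(bundle)

rover, orbiter = DTNNode("rover"), DTNNode("orbiter")
rover.send("telemetry-1", orbiter)   # link down: bundle is held, not lost
print(orbiter.delivered)             # []
rover.link_up = True                 # contact window opens
rover.flush()
print(orbiter.delivered)             # ['telemetry-1']
```

A TCP connection in the same situation would simply time out and fail; the DTN node instead takes custody of the data until the next opportunity to pass it along, which is what makes the same idea useful for patchy mobile coverage on Earth.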

NASA Launches Astronaut Internet in Space – By Tariq Malik

As of January 22, 2010, astronauts on the International Space Station have a live Internet connection, and have even been using Twitter.

While astronauts have used Twitter during space missions before, the tweets were dispatched through Mission Control and posted by a third party.

The space Internet uses the station’s high-speed Ku-band antenna, so the Internet works whenever the station is connected through it. “To surf the Web, astronauts can use a station laptop to control a desktop computer on Earth. It is that ground computer that has the physical connection to the Internet.”

NSSA Applauds President’s Commitment to the Mission of NASA and the Role of Space in Providing for the Future

In this article, the “National Space Society applauds President Obama for his expression of firm commitment for human spaceflight, and for moving forward in refining the administration’s plan for space exploration” during his speech on April 15, 2010.

Within his plans, Obama mentioned the importance of extending the life of the International Space Station. He also explained the “critical role of breakthrough technologies in enabling NASA and our nation to create the future we wish to see come to pass.”

This week’s readings – Nationalism, Postnationalism and Digital Power

[Frost] Internet Galaxy Meets Postnational Constellation

Being a nation is about being on the same page. Nationalism developed, according to Benedict Anderson, in the 18th century when print media began “fostering a new sense of attachment, in this case among those who read the same newspapers, or imagined the same fictional communities through novels”. This allowed:

  1. The conception of community – a sense of attachment with people you have never met
  2. A shared worldview – a common sense of meaning of the experiences of life (with secular nationalism replacing myth and religion)
  3. A new, population-centric, mode of political engagement – replacing religious or dynastic authority
  4. A new set of social and political relations – new modes of inclusion and exclusion

And therefore, linguistic boundaries crystallized into national boundaries.

If print media provided the landscape in the 18th century on which various historical factors crossed to develop the sense of community we now call nationalism, then can the new media also create “a new sense of attachment” and generate “a common political culture” that would make possible a “just” and “well ordered” post-nationalist order?

Habermas hopes it can. Because the Internet allows being on the same [web]page! “In both cases,” says Frost, “a powerful new medium arrived into an environment already experiencing shifting political, economic, and social ideals, and was adopted at an unprecedented rate.”

Note: Anderson looked at history in retrospect. The Internet is too new for us to be able to look at it that way. It is still evolving.

Why ask the question? “If we are to make responsible decisions today, we need to think about what might lie ahead.”

Frost looks at the advent of the Internet in the context of the four factors of community formation identified above:

1- Conception of Community

“Communities exist in a symbiotic tension with identities (whether self-defined or ascriptive). Without one, it is difficult to develop the other, because there is no reference point for differentiation or affiliation.” But online interaction is either anonymous or identities are “disposable”. There is no commitment and mutual obligation. “The internet… is a site of great social flux and uncertainty.”

“In fact, anonymity not only makes the growth of new communities less likely,” believes Frost, “it can act to dismantle existing social bonds.”

Question: when anonymity and privacy are no longer the norm, how will this change?

2 – Systems of Meaning

According to Hannah Arendt, this requires:

  • a) a basis for mutual understanding, “the sharing of words and deeds”
  • b) boundaries or “stabilizing protection” to hold together these shared experiences

As “a vehicle for social or collective projects”, the Internet “can provide a basis for shared norms and meanings in those instances”, but what it “currently offers in these regards is insufficient”. Citing Lessig, Frost says “the increasing trend towards commercialization online may simply be too strong for such projects to resist”.

“It is not clear, therefore, how any new political or social solidarity associated with the Internet would manage to resolve the problem of meaning”. Any transformation to post-nationalism will require this.

Question: The newspapers and novels that laid the foundation for nationalism were also commercial. Anderson calls the phenomenon Print Capitalism. Does commercialization hinder community building or make it sustainable?

Question: Can the boundaries that hold together common experiences be drawn on a new cultural plane – e.g. Can the copy left movement or the free software movement compete with nationalism for loyalty?

3 – Political Engagement

While more people can participate in a democracy when it is “internet-enabled”, Frost thinks “what matters in democracies… is not just the volume of participation, but its quality”. In order to be the site of a new “public sphere” the internet has to:

  • a) be equally accessible for all
  • b) allow equal participation

The internet “fails the requirement for inclusivity” and is “not necessarily more equal in its treatment of participants than you would find in an offline setting”.

While the Internet can “free the individual from the restrictions of ascribed identity and communal attachments” and “replace them with more voluntary associations”, these “loose constituencies of shared interest cannot lay the groundwork for the demanding task of political life”.

4 – Social Inclusiveness

Since “the internet favors loosely bounded communities characterized by loosely democratic and non-democratic social relations”, a major problem for a new social order would be of cohesion.

  • Solidarity can arise out of new innovations made possible by the new communications practices
  • The Internet can deepen the existing experience of exclusion or simply heighten awareness of it, thus becoming a source of new solidarities

The digital divide – the fact that some countries or demographic groups do not have as easy and efficient internet access as others – is key. “The need for expensive computing resources and telecommunications infrastructure to support the medium means that it will inevitably favor the developed and affluent populations over others”. Similarly, “the language barrier, which played such a large part in the birth of nations, is still as significant as ever”.

But “the Internet’s capacity to heighten the experience of exclusion… represents its greatest potential for change”.


“The difficulty in assessing the prospects for post-nationalism in the wake of the Internet then is not that new political forms or social ideals are unlikely to arise. The problem is that we might be looking for change in the wrong places and with the wrong expectations”.

According to Frost, “It may not be the people with the most extensive access or highest profile online who will champion deep social and political change… it is the groups with limited access, just enough to see what they are missing out on, who may have the most to gain from pioneering new modes of social relations, meaning and engagement.”

She explains the scenario in our second reading, in a response to a chapter in Collaborative Futures titled “Solidarity” [that cites her article].

Catherine Frost’s response to Mike Linksvayer

On the post Collaborative Futures 5

“Could the collaboration mechanisms discussed in this book aid the formation of politically salient postnational solidarities?” Mike Linksvayer asks. His thesis: “If political solidarities could arise out of collaborative work and threats to it, then collaboration might alter the power relations of work.” Therefore:

  • a) Despite the easing of international trade barriers, workers cannot simply move between jurisdictions for better salaries or working conditions. But an increasing share of wealth created via distributed collaboration does mitigate some inequalities of the current system
  • b) When knowledge is locked in through intellectual property rights, a worker cannot afford access to it. But with the GPL license, “the means of production are handed back to the labor”, and that makes possible “a feeling of autonomy that empowers further action outside the market”.
  • c) Collaboration allows workers more autonomy in the market or the ability to stand outside it, but it also gives significant autonomy to communities outside the market. Some such communities, e.g. Wikipedia, “are pushing new frontiers of governance” and could lead to community governance and postnational solidarities

Frost’s intention…

… was not to say that the solidarities generated by the Internet echo the nationalist solidarities of the past. Anderson had looked at the emergence of the nation state in retrospect and the same is not possible with the internet. “Consciousness very often follows real life realities”.  Her concern, she insists, “was to see whether we could learn FROM the rise of national solidarities to understand how any new orders might take form”.

One lesson she learns is that “exclusion is a powerful force for forging solidarity”. She explains this more precisely with the following scenario:

“If the global future really belongs to the developing world with huge populations of well educated people who by and large don’t relate to the glossy consumerism of the internet, then they may use this very versatile tool in their own, more innovative ways. Which leaves everyone else playing catch up. And that catch-up process shifts power subtly but consistently in a new direction.”

[Morozov & Shirky] Digital Power and its Discontents

At the time when the nation states were emerging – a time that Habermas celebrates for “cafes and newspapers” which “were on the rise all over Europe” and “a new democratized public sphere was emerging” – Kierkegaard was concerned that with so many opinions floating around, people could be made to rally behind a number of shallow causes with no strong commitment to anything. This concern is shared by Shirky and Morozov.

In the words of Morozov, “there was nothing to die for”. Online activism, he says like Kierkegaard, “cheapens our commitment to political and social causes that matter and demand constant sacrifice”. Citing Habermas, Shirky says the time when those newspapers “were best at supporting the public sphere was when freedom of speech was illegal, so that to run a newspaper was an act of public defiance”. And so, “a protest which is relatively easy to coordinate at relatively low risk” is “less of a protest”, and “draws off some of the energy that could go elsewhere.”

Discussing an example from failed flash mob protests in his home country Belarus, Morozov asserts that a virtual movement was, for those protesters, a way to avoid “the dirty and bloody business of opposing a dictator, a business that often entails harassments of all kinds, as well as bloodshed, intimidation, expulsion from universities”. “They thought they could just blog the dictatorship away.”

“Does a movement need a martyr?” asks Shirky. “Does it need an intellectual focal point that’s willing to take a hit in order to make the point? And the second question is does that have to be one person?” Morozov believes a movement does need a charismatic leader, but “my fear is that a Solzhenitsyn would not be possible in the age of Twitter.”

The discussion…

… includes the potential of the Internet to provide the landscape for the emergence of this new public sphere that could make possible post-national communities, as well as how nation states are coping with this potential threat. I began with a topic that develops towards the end, only to connect the discussion to the previous readings. I have focused on the questions as they arise during the discussion, instead of recycling them at the end.

  • Shirky and Morozov agree that cyberspace is not “a separate sphere unconnected to the rest of the planet” which would transform politics in the way “the internet utopians” think it would. Citing his critique of John Perry Barlow’s 1996 text “A Declaration of the Independence of Cyberspace”, which he calls one of the seminal texts of cyber-libertarianism, Morozov says “we are currently facing a huge intellectual void with regards to the Internet’s impact on global politics”. But this “lack of a coherent framework does not really prevent us from embracing the power of the Internet”, he says, and both democratic and authoritarian governments are trying to harness that power for political purposes. However, a lot of the earlier theories were developed in a context that is no longer relevant, and so “We do need a new theory to guide us through all of this”.
  • Morozov believes the State Department should use the potential power of the Internet to promote freedom, but is critical of its alliance with Google, Twitter and other commercial organizations. “We’re promoting Internet freedom for freedom’s own good,” he says. “So the real question is how to leverage the undeniable power of these companies without presenting them as extensions of the U.S. foreign policy.”
  • “There is definitely a greater level of politicization attached to the use of Twitter, Google, and Facebook in authoritarian conditions,” he says, however. “People who are now using Twitter in Iran are marked as potential enemies of the state.” Asked by Shirky if the Iranian ban on Facebook even before the elections meant it was over-politicized, Morozov says “the fact that they blocked Facebook doesn’t mean anything” to him. “All it means is that they could block Facebook — and they did.” Citing the example of the three-day ban on texts in Cambodia in 2007, Morozov says there is a “symbolic value attached to censorship” as it helps a government “signal to the rest of the world that they are still in charge”.

But Shirky cites the examples of Burma and Ukraine to argue that the regimes are also trying to “dampen the public sphere” through censorship, because these technologies allow citizens to better coordinate their protest movements. “Conditions under which a public that can self-identify and self-synchronize,” he says, “even among a relatively small elite, is in fact a threat to the state.”

Morozov responds by saying that the “very vibrant” online campaign of the Iranian protests did not extend into real-world coordination. “There was synchronicity of online actions, I’m not sure that it translated well into coordinated protests in the streets.” Shirky said one way the coordination manifested itself on the streets was through the participation of women. But Morozov points out that Iranian women had been using social media for a decade, and that therefore “most social media activity is just epiphenomenal: it happens because everyone has a mobile phone”. The Iranian government, he says, was brutal despite the social media hype.

Shirky says his focus is on the coordination made possible between otherwise uncoordinated groups, though they can’t be as organized as hierarchically-managed groups. He agrees that such political engagement can make regimes even more brutal rather than more tolerant towards change.

Other similar questions, according to Morozov, include whether the Internet is “making people more receptive to nationalism”, whether it could drive them away from “meaningful engagement in politics” by promoting certain hedonism-based ideologies, and whether it could empower certain non-state entities that might not be “conducive to freedom and democracy” – in short, who will be empowered by these better coordination opportunities and by the Internet in general?

So, “if the question we are asking is, ‘How does the Internet impact the chances for democratization in a country like China?’, we have to look beyond what it does to citizens’ ability to communicate with each other or their supporters in the West,” Morozov says. Compared with the $70m China had spent by 2003 on censorship, it had spent $120 billion on e-government. “Will it modernize the Chinese Communist Party? It will. Will it result in the establishment of democratic institutions that we expect in liberal democracies? It may not.”

  • Shirky mentions his “bias” that “non-democratic governments are lousy at managing market economies over the long haul. That’s a baseline assumption, and it affects the context of digital publics.” Morozov says this was true even before Twitter, and most previous revolutions, such as the one against communism in Poland, were not a result of such interventions as the smuggling in of Xerox machines, but of economic collapse. Referring to Iran’s announcement that it would ban Gmail and replace it with a national service, Shirky says that by placing such bans, authoritarian regimes are “acquiring a kind of technological auto-immune disease. They are attacking their own communications infrastructure as the only way to root out the coordination among the insurrectionists.” But Morozov thinks that announcement should be seen in the context of the revelation of Google’s ties with the NSA. The regime wants to be seen as saying, “We absolutely want to make sure that our citizens are not being watched by the NSA”, which can be effective domestic propaganda.
  • Since the dawn of the Internet, Shirky says, we have been overestimating the importance of access to information and underestimating the importance of access to people. “If we could lower the censorship barriers between the West and China, could just remove the Golden Shield altogether, while the Chinese retain the same degree of control over citizens and citizen communication, not much would change. If the Golden Shield stays up in its full form, but the citizen communication and coordination gets better, a lot will change.” Asked if this change will be good or bad, he accepts that “there will be national movements whose goals are inimical to the foreign policy objectives of the West”, but adds that what really matters is that these countries are democracies.
  • “But what comes first?” asks Morozov. “Democracy or Internet-based contention?” And when democracies are new, they are vulnerable. “If you have a weak state entering a transition period — and it’s fair to say the Internet would mobilize the groups that would make a weak state even weaker — chances are you would not end up with a democracy in the end.”

Responses to Shirky and Morozov:

Rebecca Mackinnon

The changes brought about by the Internet cannot be exclusively good or bad. “It’s everything all at once because it’s an extension of human activity and an amplification of human nature.” While Shirky’s arguments on how the Internet empowers people to organize themselves sound true, Morozov is also doing an important job of deflating utopian fantasies. “The Internet’s future — technically, culturally, politically, and content-wise — is up to each and every one of us who uses and inhabits it.”

Nicholas Carr

The Internet is both a tool of control (as a computer network) and of emancipation (as a medium of personal expression). “We are at the beginning of a long cat-and-mouse game between those who would use the Net to exert central control and those who would use it to break that control.” Carr also takes up Morozov’s question of whether the Internet “might be promoting a certain (hedonism-based) ideology that may actually push [people] further away from any meaningful engagement in politics”.

“As far as opiates of the people go,” he says, “the Internet is a particularly intoxicating one.”

George Dyson

“Tis considerable, that it does not only teach how to deceive, but consequently also how to discover Delusions,” Bishop John Wilkins, founding secretary of the Royal Society, said about digital communications in 1641. “Wilkins was concerned with the case where the good guys are within the government, and the bad guys without,” Dyson says. Shirky and Morozov are talking about the case in which the bad guys are in the government.

Douglas Rushkoff

“Neda was still killed despite the fact that there were people taking those videos,” but “the function of the Net may not have been to save Neda’s life”, he believes, but “to allow the entirety of networked society to bear witness to the atrocity. Neda did not die alone, unnoticed and undocumented.” Similarly, “the function of Twitter in Iran may not have been to launch a successful challenge to a corrupt election — but rather to help those in Iran experience at least momentary solidarity with one another and the rest of the world.”

“It’s not that the Net doesn’t allow for the creation of the required charismatic leader,” Rushkoff believes. “It’s that such a leader is no longer necessary. The ground rules have changed with the landscape.”

Jaron Lanier

“It seems apparent, alas, that Facebook, Twitter, etc. have not improved American democracy, and yet we expect these tools to promote democracy elsewhere.” According to Lanier, “The basic problem is that web 2.0 tools are not supportive of democracy by design. They are tools designed to gather spy-agency-like data in a seductive way, first and foremost, but as a side effect they tend to provide software support for mob-like phenomena.”

“Governments oppress people, but so do mobs,” he warns. “You need to avoid both to make progress.”

Cyberterrorism: Additional Reading Summary

What is cyberterrorism? Even experts can’t agree

By Victoria Baranetsky, The Harvard Law Record

Published: Thursday, November 5, 2009

No Consensus on a Definition

  • “We even lack a unified definition of cyberterrorism and that makes discourse on the subject difficult.”
  • “The FBI alone has published three distinct definitions of cyber-terrorism: “Terrorism that initiates…attack[s] on information” in 1999, to “the use of Cyber tools” in 2000 and “a criminal act perpetrated by the use of computers” in 2004.”
  • Two explanations on why it is difficult to agree on a definition:
    • “The interest in cyber issues only started in the nineties so the terms are still nascent.”
    • “The meaning [of cyberterrorism] depends on differing interests.”
  • Some believe that “terrorists will use any strategic tool they can” so “cyber” terrorism is no more important than other forms.

What is the goal and who is affected by cyberterrorism?

  • Like any form of terrorism, cyberterrorism aims to “cause severe disruption through widespread fear in society.”  Because we are so dependent on digital material and systems, we are very vulnerable to this type of terrorism.
  • The U.S. is particularly dependent on online systems.  Countries that don’t depend so strongly on digital systems have an opportunity to attack without the risk of suffering from similar counterattacks.

Richard Clarke On The Growing ‘Cyberwar’ Threat

From Fresh Air on NPR

April 19, 2010

Richard Clarke served as a counterterrorism adviser to Presidents Bill Clinton and George W. Bush.  Clarke predicted the 9/11 attacks but was not taken seriously.  Now he is focusing on the possibilities of computer-based terrorism attacks.

What kind of harm could a cyberattack cause?

According to Clarke, here are a few examples:

  • Disable trains all over the country
  • Blow up pipelines
  • Cause blackouts and damage electrical power grids so that the blackouts would go on for a long time
  • Wipe out and confuse financial records so that we would not know who owned what
  • Disrupt traffic in urban areas by knocking out control computers
  • Wipe out medical records

Where can attacks come from and how are they executed?

Cyberattacks are not limited by national boundaries, and just one person can cause much harm.  A large team is not necessary to successfully complete this type of attack.  “Malicious code may infect a computer via a security flaw in a Web browser, or it could be distributed through secret back doors built into computer hardware.”

The government does have security set up to protect military and intelligence networks, but Clarke “worries not enough is being done to protect the private sector — which includes the electrical grid, the banking system and our health care records.”

“One common attack is for hackers to take over a series of home computers through backdoor security exploits. For example, malicious software can be downloaded onto a hard drive after you accidentally visit a compromised website. Your computer can then be used in conjunction with other compromised computers to engage in a large-scale attack. The average computer user may not realize when their computer has been drafted into a cyberattack.”

Clarke’s recommendations on how to reduce your risk of an attack

  • Never use your work computer at home, where it may be unintentionally compromised by another member of your family.
  • Make sure your online banks have more than just a password for security protection.
  • If you’re going to buy things online, have a credit card for that purpose with a low credit limit.
  • Don’t do banking or stockbrokering online and have a lot of money at risk — unless your stockbroker gives you a two-step process for getting in.

Assessing The Threat of Cyberterrorism

From Fresh Air on NPR

February 10, 2010

James Lewis is a senior fellow at the Center for Strategic and International Studies and co-author of the report “Securing Cyberspace for the 44th Presidency.” He predicts that within a decade, Al Qaeda will develop capabilities to carry out attacks on the web.

“Every single day, sensitive information is stolen from both government and private sector networks as criminals become increasingly more sophisticated…

Recent breaches at Google and the Department of Defense have illustrated that the United States is not yet ready to deal with a large scale cyber-attack.”

The battle against cyberterror

By John Blau, Network World

November 29, 2004

The Good News

Experts “don’t think [would-be terrorists] have the technical ability yet – in other words, the combined IT and control system skills needed to penetrate a utility network.”

The Bad News

Hackers “are beginning to acquire some of these skills… and in many parts of the world [people] are willing to peddle their expertise for the right price or political cause.”

The Worse News

  • “Few, if any, of the industrial control systems used today were designed with cybersecurity in mind because hardly any of them were connected to the Internet.”
  • “Many of the “private” networks now are built with the help of competitively priced fiber-optic connections and transmission services provided by telecom companies, which have become the frequent target of cyberattacks.”
  • Moreover, security isn’t necessarily related to a country’s wealth.  Levels of protection vary from country to country.


Required Reading:

What is cyberterrorism?  Even experts can’t agree

The government has failed to convene its various departments to forge a single definition. The FBI alone has published three distinct definitions of cyber-terrorism.

Required Listening:

Richard Clark on the Growing “Cyberwar” Threat

Clarke says that cyberattacks can come from another country — or from a lone individual. Malicious code may infect a computer via a security flaw in a Web browser, or it could be distributed through secret back doors built into computer hardware. And though the government has set up security measures to protect military and intelligence networks, he worries that not enough is being done to protect the private sector — which includes the electrical grid, the banking system and our health care records.

Recommended Listening:

Assessing the Threat of Cyberterrorism

Lewis says that an attack can be simple and crude: malicious software placed on a thumb drive and left in a parking lot can wreak havoc on a computer system. He predicts that within a decade, Al Qaeda will develop capabilities to carry out attacks on the web — but says that terrorists may not bring down the entire Internet because they also realize the benefits.

Recommended Reading:

The battle against cyberterror

The cyberthreat to the electricity we use and the water we drink is real, experts say, but there’s no need to panic – at least not yet.

Weekly Summary: Representation, Simulation and Fun

The readings of this week address the tension between reality and fiction, representation and simulation. Why are video games so appealing, engaging and addictive?

Raph Koster (2004), Book excerpt: A Theory of Fun for Game Design- What Games Aren’t

Book author and game designer Raph Koster explores the nature of games, and explains what makes them highly attractive. He argues that the essence of a game is very different from the story it is packed into. The author responds to the controversy about violence in video games and the effect of media on behavior. According to Koster:

  • Games are about teaching underlying patterns. Metaphors are used to help the player understand the logic of a game. The story/plot of a game is only “side dishes for the brain.” It is the underlying pattern or challenge that makes it interesting.
  • The differences between PacMan and Deathrace are only formal. Games train people to look beyond the fiction and learn underlying (mathematical) patterns.
  • Games aren’t stories:
    • Games involve experiential teaching processes (learning by doing), whereas stories teach vicariously (lessons learned from a character)
    • Games objectify, whereas stories evoke empathy (=identification)
    • Games categorize and simplify realities, whereas stories admit complexities
    • Games focus on people’s actions, whereas stories deal with emotions and thoughts
  • Are stories superior? Or, when does a gamer cry? Games generally revolve around emotions related to mastery and don’t involve overcoming complex moral challenges.

However, Koster points out that games can be really fun (stories not always).

  • Fun is the act of mastering mentally an aesthetic, physical, or social problem.
  • Flow can lead to fun (though it is not a condition): the flow of a game lies between boredom (too easy) and frustration (too difficult). A challenge should push gamers towards their edge; this is what keeps them hooked and rewards them with triumph and pleasure.
  • Fun is a key evolutionary advantage; our brain gives us positive feedback for learning and practicing survival tactics – in a context where there is no pressure.

Gonzalo Frasca (2001), SIMULATION 101: Simulation versus Representation

Gonzalo Frasca is a researcher and game developer. Besides commercial games for Cartoon Network, he also likes to create videogames based on news events, like Newsgaming or Howard Dean for Iowa. Like Koster, Frasca argues that the essence of games differs from stories. Videogames aren’t “interactive fiction,” but are built on simulation.

  • Magritte’s painting of a pipe represents a pipe. However, it is not a real pipe. Representation is a traditional form of narrative.
  • Simulation goes beyond representation, as it can also model the behavior of the system or object represented. SimCity simulates a city (for example London). The game is less complex than the actual city, but retains some key characteristics and behaviors.
  • To an external observer, the outcome of a simulation appears as narrative. However, gamers feel like they are experiencing events first hand (cf. Koster’s observation on experiential teaching).
  • Narrative (=a story) works bottom-up: it induces general rules from a particular case. Simulation is top-down: it applies general rules to a particular event.
  • Are simulations superior to stories? Simulation allows experimentation of complex dynamic systems (for example, driving a car).

Let’s discuss reality, representation and simulation!

Have a look at the following three cases:

I. PeaceMaker

II. RapeLay

III. Article: Prescription For Iraq Vets Dealing With Trauma? Video Game


  1. Applying Koster’s and Frasca’s definitions, how would you distinguish between representation and simulation? Do you see any differences?
  2. As game technology becomes every day more sophisticated and can involve a player’s entire body (and senses), do you think games are pushing towards ignoring fiction and learning underlying patterns? Are stories just “side dishes” for the brain?
  3. Do good games make the player cry? What do you think about games that demand overcoming controversial moral challenges in order to get to the next level (for example, becoming a suicide bomber)?
  4. Can you think of cases where reality turns into simulation?
  5. As a teaching method, what do you consider superior: representation or simulation?

Raph Koster, The Core of Fun – Presentation at Etech

In this presentation for the 2007 O’Reilly Media Emerging Technology Conference, Koster continues his analysis and reveals the magic ingredients of a fun game.

  • Games are made out of games: each micro-game or sub-activity must be entertaining!
  • Different types of fun must be mixed in (typology according to Nicole Lazzaro):
    • Hard fun (the dominant characteristic of most games): you meet a challenge, figure out the pattern, and experiment until you master it
    • Easy fun: moments of aesthetic delight
    • Visceral fun: roller coaster stomach feeling
    • Social fun: schadenfreude (= gloating feeling when a rival fails)
  • All aspects of a game are important :
    • Where and when? Context matters – platforms and past interactions influence the experience
    • How? The more sophisticated the skills needed for the challenge, the better! Shopping on eBay is more fun than on Amazon. There should also be different tools (sword or arrow?)
    • Which one? There should be a broad range of challenges.
    • What for? Feedback is essential. Success must have different outcomes. In addition, gamers shouldn’t always get what they want; losing is important, as fun results from learning.
    • Against whom? Gamers like multi-layer competition. They want to play against the game, against themselves and against each other.


  1. What do you think about Koster’s recipe for fun? Take a game you like and think it through. Which elements give you endorphin flashes?
  2. In his presentation, Koster criticizes social media. Yes, they are fun, but are they driving participation? Let’s think again about Clay Shirky’s ideas on organizing without organizations. Are collaborative actions an interface problem, or in other terms, should they be more fun?

Alexander R. Galloway and Mushon Zer-Aviv, Kriegspiel booklet

The open source computer game Kriegspiel is based on Guy Debord‘s 1978 board game called “The Game of War.” Debord, situationist, filmmaker and author of The Society of the Spectacle, was disillusioned with the possibilities of cinema and representation, and turned toward the field of simulation.

Debord’s conceptual game design involves both elements of  classic warfare inspired by Napoleon and Clausewitz, as well as postmodern war strategics, like “counter-insurgency, urban conflict, the growing inability to distinguish between civilians and enlisted soldiers” (inspired by the Algerian war).

Kriegspiel reinterprets Debord’s game, translating it from French to Java, and integrating contemporary “network-centric warfare,” in which “soldiers are reorganized into flexible, interconnected pods, and networks themselves are deployed as weapons on the battlefield.”

Debord believes that the game “reproduces the totality of factors that deal with war, and more generally the dialectic of all conflicts.” According to Tosca, game simulations work by a top-down approach. However, Galloway and Zer-Aviv point out that “games are both abstract totality and empirical practice. A game designer is always a legislator, an enforcer, but a game player is always something of a hacker.”


  1. Is Debord’s approach still an effective way to study the nature of conflict? What do you think about network-centric warfare (connectivity as a kind of weapon)?
  2. Playing the game in the 1970s required pen and paper; with Kriegspiel, the computer establishes the set of rules. Do you see differences in the thinking and learning process?


It seems that Delicious doesn’t work properly on our blog, so here are the links to articles that could be interesting for our class discussion:

Piano Stairs- TheFunTheory. Can we change people’s behaviour for the better by making it fun to do?

Modern Warfare 2 – video game keeps players hooked: Short video that breaks with some gamer stereotypes. Interview with gamers who are athletic, have girlfriends, and make $10,000 from gaming!

CNN.com – He married a video game character. A gamer who loves his video game so much that he married one of its characters.

A Rape in Cyberspace. This article by Julian Dibbell analyzes the repercussions of a “cyberrape” in a multi-player computer game called LambdaMOO (for those who haven’t read it in the MCC course).

Controversial video game mimics one of the deadliest battles in Iraq. Developers and marines are working on part game, part documentary called ‘Six Days in Fallujah.’

‘Shoot an Iraqi’: Artist Wafaa Bilal talks about his project called ‘Domestic Tension’.

Weekly Summary : Interface!

The major theme that ties this week’s material together is how, on the Web, the interface (“a point of interconnection between two independent systems,” Mushon) is being shaped in ways that break the balance of power, depriving users (one side of the two systems) of theirs. The Web is often considered an open and free medium, yet users’ experience does not seem to be under their control…

Dan Ariely, Are we in control of our own decisions?

Israeli professor Dan Ariely teaches behavioral economics at MIT. Passionate about rationality, he is the author of Predictably Irrational. Ariely gave this presentation in December 2008. It is obviously meant to push his audience to question itself: he wants people to recognize and understand their limitations…

  • Visual illusions are a physical limitation people are well aware of. They can demonstrate it, yet they cannot escape it. Therefore they adapt to it.
  • Cognitive illusions are likewise mistakes that we cannot avoid, but worse, as we can neither demonstrate nor understand them.

However, some people well aware of this weakness take advantage of it to influence others… Using different examples (organ donation forms, tour operator advertisements, doctors’ instructions, and the hottest guy to date…), Ariely demonstrates how you can shape the message you send in such a way that you “help” people figure out what they want. Here are a few of his tips: work on the format of the question you ask, emphasize the default option, present a worse option than yours, etc. While everybody remains under the illusion that they decide, you almost decide for them.

Ariely concludes on a very positive note: what if we put our pride aside and acknowledged our cognitive limitations? Then we would be able to design a better world.


  1. Ariely takes for granted that understanding the cognitive illusions we are subject to would allow us to adapt to them. But these two kinds of illusion are not the same at all: visual illusions are very specific and well defined, while cognitive illusions come down to rationality, which is much harder to demarcate and control… Do you still think Ariely’s argument is relevant?
  2. Also, how do we raise awareness of cognitive illusions when they could be the means for some people to acquire so much power over others?

Chris Messina, The death of the URL

Chris Messina is a designer who believes in the open web. He is a member of OpenID, maintains a blog, and works at Google (for the record!). In this post Messina makes a plea on behalf of the URL. He wants to make people realize that the URL could disappear, which would put our freedom on the Web in jeopardy. To make his point, the designer uses six examples:

  1. Web TV. A simplified, toned-down version of the computer: no browser, no keyboard, no mouse. It will be “user friendly” but allow no flexibility at all.
  2. Litl, chromeOS, JoliCloud, and the Apple Tablet… The design of these tools is definitely “cool.” Yet it leads to “a predetermined set of options,” always restricting our freedom on the Web.
  3. Top Sites. This feature provides you with a selection of the websites you visit the most. As convenient as it is, it prevents us from thinking. We don’t even need to think about the most relevant website for what we are looking for. Everything in our browser tends to be preset, predetermined. We are becoming passive users.
  4. Warning interstitials and short URL frames. The annoying format of those warnings that we experience every day deters us from clicking through certain links. Another way of restricting our freedom.
  5. The NASCAR, or the tendency to turn everything into logos for the sake of simplicity. Another abstraction of the URL.
  6. App Stores, or “a cleaved out and sanitized portion of the web.” Big business has the power. Companies and brands are taking control of the digital environment. “The hardware makers got into the content business” and are turning the Web into a shopping mall.

Messina concludes by reminding us why there is so much at stake with the URL: it allows anyone to create a website and to propagate it. The URL empowers users; if users lose access to it, they will be cast out of the Web.

Messina also clearly stresses that interfaces are the key issue of the Web: the battle to win “the universal interface for interacting with the web” is just now getting underway.

Questions :

  1. What do you think of Messina’s plea? Do you think the Web will be just like TV, reducing its audience to passivity?
  2. Do you feel that you lose control, that you are driven to a predetermined set of options?
  3. Like Messina, do you think companies are responsible for the death of the URL, and that it is in their interest?
  4. I feel that the discrepancies between different types of users will increase, and that some people will be able to preserve their freedom while others will lose the freedom of their experience. What about you?

Andrew Rasiej & Micha L. Sifry, Social networking, new governing

This article, written in March 2009, clearly differs from the two other documents, as it is mostly optimistic regarding the power of users on the Web.

The authors base their argument on Facebook. The social network has reached such a number of users that it plays a key role in our societies: “it is a meaningful platform for political engagement.” But “is Facebook a public square or a private mall?” In response to users’ complaints about unilateral control of the site, Zuckerberg decided not to change the website but to include users in website policy, organizing a “virtual town hall.” Zuckerberg said he wanted to develop “new models of governance.” So far so good, but in reality this seems a bit fake:

  1. It is very unlikely that Facebook will mobilize 30% of its users to take part in the company’s governance.
  2. Facebook did not promote this new development at all. (Indeed, who heard about that?)

It seems that Facebook took very little risk. However, the two social entrepreneurs, founders of the Personal Democracy Forum, consider Zuckerberg’s proposal the first step towards “an overall change in expectation about the relationship between digital landowners and digital tenants.”


  1. A year after their article, I wonder what the authors would say about Zuckerberg’s declaration that “privacy is no longer a social norm.” This declaration gives me very little hope in the new democracy Facebook could provide us with…

Mushon Zer-Aviv, Interface as a conflict of Ideologies

This essay dives into the very question of interface.

Interface as “the point of interconnection between two independent systems” is all about balance. The design, the way the interface is built, should aim at respecting and protecting the equilibrium between the two sides. However, interfaces are often used by one system to gain power over the other. Interfaces are therefore at the center of a major conflict on the Internet.

  • Encoded/Decoded. The Web highlights the importance of interfaces, yet we have been using them forever to communicate and interact with each other. Language, for instance, is a major interface. Referring to Ferdinand de Saussure, Zer-Aviv explains how language has been conceived as a circuit on which messages can be exchanged as long as the interface is equally shared. However, Stuart Hall demonstrated that language relies on a system of codes and that “the codes used for encoding and decoding are often different.” There are three defined types of codes:
  1. Dominant Code: the sender shapes the interpretation of the receiver (mass media and advertising do this all the time, as we cannot change the message)
  2. Negotiated Code: the receiver understands the message but does not completely buy into it
  3. Oppositional Code: the receiver understands the message but refuses it and uses another code to decode the message, in opposition to the goal of the sender.
  • The Web’s Communication Diagram. In theory “the Web is a revolutionary tool for gaining ownership of media,” as it provides different types of communication: one to one, many to many, one to many. But it has also made the hierarchy at work in those communication systems much more complicated. Indeed, the identities of the interacting systems are harder to clearly identify on the web; they are somehow blurred. While the comment interfaces on blogs seem to leave room for users, “the only identity represented through the dominant interface (the website) is that of the publisher.” Most of the time on the web, interfaces fail to maintain the equilibrium between the two independent systems.
  • Commons-Based Peer Production – A New Ideology. Wikipedia, the free encyclopedia, is based on Benkler’s principle of Commons-Based Peer Production: “no one person controls how the resource is used, they are either open to the public or a defined group.” There is no single author, and the quality of content is protected by moderation.
  • The Revolution Will Not Be Verified. Wikipedia is wonderful proof of what commons-based peer production can achieve. However, Wikipedia’s strength lies in its “tightly policed ideology.” When people edit Wikipedia they accept and relay Wikipedia’s ideology. The system works because Wikipedia’s editors are strong advocates of Wikipedia’s identity (respecting the power editors have been entrusted with for the benefit of “the greater good”). And indeed, the system proved to fail when reproduced at the LA Times. Even if control is distributed, there is always “one side who holds the key” with the power to break the balance. The interface is the carrier of an identity and therefore carries a message in itself.
  • Unknown Knowns in On-line Urban Space. Even though in theory HTML is simple and accessible to everybody, in practice we experience the web through web pages that are “in the hands of the identity behind it.” Everything on the Web is privately owned and therefore under control. Because of these web pages, “the web has never had any public place” directly accessible. This part relates a lot to Ariely’s presentation: just as we cannot acknowledge our cognitive limitations, there are things we “don’t know we know.” We don’t know we could think of the web in a different way than the one we get.
  • Cracks in the Walls. Even if everything is under control, some things are a bit flexible and give hope for a little more openness on the web.
  1. The RSS feed which gives mobility and visibility to content
  2. Application Programming Interface (API) when “the powers of one software can be shared by another”
  3. Social Bookmarking

These new features participate in the development of the metaweb, which creates “a public space on the web,” leading to more flexibility, mobility, and participation. Through the metaweb, users could “retrieve their agency in the interfaces.” Interfaces would no longer be frozen but the result of an on-going process in which all users can take part.

After having analyzed the interface and all that is at stake, the author suggests entering into conflict to restore the balance in the interconnection between systems, through two approaches. A tactical approach consists in destabilizing the system by questioning something established. It enables one to truly modify and improve the system (the example of the Google bomb). A less spectacular but efficient approach is the strategic media one. It is much more sustainable and consists in “influencing the system from within.” Greasemonkey, for instance, allows users with coding skills to add, remove, or fix features on a page, as well as to insert content from other sites into it.

And indeed you can contribute to the metaweb!

Mushon has contributed to the creation of ShiftSpace, “an open source browser plugin for collaboratively annotating, editing and shifting the web.” It allows users to move out of their passivity toward a much more active and interactive experience of the web. They have the opportunity to react, produce content, and share it among ShiftSpace users.

Questions :

  1. This text brings us back to the role of design. What is good design? Is it what prevents us from thinking?
  2. Private interests seem to be responsible for users’ loss of control on the web. Can we think of another Web (Web 3.0?) that could not be privately owned?

Weekly Summary: Networking, Notworking, and What to do Next?

Networks – The Science-Spanning Disciplines - Anna Nagurney

Dr. Anna Nagurney is a professor in the Department of Finance and Operations Management at the Isenberg School of Management at the University of Massachusetts Amherst. She is the Founding Director of the Virtual Center for Supernetworks. You can read more about her on her blog here.

In Nagurney’s presentation (from 2005), she enthusiastically discusses the pervasiveness of networks in people’s everyday lives and how they’re essential to the functioning of societies and economies. She notes that networks are imperative parts of business, social systems, science, technology, and education, providing their very infrastructure.

Background of Networks

Transportation is one of the most essential forms of networks, and can also be one of the most complex. Nagurney uses the concept of the transportation network throughout her presentation to help explain a number of different points. This network is so important because transportation not only facilitates face-to-face communication but also provides access to other networks. Nagurney notes in her talk that there are three basic network components:

  • Nodes
    • Ex. Transportation intersections, homes, work places
  • Links or Arcs
    • Could have direction or be bidirectional or just represent connections without any type of direction
    • Ex. Roads, railroad tracks
  • Flows
    • Means various things within different contexts and applications
    • Without these (with just nodes and links), one is essentially talking about a graph
    • Ex. Cars, trains
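The three components above can be sketched in a few lines of Python (the node names and flow values below are illustrative, not from the talk):

```python
# Nodes: e.g. transportation intersections, homes, workplaces.
nodes = {"home", "intersection", "office"}

# Links (arcs): directed connections between nodes.
links = [("home", "intersection"), ("intersection", "office")]

# Flows: what moves over each link (e.g. cars per hour). Without
# flow data, nodes and links describe only a graph, not a network.
flows = {("home", "intersection"): 120, ("intersection", "office"): 120}

def is_just_a_graph(flows):
    """With no flows attached, the structure is 'essentially a graph'."""
    return not flows

print(is_just_a_graph({}))     # True: nodes and links alone
print(is_just_a_graph(flows))  # False: a network with flows
```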

The Study of Networks

From a scientific methodology standpoint, the beauty of studying networks, to her, lies in finding problems where one might think no network exists. Much as we discussed last week, concerning the sense that a plethora of virtual interconnections takes place unnoticed on the street every day, Nagurney searches for these happenings and studies how they interact as a network. She explains that “the study of networks is not limited to only physical networks, but also to abstract networks in which nodes do not coincide to locations in space.” More specifically, the study of networks involves:

  • Formulating these applications as mathematical models
  • Studying these models from a qualitative perspective
  • Creating algorithms to solve the resulting models

The study of networks has elicited three classic problems:

  • The Shortest Path Problem
    • The search to move flows in the most efficient way from an origin to one or more destinations
    • Ex. Transportation; minimizing storage needed for books in a library
  • The Maximum Flow Problem
    • Figuring out the capacity of the network
    • Ex. Network reliability testing; Building evacuation
  • The Minimum Cost Flow Problem
    • The search to find the flow pattern that minimizes the total cost, without exceeding capacity
    • Ex. Warehousing & distribution; biology; finance- asset liability management

This scientific approach to studying networks seeks to determine patterns within networks, which can then aid in unifying a variety of applications.

Characteristics of Today’s Networks

In the past, congestion was not such a huge problem, but now it is becoming more and more so. This can even be considered when talking about social networks, with Nagurney explaining that with “a push of a button, you can reach 10s of thousands of millions” of people.

The behavior of users is also an important characteristic to consider. Users, both on an individual and group level, can behave in a variety of ways within a network. This can even lead to alternative behaviors and paradoxes, such as the Braess Paradox. The paradox highlights the cost to society concerning user optimization vs. system optimization.
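The Braess Paradox can be made concrete with the classic textbook instance (the numbers below are the standard illustration, not from Nagurney’s talk): adding a free shortcut to a congested network makes every self-interested driver worse off.

```python
# 4000 drivers travel Start -> End via two routes. On each route, one
# link's delay grows with traffic (t/100 minutes for t cars) and the
# other link takes a fixed 45 minutes.
drivers = 4000

# Before the shortcut: traffic splits evenly, 2000 cars per route.
per_route = drivers / 2
before = per_route / 100 + 45          # 20 + 45 = 65 minutes each

# After a zero-delay shortcut joins the two variable links, every
# self-interested driver takes both variable links.
after = drivers / 100 + drivers / 100  # 40 + 40 = 80 minutes each

print(before, after)  # 65.0 80.0
```

Everyone’s commute rises from 65 to 80 minutes: user optimization (each driver minimizing their own time) diverges from system optimization, which is exactly the cost to society the paradox highlights.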

The Supernetwork

Nagurney postulates that it’s time for a new paradigm: that of the supernetwork. These supernetworks can be connected, multilevel, or even multi-criteria. It’s important to not only study individual decision-making, but “the effect of many competing, collaborating, cooperating.”

With these supernetworks come new tools to study them, including game theory and optimization theory. She also lists a few common applications of these supernetworks, including knowledge networks, teleshopping decision-making, and electronic transactions.

Nagurney then explores how these supernetworks can integrate social networks by looking at types of relationships. The value and strength of the relationships that are fostered become the “flows” in social networks. She explains that establishing relationships incurs costs, but higher relationship levels bring a reduction in costs and risk and an increase in value. Users’ belief in social responsibility and the fact that social networks are dynamic and ever-changing are important factors to consider when studying these networks.

The Principle of Notworking - Geert Lovink

Dr. Geert Lovink is a Research Professor of Interactive Media at the Hogeschool van Amsterdam and an Associate Professor of New Media at the University of Amsterdam. His book The Principle of Notworking was published in 2005.

Throughout the first section (“Multitude, Network and Culture”) of Lovink’s book The Principle of Notworking, Lovink mainly quotes George Yudice, Antonio Negri, and Michael Hardt. (In 2003, Yudice wrote the book The Expediency of Culture: Uses of Culture in the Global Era, where he theorizes about the changing role of culture in a world that’s becoming more global-oriented. Negri and Hardt co-wrote the books Empire (2000) and Multitude: War and Democracy in the Age of Empire (2004). While Empire was about corporations and global institutions coming to the forefront, Multitude centered on the population of the ‘empire,’ explaining that this body is defined by its diversity.)

Lovink begins his book by explaining the importance of analyzing culture as a resource, rather than a commodity, which he argues is especially important when discussing Internet culture. He believes that the commercial efforts of the dotcom models during the late 1990s were “wrong.” He argues that the, “culturalization of the Internet is at hand,” and, like Nagurney, seeks to present the importance of the user over the system.

Much like Nagurney stated in her presentation, Lovink also recognizes that an important aspect of Internet culture is that it is in, “a permanent flux.” He explains that experts on the Internet are still having trouble comprehending this, though, mentioning that it is a “cultural turn.” He notes that those having trouble seeing the Internet as something constantly changing still see the Internet as a commodity and tend to hold theories of “religious nature.”

In accordance with his belief in the importance of the user over the system, he argues that further research is required on the subject and does not consider Nagurney’s scientific approach adequate. With this, he thinks that new media needs a language of its own, one more inclusive of his idea of networks as “post-human.”

Lovink also explains the importance of having different communities come together (similar to a point Nagurney makes). He sees this happening with the outsourcing of IT, which allows for the chance of “cultural mingling.” But, while networks have the opportunity to foster creativity, cooperation, and a sense of liberation, they can also be used for the purpose of control. This is mentioned through his discussion of the ‘protocol’ theory and Gilles Deleuze’s ideas of ‘the control society.’

What Lovink believes defines today’s networks he describes through the term “notworking.” It is the elements that went awry within the make-up of yesterday’s network that help shape the network of today. These examples of “notworking,” such as spam and viruses, stem from the “frustrated mind” – those “who breach the consensus culture” and are pushed to the outer boundaries of the network.

Review of The Exploit: A Theory of Networks (2 Reviews + 1 Response)

The Exploit: A Theory of Networks is a book co-written by Alexander Galloway and Eugene Thacker, which was published in 2007. It is a theoretical book about how networks operate, their political implications, and how flaws in the system can lead to positive change. Galloway is an associate professor in the Department of Culture and Communication at New York University. Eugene Thacker is an associate professor of new media in the School of Literature, Communication, and Culture at the Georgia Institute of Technology.

Review 1: Daniel Gilfillan

Daniel Gilfillan is Associate Professor of German Studies and Information Literacy, and Affiliate Faculty in Film and Media Studies and Jewish Studies at Arizona State University. Read more about him and his work on his Academic Portfolio site.

Gilfillan’s review of The Exploit mainly focuses on commending Galloway and Thacker for presenting a contemporary understanding of networks. Like Lovink, Gilfillan, Galloway, and Thacker recognize that networks are used for control purposes and consumerism (also referencing Deleuze and his “control societies” and “dataveillance” concepts).

What Gilfillan is mainly concerned with is the concept of pushing past this “system of control” by taking advantage of openings within it, which can lead to something new and progressive. Similar to Lovink’s point that what makes networking is the “notworking,” Gilfillan agrees with Galloway and Thacker that it is these “flaws” within networks that make progressive change possible. In relation to this, Gilfillan discusses Galloway and Thacker’s belief that there is a new balance between networks: an “alliance between ‘control’ and ‘emergence.’” But a new type of asymmetry must be found that takes advantage of inconsistencies within a network; Galloway and Thacker call this need both the “antiweb” and “an exceptional topology.”

While networks need hierarchical systems of control, it is also important to have aspects of a decentralized system of distribution. This allows for asymmetry, and hence flaws, within the system. Gilfillan notes that it is here that “counterprotocol practices” become possible, making advancement achievable: “it will be sculpted into something better, something in closer agreement with the real wants and desires of its users” (from Galloway & Thacker).

He gives the following definitions as a guide to the exploitation of these flaws:

  • Vector: The exploit requires a medium where an action or motion can take place
  • Flaw: The exploit needs weaknesses within the network, enabling the exposure of the vector
  • Transgression: The exploit then creates a change within the ontology of the network, making the “failure” of the network an alteration in its topology

Review 2: Nathaniel Tkacz

Nathaniel Tkacz is a PhD candidate at the University of Melbourne, where he’s researching the “political dynamic of Open Projects (projects influenced by the principles and production models of Free and Open Source Software, but translated into different domains).” Read more about him and his work on his research site.

While protocol was a minor detail in the overall message presented by Gilfillan, it was the main topic of discussion for Tkacz. He explains, “protocol is a set of rules or codes that enables, modulates, and governs a specific network and also a general logic of governance for all networks.” It is a form of control and a way of “directing flows of information,” which he equates to the Panopticon in Foucault’s disciplinary society.

But this protocol allows for the exploitation of the flaws within it: it becomes the “target of resistance.” Rather than changing existing technologies to promote transformation, “protological struggles” emerge that entail “discovering holes in existing technologies and projecting potential change through these holes.” These “holes” are called “exploits” by hackers.

From here, Tkacz goes on to explain a number of “limitations” he feels the book has. He believes the way the book was structured created some limitations in itself (it was written as a ‘network,’ which he believes left things underdeveloped). Another problem Tkacz sees is that the book relies too heavily on the “old centralized/decentralized dichotomy” rather than holding firm to one of its main claims: networks can take numerous forms. A third dilemma is that he found the authors’ protocol/exploit argument less persuasive as it moved from the specific, more important details to the general points.

Author Response: Alexander R. Galloway and Eugene Thacker

The authors begin their response by noting that Gilfillan mentioned one of the key points of the book: “the uncannily anonymous, network tactics demonstrated by ‘pliant and vigorous nonhuman actors.’” They explain their interest in the view that networks are “something beyond the human altogether.” While networks might once have originated from human means, in their functioning as networks they have lost their most essential human qualities. Viruses on networks don’t thrive because the network is “down” and not working properly; rather, they excel because the networks are working exactly as they should. This is similar to Lovink’s point about networks being “post-human.”

Looking at both Gilfillan’s and Tkacz’s mention of Foucault and Deleuze being used in The Exploit, Galloway and Thacker clear up their reasoning behind using Foucault’s ideas. The two authors were not looking at Foucault’s work concerning discipline-surveillance; rather, they looked to build upon his work in biopolitics and security. Similarly, the authors note that the influential aspects of Deleuze did not just lie in his essay on “control societies.” Rather, it was in connecting that concept to his interest in the notions of immanence and univocity (the belief, expanded upon from Spinoza, that there are no numerically separate substances).

The authors ultimately ask: what should be done concerning these networks? “Should we as humans learn to be more like nonhumans?” They explain that there have been a number of responses to this question throughout philosophy, but three in particular that they deem important. The first is the “master of the universe” attitude, which says that exploits, such as viruses, must be eliminated. The opposite viewpoint is that of the agnostic: here it is accepted that “the world is lost in the hands of technology, dry and lifeless after the passage into modernity.” The third holds that within this “dry and lifeless” world lies something new and emergent at the core.

The authors leave us with the question, “Can there be an ontology of networks?” Must there always be an outside mediator to the network? Can a network topology express itself from within?