

Travelogue 3 Conclusion: It’s up to YOU to develop Living Stories

LIVING STORIES: What is it?

Over the past several weeks I decided to investigate Living Stories, Google’s experimental interface for experiencing news online. From December 2009 to February 2010, Google ran the experiment with The New York Times and The Washington Post to find out whether people preferred and enjoyed this new way of experiencing online news. Since the experiment ended, there has been growing optimism about how it could change the nature and interface of online news.

Watch the video to understand what exactly Living Stories is:

[Embedded YouTube video: an overview of Living Stories]

Open Sourcing Living Stories: What happened to it?

Around the time I started researching it, Google announced that it was open sourcing the code to the public in hopes that people would find their own unique ways to develop and implement it. So my focus revolved around the question: who, if anyone, was developing the newly open sourced Living Stories? That question led me to dig around the project website, the official Google Blog, online articles, and the discussion forum, hoping to find people besides Google’s original guinea pigs (The New York Times and The Washington Post, two of the most heralded newspapers in the US) who were trying to cultivate the program on their own.

According to Google, 75% of the people surveyed preferred Living Stories to traditional news formats. Google considered the experiment a success and released the code to the public on February 17th as open source. With that, I set out to learn more about the quiet success of this new concept for online news. Here is a recap of my focal points for the investigation:

“There are times when silence has the loudest voice” – Leroy Brownlow

  • My research will be to figure out what I can about where the project is going since its release to the public. I have already contacted some of the Google owners who were in charge of Living Stories, as well as people at The New York Times and The Washington Post, to see what they are continuing to do with the format.
  • In addition, I will try to seek out some developers who are working with it to see what they have been able to do with it.
  • Lastly, I will attempt to contact various news agencies and ask whether they would implement such a format on their online sites.

What was I able to find out? It’s hibernating for now

After sending out several emails to leads I gathered while perusing the Living Stories discussion forum, I was able to get hold of and interview Neha Singh, a software engineer at Google, and another person at Nature Publishing Group, using the pseudonym Eugene, who is attempting to develop it further for online scientific articles like those of Nature News.

Eugene told me, “We’re looking at experimenting with it to show both science news and the human stories behind important scientific discoveries published in the journal.” He was enthusiastic about developing the code despite running into a couple of minor problems with the content manager timing out; for the most part, he was hoping to build a timeline interface of historical articles on the same topic.

Mr. Singh was helpful in taking the time to answer my questions, but could not divulge any information that would lead me to developers or others who might be working on the code. He also couldn’t provide contacts at The New York Times or The Washington Post without their permission. Mostly he could only tell me what he had already written on the official Google Blog, and, as I expected, he could not answer some of the harder questions, such as whether open sourcing the code was a political move to develop better relations with news companies and the general public. He could not comment.

Another lingering question was whether Google’s decision to open source the code for Living Stories was planned from the beginning or considered only after the experiment was over. After checking the Living Stories blog post from December, when the project originally started, against the answer I received from Neha, I learned that Google intended from the start to open source it once the experiment finished.

Paul Bradshaw, of the Online Journalism Blog, reported on Google’s Living Stories. Bradshaw asks two very important questions that I thought were worth including:
  • How much of the construction of the page is done automatically, and how much requires someone to input and connect data?

This question addresses the extent and ingenuity of the code itself. The code creates an interface that lets updated versions of a story continue to funnel down the page, with several key features to choose from along the sides, e.g. “most popular”. However, Eugene did mention that the content manager kept timing out, so I presume the page is constructed somewhat automatically but still needs someone to input and connect further stories. From what I found out about its features, it can filter out information that you (as a reader) have previously read and highlight what is new.
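To make that last point concrete, here is a minimal sketch of the read/new filtering idea in Python. The real Living Stories release is a Java/GWT project; the data shapes and the function below are illustrative assumptions of mine, not its actual schema or API.

    from datetime import datetime

    # A story "update": an id, a timestamp, and a summary line (field names assumed for illustration).
    updates = [
        {"id": 1, "time": datetime(2010, 2, 1), "text": "Initial report"},
        {"id": 2, "time": datetime(2010, 2, 3), "text": "Follow-up analysis"},
        {"id": 3, "time": datetime(2010, 2, 5), "text": "New development"},
    ]

    def render_living_story(updates, last_visit, seen_ids):
        """Newest first; drop items the reader already read, flag items newer than the last visit."""
        visible = [u for u in updates if u["id"] not in seen_ids]   # filter out previously read items
        visible.sort(key=lambda u: u["time"], reverse=True)         # newest updates funnel to the top
        for u in visible:
            marker = "NEW" if u["time"] > last_visit else "   "
            print(f"[{marker}] {u['time']:%Y-%m-%d}  {u['text']}")

    # A reader who last visited on Feb 2 and had already read update 1
    # sees updates 3 and 2, with update 3 flagged as new.
    render_living_story(updates, last_visit=datetime(2010, 2, 2), seen_ids={1})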

  • How does this address the advertising problem?

Of course, advertising is very important for publishers. There were no advertisements on Living Stories yet, but publishers who adopt it could potentially place advertisements alongside the articles. Google has already announced a revenue-sharing arrangement with publishers for Fast Flip, so it should be able to implement advertising for revenue purposes with Living Stories as well, if publishers decide to adopt it.

Conclusions and a lingering curiosity:

I held out a little longer because I was hoping to get a response from a contact at The New York Times. Unfortunately, he didn’t respond to my email, but if he replies in the next couple of days I’ll post an update on Living Stories. I believe this experimental format for online news raises some interesting questions about the simple but profound fact that it is open source. Moreover, the silence speaks volumes given that Google considers the experiment a success and remains optimistic about it. Although I wasn’t able to find out much about how people are developing the code, I don’t believe we have seen the last of Living Stories. I really want to know what The New York Times and The Washington Post are doing with it.

For one thing, profit is the driving force behind businesses, so I wonder how the free, open source Living Stories format would compare with something like Times Reader 2.0, where the reader pays a weekly subscription of $3.45.

graffiti from a distance – I see, therefore I create.

new media art

Artists have always been among the most rapid adopters (and adapters) of the new. Emerging media technologies, all the way from the printing press in the 16th century to film and video in the 20th, were welcomed by the likes of Goya, Dürer, Buñuel, Nam June Paik and Andy Warhol and quickly incorporated into their work, coining new artistic languages and setting aesthetic boundaries to cross.

According to Wikipedia,

New media art is a genre that encompasses artworks created with new media technologies, including digital art, computer graphics, computer animation, virtual art, Internet art, interactive art technologies, computer robotics, and art as biotechnology.

Zachary Lieberman is a new media artist based in NYC. By his own description, he uses technology in a playful way to explore the nature of communication and the boundary between the visible and the invisible. He creates performances, installations, and online works that investigate gestural input, augmentation of the body, and kinetic response. He collaborates with Golan Levin, and the two were recently nominated for Wired’s artist of the year award.

Take a look at his recent work: he helped create visuals for the facade of the new Ars Electronica Museum in Linz, Austria; wrote software for an augmented reality card trick; helped develop a 3D drawing tool; and worked on a typeface designed from a racing car’s tracks on the asphalt. In addition to making artistic work, Lieberman teaches graphics programming at Parsons School of Design.

eyeWriter

The project I want to focus on goes like this: in 2003, Tony Quan, an LA graffiti writer, publisher, and activist, was diagnosed with amyotrophic lateral sclerosis (ALS), which resulted in total physical paralysis, except for his eyes. In August of 2009, Lieberman and a team of collaborators from London, Hong Kong, Madrid, and Amsterdam met in California and began to work on something that would allow Tony, aka TemptOne, to draw again.

They developed a low-cost, open source eye-tracking system plus custom software that allows graffiti writers and artists with paralysis to draw, using only their eyes, with light projected on large surfaces. They worked for 10 days on a design that consists roughly of this:

[EyeWriter diagram: a system assembled from cheap components]
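For a rough sense of how such a pipeline can work, here is a minimal sketch in Python; the real EyeWriter software is an openFrameworks/C++ project, and the calibration numbers and helper names below are made up for illustration. The user fixates on a few known screen targets so the software can fit a mapping from camera-space pupil positions to screen coordinates; after that, each pupil detection extends the current stroke, which a projector can paint onto a wall in real time.

    import numpy as np

    # Calibration: the user looks at known screen targets while we record the raw
    # pupil positions reported by the eye camera. These sample values are invented.
    pupil_samples  = np.array([[120, 80], [300, 85], [125, 210], [305, 215]], dtype=float)
    screen_targets = np.array([[0, 0], [1280, 0], [0, 720], [1280, 720]], dtype=float)

    def fit_affine(src, dst):
        """Least-squares affine map from camera space to screen space."""
        A = np.hstack([src, np.ones((len(src), 1))])   # rows of [x, y, 1]
        M, _, _, _ = np.linalg.lstsq(A, dst, rcond=None)
        return M                                       # 3x2 matrix

    def to_screen(pupil_xy, M):
        x, y = pupil_xy
        return np.array([x, y, 1.0]) @ M

    M = fit_affine(pupil_samples, screen_targets)

    # A drawing session: each new pupil detection extends the stroke being projected.
    stroke = [to_screen(p, M) for p in [(130, 90), (180, 120), (240, 160), (295, 205)]]
    for point in stroke:
        print(f"stroke point at screen ({point[0]:.0f}, {point[1]:.0f})")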

Before Tempt could start tagging, the system had to recognize his unique eye-alphabet. Check out the process.

In August 2009 TemptOne drew his tag again after more than 5 years without drawing. You can check out the video of the first eyetag on my last post.

phase 2

The next phase of the project began last August. The long-term goal is to create a professional and social network of software developers, hardware hackers, urban projection artists, and ALS patients from around the world who use local materials and open source research to creatively connect and make eye art.

[Photo: Kyoto City Hall, tagged]

Since the first eyetag, TemptOne has been tagging nonstop, and since digital tagging erases the geographical limitations of physical graffiti, his art has gone well beyond L.A. Take a look at his work on Kyoto’s City Hall.

next step

I want to learn what is going on with EyeWriter: how has the project evolved? How has it influenced other new media artists? What are the limitations of the medium? What does it allow the artist to do, from an aesthetic perspective? If it is open source, who else has worked on it, and how have they used it?

I will, of course, try to contact Zack Lieberman.  Stay tuned.

Reading Summaries for Week 5

The Wealth of Networks – Yochai Benkler

Economics:

Between 1835 and 1850, the cost of starting a mass-circulation daily paper was about $10,000; currently that number has become $2.5 million, a figure large enough that you need a business model behind it. This produced a sharp distinction between producers and consumers: production became largely professional, based on a model in which the capital used is supplied either by the state (in some countries) or by the market (in others). The evolution ran through radio, television, satellite, and mainframes, and then to personal computing, 150 years later. The first widely acclaimed supercomputer of this era was the NEC Earth Simulator in June of 2002, which was later beaten by IBM’s Blue Gene. Since this achievement there have been 500 supercomputers, developed by large-scale collaborations and funded by the wealthiest companies. Benkler’s main point is the radical decentralization of capital, in the form of computation, storage, and communications capacity, in the networked information economy. Every connected individual in the world, roughly 600 million to 1 billion people, has the physical capital necessary to make and communicate information, knowledge, and culture. This creates a situation new and different from anything since the industrial revolution: most importantly, the inputs into the economic activities of the most advanced economies are now widely distributed across the population.

  • Computation, communication, and storage capacity
  • Insight, creativity, and experience – these cannot be bought

Main point: moving people from the peripheries of the economy (changing the motivation) into the very core as an alternative source of production.

COMMONS-BASED PRODUCTION

Production without exclusion from either the inputs or the outputs; it can be individual or collaborative, commercial or non-commercial. Practical capacity is decentralized, and the commons locates the authority to act where that capacity resides (i.e. Britannica, Windows).

A subset of commons-based production is peer production and sharing through large-scale collaboration. Historically, large-scale collaboration among human beings has mainly been organized as traditional industrial production (state, market, price signals).

PEER PRODUCTION examples:

  1. For the past 12 years: the web server market (Apache vs. Microsoft’s server).
  2. NASA Mars mapping: take the same output but structure the work differently, and you can harness massive amounts of effort for collective tasks (groups of images put together by many volunteers, a dramatically different process than having one person do it).
  3. Efforts are beginning to span non-commercial and commercial entities, for example the creation of educational materials through social motivation.
    1. Peer production allows self-selection by tapping into diverse insights and capabilities, and makes it possible for people to spend whatever amount of time they have on a task. Benkler thinks he is seeing more design levers and task restructuring in this type of development.
    2. We also have to factor in self-selection and humanization, much as in game theory: the same person, with the same material payoff structure, turns out to behave differently when they understand they are engaging in a human interaction with others.
    3. Norm creation: we map the presence of money onto a set of norms, because cooperation has usually been non-market. There are discrete places where the introduction of money changes the whole structure, especially its stabilization.
    4. As long as physical capital must be large-scale and centralized to work effectively, we are left with centralized firms, governmental or not.
    5. With decentralization and non-market motivations we are finding a new form of production: social sharing and exchange.
    6. Important to remember for ECONOMICS: the new opportunities.
      1. Finished information and cultural goods, and platforms for self-expression and collaboration, become a business opportunity instead of a challenge.
      2. Example: BBC citizen journalism. The only images the news agency possessed of the events in the subways were those captured on mobile phones; the BBC now has a page (similar to CNN iReport) for people volunteering to capture the news.
      3. Social production is a real fact, not a fad, and is at times more efficient than market production.
      4. POLITICS
        1. People can now do more for and by themselves, alone or in loose cooperation with others. You can diversify the things you do with others because you can collaborate in smaller bits.
        2. Example: new voting machines (Diebold). An activist publishes the source code (which is really hard to do); Diebold complains that its emails have been taken and claims copyright infringement, but individuals on other campuses have already shared the material and it is all over the place. The ecology that has proven resistant is a combination of individual volunteers, legal free-software developers, and companies operating commercially and illegally in other countries; this combination of legal and illegal actors creates a robust system that can’t be broken at this point.
        3. The claim that the internet democratizes, and its critiques:
          1. First-generation critique: supposedly no one will know what anyone says (aggregation, power law, polarization).
          2. Second-generation critique: anyone can speak, but not everyone can hear you.

      What is Power Law distribution?

      1. Sites cluster by related content, and intensely related communities and cultures start to develop. What determines the agenda is a small number of heavily linked broadcasters: they begin mutually linking to one another, and those few sites become the broadcast sites. The agenda, in the end, is whatever those few sites transmit as what their few users believe the agenda is.
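      In rough terms (my own gloss, not Benkler’s notation), a power law means that the probability of a site having k inbound links falls off polynomially rather than exponentially:

          P(k) \propto k^{-\alpha}, \qquad \alpha > 1

      so a handful of heavily linked “broadcast” sites capture most of the attention, while the vast majority of sites sit in a long tail with only a few links each.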

  1. Cultural public sphere: where we create images and sound. This has a political component: a small number of producers broadcasting to a large audience of passive consumers.
    1. Example: the Kanye West “Gold Digger” mashup, a video that shows how far borrowing can go in displaying the relationship between culture and politics.
    2. Commons-based and peer production are beginning to help:
      1. Free and open source software
      2. Open academic publishing
      3. Open source biomedical innovation
      4. Rules can make some actions easier or harder: the institutional ecology can try to make information sharing more costly or subject to permission, but the market and society have a persistent desire, and push back, to be free and productive.

      SHARING AND MASHING CAN BE POLITICAL

Excerpts from The Success of Open Source by Steven Weber

Property in a Software Economy

  1. Property underpins the social organization of cooperation and production in the digital era. Social organization has changed the definition of property (owning something and having legal responsibilities and rights), and property in the new media age has in turn changed the idea of social organization itself. The definition of property in an open source environment is basically the right to distribute, not the right to exclude. As a political economy, open source is a system of value creation and a set of governance mechanisms.

“In this case it is a governance system that holds together a community of producers around this counter intuitive notion of property rights as distribution. It is also a political economy that taps into a broad range of human motivations and relies on a creative and evolving set of organizational structures to coordinate behavior. “

  1. So how do these people, who are not physically connected to each other, manage to come together and build these complex projects for no monetary compensation?
  2. Open source software and collaboration are a product of internet culture and have been created with internet technology.

Open source depends on the following:

“It is about computers and software, because the success of open source rests ultimately on computer code, code that people often find more functional, reliable, and faster to evolve than most proprietary software built inside a conventional corporate organization.”

  1. Open source does not eliminate the idea of profit, capitalism, or property rights; companies and open source producers are joining together and creating new types of business models, evolving our view of what property and intellectual-property rights are.
  2. The open source community is neither a chaotic nor an entirely calm place; it has political value, and conflicts of interest do arise within it.
  3. THE BIGGER PICTURE
    1. The context of the internet revolution and the demise of the so-called “dot com” boom put a damper on what potential was thought to be left in internet technology. Open source became popular when Linux was gaining attention. (Linux is an open source, Unix-like operating system.)

The open source story opens up a significant set of questions about the economics and sociology of network organization, not just network economics. And it demonstrates the viability of a massively distributed innovation system that stretches the boundaries of conventional notions about limits to the division of labor.

  1. Open source raises several questions about the sociology of network organizations and demonstrates how far it stretches what Weber calls the “division of labor.” This overlaps with Lessig’s case: in a computational environment, software code plays a structuring role much like law does in conventional social space. Human-computer interface designers are deeply aware that what they build embodies decisions about policy, rights, values, and basic philosophical views of human action in the world. The open source community has a set of principles; the criteria include:
    1. Entering and leaving, leadership roles, power relations, distributional issues, and education and socialization paths.
    2. Weber makes a very good point: during most social or economic change, analysts tend to focus on what we are losing rather than on what we are gaining by moving forward. We are challenging the old methods, and conventional thinking sees this as the destruction of creativity, whereas it is really its rebirth in a new medium.
    3. The third area is the nature of collaboration: “Production processes that evolve in this space are not a hard test of limits but rather a leading indicator of change and a place where experiments can be seen at a relatively early stage.”
    4. Open source is testing social organization based on how we define property. Issues arise when we try to think about ownership in this environment: rights to access, rights to extract, rights to sell, etc. What does it mean to own something?

“Open source radically inverts the idea of exclusion as a basis of thinking about property. Property in open source is configured fundamentally around the right to distribute, not the right to exclude. This places the open source process a step beyond standard norms of sharing in a conventional scientific research community.” Copying is encouraged and allowed; you are basically giving back to the community when you provide your input to the open source project.

  1. How big of a phenomenon is this? How broad is its scope?
  2. It is an important idea for social scientists to think about if it were to become large-scale cooperation (which I think it already is).
  3. How can it help our economy and growth?

Commons-based Peer Production and Virtue – Yochai Benkler and Helen Nissenbaum

  • Commons-based peer production is a socio-economic system of production emerging in the digitally networked environment. There are three major parts to this: the infrastructure of the internet; collaboration among large groups of individuals; and the provision of information, knowledge, or cultural goods without relying on the market (corporations). Benkler and Nissenbaum believe that this type of production offers people the opportunity to engage in virtuous behavior.
  • A society that provides opportunities for virtuous behavior is one that is more conducive to virtuous individuals.
  • The practice of effective virtuous behavior may lead to more people adopting virtues of their own.
  • “Thesis: that socio-technical systems of commons-based peer production offer not only a remarkable medium of production for various kinds of information goods but serve as a context for positive character formation.”
  • Examples of Commons-Based Peer Production
    • Free software projects and open source software are a collective effort of people toward a common goal in a more or less informal or loosely structured way. No one owns anything. The most famous products of this type are the GNU/Linux operating system, the Apache web server, Perl, and BIND. In the creation of these projects there is no formal leadership to limit power in discussion; the effort is a combination of good will, volunteerism, and technology.
    • SETI@home is large-scale volunteer production through internet-connected computers in the Search for Extraterrestrial Intelligence. You download a small application in the form of a screensaver, and while you are away your computer processes data from the SETI project. In essence it created one of the largest supercomputers by distributing the tasks across different networks.
    • NASA Clickworkers experiment: individuals collaborated in five-minute increments to map and classify Mars craters, collectively completing tasks that would otherwise take a PhD months of work.
    • Wikipedia: roughly 30,000 users collaborating to create an online encyclopedia. Wikipedia does not include elaborate software-controlled access or editing functionality; instead it relies on the self-conscious use of open discourse aimed at consensus among all the users.

“Slashdot, a collaboration platform used by between 250,000 and 500,000 users. Users post links to technology stories they come across, together with comments on them. Others then join in a conversation about the technology-related events, with comments on the underlying stories as well as comments on comments.” Slashdot is designed to constrain antisocial behavior through its moderation points, which limit the amount of influence any one user can have on the collective.

COMMONS-BASED PEER PRODUCTION – Principles

  1. Peer production is a model of social production, emerging alongside contract- and market-based, managerial, and state-based production.
  2. It has two core characteristics. The first is decentralization: the authority to act resides with individual agents faced with opportunities for action, rather than with a central organizer.
  3. The second is that it uses social cues and motivations, rather than the prices or commands used in markets and corporations, to motivate and coordinate individuals.
  4. This combines physical capital, a common goal, and human effort and creativity.
  5. Peer production enterprises are becoming a mix of social and technical systems that encourage groups of users to collaborate without the backing or incentive of monetary compensation for the use of physical capital. Three structural attributes matter: first, potential objects of peer production must be modular (divisible into components).
  6. Second, the granularity of the modules: the sizes of the pieces the project is broken into.
  7. Third, low-cost integration, which includes both quality control over the modules and the functionality to bring the whole project together (a toy sketch of these three attributes follows this list).
  8. One way to solve certain collective action problems is the GNU General Public License (GPL), which prevents defection from many free software projects (e.g. GNU/Linux).
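As a toy illustration of those three structural attributes, here is a short sketch in Python in the spirit of the NASA Clickworkers example above; the task breakdown and the majority-vote integration are my own simplification, not code from any of these projects.

    from collections import Counter

    # Modularity: the big job (classify craters across a mosaic of Mars images) is divided
    # into small, independent modules -- one image tile per volunteer task.
    # Granularity: each tile takes only a few minutes, so volunteers can self-select
    # however much work they want to do.
    volunteer_labels = {
        "tile_001": ["crater", "crater", "no crater"],
        "tile_002": ["no crater", "no crater", "no crater"],
        "tile_003": ["crater", "no crater", "crater"],
    }

    def integrate(labels_per_tile, min_votes=3):
        """Low-cost integration: a simple majority vote that doubles as quality control."""
        results = {}
        for tile, labels in labels_per_tile.items():
            if len(labels) < min_votes:
                continue                                        # not enough independent judgments yet
            label, count = Counter(labels).most_common(1)[0]
            results[tile] = (label, count / len(labels))        # consensus label and agreement level
        return results

    print(integrate(volunteer_labels))
    # tile_001 and tile_003 -> crater (2/3 agreement), tile_002 -> no crater (unanimous)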

ADVANTAGES OF PEER to PEER Collaborations

i. Information gain.

ii. The variability in the fit of people to projects and to existing information resources is great; the larger the number of people, the more resources they have for projects.

iii. People contribute to these projects because they gain a sense of purpose, they can display their creativity, or there is a common social goal and a sense of companionship within a technical community.

COMMONS-BASED PEER PRODUCTION and VIRTUE

CLUSTER I: AUTONOMY, INDEPENDENCE, LIBERATION

Individuals choose to participate freely and  can contribute however much they want. They exercise free will and aren’t placed under any demand constraints.

CLUSTER II: CREATIVITY, PRODUCTIVITY, INDUSTRY

Our day-to-day lives are programmed, from TV channels to our typical workdays; peer production enables individuals to be more creative and productive in their tasks.

CLUSTER III: BENEVOLENCE, CHARITY, GENEROSITY, ALTRUISM

To seek the good in others, and to benefit and help them, is a common goal in commons-based peer production; individuals are not contributing in order to outdo one another.

CLUSTER IV: SOCIABILITY, CAMARADERIE, FRIENDSHIP, COOPERATION, CIVIC VIRTUE

The open-hearted contribution is to a commons, a community, a public, a mission, or a fellowship.

“Virtue leads people to participate in commons-based peer projects, and that participation may give rise to virtue”

  1. Peer production benefits others because the individuals are contributing to a common good, and this enables autonomy and promotes public good.
  2. The Free/Libre and Open Source Software (FLOSS) survey and study found that the greatest percentage of respondents agreed that it enables more freedom in software development, new forms of cooperation, opportunities to create more varieties of software, and innovative breakthroughs; a common motivation was to “share my knowledge and skill.”

PUBLIC POLICY:

“Technical systems and devices are as much a part of political and moral life as practices, laws, regulations, institutions and norms. “

“Peer production can be said to provide a social context in which to act out, and a set of social practices through which to inculcate and develop, some quite basic human, social and political virtues.”

THE CATHEDRAL and the BAZAAR – Eric Steven Raymond

  • Linux is a world-class operating system created by several thousand developers scattered around the world, connected only by the internet. Raymond became involved with the project in 1993 but had already been part of the open source community for 10 years.
  • Raymond has collaborated on projects including GNU software, nethack, Emacs’s VC mode, and others.
  • He had believed that the most important software needed to be built like “cathedrals”: carefully engineered and created by small groups of individuals working in isolation.
  • The Linux community “seemed to resemble a great babbling bazaar of different agendas and approaches.”
  • This system of chaos, similar to a bazaar, shockingly worked alongside the “cathedral workers,” as Raymond calls them. The cathedral style is what is mostly used in the commercial world, whereas the bazaar style is how the Linux system was developed. In this essay he uses both approaches to see which is better with respect to software debugging.
  • He runs an experiment of his own by building a mail utility called fetchmail. He wanted access to his email on his local machine; the mail sat on a remote POP3 server, so it had to be fetched from there and handed over for local delivery (a sketch of this fetch-and-forward idea appears after this list).
  • Linus Torvalds, for example, didn’t actually try to write Linux from scratch. Instead, he started by reusing code and ideas from Minix, a tiny Unix-like operating system for PC clones. Eventually all the Minix code went away or was completely rewritten—but while it was there, it provided scaffolding for the infant that would eventually become Linux.
  • He likewise used an existing POP client as the base to start his own work from.
  • After trying to extend fetchpop, he found a more robust code base to build from: Carl Harris’s popclient.
  • When you lose interest in a program, your last duty to it is to hand it off to a competent successor.
  • Treating your users as co-developers is your least-hassle route to rapid code improvement and effective debugging. A Unix tradition, one that Linux pushes to a happy extreme, is that a lot of users are hackers too; because source code is available, they can be effective hackers, which can be tremendously useful for shortening debugging time.
  • Early and frequent releases are a critical part of the Linux development model. Most developers (including me) used to believe this was bad policy for larger than trivial projects, because early versions are almost by definition buggy versions and you don’t want to wear out the patience of your users. Release early. Release often. And listen to your customers.
  • Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.
  • Linus demurred that the person who understands and fixes the problem is not necessarily, or even usually, the person who first characterizes it. “Somebody finds the problem,” he says, “and somebody else understands it.” (Linus’s Law)
  • Source-code awareness by both parties greatly enhances both good communication and the synergy between what a beta-tester reports and what the core developer(s) know. In turn, this means that the core developers’ time tends to be well conserved, even with many collaborators.
  • Smart data structures and dumb code work a lot better than the other way around.
  • If you treat your beta-testers as if they’re your most valuable resource, they will respond by becoming your most valuable resource.
  • The next best thing to having good ideas is recognizing good ideas from your users. Sometimes the latter is better.
  • Often, the most striking and innovative solutions come from realizing that your concept of the problem was wrong.
  • “Perfection (in design) is achieved not when there is nothing more to add, but rather when there is nothing more to take away.”
  • There is a more general lesson in this story about how SMTP delivery came to fetchmail. It is not only debugging that is parallelizable; development and (to a perhaps surprising extent) exploration of design space is, too. When your development mode is rapidly iterative, development and enhancement may become special cases of debugging—fixing `bugs of omission’ in the original capabilities or concept of the software.
  • Any tool should be useful in the expected way, but a truly great tool lends itself to uses you never expected.
  • When writing gateway software of any kind, take pains to disturb the data stream as little as possible—and never throw away information unless the recipient forces you to!
  • A security system is only as secure as its secret. Beware of pseudo-secrets.
  • Provided the development coordinator has a communications medium at least as good as the Internet, and knows how to lead without coercion, many heads are inevitably better than one.
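As a rough illustration of the job fetchmail does (the fetch-and-forward idea mentioned in the bullets above), here is a minimal sketch using Python’s standard poplib and smtplib rather than Raymond’s C code; the server name, credentials, and local recipient are placeholders. The idea is simply to pull waiting messages from a remote POP3 mailbox and hand each one to the local SMTP listener for normal delivery.

    import poplib
    import smtplib

    # Placeholders -- substitute a real POP3 server, account, and local recipient.
    POP_HOST, POP_USER, POP_PASS = "pop.example.com", "esr", "secret"
    LOCAL_RECIPIENT = "localuser"

    def fetch_and_forward():
        """Fetch mail from a remote POP3 mailbox and forward it to the local SMTP port."""
        pop = poplib.POP3(POP_HOST)
        pop.user(POP_USER)
        pop.pass_(POP_PASS)

        smtp = smtplib.SMTP("localhost")        # hand messages to the local mail system
        count, _ = pop.stat()                   # number of messages waiting on the server
        for i in range(1, count + 1):
            _, lines, _ = pop.retr(i)           # raw message as a list of byte strings
            message = b"\r\n".join(lines)
            smtp.sendmail(POP_USER, [LOCAL_RECIPIENT], message)
            pop.dele(i)                         # mark the message as deleted on the server

        smtp.quit()
        pop.quit()                              # commit deletions and disconnect

    if __name__ == "__main__":
        fetch_and_forward()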