Whether to outsource…

Every issue of Harvard Business Review includes a hypothetical case study, together with commentaries by leading practitioners on what they would recommend in the situation. The case study in the July-August issue, written by Nitin Nohria and titled “Feed R&D – or Farm It Out?”, is a very interesting examination of a company wondering whether to outsource part of its R&D to India. Issues raised include access to best-of-breed capabilities and the potential loss of intellectual property. Of the four commentaries, the most striking is by Azim Premji, chairman of Wipro, one of India’s top three technology services companies. He is cautious about the merits of outsourcing in this case, spends most of his analysis on the internal dynamics of the company involved and how to support collaboration, and emphasizes the importance of strong contractual protection of IP.

MeshForum in Chicago

As part of the broad shift towards recognizing networks as central to everything, MeshForum is shaping up as a fantastic conference, to be held in Chicago this May 1-4. Shannon Clark is the driving force behind it, and he and his colleagues have brought together a fabulous array of content covering social, business, and biological networks. I’ll be running a Living Networks Forum session at lunch on May 2, just before a session on social networks run by Esther Dyson.

The Metaweb

Nova Spivack, the grandson of Peter Drucker, has a vision of the connected future that aligns very strongly with mine. He describes the emerging “Metaweb” as the result of the rapid increase in both information connectivity and social connectivity, leading to the emergence of the global brain. A diagram and overview are provided on his website – well worth a look. I agree wholeheartedly that this is the direction we’re heading.

Experience the Living Networks in New York!

Since I wrote Living Networks, I’ve dreamed of creating an event that would literally bring the book to life, to allow people to experience personally the power of the networks and the implications for business. The first of what I hope will be a whole series of events – the BDI Living Networks Forum – will be held in New York City on December 4th, 2003. I have the perfect partner for this – the Business Development Institute, which combines a fantastic network of network-minded individuals and organizations with innovative business development services.

In the agenda you’ll see that the event is focused on creating interaction and what I call “enhanced serendipity” between participants. Using Spoke Software, we will show participants the strength of their relationship paths to all other attendees and, for any given individual, whom they know in common. Litéra collaboration software will be used as a platform for showing participants how to implement collaboration effectively within and across organizations.
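To make the “relationship path” idea a little more concrete, here is a toy sketch in Python. It is not Spoke’s actual algorithm, which is proprietary; it simply finds the shortest chain of acquaintances between two attendees in an invented who-knows-whom graph.

    from collections import deque

    def relationship_path(graph, start, target):
        # graph maps each person to the set of people they know.
        # Returns the shortest chain of acquaintances from start to target,
        # or None if the two people are not connected at all.
        queue = deque([[start]])
        visited = {start}
        while queue:
            path = queue.popleft()
            person = path[-1]
            if person == target:
                return path
            for contact in graph.get(person, set()):
                if contact not in visited:
                    visited.add(contact)
                    queue.append(path + [contact])
        return None

    # A tiny, made-up attendee network
    attendees = {
        "Alice": {"Bob", "Carol"},
        "Bob": {"Alice", "Dave"},
        "Carol": {"Alice"},
        "Dave": {"Bob"},
    }
    print(relationship_path(attendees, "Alice", "Dave"))  # ['Alice', 'Bob', 'Dave']

In a real event setting the interesting part is less the search itself than weighting each link by relationship strength and keeping the underlying contact data private, which is exactly where tools like Spoke aim to add value.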

We have some great supporters, and there’ll be some awesome people along. Nothing like this has been done before, so I’ll let you know some of what we learn at the event. Or, of course, it would be fantastic to meet you there!

I believe that events that use emerging social network technologies and effective design of participant interaction will over time become the norm. Hopefully “talking-head” conferences will die a natural death very soon. Events that apply living networks ideas will create immense value by bringing the right people together to create and share knowledge, ideas, and relationships. Be there at the birth of something big!

Creating the infrastructure for the trusted networks

I had lunch earlier this week with Stuart Henshall in San Francisco, and we had a delightful, wide-ranging discussion on topics of common interest. We’ve known each other for a good few years through scenario planning, and have a similar vision for the future of personal online networks. Among the many interesting issues Stuart focuses on, including consumer rights, is the role of trust in building networks. His vision is of a world in which everyone has their profile online, and shares both their profile and their personal connections selectively with trusted contacts. Sixdegrees.com was the first major online player in this space. I intended to write about it in Living Networks, but it went the way of all things in January 2001. The current top players in the space are Ryze.com and ecademy. However, effective trust systems are essential for these public online networks to work. In the first instance, we need layered control over how much of the information about our personal contacts we are willing to share. Intermediating software can help, for example by identifying, within a secure system, the contacts we have in common. Stuart and I estimated we would share 20-30 people in our email address books, but we don’t know who all of those people are. At the next level, if we can create software that enables people to draw on their personal contacts’ perceptions of others’ trustworthiness, this will enable us to expand our own personal networks in useful ways far more readily. These kinds of systems can be implemented in a global context, or inside or across organisations.
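As a rough illustration of the “identify the contacts we share without revealing the rest” idea, here is a minimal Python sketch. It assumes each party exchanges only hashes of the addresses in their address book; the function names and addresses are invented, and a production system would need salting or a proper private set intersection protocol, since bare hashes of email addresses can be attacked by dictionary guessing.

    import hashlib

    def fingerprint(email):
        # Hash an address so lists can be compared without exposing
        # the addresses the two parties do not have in common.
        return hashlib.sha256(email.strip().lower().encode()).hexdigest()

    def shared_contacts(my_contacts, their_fingerprints):
        # Return my contacts whose fingerprints also appear in the other
        # party's pre-hashed contact list.
        theirs = set(their_fingerprints)
        return [c for c in my_contacts if fingerprint(c) in theirs]

    # Example: the other party sends only hashes of their address book
    their_hashes = [fingerprint(e) for e in ["pat@example.com", "lee@example.com"]]
    mine = ["pat@example.com", "sam@example.com"]
    print(shared_contacts(mine, their_hashes))  # ['pat@example.com']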

One of the key issues that comes up for me is how we are going to get there. Creating a highly functional system that enables us to see and expand our global personal networks is a fabulous vision, which I dearly hope will come to fruition, but it is likely to take a long time and there is a risk it will never happen. In the first instance, as I write in Living Networks, people need to build evolutionary business models that can make money in creating the first steps of this vision, and that can easily morph into new models as the context moves on. The other key issue is standards. As in many domains, whoever “controls” this extraordinarily valuable space of personal connections can create, and extract, enormous value, so there will be plenty of competition to be the winner. If there are standard information definitions and interfaces between competing systems, fragmentation of the space can be avoided. Ultimately, if the vision is realized, it will most likely be driven by an open source initiative, which usually means there is less commercial value to be extracted. There’s a long way to go yet in creating a system that allows everyone to see exactly how they are connected in the global networks. I’ll try to keep you posted along the way.

Terrorism, technology, and an open society

Having just returned from a quick five-continent round-the-world trip, I am frequently asked about the mood around the world. Wherever you are, it is characterised by uncertainty. The focus is very short-term; people find it hard to think beyond a few months ahead. If we do stretch our minds a little further, there are compelling issues that will shape our future as humans. Terrorism is at the forefront of people’s minds, and the nature of technological development and the flow of information is such that ever-more frightening tools are becoming broadly available. Sun Microsystems chief scientist Bill Joy foresees a world in which “how-to” guides for creating deadly self-replicating viruses are freely available on the Internet. Joy himself has suggested that research be stopped or constrained in potentially dangerous fields. Others, especially governments, want to watch us all very, very closely, leaving privacy as a historical concept. One of the key choices the human race faces is how to respond to these challenges. A recent article in Salon.com provides an insightful perspective. Trying to constrain knowledge will lead to a far more divided society. What has created the most pressing problems on the planet today is division: of wealth, of opportunity, and of access. If we turn our backs on an open society, then whatever future we have will be a deeply unhappy one. We can only respond to the risks if we know what they are. There is certainly a balance to strike, but if we err, it should be towards more rather than less openness.

Making the global brain – à la Google

I believe one of the most important themes for our future is collaborative filtering – I will keep coming back to and developing this theme on these pages. It is fundamental to the formation of what we can think of as a “global brain”. As I describe in Living Networks, one of the most important functions of the human nervous system is to filter the massive sensory input it receives so that we are not overwhelmed. Similarly, in a world of massive and increasing information overload, we need mechanisms that make what is useful obvious, and what isn’t useful invisible. By collaborating on this task, each of us can benefit from the perceptions and judgments of us all. (Read the book sampler on the “free downloads” page for more.) Those who help create a higher level of collaborative filtering will add massive value – and with the right business models they can extract part of that value. Discrete examples include Amazon.com’s book recommendation system, the MovieLens film recommendation service, and the Media Unbound music personalization system, used by Pressplay and mentioned in my book.
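To show what collaborative filtering means in practice, here is a deliberately tiny user-based filter in Python. It is not how Amazon or MovieLens actually work; the data and the crude similarity measure (number of shared items) are invented purely to illustrate the principle of drawing on others’ judgments to surface what is likely to be useful.

    def recommend(ratings, user, k=3):
        # ratings maps each user to a dict of {item: score}.
        # Items the target user has not rated are scored by summing other
        # users' ratings, weighted by how many items they share with the target.
        mine = ratings[user]
        scores = {}
        for other, theirs in ratings.items():
            if other == user:
                continue
            overlap = len(set(mine) & set(theirs))  # crude similarity
            if overlap == 0:
                continue
            for item, score in theirs.items():
                if item not in mine:
                    scores[item] = scores.get(item, 0) + overlap * score
        return sorted(scores, key=scores.get, reverse=True)[:k]

    ratings = {
        "ann": {"book_a": 5, "book_b": 4},
        "bob": {"book_a": 4, "book_c": 5},
        "cid": {"book_b": 5, "book_d": 4},
    }
    print(recommend(ratings, "ann"))  # ['book_c', 'book_d']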

Which takes us to the much-discussed Google acquisition of Blogger. Steven Johnson has written an extremely interesting article on this for Slate. In short, he suggests that Google can pick up on how people navigate the web and use those patterns to draw out meaning for themselves and others. The analogy with the brain is that our repeated trains of thought are not only remembered more easily, but are also the very foundation of our neural pathways and thinking. I’d go further than Johnson and suggest that applying these approaches on a global scale could be critical in creating an information architecture far closer to that of a brain, providing highly effective filtering and the early stages of sense-making.
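The “strengthened pathway” analogy can be made concrete with a small sketch, assuming nothing about Google’s actual systems: every time a reader follows a link, the weight of that edge grows, and suggestions are drawn from the heaviest outgoing edges. The class and page names below are invented for illustration.

    from collections import defaultdict

    class NavigationGraph:
        # Toy model of usage-weighted pathways: repeated navigation from one
        # page to another strengthens that edge, much as repeated trains of
        # thought strengthen neural pathways.
        def __init__(self):
            self.weights = defaultdict(lambda: defaultdict(int))

        def record_click(self, from_page, to_page):
            self.weights[from_page][to_page] += 1

        def suggest(self, page, n=3):
            outgoing = self.weights[page]
            return sorted(outgoing, key=outgoing.get, reverse=True)[:n]

    g = NavigationGraph()
    g.record_click("blog/post-1", "essay/global-brain")
    g.record_click("blog/post-1", "essay/global-brain")
    g.record_click("blog/post-1", "news/google-blogger")
    print(g.suggest("blog/post-1"))  # ['essay/global-brain', 'news/google-blogger']

Aggregated across millions of readers, weights like these begin to act as the collective filtering layer described above, which is also why the question of who holds the data matters so much.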

One of the key issues that emerges from this is that whoever monitors our information usage patterns to create useful tools holds intensely personal information about us. Who will we trust to do this? Google-Watch, for one, doesn’t trust Google.

Preserving the end-to-end networks

Living Networks is about leadership in the networks. Lawrence Lessig is one of the true leaders today in realizing the full potential of the networks for all of us. In this article in the FT he gives a brief view of the underlying “end-to-end” architecture of the Internet, and why it must be preserved. He goes into the issue in far more depth in his book The Future of Ideas, a rich and marvellous exploration of how the architecture of the networks – in the broadest sense – will determine our ability to create by building on each other’s ideas. This is a good place to give another plug for Lessig’s fantastic Creative Commons project, which gives people access to customized licenses. It is the fluidity of intellectual property in all its forms that will determine how fast innovation moves. Creative Commons creates far more flexibility and fluidity in what is now a far-too-rigid intellectual property landscape.

The decentralization of the networks

The recent Supernova Conference in Palo Alto explored the theme of decentralization – of the networks, software, communications, and media. So many of the attendees were bloggers that it’s easy to get a good feel for the conference and its content through what they wrote. As attendees reported, the clicking of keyboards throughout conference sessions testified to the live logging of ideas and impressions. The conference set up a group weblog, with another list of conference blogs here. Salon.com also had a good article. In many ways, the theme of decentralization is that of the living networks. Emergent behaviours arise from the unstructured combination of many participants. Open source software, distributed innovation, and peer-to-peer content distribution are just some of the examples. One of the key issues discussed at the conference was the postulated “end-to-end” nature of networks: connectivity – the pipes provided by the telcos – will be dumb, with the value-added activities provided at the ends. Web services can be applied not just to application integration, but to making everyone’s touchpoint with the networks encapsulate highly customized functionality.

Patent madness

“Watch your step: If you’ve ever exercised your cat by having it chase the reflected spot of a laser pointer, you and kitty may be in violation of a bona fide U.S. patent. Don’t believe it? Take a gander at Patent No. 5,443,036, Method of Exercising a Cat, issued by the U.S. Patent and Trademark Office in 1995,” writes Lauren Weinstein in Wired News. Weinstein, like many others, points out how crazy the patent process is, and how it often dampens innovation. The heart of the issue is the quality of the work of the US Patent and Trademark Office and its peers around the world. If they granted only focused and relevant patents, the problem would be minimized. Patent examiners get an average of 20 hours to review a patent application that is now often 30-40 pages long and usually requires reviewing 50 or more related articles. For complex biotechnology patents, for example, where applications are growing at 24% per year, this is simply not enough.

In Living Networks I describe how, when President George W. Bush opened the way for federally funded research into stem cells, it came to light that a quiet biotech firm called Geron held patents covering almost all embryonic stem cell lines existing at the time, as well as the methods to produce them. So in principle it owns the results of almost any future research in the field. The USPTO’s attitude has been to grant patents easily and let the courts sort things out, but this means that engaging in the intellectual property field requires substantial funds, and it creates a domain in which intellectual property is blocked rather than allowed to flow freely. It’s easy to say patent processes must change. It’s harder to do. One thing that would help is if Congress gave back the $90 million it has taken from the USPTO’s budget in each of the last three years.