Beyond 3D printing

3D printers are very much in vogue and used for everything from spectacle frames to jet engine components. They work by building up a 3D form one thin layer at a time. A variety of materials can be used depending on the desired properties of the resulting component.

I believe we should learn from nature. If you look at natural materials constructed by living organisms, it is really remarkable what has been achieved, for instance, hair, feathers, skin, teeth and bones. Insects are amazing to look at under the microscope and come in all sorts of weird forms. The structures of an insect’s antennae or a butterfly’s wings are incredible.

The cell is a powerful molecular computer. At its heart, DNA provides the storage for the program. The human genome is about three billion base pairs, or roughly six billion bits. The cell makes use of a complex set of molecules to determine which parts of the genome are being transcribed into proteins at any one time. The architecture is unlike any digital computer we are familiar with: the cell’s state is distributed across many components, and updated through complex chemical pathways. We are gradually improving our understanding of how they work together as a system.

It is now time to study how to create synthetic cells and learn how to use them to create complex materials for a future generation of products. We will have to start relatively simply, studying particular subsystems without fabricating the full complexity seen in living cells. This functional approach also has the great advantage of avoiding the risk of creating a new breed of organisms that could escape into the environment and replicate unchecked.

The first step is to study how to create a molecular computer with DNA, RNA, ribosomes, enzymes and so forth. Can we build a system where we can design a program, translate it into DNA, and use it to switch on and off which parts of the DNA are being transcribed, updating the state of the synthetic cell in predictable and controllable ways? Once that is achieved, we could go on to develop the functional components needed to form a 3D assembler. These include counters and timers, as well as mechanisms for controlling a synthetic cell’s behaviour according to its neighbours, or in response to chemical or electromagnetic gradients.
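
To make the idea of predictable state updates concrete, here is a toy simulation of a Boolean gene network, where each "gene" is simply on or off and the update rules stand in for regulatory proteins reading the cell's state. The genes and rules are invented for illustration; real transcription is analogue, asynchronous and stochastic.

```javascript
// Toy Boolean model of transcriptional control (illustrative only).
const rules = {
  geneA: (s) => true,                          // constitutively transcribed
  geneB: (s) => !s.geneA,                      // repressed by geneA's product
  geneC: (s) => s.geneB ? !s.geneC : s.geneC,  // toggles while geneB is on
};

// Synchronous update: compute the next cell state from the current one
function step(state) {
  const next = {};
  for (const gene of Object.keys(rules)) next[gene] = rules[gene](state);
  return next;
}

let state = { geneA: false, geneB: true, geneC: false };
for (let t = 0; t < 5; t++) {
  console.log(t, state);
  state = step(state);
}
```

Even this crude model exhibits the behaviour we would want to engineer: the state evolves deterministically from the program, and components like the toggling geneC give us the beginnings of counters and timers.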

A working system would involve a means to design a program and translate it into DNA, to replicate this DNA at scale and assemble the synthetic cells from raw ingredients, and then to trigger them to start assembling the desired components in a carefully controlled environment. The synthetic cells would be unable to replicate themselves, and designed with only one purpose in mind.

The benefits of this approach would be the ability to create a very wide range of complex materials and forms from readily available raw materials in an energy efficient process. Today’s manufacturing processes aren’t sustainable in the long run as they use large amounts of energy and rely on materials that will increasingly be in short supply, for example, copper for electrical conductors and rare earths for electronic components and touch screens in smart phones. Biological processes by contrast make use of trace amounts of materials and as such are much more sustainable.

The time has come for a sustained programme of investment into research in molecular computing and synthetic cells. This is essential for sustaining a high standard of living as we move into a lasting era of increasingly expensive raw materials.

Posted in General, Miscellaneous, Software | 1 Comment

Anonymous credentials in the browser

Identity matters!  In everyday life we present different “faces” to different people according to the social context, e.g. family, personal, and professional. Our online life is the same, and our privacy depends on keeping these different faces compartmentalized. To support this, we need ways to restrict access to services. As an example, a social website used by college students could be restricted to fellow students and off limits to everyone else including college staff and past students.  You certainly don’t want potential employers sifting through the site and rejecting your job application on the grounds of some loose talk or revealing party photo!

A powerful way to implement this is with anonymous credentials. Imagine the student union providing electronic credentials to all students that assert that you are a current student at that college/university. This is an electronic equivalent of a student ID card. When you go online to the social website operated by the student union, you are asked for proof that you are a current student, but not for your actual identity.

I have been working with Patrik Bichsel (IBM Zurich Labs) on an implementation of this approach based upon a Firefox extension and the open source idemix (identity mixer) library. The extension recognizes policy references in web page markup and asks the user for a PIN or pass phrase to unlock her credentials and construct a zero-knowledge proof, which is then sent to the website for verification. The browser extension is written in JavaScript and uses LiveConnect to communicate with the Java idemix library. The webserver is Apache2, and proof verification is implemented as a Java servlet on a backend Tomcat server.
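
In outline, the extension's flow looks something like the sketch below. The helper functions, the policy markup convention and the verification endpoint are all invented for illustration; in the real extension the proof construction happens inside the Java idemix library, reached via LiveConnect.

```javascript
// Illustrative sketch of the extension's data flow (not the idemix API).

// Stand-ins for the real credential machinery, which lives in Java:
function unlockCredential(pin) {
  return { unlocked: pin.length > 0 }; // pretend credential store
}
function buildProof(credential, policyUrl) {
  return { policy: policyUrl, zkProof: 'opaque-proof-bytes' };
}

async function handlePage(doc) {
  // 1. Look for a policy reference in the page markup (assumed convention)
  const policy = doc.querySelector('link[rel="credential-policy"]');
  if (!policy) return;

  // 2. Ask the user for a PIN/pass phrase to unlock the credential store
  const pin = window.prompt('Enter PIN to unlock your credentials');

  // 3. Build a zero-knowledge proof that the credential satisfies the
  //    policy (e.g. "holder is a current student") without revealing it
  const proof = buildProof(unlockCredential(pin), policy.href);

  // 4. Send the proof to the site's verification endpoint (the Java
  //    servlet described above); '/verify' is an assumed path
  await fetch('/verify', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(proof),
  });
}
```

The key point is step 3: the website learns that the policy is satisfied, and nothing else.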

This has been done with support from the EU PrimeLife project, and we hope to be able to make the extension and servlet widely available in the near future. Further work is needed on tools for simplifying the creation of credentials and proof specifications, and there are opportunities for integrating biometric techniques as alternatives to typing a PIN or pass phrase. One possibility would be for the browser to confirm your identity by taking a photo of your face with the camera built into phones and notebook computers. Another would be to ask you to say aloud a few digits and use the built-in microphone for voice authentication. We’ve also discussed the role of physical tokens such as smart cards and USB sticks as credential stores, but this is hindered by the lack of platform-independent ways to access them from browser extensions.

As Dave Birch is fond of saying, there is no privacy without security. Anonymous credentials provide a powerful new way to boost privacy on the Web, and it is time to turn them from a laboratory curiosity into widely deployed solutions. I look forward to working on incorporating them in W3C’s suite of standards for Web platforms.

Posted in Browsers, Privacy, Software, W3C | 4 Comments

My personal space

There is currently intense interest in Apple’s success with a walled market for apps you install locally on the device. Developers get a route to market, and Apple helps with monetization in return for a substantial share of the revenue.

The challenge is to extend this to the web at large and make it scale across devices from different vendors. Users shouldn’t have to care about whether the app is locally installed, or downloaded on the fly from the cloud.

Today, many web apps are tied to websites, e.g. Google Docs is tied to the use of Google’s server docs.google.com. End users don’t have a free choice in where apps run, and lack control over where their data resides.

Imagine a market where I can choose an app/service and have it run on my own virtual server. This is akin to taking the idea of a device and expanding it into the cloud. My personal device includes my personal space in the cloud. I buy apps for my personal use and “install them” in this personal space. My personal space can include all of the devices I use, including mobile, desktop, tv and car. I may share my space with others, e.g. my family, friends or colleagues.

This model introduces new players and enriches the ecosystem compared with today’s narrower model, creating broader opportunities for developers. What’s needed to realize this vision?

  • Smarter caching and local storage for web pages will blur the distinction between online and locally installed web apps (see the sketch after this list)
  • Support for monetization, which is likely to necessitate some form of Web Application License Language
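
As a minimal illustration of the first point, the sketch below caches a fetched resource in localStorage and falls back to the cached copy when the network is unavailable. It uses the modern fetch API for brevity; the URL and cache-key scheme are invented, and a production solution would use the browser's application cache (or, today, a service worker).

```javascript
// Cache-and-fallback sketch: the app shell keeps working offline.
async function loadWithCache(url) {
  try {
    const response = await fetch(url);
    const text = await response.text();
    localStorage.setItem('cache:' + url, text); // refresh the cached copy
    return text;
  } catch (err) {
    const cached = localStorage.getItem('cache:' + url);
    if (cached !== null) return cached; // offline: serve the cached copy
    throw err; // never fetched and no network
  }
}

// Usage: render the app shell whether online or offline
loadWithCache('/app/main.html').then((html) => {
  document.body.innerHTML = html;
});
```

Once loading is transparent like this, the user no longer needs to know whether an app is installed or streamed from the cloud.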

I am encouraged by the announcement of Mozilla Open Web Apps, and hope to explore these ideas further as part of a recently started EU-funded project called webinos, which aims to make it easier to deliver apps across mobile, desktop, TV and cars.

Posted in Web of Devices, Web of Services | Leave a comment

Machine Interpretable Privacy Policies — A fresh take on P3P

W3C’s Platform for Privacy Preferences (P3P) was published as a W3C Recommendation in July 2002. It defines a machine interpretable format for websites to express their privacy practices, but failed to live up to its initial promise. One factor behind this is that the flexibility P3P offers for representing policies poses huge challenges for expressing user preferences in a way that permits practical, automatic comparison of preferences with policies.

This problem was recognized early on, leading to the definition of compact policies for P3P (as implemented in Internet Explorer). However, compact policies are limited to cookies, and I wanted to cover much more than that whilst enabling a practical treatment of the user interface for expressing privacy preferences. To try this out in practice, I developed a Firefox extension and adopted a JSON-based format for policies. For more details see my paper, which was submitted to the W3C Workshop on Privacy and data usage control, held 4-5 October 2010 at MIT.
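
To give a feel for the approach, here is a toy JSON policy and preference pair with a simple comparison function. The field names are invented for this example; the paper describes the actual format.

```javascript
// Invented example of a JSON site policy and matching user preferences.
const sitePolicy = {
  retentionDays: 90,
  thirdPartySharing: true,
  purposes: ['personalization', 'marketing'],
};

const userPreferences = {
  maxRetentionDays: 30,
  allowThirdPartySharing: false,
  allowedPurposes: ['personalization'],
};

// Compare policy against preferences, returning human-readable mismatches
function checkPolicy(policy, prefs) {
  const issues = [];
  if (policy.retentionDays > prefs.maxRetentionDays)
    issues.push('data retained longer than you allow');
  if (policy.thirdPartySharing && !prefs.allowThirdPartySharing)
    issues.push('data shared with third parties');
  for (const p of policy.purposes)
    if (!prefs.allowedPurposes.includes(p))
      issues.push('unapproved purpose: ' + p);
  return issues; // an empty list means the site matches your preferences
}

console.log(checkPolicy(sitePolicy, userPreferences));
```

Keeping the policy vocabulary this small is what makes automatic comparison, and a comprehensible preferences UI, practical.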

Posted in Privacy, W3C | Leave a comment

New directions for natural language systems

In my spare time I am working on a project to explore the potential for a new generation of natural language systems, inspired by what could be done with a fusion of computational linguistics, cognitive science and symbolic reasoning. This started with a study of classical and statistical approaches to natural language processing, and a dawning realization that traditional approaches to parsing conflate different kinds of knowledge. Prepositional phrase attachment is highly ambiguous at a purely grammatical level (in "I saw the man with the telescope", grammar alone cannot tell whether the telescope was used for the seeing or carried by the man), and requires reasoning at a different level that operates in parallel.

Conventional symbolic reasoning is founded on mathematical logic and deals with what can be soundly deduced from a given set of assumptions. Cognitive Science is an interdisciplinary approach to gaining an understanding of the human mind. Cognitive theories such as ACT-R and CHREST aim to provide quantitative predictions of human performance, and have little in common with logic-based accounts of reasoning, e.g. the description logics used in the Semantic Web.

To make significant progress will take plenty of effort and time, so I don’t expect quick results. My starting point is the development of a broad coverage chart parser with relatively flat grammar rules. I plan to then introduce cognitive models for dealing with parsing issues that are hard to address using a purely linguistic framework. The biggest problem I am facing is the difficulty of remembering where I left off when I pick up the work again. That’s inevitable for something that I only get time for now and then.
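
As a flavour of the starting point, here is a miniature dynamic-programming parser (CKY-style rather than the agenda-driven chart parser I am building) over a deliberately flat toy grammar. It counts the distinct parses of the telescope sentence, making the attachment ambiguity concrete. The grammar and lexicon are invented for the example.

```javascript
// Miniature parser over a flat toy grammar (illustrative sketch).
const lexicon = {
  I: ['Pro'], saw: ['V'], the: ['Det'],
  man: ['N'], telescope: ['N'], with: ['P'],
};

const rules = [
  ['S', ['NP', 'VP']],
  ['NP', ['Det', 'N']],
  ['NP', ['Pro']],
  ['NP', ['NP', 'PP']], // "the man with the telescope"
  ['VP', ['V', 'NP']],
  ['VP', ['VP', 'PP']], // "saw ... with the telescope"
  ['PP', ['P', 'NP']],
];

function countParses(words) {
  const n = words.length;
  // chart[i][j] maps a category to its number of derivations over words i..j-1
  const chart = Array.from({ length: n + 1 }, () =>
    Array.from({ length: n + 1 }, () => new Map()));
  const bump = (cell, label, k) => cell.set(label, (cell.get(label) || 0) + k);

  // One pass suffices here since there are no chains of unary rules
  const applyUnary = (cell) => {
    for (const [lhs, rhs] of rules)
      if (rhs.length === 1 && cell.has(rhs[0])) bump(cell, lhs, cell.get(rhs[0]));
  };

  words.forEach((w, i) => {
    for (const tag of lexicon[w]) bump(chart[i][i + 1], tag, 1);
    applyUnary(chart[i][i + 1]);
  });

  for (let len = 2; len <= n; len++)
    for (let i = 0; i + len <= n; i++) {
      const cell = chart[i][i + len];
      for (let k = i + 1; k < i + len; k++)
        for (const [lhs, rhs] of rules) {
          if (rhs.length !== 2) continue;
          const left = chart[i][k].get(rhs[0]);
          const right = chart[k][i + len].get(rhs[1]);
          if (left && right) bump(cell, lhs, left * right);
        }
      applyUnary(cell);
    }
  return chart[0][n].get('S') || 0;
}

console.log(countParses('I saw the man with the telescope'.split(' '))); // -> 2
```

The grammar licenses both attachments equally; choosing between them is exactly the kind of decision I want to hand off to a parallel cognitive model rather than the grammar.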

Posted in Natural Language, Software | Leave a comment

Privacy Dashboard

Have you ever wondered what information is being collected as you browse the Web? The Privacy Dashboard is a Firefox extension that enables you to see some of the practices that websites are using, e.g. whether they include 3rd party content, perhaps with lasting cookies that can track you across the Web, or are using a variety of other techniques. You can set your privacy preferences on a site by site basis, ranging from carefree to paranoid. The Dashboard also improves upon the browser’s built-in support, making it easier to review and revoke which sites you have told Firefox to provide your geolocation to.
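
At its core, spotting third-party content comes down to hostname comparison, as in this illustrative sketch (not the Dashboard's actual code; it uses the modern URL API for brevity):

```javascript
// Naive first/third-party classification by hostname comparison.
// Real code would compare registrable domains (e.g. via the Public
// Suffix List) so that images.example.com still counts as first-party
// on www.example.com.
function isThirdParty(resourceUrl, pageUrl) {
  return new URL(resourceUrl).hostname !== new URL(pageUrl).hostname;
}

// Scan the current page for embedded content from other hosts
for (const el of document.querySelectorAll('img[src], script[src], iframe[src]')) {
  if (isThirdParty(el.src, location.href)) {
    console.log('3rd party content from', new URL(el.src).hostname);
  }
}
```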

The Dashboard is currently available as an alpha release (see screenshots), and I am looking for volunteers with an active interest in privacy on the Web to help with its maintenance and further evolution as an open source project. This work was made possible by the support received from the PrimeLife project under the European Union’s 7th Framework Programme. If you are interested in helping out, please contact me at <dsr at w3 dot org>.

Posted in Browsers, Privacy, W3C | Leave a comment

W3C Model-Based UI Workshop Report

The report from the W3C Workshop on Future Standards for Model-Based User Interfaces is now available. The workshop took place in central Rome on 13-14 May 2010, and focused on ideas for making it easier to create Web applications that can be delivered across many kinds of devices and that adapt dynamically to the context.

To achieve this, it is necessary to separate out different kinds of design concerns, and this is where models come in. There has been a steady stream of research work in this area for many years (including the W3C MBUI Incubator Group), and the workshop was held to bring together researchers to examine whether it is now timely to launch standards work. The workshop participants recommended that W3C consider starting a new Working Group on meta-models as a basis for exchange between different markup languages for model-based authoring tools. We hope to make a start on this later this year with help from the EU Serenoa project.

I want to express my thanks to my co-chair Fabio Paternò, to all of the participants, and to W3C and the CNR-ISTI HIIS Laboratory for hosting the workshop.

Posted in W3C, Web Design | Leave a comment

Beyond Facebook, a world of opportunity

Facebook continues to attract lots of criticism with its evolving privacy policies. Its success in attracting users shows the importance of social networks, but I have seen relatively little discussion of which features people really value, or exploration of the design space for alternatives to centralized solutions.

Users should be free to pick how they want to pay for services and not subject to a single model. A decentralized, distributed solution to social networks makes this easier to realize. Essentially you should be free to choose which server you want to host your social presence (your profile page).

The next choice depends on how paranoid you are. This boils down to how much you are prepared to trust your server with your data. Most of us are probably content to trust the server as long as it provides an adequate level of security. An alternative is to encrypt data in the web browser and never give the keys to the servers: the browser generates a symmetric key to encrypt a notification, and then uses each friend’s public key to encrypt (wrap) the symmetric key. As always there are trade-offs. The more paranoid you are, the more computation and network traffic are involved. This slows things down and will drain the battery faster when you are using a mobile device.
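
Here is a sketch of that hybrid scheme using the browser's Web Crypto API (which postdates this post, but keeps the example short). The server only ever sees the ciphertext and the wrapped keys.

```javascript
// Hybrid encryption sketch: one fresh AES key per notification, wrapped
// separately under each friend's RSA public key.
async function encryptNotification(message, friendPublicKeys) {
  const data = new TextEncoder().encode(message);

  // Fresh symmetric key for this notification
  const aesKey = await crypto.subtle.generateKey(
    { name: 'AES-GCM', length: 256 }, true, ['encrypt', 'decrypt']);

  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt(
    { name: 'AES-GCM', iv }, aesKey, data);

  // Wrap the AES key once per friend; only their private keys can unwrap it
  const wrappedKeys = [];
  for (const publicKey of friendPublicKeys) {
    wrappedKeys.push(await crypto.subtle.wrapKey(
      'raw', aesKey, publicKey, { name: 'RSA-OAEP' }));
  }

  return { iv, ciphertext, wrappedKeys }; // safe to hand to an untrusted server
}
```

Note how the cost grows linearly with the number of friends, which is exactly the computation and traffic trade-off mentioned above.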

What kinds of features do people want from social networking? Here is a quick brainstorm:

  • notifications of what their friends are doing/planning
  • shared calendars
  • easy ways to upload/share/tag/sort photos and videos
  • directories to find friends, colleagues and others
  • shared recommendations for all kinds of things
  • avoid unwanted leakage across different social groups
  • immediate/delayed and push/pull communication models
  • symmetric and asymmetric social relationships
  • cool social apps

When it comes to finding people/organizations it makes sense to control what information you disclose to different directories. This also involves some degree of trust in the servers that support search across these directories. Access to directories could be restricted to people in given groups. Search could be distributed across servers using peer to peer models as an alternative to centralized solutions such as we are used to with Google and Facebook.

All of the above could be implemented as open source modules that can be installed on any server. It seems that Diaspora is working in this direction, and on interconnecting existing social network sites, e.g. Twitter, Facebook, etc., but perhaps it is worth taking a step back, as there is a world of opportunity to be explored. What do people really want from social networking?

Posted in Uncategorized | 1 Comment

Ecosystem for investors, upcoming workshop

As the W3C Team lead for financial data and the Semantic Web, I am looking at how the Web is changing the way investors assess the value of companies.

Public companies worldwide are required to file regular reports setting out the financial health of the company. These are available from corporate investor relations websites and from regulatory agencies like the Securities and Exchange Commission (SEC). If you want to analyze this data, you have to re-key it, which involves a lot of work and introduces errors. That is all about to change.

The SEC and kindred agencies around the world are starting to require companies to file reports in XBRL (the eXtensible Business Reporting Language). XBRL ties each reported item of data to the reporting concept used to collect it, and moreover, does so in a way that computers can make sense of, avoiding the need for re-keying data.

XBRL will allow investor relations sites to support interactive access and sharing of tagged financial data. This will build upon Web 2.0 and the phenomenon of user provided content on wikis, blogs and social networking sites aimed at investors, e.g. Wikinvest, where investors share data, insights and analyses. YouTube provides a powerful precedent in the way it allows people to share content by embedding a view, or a link to a view, in their blogs.

For XBRL, this means providing a way for people to browse the data, and to pull out tables and charts as needed for their blogs. These items could be rendered by the investor relations site and shared à la YouTube, or the blog could itself use a script to query data across one or more investor relations sites and render it locally. This is where the Semantic Web and linked open data come in. I’ve previously reported on techniques for converting XBRL into RDF triples.
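
As a toy illustration of that conversion, the sketch below turns a single XBRL-style fact into Turtle triples. The prefixes, property names and sample fact are invented for the example; my earlier post describes an actual mapping.

```javascript
// Invented example: an XBRL fact (entity, concept, period, value, unit)
// rendered as RDF triples in Turtle, using a blank node for the fact.
const fact = {
  entity: 'http://example.org/company/ACME',
  concept: 'us-gaap:Revenues',
  period: '2010-Q2',
  value: 1200000,
  unit: 'iso4217:USD',
};

function factToTurtle(f, n) {
  const s = `_:fact${n}`; // one blank node per reported fact
  return [
    `${s} xbrl:entity <${f.entity}> .`,
    `${s} xbrl:concept ${f.concept} .`,
    `${s} xbrl:period "${f.period}" .`,
    `${s} xbrl:value "${f.value}"^^xsd:decimal .`,
    `${s} xbrl:unit ${f.unit} .`,
  ].join('\n');
}

console.log(factToTurtle(fact, 1));
```

Once facts are triples, a blog script can query them across sites with SPARQL just like any other linked data.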

W3C and XBRL International are looking for your help in understanding what some people are calling “Investor relations 2.0”, and we invite you to attend a workshop at the FDIC training facility in Arlington, Virginia, this October. We want your help with identifying the opportunities and challenges for interactive access to business and financial data expressed in XBRL and related languages. This doesn’t just apply to the investor community, as the same technologies also offer huge potential for data published by governments on sites like data.gov (see demos). For more details on the workshop see the call for papers.

Posted in RDF, W3C | Leave a comment

New Directions for Privacy and Identity Management

I recently joined the PrimeLife project, which is funded by the European Commission’s 7th Framework Programme. It aims to bring sustainable privacy and identity management to future networks and services, and builds upon the earlier PRIME project. Privacy is something that most people take for granted, but we leave a digital trail as we interact with websites, and this can lead to abuse ranging from identity theft and discrimination to mild embarrassment. Privacy enhancing technologies have the potential to restore the balance and give all of us better control over data we would prefer to keep private.

One of the challenges is the ease with which interactions can be linked across websites. Having to remember user names and passwords for a large number of websites is hard. The increasing use of email addresses in place of user names for signing into websites makes it easier to link interactions across sites since email addresses are globally unique names. OpenID offers users the means to use a single digital identity for accessing participating websites, and relies on the user providing an HTTP URL as a globally unique identifier, with the same drawback as using an email address.

Having to remember lots of user names is much too hard, but using a globally unique identifier just makes it easier for people to track your detailed behavior. What’s the solution? I have been thinking about the possible role of a trusted privacy provider. With OpenID you are asked to provide your HTTP URL to the website you are connecting to. Imagine instead that you are asked to disclose your privacy provider (e.g. through a drop down list or by typing a URL). The website then redirects the browser to your privacy provider to sign in. If this is the first time you have visited the website, your privacy provider will ask you for your privacy preferences for interacting with that site. The approach allows you to effortlessly use a different identity for each website if you wish, and like OpenID it avoids the need for you to sign in separately with every website you visit. There are lots of further opportunities for privacy management, but I will leave those to another blog post.
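
A sketch of the website's side of this flow, written here with Node and Express, is shown below. The endpoint names, query parameters and URLs are all invented for illustration.

```javascript
// Hypothetical website-side flow: the user discloses only a privacy
// provider, and comes back with a pseudonym minted for this site alone.
const express = require('express');
const app = express();

app.get('/login', (req, res) => {
  const provider = req.query.provider; // e.g. 'https://privacy.example.net'
  // Redirect the browser to the provider so the user signs in there
  const returnTo = encodeURIComponent('https://social.example.org/callback');
  res.redirect(`${provider}/signin?return_to=${returnTo}`);
});

app.get('/callback', (req, res) => {
  // The pseudonym is specific to this site, so it cannot be linked to
  // the pseudonyms the same user presents elsewhere
  res.send(`Signed in as pseudonym ${req.query.pseudonym}`);
});

app.listen(8080);
```

Because the website never sees a global identifier, cross-site linkage requires collusion with the privacy provider, which is where the trust is concentrated.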

Posted in Privacy, W3C | 1 Comment