Default Prefix Declaration
The ideas behind the proposal presented here are neither particularly new nor particularly mine. I've made the effort to write this down so anyone wishing to refer to ideas in this space can say "Something along the lines of [this posting]" rather than "Something, you know, like, uhm, what we talked about, prefix binding, media-type-based defaulting, that stuff".
Criticism of XML namespaces as an appropriate mechanism for enabling distributed extensibility for the Web typically targets two issues:
- Syntactic complexity
- API complexity
Of these, the first is arguably the more significant, because the number of authors exceeds the number of developers by a large margin. Accordingly, this proposal attempts to address the first problem, by providing a defaulting mechanism for namespace prefix bindings which covers the 99% case.
The proposal
- Define a trivial XML language which provides a means to associate prefixes with namespace names (URIs);
- Invoking from HTML: define a link relation dpd for use in the (X)HTML header;
- Invoking from XML: define a processing instruction xml-dpd and/or an attribute xml:dpd for use at the top of XML documents;
- Defaulting by media type: implement a registry which maps from media types to a published dpd file;
- Define a precedence, which operates on a per-prefix basis, namely xmlns: >> explicit invocation >> application built-in default >> media-type-based default, and a semantics in terms of namespace information items (or an appropriate data-model equivalent) on the document element.
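As a rough illustration of the per-prefix precedence above, it can be modelled as successive overrides of a prefix-to-URI map, lowest-precedence layer first. This is a sketch of mine, not part of the proposal; all names in it are invented:

```python
def resolve_bindings(xmlns_bindings, explicit_dpd, app_defaults, media_type_dpd):
    """Merge prefix -> namespace-URI maps, lowest precedence first.

    Later updates win, so the loop order implements:
    media-type default < app built-in < explicit invocation < xmlns:
    """
    bindings = {}
    for layer in (media_type_dpd, app_defaults, explicit_dpd, xmlns_bindings):
        bindings.update(layer)
    return bindings

# Example: an in-document xmlns: declaration overrides the media-type
# default for "svg", while "xf" falls through to the media-type default.
merged = resolve_bindings(
    xmlns_bindings={"svg": "http://example.org/my-svg"},
    explicit_dpd={},
    app_defaults={},
    media_type_dpd={"svg": "http://www.w3.org/2000/svg",
                    "xf": "http://www.w3.org/2002/xforms"},
)
print(merged["svg"])  # http://example.org/my-svg
print(merged["xf"])   # http://www.w3.org/2002/xforms
```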
XML namespaces provide two essentially distinct mechanisms for 'owning' names, that is, preventing what would otherwise be a name collision by associating names in some way with some additional distinguishing characteristic:
- By prefixing the name, and binding the prefix to a particular URI;
- By declaring that within a particular subtree, unprefixed names are associated with a particular URI.
In XML namespaces as they stand today, the association with a URI is done via a namespace declaration which takes the form of an attribute, and whose impact is scoped to the subtree rooted at the owner element of that attribute.
Liam Quin has proposed an additional, out-of-band and defaultable, approach to the association for unprefixed names, using patterns to identify the subtrees where particular URIs apply. I've borrowed some of his ideas about how to connect documents to prefix binding definitions.
The approach presented here is similar-but-different, in that it provides for out-of-band and defaultable associations of namespaces to names with prefixes, with whole-document scope. The advantages of focussing on prefixed names in this way are:
- Ad-hoc extensibility mechanisms typically use prefixes.
The HTML5 specification already has at least two of these.
- Prefixed names are more robust in the face of arbitrary cut-and-paste operations;
- Authors are used to them: For example XSLT stylesheets and W3C XML Schema documents almost always use explicit prefixes extensively;
- Prefix binding information can be very simple: just a set of pairs of prefix and URI.
If this proposal were adopted, and a dpd document for use in HTML 4.01 or XHTML1:
<dpd>
  <pd p="xf" ns="http://www.w3.org/2002/xforms"/>
  <pd p="svg" ns="http://www.w3.org/2000/svg"/>
  <pd p="ml" ns="http://www.w3.org/1998/Math/MathML"/>
</dpd>
was registered against the text/html media type, the following would result in a DOM with an input element in the XForms namespace:
<html>
  <body>
    <xf:input ref="xyzzy">...</xf:input>
  </body>
</html>
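For illustration only, here is a sketch of how a consumer might load such a dpd document and expand a prefixed name like xf:input into a (namespace, local-name) pair. The helper names are mine, not part of the proposal; the dpd content is taken from the example above:

```python
import xml.etree.ElementTree as ET

DPD = """<dpd>
  <pd p="xf" ns="http://www.w3.org/2002/xforms"/>
  <pd p="svg" ns="http://www.w3.org/2000/svg"/>
  <pd p="ml" ns="http://www.w3.org/1998/Math/MathML"/>
</dpd>"""

def load_dpd(text):
    """Read a dpd document into a prefix -> namespace-name map."""
    return {pd.get("p"): pd.get("ns") for pd in ET.fromstring(text).iter("pd")}

def expand(qname, bindings):
    """Expand a prefixed name like 'xf:input' to (namespace, localname)."""
    prefix, _, local = qname.partition(":")
    if not local:                 # unprefixed: no dpd binding applies
        return (None, qname)
    return (bindings.get(prefix), local)

bindings = load_dpd(DPD)
print(expand("xf:input", bindings))
# ('http://www.w3.org/2002/xforms', 'input')
```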
The theme photo for W3C presentations at TPAC09 showed Natural Bridges State Beach in Santa Cruz, California. We met in Santa Clara (not far from Santa Cruz) 2-6 November in order to bridge various communities and bring them together. For example, bringing together the HTML 5 browser folks and the extensibility folks was a goal. We joked that this goal was called "Unnatural Bridges".
Broadening the W3C community was one of the themes of TPAC09, and was reflected in talks as well as participation.
For the first time ever, we invited the public to gather for an afternoon of discussion and networking, the Developer Gathering (the minutes are now available). Ian Jacobs lined up fantastic speakers who regaled us with the latest on various open standards in development. Feedback from the #w3cdev demos included many "very cool", "absolutely amazing", "video element", "impressive", "geolocation", "accelerometer", "APIs", "nice possibilities", "features".
I thought the event went very well and think W3C should organize more. Please let us know what sort of event would appeal to you (e.g., with speakers as we had this time, or more like a bar camp, or a mix). If you blogged about #w3cdev, please, share a pointer in a comment!
TPAC is our biggest yearly event. Each year about 300 people who participate in various W3C groups meet face-to-face to exchange ideas, resolve technology issues, and socialize. My sense is that for most people involved, TPAC is their favorite W3C meeting of the year.
We tracked micro-blogosphere feedback on #tpac09. We expanded the number of people we follow (I'm not yet quite caught up with the additions I wanted to make, so excuse us if we're not yet following you). Likewise, on the occasion of TPAC and the Developer Gathering, a significant number of people also expanded their contact lists and started to follow us (yay!). In Santa Clara @dckc said, '@w3c has ~5000 followers'. This is still growing, ~6300 now!
A bit of a mystery to me as I reviewed the tweets is the unicorn meme. I have no idea who started it or why, but unicorns were mentioned, portrayed (it even made it to our theme photo!), tweeted, and interjected.
Oh, and I mentioned werewolves in the title. Although not (yet) a regular item on our meeting agenda, Werewolves attacked the villagers almost every night at TPAC! Led by fantastic emcee @dontcallmedom, many people enjoyed the battles of minority against majority, the games of suspicion, trust, lies, doubts and beliefs. Nightly werewolf encounters are such fun in person.
If you attended TPAC09 and would like to give feedback, we'd appreciate if you took the WBS survey. I welcome additional feedback, or a pointer to your blog entry in a comment to this entry.
Earlier this week, Firefox turned 5 years old. The press coverage here in Europe has been amazing. We were lucky enough to have John Lilly with us on this day, so we organized a party with 700 people in Paris.
A couple of my colleagues have blogged about this, so I'll just link to them instead of repeating what they've written:
- Mike Shaver : five by five, in the pipe ;
- Mitchell Baker : Firefox Turns 5 ;
- Ken Da Numerator Kovash : Firefox Hits 25% Market Share on its Birthday ;
- Chris Blizzard : 5 years of Firefox ;
- Marco Zehe : Happy birthday, Firefox! ;
- Official blog : Celebrating Five Years of Firefox! ;
- About:Mozilla : Five years of Firefox, 25% market share, and more… ;
A few people have taken pictures. Here they are:
This cheatsheet aims to provide, in a very compact and mobile-friendly format, a compilation of useful knowledge extracted from W3C specifications — at this time, CSS, HTML, SVG and XPath — complemented by summaries of guidelines developed at W3C, in particular the WCAG 2 accessibility guidelines, the Mobile Web Best Practices, and a number of internationalization tips.
Its main feature is a lookup search box, where one can start typing a keyword and get a list of matching properties/elements/attributes/functions in the above-mentioned specifications, and further details on those when selecting the one of interest.
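At its core, the lookup box described above does prefix matching over an index of names from the specifications. A toy sketch of that behaviour follows; the index data and function names are invented for illustration, and the real cheatsheet's code and data formats differ:

```python
# A tiny hypothetical index: name -> kind of construct.
INDEX = {
    "background-color": "CSS property",
    "border-radius": "CSS property",
    "canvas": "HTML element",
    "circle": "SVG element",
    "contains()": "XPath function",
}

def lookup(keyword, index=INDEX):
    """Return (name, kind) pairs whose name starts with the typed keyword."""
    keyword = keyword.lower()
    return sorted((name, kind) for name, kind in index.items()
                  if name.startswith(keyword))

print(lookup("b"))
# [('background-color', 'CSS property'), ('border-radius', 'CSS property')]
```

Selecting one of the matches would then surface the stored details (syntax, attributes, accepted values) for that entry.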
The early feedback received both from TPAC participants after the demo and from the microblogging community has been really positive and makes me optimistic that this tool is filling a useful role.
This is very much a first release, and there are many aspects that will likely need improvements over time, in particular:
- some people have reported that there might be accessibility problems with the current interface, that I’m eager to fix once I get specific bug reports,
- the cheatsheet doesn’t work in IE6 (and probably even in later versions), and it would be nice to make it work at least somewhat there.
The code behind the cheatsheet is already publicly available, and I’m hoping others will be interested in joining me in developing this tool — I’m fully aware that the first thing needed to get others involved is some documentation on the architecture and data formats used in the cheatsheet, and I’m thus hoping to work on that in the upcoming few weeks.
Next week's W3C Developer Gathering will bring together some great speakers:
- Leslie Daigle (ISOC) on Internet Ecosystem Health
- Mark Davis (Unicode Consortium) on controversies around international domain names
- Fantasai on CSS, with help and demos from the "CSS Strike Force": Tab Atkins, David Baron, Simon Fraser, and Sylvain Galineau
- Philippe Le Hégaret (W3C) on community-built browser test suites.
- Kevin Marks (OWF) on OpenID, OAuth, OpenSocial
- Arun Ranganathan (Mozilla) on what's new in APIs
I will be hosting the gathering (5 November in the afternoon). We've planned for some fun give-aways to be revealed at the meeting. Registration closes today, although we will admit walk-ins at a higher rate next week.
If you can't join us in person, you can follow the meeting on IRC; more details are available on the meeting page.
I hope you will join us next week.
We've received a number of helpful bug reports about the new site. I thought I should list a few here so that we can refer to them. We are working to have these particularly tricky ones fixed as quickly as possible.
- In IE, if you select mobile or print modes, you can't get back to desktop mode.
- In Safari, even if you select "desktop" mode you get mobile mode at narrow browser widths. Also, if you select "mobile" mode you get a mix of mobile and desktop at wider browser window widths.
- In some browsers, you can't expand the expandable content sections; they snap back shut.
We are working on these fixes. I also welcome fix suggestions from the community. Thanks again to those who have sent comments to email@example.com
Today we launched the new W3C. We've been working on it for a while, so I'm happy that it is seeing the light of day.
Comments are flowing in, some touching on issues we identified when we announced the beta version. Here are a few:
- Is the CSS invalid? The CSS does not validate with the W3C CSS validator. We mentioned this as one limitation of the site back in March. As we wrote then, "Because of known interoperability issues, we have accepted to use CSS that does not validate with the CSS validator. Over time we hope to evolve towards valid CSS."
- Why do some pages (such as the graphics introduction, though there are others as well) look unfinished? They are; the generic template text is still there from the beta. We decided to launch the site even without all the content we hope to have. We think the site is a significant improvement over the old one, and so prefer to begin using it rather than wait for more content. The site will continue to evolve, and I hope much more easily. We are asking staff, Working Groups, and the community to help out and provide content. We'd love your help, and are happy to acknowledge your contributions on the pages. Let us know at firstname.lastname@example.org.
- Some of the rewritten Recommendations have formatting bugs. Unfortunately, one of our processing passes modified the markup and we didn't realize it; we'll be fixing those problems in place. For the moment we are only using the new templates for Recommendations (old and new). As we gain more experience and resolve formatting issues, we expect to apply the new templates to more publications. One advantage of the new approach will be that it will be easier to tell right up front when a specification has been superseded by another.
There are also a few rendering issues we are aware of and plan to fix over the next few days. Please tell us about any issues you encounter on email@example.com. Please be sure to tell us the URI of the page in question and what browser and OS you are using.
I’ve just finished a visit with the wonderful Paris-Web group. What a fun time! I’ll be making my way from Paris to Prague, and wanted to share an interview done back in March and just now published on a Czech magazine site.
The interview provides some insights into education and evangelism as a career, as well as describing my journey from wee Web person in 1993 to my current work as a Web Evangelist for Opera Software.
There are so many things to say about European Moz Camp 09 – which took place in Prague last weekend – I just can't get started…
I'll try to describe it anyway. It was:
- Energizing. For example, I had a blast discussing motivation with Milos, our Serbian localizer, among many others.
- Super informative, with tons of sessions, keynotes and no less than 4 tracks (QA, Dev, L10n and Advocacy).
- Full of interactions, the kind you can't have on IRC, conference calls or Bugzilla
- Exciting (having Mike Beltzner explain the Firefox roadmap was great, but having him understand better the community in Europe was even better)
- Fun (involving laughter, friends, geeky humor and local beer).
Every single minute I have spent there was exciting, and I'd like to thank for this all the people involved in making this event happen (at the risk of forgetting a couple dozen names!).
- First and foremost, the lead organizers, William and Irina, and all those who helped. If there were glitches (and nothing can be perfect at an event of this size!), I did not hear of them!
- The track organizers who have built a wonderful program with the speakers
- All the volunteers who were willing to show up
- Mark Surman, Mike Beltzner and Glyn Moody for taking time out of their busy schedules
There are tons of posts, pictures and tweets that have been published around
#eumozcamp09. But if there were only 2 things to see if you haven't been able to attend, they would be the following:
- Mozilla Drumbeat slides with audio, by Mark Surman (31 min), full of interesting stuff on what the Mozilla Foundation could do in the future besides Firefox and code
- Sethb and Choffman in a 42-second clip, "I love this Community Yeaaaah!!!". Pure joy
I'm looking forward to a similar event next year, with FOSDEM next February in the meantime. See you there!
A couple of blog posts and photo albums from around Europe
- Mozillians of Europe, Unite (by Glyn Moody, in English, from the UK)
- Mozilla Camp Europe 2009, Prague : j’y étais (in French, from Belgium)
- Mozilla Camp Europe 2009 by Jan Odvarko (Czech Republic);
- EU MozCamp 2009, Prague, by Kazé (France)
- MozCamp 2009 in Prague (from The Netherlands)
- All in all, community rocks (by Milos, from Serbia)
- Mozilla Camp Europe 2009 (in Slovene, from Slovenia)
- Pragatik bueltan #eumozcamp09 itzel baten ostean (in Basque) ;
Pictures & videos
The W3C RIF Working Group has just published the RIF specification as a Candidate Recommendation. By coincidence, the OWL 2 Working Group published the OWL 2 specification as a Proposed Recommendation just a few days before. That is, two major sets of technologies that can be used for various kinds of inferences on the Semantic Web have arrived at a high level of maturity almost at the same time. If everything goes as planned (I know, it never does, but one can still speculate) they will become Recommendations around the end of the year.
A group of questions I often get is: how do these two sets of recommendations relate to one another? Did W3C create competing, incompatible technologies in the same design space? Why have two? How can they be combined?
To answer this, one has to realize that the two sets of technologies represent different approaches. OWL 2 (and, actually, RDFS) relies, very broadly speaking, on knowledge representation techniques. Think of thesauri, of ontologies, of various classification mechanisms: one classifies and characterizes predicates and resources, and can then deduce logical consequences based on that classification. (On the Semantic Web this usually means discovering new relationships or locating inconsistencies.) RIF, on the other hand, is more reminiscent of logic programming (think of Prolog): if such-and-such relationships hold, then new relationships can be deduced. (It must be said that RIF also includes separate work on production rules, but that is fairly distinct from OWL 2, so let us set it aside for the moment.)
Would I want to use OWL 2 or rather RIF to develop an application? Well, it depends. Some applications are better formulated this way, others that way. There are a number of papers published on when one approach is better than the other, how certain tasks can or cannot be expressed using classification or rules, respectively, and how reasoning is possible in one circumstance or the other. Very often it also boils down to personal experience and, frankly, taste: some feel more comfortable using rules while others prefer knowledge representation. I do not think it makes sense to claim that one is better than the other. Simply put: they are different, and both approaches have their roles to play.
So far so good, the reader could say, but what about using OWL and RIF together?
One of the six recommendation track documents of RIF is called “RIF RDF and OWL Compatibility”. Because we are talking about formal semantics, this document is of course not an easy read. However, in layman's terms, what it describes is how the two “sides”, i.e. the rule and the classification sides, should work together on the same data set. It defines a sort of interplay between two different mechanisms: the, shall we say, logic programming part and the knowledge representation part. Implementations doing both are a bit like hybrid cars: they have two parallel engines and a well-defined connection between the two. That said, the document only defines what the combination means; whether, for example, engines will always succeed in handling the two worlds together in finite time is not guaranteed in all cases. But we can be positive: in many cases (i.e., by accepting restrictions here and there) this combination does work well, and there are, actually, good implementations out there that do just that.
A simple case where no problem occurs is the so-called OWL 2 RL Profile. This profile has been defined by the OWL Working Group with the goal of being fully implementable via rule engines. This does not necessarily mean RIF (I myself have implemented OWL 2 RL by direct programming in Python), but the fact that RIF could also be used is important. The RIF Working Group has therefore published a separate document (“OWL 2 RL in RIF”) which shows just that: it reformulates the rules for the implementation of OWL 2 RL as RIF rules (more exactly, RIF Core rules). That is, a RIF implementation can just take those rules, import any kind of RDF data that also includes OWL 2 statements, and the RIF engine will produce just the right inference results. How cool is that?
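To give a flavour of what implementing OWL 2 RL via rules looks like, here is a toy forward-chaining version of the subclass rule (known as cax-sco in the profile), hand-coded in Python rather than expressed in RIF. This is purely illustrative and is not the actual rule set from “OWL 2 RL in RIF”:

```python
def subclass_closure(triples):
    """Apply the cax-sco rule until fixpoint:
    if (?c1 rdfs:subClassOf ?c2) and (?x rdf:type ?c1),
    infer (?x rdf:type ?c2)."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        new = {(x, "rdf:type", c2)
               for (c1, p1, c2) in inferred if p1 == "rdfs:subClassOf"
               for (x, p2, c) in inferred if p2 == "rdf:type" and c == c1}
        if not new <= inferred:   # any genuinely new triples?
            inferred |= new
            changed = True
    return inferred

facts = {
    ("Dog", "rdfs:subClassOf", "Mammal"),
    ("Mammal", "rdfs:subClassOf", "Animal"),
    ("rex", "rdf:type", "Dog"),
}
closed = subclass_closure(facts)
print(sorted(closed - facts))
# [('rex', 'rdf:type', 'Animal'), ('rex', 'rdf:type', 'Mammal')]
```

A real OWL 2 RL engine applies a few dozen such rules over RDF data until no new triples appear; a RIF Core engine loading “OWL 2 RL in RIF” does the same thing declaratively.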
So, the answer to the original question is: yes, for many applications, RIF and OWL 2 can happily live together ever after…
I had the privilege of attending the Gov 2.0 Summit in Washington DC a few weeks ago, and the following is what I shared at the WebSG meetup last night in an attempt to summarise the ideas that are relevant to our local climate.
It was a difficult presentation to put together because it combines such a broad range of areas, from technology, culture and mindsets all the way to political ideology. This is accentuated by the fact that I straddle both the role of citizen and that of civil servant. It is my hope, however, that this duality helps us understand both perspectives.
For the purposes of this presentation, there is a need to define the word “citizen”, a term which will come up often in any discussion of government and even more so in government 2.0. For the purpose of this presentation I’m going to define citizen as “anyone who has thrown their lot in with us”. I think it is as absurd, in this day and age of globalisation that we should continue to define people by where they were born, as it is to judge a person by the colour of their skin. Instead, citizens should be seen as the people who have decided to share a collective fate and a common destiny. People who look at Singapore simply as a stepping stone or springboard need not apply.
In order to understand what Gov 2.0 is, we must first define what Gov 1.0 was, in order to effectively move away from the old model.
Latest Update: We’re holed up at room 4.3 instead. It’s just the next one.
Hey folks, it’s been a while, but we’re meeting up next week:
Date: Wednesday, 30th September 2009
Time: 7:30pm (we’ve booked the room from 7pm)
Place: Seminar Room 4.2 @ School of Economics & Social Sciences, SMU (it’s the building nearer the National Museum)
Speakers and Topics
Introduction to HTML5
There’s a lot that’s been said about HTML5, yet oddly enough everyone’s been so busy that not many folks have actually kept up with what this new iteration of HTML means to those of us who are developers, or with the potential it opens up to owners and managers of websites.
Gleanings from the Gov2.0 Summit
Having just returned from Tim O’Reilly’s Gov2.0 Summit in Washington D.C., I’m hoping to give a summary of the event - learning points, case studies and possible applications for us here in Singapore.
We’ll need to know how many are coming so we can get a bigger room if necessary. Drop a comment if you’re coming!
Today is One Web Day! Since 1994 W3C has sought to ensure the Web is available to all people, from anywhere, on any device. Today I'd like to invite people to help build One Web by:
- Learning about the Web Content Accessibility Guidelines (WCAG) and either (1) building your own customized checklist for How to Meet WCAG 2.0 or (2) putting together a customized web accessibility business case for your organization;
- Reaching more people by learning about the quick tips for internationalization;
- Cleaning up the Web and showing support for standards by using the W3C validator services;
- Checking to see if your pages are mobileOK.
Happy One Web Day! We'd love to hear your ideas for building One Web.
Tomorrow is One Web Day, and Mozilla wants to celebrate the awesomeness of the internet. It’s also a chance to remind people that the web is a precious public resource. Your poster and photograph are a part of this. When you put up a poster, you’re helping to keep the web open and free.
How can you help?
It's as easy as one, two, three. And then four:
- Download the posters at www.mozilla.org/onewebday
- Put your poster up at a prominent place, then take a picture of it
- Upload and tag it as #owdposter (on flickr.com, twitter or identi.ca)
- Enter the contest (Mozilla laptop bags to be won!)
Other people show their love for the Web too:
Each year about 300 people who participate in various W3C groups meet face-to-face to exchange ideas, resolve technology issues, and socialize. We call this the W3C Technical Plenary (TPAC) Week, and it's my favorite set of W3C meetings. I enjoy reconnecting with colleagues, hearing news, playing music with them at the nearest piano, and chatting at dinners and hotel bars. This year we meet in Santa Clara, California, and we thought it would be a great opportunity to meet with local developers.
The result is our first ever Developer Gathering, to be held the afternoon of Thursday, 5 November. The gathering is open to the public though we have in mind in particular the local developers and Web designers who are not the usual participants in W3C work. We are planning a series of speakers to share the latest news about CSS, APIs, some new ideas about browser test suites, and more. The speakers will then take participant feedback back to their groups. Arun Ranganathan (Mozilla), Fantasai, and Philippe Le Hégaret have already confirmed that they will be speaking. We will announce other speakers as they commit. In addition, the HTML Working Group will be meeting at the same time, so lunch, breaks, and the hotel bar will offer more opportunities to network with your colleagues.
The Developer Gathering home has more information about registration (including the $75 registration fee, which covers food, wifi, and other meeting costs). Space is limited to 100 participants, so it's first registered, first served.
We look forward to seeing you in November.
The idea started with the fact that we have a number of Working Groups who are trying to review the way they do testing, and also to increase the number of tests they run.
The CSS Working Group was foremost in mind when it comes to testing. The Group has several documents in the Candidate Recommendation stage that are awaiting tests and testing. The HTML Working Group is starting to look into testing as well, and a key component of ensuring the success of HTML 5 is testing. The specification is quite big, to say the least, and when it comes to testing, it's going to require a lot of work. We also have more and more APIs within the Web Apps group, Device APIs, Geolocation, etc. The SVG Working Group has a test suite for 1.2, but they're looking at different ways of testing as well. The framework produced by the MWI Test Suites work allows two methods: one requires a human to look at the result and select pass/fail; the other is more suitable for scripted tests, i.e. API testing.
A bunch of us, namely Mike Smith, Fantasai, Jonathan Watt, Doug Schepers, and myself, decided to get together to discuss this and figure out how to improve the situation. We focused on three axes: test submissions, test reviews and how to run a test.
First, we'd ideally like every single Web author to be able to submit tests: when they run into a browser bug based on a specification, it should be easy for them to submit a test to W3C. It should also allow browser vendors to submit thousands of tests at once. There is the question of how much metadata to require when submitting a test. For example, we do need to know at some point which feature/part of a spec is being tested. We should also accept as many test formats as possible: reftests, mochitests, DOM-only tests, human tests, etc. The important aspect here is to be able to run those tests on as many platforms/browsers as possible. A test format that can only be run on one browser is of no use to us.
Once a test has been submitted, it needs to be reviewed. The basic idea behind improving test reviews is to allow more individuals to contribute. The resources inside W3C aren't enough to review tens of thousands of tests. We need to involve the community at large by doing crowd reviews, which will allow the working groups to focus only on the controversial tests.
Once tests have been reviewed, we need to run them on as many browsers as possible. Human tests, for example, are easy to run on all of them, but they require a lot of humans. Automatic layout tests are a lot trickier, especially on mobiles. We focused on one method during our gathering: a screenshot-based approach. The basic idea here is that a screenshot of the page is compared to a reference. Mozilla developed a technique called reftests that compares Web pages themselves: you write two pages differently that are supposed to have the exact same rendering, and compare their screenshots. This avoids a lot of the cross-platform issues one can encounter. The way Mozilla does this is via the mozPaint API in debug mode. That works well, but only in Mozilla. You can guess that other browser vendors have similar ways to take screenshots automatically as well. We wanted to find a way to do this with all browsers without forcing them or us to write significant amounts of code. We found a Web site called browsertests.org, got in touch with its author, Sylvain Pasche, and, with his help, started to make some improvements to his application. It works well on desktops at least. Once again, we don't think W3C is big enough to replicate all types of browser environments, so we should make it easy for people to run the tests in their browser and report the results back to us. Plenty of testing frameworks have been written already, and we should try to leverage them as much as possible.
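The reftest idea — two differently-written pages that must render identically — ultimately reduces to comparing two screenshots pixel by pixel. Here is a minimal sketch of that comparison step, with screenshots modelled as rows of RGB tuples; real harnesses such as Mozilla's do considerably more (tolerances, image decoding, reporting):

```python
def reftest_compare(shot_a, shot_b):
    """Compare two screenshots (rows of RGB pixel tuples).
    Pass if every pixel matches; otherwise report the first difference."""
    if len(shot_a) != len(shot_b) or any(
            len(ra) != len(rb) for ra, rb in zip(shot_a, shot_b)):
        return (False, "different dimensions")
    for y, (row_a, row_b) in enumerate(zip(shot_a, shot_b)):
        for x, (px_a, px_b) in enumerate(zip(row_a, row_b)):
            if px_a != px_b:
                return (False, f"first difference at ({x}, {y})")
    return (True, "match")

white = (255, 255, 255)
red = (255, 0, 0)
page = [[white, white], [white, red]]        # screenshot of the test page
reference = [[white, white], [white, red]]   # screenshot of the reference page
print(reftest_compare(page, reference))      # (True, 'match')
```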
We started to set up a database for receiving the tests and their results. We'd like to continue the efforts on the server/database side, as well as continuing to improve Sylvain's application, allowing more test methods and formats. Testing the CSS or HTML5 parsers should be possible, for example.
You'll find more information at our unstable server but keep in mind that:
- we're in the very early stages
- this server is a temporary one that I managed to steal for a few days from our system folks. They'll want it back one of those days and I need to find a more stable home prior to that event. I'll update the link once this happens but expect it to break if you bookmark it.
- Unless I can secure more resources for the project, we won't go far by ourselves.
The server also contains links to more resources on the Web related to various testing efforts, as well as a more complete list of what we wish the testing framework to accomplish.
In conclusion, I'd like to thank Mike Smith and Doug Schepers, and especially Jonathan Watt and Fantasai from the Mozilla Foundation. They all agreed to argue and code for 8 days around the simple idea of improving the state of testing at W3C. I hope we're going to be able to get this project off the ground in the near future. If you're interested in contributing and have ideas and time, don't hesitate to contact me.
The W3C recently announced an exciting new incubator group – The Open Web Education Alliance (OWEA) – that is certain to have a significant impact on helping web standards and best practices find their way into classrooms around the world. The mission of OWEA is to bring together companies, schools, and organizations involved in shaping the education of Web professionals to explore the issues around the topic of Web development education and create solutions for improving it.
Many organizations like Opera, Adobe, Yahoo, WOW, and WaSP InterAct have been diligently working to develop curricula and outreach programs to help schools better prepare their students for a career on the Web. OWEA will bring many education initiatives together in a broad collaborative.
“ The mission of the Open Web Education Alliance Incubator Group, part of the Incubator Activity, is to help enhance and standardize the architecture of the World Wide Web by facilitating the highest quality standards and best practice based education for future generations of Web professionals through such activities as:
- fostering open communication channels for knowledge transfer
- curriculum sharing between corporate entities, educational institutions, Web professionals, and students ”
OWEA’s origins can be traced back to Web Directions North in Denver in February, where WaSP emeritus and CSS Samurai John Allsopp brought together educators, industry experts, and representatives of the W3C to explore ways of uniting the various education efforts already underway. Four months later, OWEA has transformed from a collection of ideas at a meeting to a W3C incubator group. The Web Standards Project has strong representation in OWEA, and will be contributing content from InterAct to the initiative.
This is a huge step towards improving web education! Want to stay informed? Subscribe to the WaSP InterAct Twitter feed.
Jay Sullivan is VP of Mobile at Mozilla. He works on Fennec, the browser for mobile phones. Jay was recently interviewed by Lifehacker. Here is an interesting excerpt:
As the W3C Team lead for financial data and the Semantic Web, I am looking at how the Web is changing the way investors assess the value of companies.
Public companies worldwide are required to file regular reports setting out the financial health of the company. These are available from corporate investor relations websites and from regulatory agencies like the Securities and Exchange Commission (SEC). If you want to analyze this data, you have to re-key it, which involves a lot of work and introduces errors. That is all about to change.
The SEC and kindred agencies around the world are starting to require companies to file reports in XBRL (the extensible business reporting language). XBRL ties each reported item of data to the reporting concept used to collect it, and moreover, does so in a way that computers can make sense of, avoiding the need for re-keying data.
XBRL will allow investor relations sites to support interactive access to and sharing of tagged financial data. This will build upon Web 2.0 and the phenomenon of user-provided content on wikis, blogs and social networking sites aimed at investors, e.g. wikinvest, where investors share data, insights and analyses. YouTube provides a powerful precedent in the way it allows people to share content by embedding a view, or a link to a view, in their blogs.
For XBRL, this means providing a way for people to browse the data, and to pull out tables and charts as needed for their blogs. These items could be rendered by the investor relations site and shared à la YouTube, or the blog could itself use a script to query data across one or more investor relations sites and render it locally. This is where the Semantic Web and linked open data come in. I've previously reported on techniques for converting XBRL into RDF triples.
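To give a flavor of what such a conversion might look like, here is a minimal sketch that turns a simplified, hypothetical XBRL-style fact into an RDF triple in N-Triples form. The namespace, concept name, entity URI and the URI-construction rule are all illustrative assumptions, not part of any real taxonomy or of the techniques reported earlier.

```python
import xml.etree.ElementTree as ET

# A tiny, simplified XBRL-style fact. The namespace, concept and entity
# URIs are illustrative assumptions, not a real taxonomy.
instance = """<xbrl xmlns:us-gaap="http://example.org/us-gaap">
  <us-gaap:Revenues contextRef="FY2008" unitRef="USD">1000000</us-gaap:Revenues>
</xbrl>"""

def facts_to_ntriples(xml_text, entity="http://example.org/company/ACME"):
    """Emit one N-Triples line per reported fact: entity -> concept -> value."""
    triples = []
    for fact in ET.fromstring(xml_text):
        # ElementTree reports the tag as "{namespace}localname"; build a
        # crude concept URI from it (a real converter would consult the taxonomy).
        concept = fact.tag.replace("{", "").replace("}", "#")
        triples.append('<%s> <%s> "%s" .' % (entity, concept, fact.text))
    return triples

for line in facts_to_ntriples(instance):
    print(line)
```

A real converter would also have to resolve contexts, units and taxonomy linkbases; this only illustrates the fact-to-triple idea.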
W3C and XBRL International are looking for your help in understanding what some people are calling "Investor relations 2.0", and we invite you to attend a workshop at the FDIC training facility in Arlington, Virginia, this October. We want your help with identifying the opportunities and challenges for interactive access to business and financial data expressed in XBRL and related languages. This doesn't just apply to the investor community, as the same technologies also offer huge potential for data published by governments on sites like data.gov (see demos). For more details on the workshop see the call for papers.
Google has innovated in online geospatial services with Google Maps, offering high-quality satellite imagery, Street View and oceanic exploration, and with Google Earth, offering 3D imagery and SketchUp, all at a global scale. These were brilliant ideas that impressed many people.
But recently, Korean web services have built other remarkable map services at the local scale. Daum and Naver have invested a great deal of money to build high-quality services of their own.
Daum launched its new map features “road view” and “sky view” in January of this year. Sky view offers high-resolution aerial photography in which a restaurant’s roof, a driveway or the sea becomes impressively recognizable. Daum is believed to have spent over 20 billion won to develop the new service, collaborating with SamAh Aerial Survey for the aerial photos and Pix Korea for the street-level images.
Recently, Daum also released a Map API for third-party developers who want to adopt it in their services, so they too can use the very high-resolution aerial photos.
Naver also released a panorama photo service for famous spots in Korea. It is similar to Bird’s Eye in Microsoft Virtual Earth, but it can be rotated 360 degrees for a look around.
Daum also released demonstrations of indoor panorama views of specific buildings. You can now explore the campus of Sejong University.
If adopted at famous tourist spots or as an educational resource in museums, it would be excellent for people, even though these are local efforts in online mapping.
There was big news of a partnership between Yahoo! and Microsoft. After the acquisition battle between Jerry Yang and Steve Ballmer, Yahoo! got a new CEO, Carol Bartz, and she settled the remaining issues that were on everyone's mind.
I think this partnership is very positive for Yahoo!'s sustainable survival in the long term. Yahoo! has very strong properties such as mail, news, finance, entertainment and Flickr. If Microsoft had acquired Yahoo! last year, I'm not sure whether most of Yahoo!'s services could have continued. This partnership is good for Yahoo!'s employees, although there is some redundancy in parts of Yahoo!'s search technology. Ballmer said as much in the conference call.
Ballmer: The deal last year was tailored more towards an investor than an operator. This deal is different, not better. Less upfront payment, and definitely a higher TAC rate.
Microsoft has strong competencies in technology and channel sales, while Yahoo!'s strength is web-based services that generate good search content. Both CEOs think Yahoo!'s flexibility is very important for the future.
Bartz: When we talk about internal Yahoo search that is some of the innovation we are looking at doing. Paid inclusion we will decide on later. We have full flexibility on what to do inside our site. That is the important thing, there is a lot of value there to add search to our properties.
Ballmer: It was important to us to structure the deal to give Yahoo full flexibility (to add search to its services).
By the way, in Korea, Yahoo! Korea and MSN Korea together hold a minor share (under 5% combined), as does Google Korea. Daum (23%) and Naver (72%) dominate the Korean search market. In the search advertising market, however, Overture has almost all of it, except for Daum's share served by Google AdWords. Daum's three-year contract with Google ends this year; if Daum decides to go back to Overture, the Korean search ad market is Microsoft's.
Asiajin updated its list of Japanese equivalents of Western web services. You may know that Japan and Korea are very different from the West, and they also differ from each other. I have listed the Korean equivalents for comparison with Japan and the West. Leave a comment if you want to know more or think one of these attributions is off.
I) General web services
No equivalent (most people used Cyworld’s photo sharing.)
me2day (acquired by Naver)
No equivalent (Daum and Naver both failed, and ma.gar.in, a startup, has also shut down.)
dooyoo (price comparison engine)?
Danawa (Now Naver and Daum focused on shopping gateway too.)
No equivalent (most people use VOD sites and P2P sharing services such as Fileguri.)
No equivalent (there is no culture of classifieds in Korea either.)
imdb (Internet Movie Database)?
No equivalent (Korean movie sites have failed; most information is now found in the movie sections of Naver and Daum.)
Wall Street Journal Online?
ChosunIlbo (but most Koreans consume news through the portal sites Naver and Daum)
Auction (branch of eBay)
No equivalent (most of them offer paid music services.)
Naver Knowledge In
Hangame (it’s a highly competitive market among Hangame, Nexon, Netmarble and Pmang)
What is the Korean equivalent of Techcrunch?
No equivalent, although Bloter offers a similar service.
the Huffington Post?
III) Web tools and software:
Hanmail, a branch of Daum service.
Google Docs and Spreadsheet
Tistory, a branch of Daum service.
What do you think? If you want to add more sites, please let me know.
As part of a series of interviews with W3C Members to learn more about their support for standards and participation in W3C, I asked David Ezell (NACS Advisory Committee Representative at W3C) some questions.
Q. For readers who may not be familiar with NACS (the Association for Convenience and Petroleum Retailing), can you describe the role of NACS in one paragraph?
A. Quoting NACS: "NACS serves the convenience and petroleum retailing industry by providing industry knowledge, connections and advocacy to ensure the competitive viability of its members' businesses."
For knowledge, NACS supports member education through meetings such as NACStech (where Tim Berners-Lee has given the keynote on one occasion) and through its printed and on-line publications. For connections, meetings such as the NACS Show and NACStech provide members with the opportunity to meet regularly, and NACS Connect 365 is an online marketplace where NACS Retailer and Supplier members can conduct business.
For advocacy, NACS supports a Government Affairs Program that includes NACSPAC (a political action committee). The short explanation for the role of the Government Affairs Program is to advocate for a business climate that is fair, safe, and equitable for all convenience retailers, large and small. Advocacy also includes the support of standards development and implementation through PCATS, x9, W3C, and the PCI SSC.
The strategic technological direction is guided by the NACS Technology Council, a group of member retailers, technology providers, and CPG companies (i.e. telecom and communications).
Q. Given the current economic climate, what are some of the business challenges faced by NACS members that might be addressed through Web technology?
A. One of the biggest economic challenges we face is increased interchange fees for use of payment processing networks, as well as the burden of paying for the security infrastructure required to use those networks. Web technologies stand poised to offer better cost alternatives in this area: entrepreneurial companies are already making some inroads on the processing side, and standardization of security protocols and exchange patterns to create a "trusted Web" may provide some relief from the security burden borne today almost solely by retailers.
Q. Can you share any success stories within NACS related to the use of a W3C standard?
A. PCATS (the Petroleum Convenience Alliance for Technology Standards), which is a close ancillary organization to NACS, has produced a standard for in-store EDI (Electronic Data Interchange) ordering. EDI makes it possible to place orders to suppliers (like Coca Cola, Pepsi, or McLane) electronically. The PCATS XML/EDI standard, called EB2B, enables large numbers of small and medium-sized retailers to make use of electronic ordering. Before the Web, this EDI technology was completely out of reach of these merchants. All PCATS standards make use of XML and XML Schema. Note that XML/EDI standards are also available from many other organizations for other retail segments.
Q. How does NACS communicate the value of Web standards to its Membership?
A. The annual NACStech meeting provides NACS members with the ability to hear from industry experts (Tim Berners-Lee and Rod Smith (of IBM) have both been recent keynote speakers) and to attend workshops on technology topics from the mundane (how to train cashiers) to the sublime (what is Web 2.0?). NACS Online also carries frequent articles on applicable technologies.
Q. Are there examples where NACS, through collective advocacy has enabled NACS members to influence the information technology market?
A. Through its Advocacy activity, NACS supports standards development at PCATS and x9, and through those standards produced has changed the landscape for technology products in the convenience and petroleum retailing industry. Before NACS advocacy of these standards, interoperability between store systems was scarce and expensive to develop. We’ve seen a sea change in how technology vendors approach our market as a result of the standards.
Q. NACS has participated in the development of XML Schema 1.1 (which W3C anticipates will become a Recommendation in 2009). For NACS, what are the most important new features in XML Schema 1.1?
A. Two key features are the ability to define co-constraints in the schema and support for the "versioning" of languages. Co-constraints (i.e., the value found in one place in an element can constrain the value allowed in another place in an element) have been used in data serialization technologies (like EDI) for decades: programmers, managers, and merchants understand them and rely on them. These constraints have almost always been implemented in the computer code, and therefore it has been up to the programmer on either end to read the spec carefully and to properly implement the checks on data required: in other words, it’s been very error prone. Allowing the designation of co-constraints in the XML Schema Definition language will be a huge help to those designing new XML languages for retail and for those porting existing retail languages into XML.
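To make the idea of a co-constraint concrete, consider a rule that a sale price may not exceed the list price. XML Schema 1.1 can state such a rule declaratively; the sketch below, with hypothetical element and attribute names, shows the equivalent procedural check that pre-1.1 code had to hand-write on each end:

```python
import xml.etree.ElementTree as ET

# A co-constraint: the sale price may not exceed the list price. XML
# Schema 1.1 can state this declaratively, e.g. with an assertion like
#   <xs:assert test="@salePrice le @listPrice"/>
# (the element and attribute names here are hypothetical). Before 1.1,
# the same rule had to be hand-coded in each implementation, roughly so:
def check_item(xml_text):
    item = ET.fromstring(xml_text)
    return float(item.get("salePrice")) <= float(item.get("listPrice"))

assert check_item('<item listPrice="9.99" salePrice="7.99"/>')
assert not check_item('<item listPrice="9.99" salePrice="12.50"/>')
```

Moving the rule into the schema means every conforming validator enforces it identically, instead of each programmer re-reading the spec and re-implementing the check.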
Support for versioning means making a given XML language work across processors and versions of those processors, and also to provide the ability to extend existing XML languages easily and intuitively. Deployment of a new language and its required software is a huge cost to any merchant, and support for these versioning strategies will make it possible to save a lot of money and frustration.
Q. Are there other areas of current W3C work of particular importance to NACS (e.g., related to data, voice interaction, ubiquitous Web, geolocation, ...)?
A. The Mobile Web Initiative, along with Device APIs and in-roads into Social Networking standards (I realize that’s a little out of scope) seem to me to have the biggest potential impact on NACS members during the next few years. Of course, the entire planet has a vested interest in the future of (X)HTML(5), but that’s too large a topic for now.
Q. What does NACS value in its participation in W3C?
A. The W3C provides a unique mix of the ability to work out practical solutions to existing problems (like co-constraints in XML Schema) and to consider the immediate implications of emerging technologies for our industry (like the Semantic Web, Device APIs, "trusted web", and Mobile Web). Being able to be involved in these things simultaneously is to me one of the main strengths of the W3C. Further, once the W3C creates or endorses a technology, it’s much easier for NACS to adopt that technology with the confidence that it’s doing the right thing for its members.
Q. What can W3C do that would most help your organization or the organizations represented in the trade association?
A. The NACS membership has immediate interest in XML technologies, including Web Services (though they are not yet widely deployed), in security related standards ("trusted Web"), and future interest in Semantic Web applications. So, making sure that these business related standards continue to be supported and revised is of great value. NACS would like to see W3C continue to push standards into everyday commerce, enabling further synergy and economies of scale so that we can repeat the "EDI success story" over and over in the future. The Mobile Web Initiative is a good example of the sort of activity NACS finds intriguing, especially as it applies to commerce.
Q. NACS developed an exchange format called NAXML. Can you describe your experience building an industry specific vocabulary on top of core XML standards?
A. The committee I chair at PCATS (the POS Backoffice Committee) was the first to switch to XML for the language of our standard, though I don’t think we finished first (that may have been EB2B). Switching to XML and XML Schema as the basis for our work provided the committee with an unprecedented ability to know we were using the right technology. Before that, we had disagreements about formats (ASN.1, CSV, name-value pairs), and arguments over the substrate consumed a lot of time, thought, and goodwill. Once we adopted XML plus XML Schema as our strategy, these old arguments disappeared and we were able to finish and implement our standard. These implementations were actually portable across various software platforms; we had not been able to achieve that before. And we’ve never looked back.
Many thanks to David for his answers.
Some very good blog posts have been published recently, and I wanted to signal them to my readers. I'll start with a very general statement about the importance of Free, Libre and Open Source Software. It's Atul Varma's Business Card, which quotes a nine-year-old and fundamental article by Larry Lessig, Code is Law (along with the book Code and Other Laws of Cyberspace).
Mitchell has published not one but two articles related to Mozilla's vision:
- I Am Not A Number. "What’s the most interesting thing about the Internet today? To me, it’s not an application, it’s not a technology, it’s not a characteristic like “social.” The most interesting thing about the Internet is me. My experiences. And you. And your experiences."
- Eyeballs with Wallets. Excerpt: "There are times, however, when being a wallet attached to eyeballs is not enough. The possibilities available to us online should be broader, just as they are in the physical world. Sometimes we choose to skip the mall and go to the library, or the town square or the park or the museum or the playground or the school. Sometimes we choose activities that are not about consumption, but are about learning and creation and improving the environment around us."
I like the fact that we're not 'just' "eyeballs with wallets". Of course, we're Website visitors (aka "eyeballs"), and we're also customers of e-commerce sites (aka "wallets"), but we're much more than this.
I'll conclude with a link to a pretty good New York Times article about Mozilla that quotes Mitchell (emphasis mine):
We succeeded because more people got engaged, helped us build a better product and helped us get the product into the hands of people. We succeeded because of the mission.
Exactly. Mitchell sums it up in less than 140 characters on Twitter:
we build Firefox to advance a mission. Now we need to show that Firefox is the first step, not everything.
The product (Firefox) and the mission are intertwined. The mission helps mobilize forces and energies to build the product, and the product is here to advance the mission. Now our users do see the product (even if they sometimes confuse it with a search engine or an ISP), and sometimes "sense" the mission. We need to keep pushing on the mission part too (aka "poetry"): without it, Firefox is going to be challenged more than with it.
Lift Asia 09 and Blog Talk 2009 will be held in Jeju, Korea in one week, from September 15th to 18th. I am involved in both conferences as a supporter.
Lift Asia 09 will take place on the 17th and 18th. The conference is themed “Serious Fun!” and I look forward to welcoming you to the spectacular Pacific shore for two days of incredible inspiration and networking. At last year's Lift Asia 08, many people felt it was a special event, offering some of the best social moments ever, incredible speeches, ideas never heard before, and unique networking opportunities with some of the most interesting people in the world.
Blog Talk 2009 will be held on the 15th and 16th in the same place. It is a traditionally European-based conference continuing its focus on social software, while remaining committed to the diverse cultures, practices and tools of our emerging networked society. You can submit a one-page proposal presenting your ideas until July 31st.
The venue is excellent. I actually lived for four years on Jeju island (via Google Maps), a famous Korean tourist spot. Its nature is beautiful, and it is one of the top tourist spots in the world, although it is not widely known.
Jeju is a volcanic island 130 km off the southern coast of Korea. The island contains the Natural World Heritage Site entitled Jeju Volcanic Island and Lava Tubes.
Mt. Hallasan, the tallest mountain in South Korea and a dormant volcano, rises 1,950 m above sea level, and 360 satellite volcanoes are scattered all around the island. You can see different kinds of plants at each altitude, from subtropical to cold zones. (The volcanoes are long extinct and not dangerous; they are covered by green grasses and trees, with some volcanic outcrops.)
The island also has beautiful emerald-colored beaches with black volcanic rocks and white sand. I have been to Miami Beach in the US, and the two are quite similar except for the length of the beach.
Recently, Jeju island was listed as a finalist for the New 7 Wonders of Nature, which includes the top 28 natural spots in the world. If you visit Jeju via Seoul, you can also experience the third-busiest air route in the world.
Today, we've published a proposed correction against XML Signature. Normally, errata are published without much ado, and largely cover minor points of specifications. This one's a bit different: You haven't seen any public discussion of this particular erratum before, and it comes with a CERT Vulnerability Note and a bunch of software updates from various vendors.
What has happened? In January, I was reviewing the algorithms section in XML Signature while working on the XML Security Working Group's Algorithms Cross-Reference draft. I read a certain paragraph. I read it twice. I grabbed a copy of the nearest open source implementation of the spec that I could find. I read some code. I built an example to play with. And then, I took up the phone. A week later, we spent some time on a Working Group call to talk things through, followed by a series of informal conference calls to understand how serious the problem was, and what to do when. Half a year later, a number of vendors are pushing patches to fix their versions of the hypothetical problem I had stumbled over.
The paragraph that struck me was about HMACs. An HMAC is a message authentication code that lets Alice determine that a message came from Bob, if Alice and Bob share some sort of secret. It is much faster to compute than a usual public key signature, and therefore popular for authenticating large amounts of data. There are some reasons (which don't matter here) why it can be desirable to truncate the output of the HMAC function; hence, XML Signature introduces a parameter that defines a truncation length.
Here is the text that struck me:
The HMAC algorithm (RFC2104...) takes the truncation length in bits as a parameter; if the parameter is not specified then all the bits of the hash are output. An example of an HMAC SignatureMethod element:
  <SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#hmac-sha1">
    <HMACOutputLength>128</HMACOutputLength>
  </SignatureMethod>
The output of the HMAC algorithm is ultimately the output (possibly truncated) of the chosen digest algorithm. This value shall be base64 encoded in the same straightforward fashion as the output of the digest algorithms.
Conventional wisdom is that you don't lose much in terms of security if you throw away up to half of the output. And that's where it gets interesting: XML Signature provides markup to send the truncation length along with the signature. But it doesn't say who has to worry about checking the truncation length.
Imagine Alice and Bob agree on a secret, and imagine that Alice will accept that a message comes from Bob if it's signed with an HMAC using that secret. They use XML Signature to encode the signatures. Enter Mallory: She doesn't know the secret, but she has a message that she wants to send to Alice, claiming that it comes from Bob. Guessing the right HMAC value is basically impossible when the output of the HMAC function is a string of 160 bits. But what if Mallory could convince Alice to throw away 152 of the 160 bits? Suddenly, she has a chance of 1 in 256 of guessing the right signature value. Or what if Mallory could convince Alice to throw away all of the 160 bits? There's only one signature value that's possible, regardless of the message and the key. Now, if Alice has a careful look at the messages she receives, she will surely notice when Mallory's message tells her to throw most of the signature away.
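The arithmetic behind Mallory's improved odds is easy to demonstrate. The sketch below (the key, message and helper name are illustrative, and truncation lengths are assumed to be byte-aligned) computes a truncated HMAC-SHA1 and shows that an 8-bit truncation leaves only 256 possible tag values, while a 0-bit truncation leaves exactly one:

```python
import hmac
import hashlib

def truncated_hmac(key, message, bits):
    """HMAC-SHA1 truncated to the leftmost `bits` bits (assumes bits % 8 == 0)."""
    digest = hmac.new(key, message, hashlib.sha1).digest()
    return digest[:bits // 8]

key = b"shared-secret"          # known to Alice and Bob, not to Mallory
forged = b"pay Mallory $1000"   # Mallory's message

# Full 160-bit tag: 2**160 possibilities, so guessing is hopeless.
assert len(truncated_hmac(key, forged, 160)) == 20

# Truncated to 8 bits: only 256 possible tag values. On average, one
# blind guess in 256 will verify.
real_tag = truncated_hmac(key, forged, 8)
matches = [bytes([v]) for v in range(256)
           if hmac.compare_digest(bytes([v]), real_tag)]
assert len(matches) == 1  # exactly one of the 256 candidate tags verifies

# Truncated to 0 bits: every "tag" is the empty string and always verifies.
assert truncated_hmac(key, forged, 0) == b""
```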
But what if Alice doesn't look?
That's precisely what has happened here: A naive implementation of XML Signature simply accepts an HMAC-based signature, including the truncation length parameter, looks at the number of bits it's been told to look at, and reports back whether the signature was good or bad. It doesn't do any other check of the output length, or report it back, or ask the calling code for a minimum (while, perhaps, setting a sane default).
Now, strictly speaking, the specification isn't even wrong. It does, however, leave a critical decision to "somebody else", without saying so. It turns out that the HMAC specification itself warns about too short output lengths -- and, indeed, we know of one implementation that got this right from the beginning. Others had simply not cared to implement this particular feature. But many (see the CERT vulnerability note for details) had implemented the feature, and then forgotten about it -- from all we know, the output length parameter is hardly, if ever, used with XML Signature.
Likewise, some specifications that refer to XML Signature come close to dealing with the issue, but ultimately don't: The WS-I basic security profile forbids sending the truncation length parameter -- but it doesn't keep implementations from accepting it. WS-SecurityPolicy has information about minimum key lengths for HMACs -- but none about minimum output lengths.
If there is a security critical parameter in a specification, then, at the very least, that needs to be said clearly and loudly. In this case, we're going one step further: The proposed correction that was published today adopts the limits recommended by RFC 2104 and tells implementers to consider signatures as invalid whose truncation length falls below these limits.
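In code, the corrected behavior amounts to rejecting any signature whose declared output length falls below the RFC 2104 limits (at least half the digest length, and never fewer than 80 bits) before comparing tags. This is a hedged sketch, not any particular toolkit's API: the function names are invented, and byte-aligned truncation lengths are assumed for simplicity.

```python
import hmac
import hashlib

def output_length_ok(bits, digest_size_bits):
    """RFC 2104 limits: at least half the digest length, and at least 80 bits."""
    return bits >= max(digest_size_bits // 2, 80)

def verify(key, message, tag, output_length_bits=None):
    """Verify an HMAC-SHA1 tag, treating under-truncated signatures as invalid."""
    mac = hmac.new(key, message, hashlib.sha1)
    digest_size_bits = mac.digest_size * 8
    bits = output_length_bits if output_length_bits is not None else digest_size_bits
    if not output_length_ok(bits, digest_size_bits):
        return False  # per the proposed correction: invalid, regardless of the tag
    expected = mac.digest()[:bits // 8]
    return hmac.compare_digest(expected, tag)
```

With this check in place, Mallory's "throw away 152 bits" trick fails before any comparison happens, and the calling code never has to know the parameter exists.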
Why did it take half a year?
Releases across a possibly significant number of vendors had to be coordinated. Today was the earliest date everyone could agree on. At the same time, it didn't look like services exposed to the Internet would be adversely affected by the delay.
A number of people helped with dealing with this situation: Hal Lockhart was instrumental in helping to understand the implications in the Web Services space. Frederick Hirsch made time available during XML Security Working Group calls. Will Dormann at CERT helped to coordinate vendor responses during the last few weeks.
How do I know if my software is affected?
Probably not. It turns out that the affected feature in XML Signature is used less frequently than we had at first thought. If you use Web Services in a way that relies on the HMAC feature in XML Signature, then you might want to make sure that you're using the latest release of your toolkit. The same holds if you've built your own protocols and toolkits on top of XML Signature. See the CERT vulnerability note for details. (Please understand that we won't discuss individual implementations here.)
Does this affect other protocols?
We are not aware of any.
Currently, all the HTML5 / XML “serialization” stuff simply boils down to two straight-forward rules:
- If HTML5 using the HTML syntax is served with the MIME type text/html, then this is the HTML serialization.
- If HTML5 using the XML syntax is served with the MIME type application/xhtml+xml, then this is the XML serialization.
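The two rules are simple enough to express as a lookup table. The sketch below (the function and table names are my own) picks the serialization from a Content-Type header, ignoring parameters such as charset:

```python
# The two serialization rules as a dispatch table. The string values are
# stand-ins for whatever a consumer would dispatch to (an HTML5 parser
# versus an XML parser).
SERIALIZATION_BY_MIME = {
    "text/html": "html",             # HTML syntax -> HTML serialization
    "application/xhtml+xml": "xml",  # XML syntax  -> XML serialization
}

def pick_serialization(content_type):
    # Strip parameters such as "; charset=utf-8" before the lookup.
    mime = content_type.split(";")[0].strip().lower()
    try:
        return SERIALIZATION_BY_MIME[mime]
    except KeyError:
        raise ValueError("not an HTML5 serialization: " + mime)

assert pick_serialization("text/html; charset=utf-8") == "html"
assert pick_serialization("application/xhtml+xml") == "xml"
```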
Disclaimer on all things series 5: I might be wrong now. Then again, I might be right in five minutes.
Damn, you cannot please all the browsers all the time. Funny, those browser beasts. They do stuff, then they do it again and change it. Or, they do it and you can’t talk about it.
If my Baloney has a first name, it’s HTML5! This is the best I can do at the moment, please and thank you.
Just remember, I didn’t lie and tell you I was right. Because as I quoted from Cowboy Wisdom in my #atmedia talk recently:
Never trust a man who agrees with you. He’s probably wrong.
Comment at will.
I was invited to hold a workshop at Lift France '09 titled What's wrong with the Web. The topic clearly struck a chord: it was the first workshop to reach fully-booked status (with 25 seats), and we ended up with twice as many people in the room as we expected! No doubt LIFT participants, just like me, think there are things to improve on the Web. I started the session with a brainstorm on sticky notes with the whole audience. We tried to put keywords on the notes describing what each of us considers an issue with the Web (and more generally the Internet). We quickly ended up with hundreds of these notes posted on the wall. I asked Charles Nepote (FING member and LIFT co-organizer) to help by categorizing the notes in order to list the top issues. Here they are, in no particular order:
- Identity management
- Universal access
- Too much centralization of services
- Privacy & big brother
Then we discussed most of them, trying to identify the sub-issues and potential solutions. Here are the notes I took on the whiteboards:
- Identity management
- Right to be forgotten
- Ability to have multiple identities
- Right to anonymous access (for political dissidents, whistleblowers...)
- Ability to take back my identity if abused by a third party
- Universal access
- The digital divide
- The lack of broadband in remote places
- One single Web, for mobile and desktop users
- Users need simplicity!
- Authors need to share best practices
- I need to be able to give feedback if a site does not work for me
- Politicians should work on this
- Too much centralization of services
- Makes censorship easier
- Gives too much power to a couple of search engines
- What happens when a service shuts down?
- Lack of control over my data
- We should operate our own servers
- Devices such as Fonera2, NAS Home servers and ISP "boxes" could host me on the Internet.
- How do I work while disconnected?
- How do I sync my off-line work with the Cloud?
- Spam is making email irrelevant
- Hackers/crackers are dangerous
- Security is painful to deal with
- Security is a necessary evil
- It's complex. We need education
- Security UI is key to education (if only people read the dialog boxes!)
- It's everyone's responsibility (users, software vendors)
- People tend to externalize these issues to large service providers (see too much centralization of services)
- Privacy & big brother
- Security is too often an excuse for monitoring people
- Security is too often an excuse for censorship
- The notion of privacy is evolving over time
- How can I make money on the Web? Is advertising the only way?
- Is free content sustainable?
- What does "free" mean? (Am I bartering my privacy in exchange for free services without knowing it? Or is it really free, no strings attached?)
- There is way too much advertising
- Flash advertising (animated with sound) sucks.
- I hate pop-ups (and pop-under too), along with ads that float over the content
- There are sites I cannot comment on. Can browsers fix this?
- Comments are too shallow/too aggressive
- Signal to noise ratio is too low
- It's all too serious
- Can I trust what I read on the screen?
- Where is the poetry on the Web?
Actually, as I'm using text to describe the issues, one can see they're all pretty much correlated. Security links to privacy, which links to data ownership, which links to identity, for example. So actually a graph would make a lot more sense to describe the relationships between all these issues.
After the discussion, I gave a quick wrap-up talk on these issues. I won't write it down here, as this post is already too long, but I certainly will in my next post. The idea of having my talk at the end of the workshop was two-fold:
- Let people come up with issues I did not have on my radar. 50 brains are more efficient than one!
- Try not to impose my view of the world on people in the room, but instead let them discover the issues (which is more powerful than hearing about them). In short: let people think by themselves instead of throwing a message at them.
Overall, leading this workshop was certainly a blast. It was intense, fun and challenging, and I'm looking forward to doing more of these in the future. In the meantime, thanks a lot to the LIFT organizers, Laurent Haug (LIFT founder), Charles Nepote (FING, for helping during the workshop), and Jane Finette and Chris Hofmann (both from Mozilla) for preparing this with LIFT.
Daum.net, the second-largest Korean Internet portal site, has released Firefox 3 Daum Edition based on a partnership with Mozilla Corporation. Last year, Daum signed a partnership with Mozilla Corporation to release Firefox 3 Daum Edition, which includes the latest toolbar version with Daum search by default for its users. It is similar to the partnerships Mozilla has already made with Google, Yahoo!, eBay and Yandex.
Firefox 3 Daum Edition was built by Mozilla, which handled development and quality assurance, and by Daum, which promotes and distributes it. It can be downloaded from the promotion page and the Daum toolbar page, and it is promoted on Daum's top page, which is visited by 9 million people every day.
Daum has also developed several useful extensions, such as the Daum Toolbar and extension packages for Korean users and developers. Daum Blue is a very popular, simple sky-blue theme with a cool interface reminiscent of the summer sea. Mozilla Links also selected it as a recommended theme.
This shows that Daum has tried to observe web standards and compatibility in building its web services, despite Korea's poor standards situation caused by the abuse of ActiveX technology. These efforts by Daum toward the open web have made a favorable impression on online opinion leaders.
By the way, Naver.com, the #1 portal and Daum's competitor, also offers a toolbar and its own theme. It is a good sign for growing the number of Firefox users in Korea that both big players provide a convenient environment for Firefox users to surf the web.
A screenshot of the promotion pages:
The Mozilla European Inter-Community Meetup is the first of a series of community gatherings aiming to bring together active communities from across Europe in the same city for a day of presentations, discussions and workshops. The aim of the event is to enable communities to share experiences, learn from each other and improve collaboration.
It was quite a blast, with the usual mix of energy, enthusiasm, big brains, diversity of cultures and general willingness to do the right thing for the World, the Web and Mozilla. I've been involved with Mozilla for more than a decade, but I'm still excited by this. The agenda was not too different from other Mozilla meetings: lots of hard work in a meeting room, sandwiches for lunch and partying during the evening – beer, good food – along with a walk in the center of Geneva, the unmissable Jet d'eau and the ritual silly group photo!
Photo by William Quiviger, used under CC-BY-SA license.
A couple of interesting numbers:
- Of the 22 people in the room, 17 were volunteers.
- 7 different nationalities. (FR, ES, IT, USA, DE, DK, AR)
- The 5 locales represented here (Spain, France, Italy, Germany, Denmark) covered roughly 70 million active users in Europe.
A couple of links:
Although the iPhone launched worldwide two years ago, Korean people still cannot use it. WIPI is the reason the iPhone 3G could not be launched in Korea: a closed Korean national standard that had to be installed on all mobile devices sold at retail. Apple refused to comply with this requirement, but fortunately the Korea Communications Commission (KCC) decided to lift the obligation starting in April of this year.
A month ago, KT (landline and wireless Internet) and KTF (mobile) merged. KTF was one of the candidates that could offer the iPhone, and there were many rumors to that effect – hopeful news too. But there was no good news at WWDC 2009: Korea was not among the 80 launch countries listed for the iPhone. Many Korean geeks were disappointed and concluded that the remaining problem is mobile carriers unwilling to give up their closed mobile businesses.
Recently there has been more hopeful news. Apple gained regulatory approval for sales of its iPhone 3G here: the Radio Research Agency, affiliated with the KCC, announced on June 12th that it had granted Apple certification for sales of the iPhone 3G A1241.
This approval is mandatory for telecom equipment entering the Korean market. Not all products that receive the body's certification end up being sold locally, but the latest approval shows that "Apple is interested in entering the Korean market," a KT spokesperson said. KT has long been in talks with Apple to launch the iPhone, raising hopes among Korean consumers that the sleek device will come to the local market.
However, it may only be the older iPhone 3G that hits the local market. Some are keen to see the impact the iPhone may have on the local mobile phone market, which is dominated by Korean firms such as Samsung Electronics and LG Electronics. Telecom companies have also dominated the mobile business, charging relatively expensive prices for Internet access. So Korean geeks now alternate between joy and grief over the iPhone.
Meanwhile, Korean application development for the iPhone is going well. According to the list of App Store developers, there are over 150 Korean companies and 700 applications in the App Store (about 2% of roughly 50,000).
A boom has started among Korean developers, led by Mr. Chanjin Lee, one of the heroes of the Korean software industry and the developer of the Hangul word processor. He has tried to persuade telecom companies, to raise the issue of opening the market, and to encourage developers to join the App Store by helping and consulting them. He has also hosted several conferences and workshops on the iPhone and other smartphones, including Android and Windows Mobile.
I hope the iPhone launches in Korea soon. It will be a key milestone in opening the mobile market here and in increasing the number of Korean-made applications in the mobile world.
I discovered a very insightful, experience-based method for detecting a dying company via http://www.infuture.kr/393 (Korean). It was written about Korean companies, but I guess it applies to most companies:
- A process is created just to serve another process. (There is nothing else left to do.)
- Only the CEO speaks at senior meetings. (The CEO blames anyone else who talks.)
- No one thinks about new business. (Everyone is always pressed by urgent business.)
- New business initiatives are always hushed up. (The CEO is not interested in them.)
- Market ranking drops from second to third or below. (Heroes are needed in an emergency.)
- Internal competition is more bitter than competition in the market. (Everyone thinks of themselves rather than the company.)
- When a strategy fails, an internal victim is found. (Everyone thinks "I have to survive.")
- The toilets are not clean. (Insufficient loyalty to the company?)
- Everyone believes problems will be solved when the CEO is changed. (The fundamental problems still remain.)
- More and more employees want to go back to school. (In fact, they want to move to another company.)
- Small reward programs are launched continually. (All medicines prove useless.)
- Meetings take up over half of every day. (Something has to pass for work.)
- Employees complain that there are few educational programs. (How can they find themselves?)
- People use instant messaging even with the person sitting next to them. (Face-to-face is too tiring.)
- When an idea is suggested, someone asks "Is any other company doing this?" (Even though creativity is the company motto.)
- Managers often say "I have no authority to do that." (Because all rights are reserved to the CEO.)
- Cigarette butts pile up in the smoking room. (Boredom or pain.)
- Reports get heavier and thicker. (Formality overwhelms content.)
- Many people sit with folded arms in meetings. (It means "I hate to listen.")
- When the CEO (or manager) is absent, everyone becomes active. (Everyone feels oppressed normally.)
If 15 to 20 items apply to your company, it's very risky; you'd better leave.
From 10 to 14: risky; you have to push for change.
From 6 to 9: normal, but you'd better keep monitoring.
5 or below: it's good.
How about your company?
There are tons of new developer-oriented features in Firefox 3.5 that are waiting to be used to create new Web applications: Geolocation, new canvas features, native audio and video, and more.
But the truth is that Firefox 3.5 is a modern browser, part of a movement that wants the Open Web to thrive, with the help of other browser vendors such as Opera, Chrome and Safari. An Open and Generative Web where one can invent new stuff without having to ask permission.
So we have to explain how these new features work, and what they enable developers to do. Enter Hacks.mozilla.org, a new blog put together by the Evangelism team, with material provided by the worldwide Mozilla Community. Over the 35 days to come, starting today, we'll try to post 2 articles per day. One to demo something really cool, one to explain something new. Get ready to get excited. Get ready to learn new stuff about Web development.
The first articles are:
- Introducing the Hacks.mozilla.org blog by Chris Blizzard. "While Firefox 3 was a significant upgrade for the web’s users, Firefox 3.5 does the same for developers."
- Pushing pixels with canvas article, by Paul Rouget;
- Content-aware image resizing demo, by Stéphane Roucheray, a French Web developer.
35 days. Firefox 3.5. Hint, hint!
Coincidentally, John Lilly (Mozilla CEO) has just published a blog post titled Onward. John talks about Mozilla getting new office space and reflects on all the things that have changed over the past 4 years, since he came on board. The whole post deserves a read, but here is an excerpt for my busy readers (emphasis mine):
In just the four years that we’ve been here — out of the 11 since the Mozilla project started — the web has been transformed, and has itself transformed so much of the way we live our lives. It’s easy to gloss over, since we see the changes every day — and it’s easy to see the road that we’ve traveled on as being inevitable — but it really wasn’t. The reason we have a vibrant, open web today is because of millions of little decisions and contributions made by thousands of people in that timeframe — people who work on browsers, people who build web sites & applications, people who evangelize for standards, people who use the web and ask/demand that it be better.
If you happen to read my blog, chances are good that you are one of these people who have contributed to these changes with your "little decisions and contributions", such as using Firefox, installing it on your friends' computers and making sure your Website is compatible with modern browsers. I would like to thank you for this. But I'd like to reiterate that this is just the beginning of the Web. Most of it remains to be invented. Let's keep these little decisions and contributions coming, so that the Web we're going to use tomorrow is the one we want!
- Mitchell Baker: 7 years of Mozilla product releases
- Glyn Moody: Happy Birthday, Mozilla - and Thanks for Being Here. Pretty good read, with mentions of JWZ's Nomo zilla article.
I mean "hackable" in the sense that one can decide to experience it in ways that were not exactly what the author decided it would be. In short, the Web is not TV. It's not PDF either. Nor Flash.
A couple of months ago, we had this discussion during the Mozcamp in Utrecht. It's hard to summarize all of this in a blog post, but I'm going to give it a try.
What's cool about the (Open) Web is that one can tweak/change/hack most of the pieces of the stack. Of course, some of the pieces are out of reach (the DNS servers, the Web server, most of the network) and that's good. But for a lot of the pieces, users have – if they want – the ability to change the pieces to fit their needs. Does this sound a little complex? Let's use examples:
- Changing the look of the document via CSS: you can use User Stylesheets (even better and easier with Stylish)
- Changing the content via user scripts, implemented via Bookmarklets, GreaseMonkey or Jetpack.
- Change the look of the browser using Themes for your browser or Personas
- Change the way you interact with the browser, with add-ons such as Ubiquity, which completely redefines how we interact with the Web browser and the Web itself.
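As a minimal sketch of the first item, here is what a few lines of a user stylesheet might look like. The selectors are made up for illustration; real sites use their own class names:

```css
/* userContent.css – applied by the browser on top of every site's own styles */

/* Hide elements matching ad-like selectors (illustrative names) */
.ad, .advert, #floating-banner { display: none !important; }

/* Force a comfortable reading size everywhere */
body { font-size: 16px !important; }
```

The `!important` flag is what lets the user's rules win over the author's stylesheet: the CSS cascade explicitly gives important user declarations the final word.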
The beauty of all this is that the people who invented this did not have to ask permission to innovate. The way the Web was invented, with standardized layers, enables these kinds of things, and that's good.
This "hackability" (or generativity) is one of the key things I love about the Web. Now the issue is that this key ability does not have an actual name. Mark Surman has a good post on this topic. Should we call this essential "characteristic" about the Web "Generative", "remix", "opportunity", "hackable", "permissive"? Go and read Mark's post and comment here or there!
The Web was invented 20 years ago; bookmarklets became somewhat popular in 2002, GreaseMonkey was popular in 2005, Ubiquity Alpha was released in 2008 and Jetpack was announced a couple of weeks ago! No one knows what's going to be invented next thanks to the generative nature of the Web...
Please note that the XHTML2 document was sent in error. The correct document has been forwarded along and Steven’s response to my query is now published as The Real “Why XHTML” Discussion.
With all the fuss about HTML5 at Google I/O last week, the question of “what about XHTML2?” keeps coming up in conversation. In an effort to better understand the answer to that question, I asked Steven Pemberton, W3C chair of the HTML and Forms Working Groups, who graciously took the time to chat with me about it and who then provided this overview to answer the question for the Web designer and developer public.
Based on the experience we have with HTML, XHTML 2 is an attempt to fix many of the extant problems.
The areas that are being addressed include:
Make it as generic XML as possible
- All the ones that you can imagine, because XML is a Good Thing (tools, …)
- If XHTML 2 gets accepted it will draw the web community further into the XML world.
- Much of XHTML 2 works on most existing browsers already.
Less presentation, more structure
Make documents more semantically meaningful; make CSS responsible for the presentation, not HTML.
- Easier to write your documents
- Easier to change your documents
- Easy to change the look of your documents
- Access to professional designs
- CSS gives more presentational possibilities than plain HTML
- Supports single-authoring: write your document once, supply different stylesheets for different devices or purposes
- Your documents are smaller
- Visible on more devices
- Visible to more people
- Separation of concerns: authors write the text, graphic designers design the look
- Simpler HTML, less training
- Cheaper to produce, easier to manage
- Easy to change house style, without changing your documents
- More control over the look of your site
- Reach more people
- Search engines find your content more easily
- Visible on more devices
Reader (Surfer) advantages:
- Faster download (one of the top 4 reasons for liking a site)
- Easier to find information
- You can actually read the information if you are sight-impaired
- Information more accessible
- You can use more devices
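The "less presentation, more structure" idea above can be sketched in a few lines. The class name and styles here are purely illustrative:

```html
<!-- The document carries only structure and meaning... -->
<h1>Annual report</h1>
<p class="summary">Profits doubled this year.</p>
```

```css
/* ...while all presentation lives in the stylesheet, so the look can be
   changed (per device, per house style) without touching the document. */
.summary { font-style: italic; border-left: 3px solid #999; padding-left: 1em; }
```

Swapping in a different stylesheet restyles every document at once: that is the single-authoring advantage in practice.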
The design should be as inclusive as possible. This includes finding a replacement for the unsuccessful longdesc and making forms more accessible. Device independence and increased structure help here too.
It is a World Wide Web.
More device independence
New devices becoming available, such as telephones, PDAs, tablets, printers, televisions and so on mean that it is imperative to have a design that allows you to author once and render in different ways on different devices, rather than authoring new versions of the document for each type of device, or limiting your design to a single type of device. This includes creating a more flexible event handling system to allow for new sorts of events that new devices might generate.
Try to make the language easy to write, and make the resulting documents easy to use. According to research, usability is the second most important property of a website (after good content), so it is important that the technology supports this. This includes:
- observing how people currently write HTML documents, and designing content-models around these needs
- finding a better approach to frames than the current one. Usability experts advise authors not to use frames (http://www.useit.com/alertbox/9612.html); yet frames clearly have a useful functionality. Problems of frames include:
- The [back] button works unintuitively in many cases.
- You cannot bookmark a collection of documents in a frameset.
- If you do a [reload], the result may be different to what you had.
- [page up] and [page down] are often hard to do.
- You can get trapped in a frameset.
- Search engines find HTML pages, not Framed pages, so search results usually give you pages without the navigation context that they were intended to be in.
- Since you can’t content-negotiate, <noframes> markup is necessary for user agents that don’t support frames. Search engines are ‘user agents’ that don’t support frames! But despite that, almost no one produces <noframes> content, and so it ruins web searches (and makes builders of such sites look stupid!)
- There are security problems caused by the fact that it is not visible to the user when different frames come from different sources
More flexibility, future-proofing
As new technologies emerge, it is desirable not to bind documents to one particular technology but to allow flexibility in what can be accepted. For instance:
- HTML binds the document to the scripting language used, so that it is hard or impossible to write a document that works with different scripting languages. Technologies used by XHTML 2, such as XML Events, allow the separation of document content and scripting, so that documents can be made that work on different user agents.
- Fallback mechanisms allow a document to offer several equivalent versions of a resource and let the user agent decide the most appropriate to use, with a final fallback being to markup in the document. This makes documents more fault-tolerant — since if a resource is not available the document is still meaningful — and more accessible.
Achieving functionality through scripting is difficult for the author, restricts the type of user agent you can use to view the document, and impairs interoperability. We have tried to identify current typical usage, such as navigation lists, and collapsing tree structures, and include those usages in markup.
HTML Forms were the foundation of e-commerce. Improving forms covers many of the points above: return XML, more accessible, more usable (such as client-side checking), more device independent, less scripting.
The previous post was a document on XHTML2, sent in error. I noticed that Steven’s document didn’t match our conversation, but I made an honest mistake thinking what he sent in error was what he wanted to use to address the concerns.
So, I’ve left the other post up, but please know that this is the real discussion, and has lots more detail and insight than the other document, which is more of an overview of XHTML2 core principles.
Forgive me, readers, and behold! It’s the real “Why XHTML” overview!
Molly Holzschlag asked me if I’d try and clearly and simply explain why XML parsing is advantageous and why XHTML still is relevant. This was my answer.
Firstly, some background. I sometimes give talks on why books are doomed. I think books are doomed for the same reasons that I used to think that the VCR was doomed, or film cameras were doomed. People present at the talks make the mistake of thinking that because I think books are doomed, I want them to be doomed, and get very cross with me. Very cross. But in fact, I love books, have thousands of them … and think they are doomed.
Similarly, people make the mistake of thinking that because I am the voice behind XHTML, I therefore think that XML is totally perfect and the answer to all the world’s problems, etc.
I don’t think that, but
- I was chartered to create XHTML, and so I did
- XML is not perfect; in fact I think the designers were too print-oriented and failed to anticipate properly its use for applications. As Tim Bray said “You know, the people who invented XML were a bunch of publishing technology geeks, and we really thought we were doing the smart document format for the future. Little did we know that it was going to be used for syndicated news feeds and purchase orders.”
- I have often tried to get some of XML’s worst errors fixed (not always successfully).
- I believe that you should row with the oars you have, and not wish that you had some other oars.
- XML is there, there are loads of tools for it, it is interoperable, and it really does solve some of the world’s problems.
So, parsing. Everyone has grown up with HTML’s lax parsing and got used to it. It is meant to be user friendly. “Grandma’s markup” is what I call it in talks. But there is an underlying problem that is often swept under the carpet: there is a sort of contract between you and the browser; you supply markup, it processes it. Now, if you get the markup wrong, it tries to second-guess what you really meant and fixes it up. But then the contract is not fully honoured.
If the page doesn’t work properly, it is your fault, but you may not know it (especially if you are grandma) and since different browsers fix up in different ways you are forced to try it in every browser to make sure it works properly everywhere. In other words, interoperability gets forced back to being the user’s responsibility. (This is the same for the C programming language by the way, for similar but different reasons.)
Now, if HTML had never had a lax parser, but had always been strict, there wouldn’t be a single syntactically incorrect HTML page on the planet, because everyone uses a ’suck it and see’ approach:
- Write your page
- Look at it in the browser, if there is a problem, fix it, and look again.
- Is it ok? Then I’m done
and thus keeps iterating their page until it (looks) right. If that iteration also included getting the syntax right, no one would have complained. No one complains that compilers report syntax errors, but in the web world there is no feedback that a page has an error or has been fixed up.
It was tried once with programming languages actually. PL/I had the property of being lax, and many programs did something other than what the programmer intended, and the programmer just didn’t know. Luckily other programming languages haven’t followed its example.
For programming languages laxness is a disaster; for HTML pages it is an inconvenience, though with Ajax, it would be better if you really knew that the DOM was what you thought it was.
So the designers said for XML “Let us not make that mistake a second time” and if everyone had stuck to the agreement, it would have worked out fine. But in the web world, as soon as one player doesn’t honour the agreement, you get an arms race, and everyone starts being lax again. So the chance was lost.
But, still, being told that your page is wrong, even if the processor goes on to fix it up for you, is better than not knowing. And I believe that draconian error handling doesn’t have to be as draconian as some people would like us to think it is. I would like to know, without having to go to the extra lengths that I have to nowadays.
So I am a moderate supporter of strict parsing, just as I am with programming languages. I want the browsers to tell me when my pages are wrong, and to fix up other people’s wrong pages, which I have no control over, so I can still see them.
There is one other thing on parsing. The world isn’t only browsers. XML parsing is really easy: it is rather trivial to write an XML parser. HTML parsing is less easy because of all the junk HTML out there that you have to deal with, so if you are going to write a tool to do something with HTML, you have to go to a lot of work to get it right (as I saw from a research project where I watched some people struggling with it).
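The contrast Pemberton describes is easy to see with the two parsers in the Python standard library: the same ill-formed markup is rejected outright by a strict XML parser but quietly accepted by a lax HTML parser. This is a sketch of the general point, not code from the original discussion:

```python
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

bad_markup = "<p><b>bold text</p>"   # the <b> element is never closed

# Strict (XML) parsing: the error is reported immediately.
try:
    ET.fromstring(bad_markup)
    xml_ok = True
except ET.ParseError as e:
    xml_ok = False
    print("XML parser rejected it:", e)

# Lax (HTML) parsing: the parser just carries on, no exception raised.
class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

collector = TagCollector()
collector.feed(bad_markup)
print("HTML parser saw tags:", collector.tags)
```

The XML parser upholds the "contract" by refusing the broken input; the HTML parser silently second-guesses the author, which is exactly the fix-up behaviour that makes every HTML-consuming tool harder to write.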
Let me tell a story. I was once editor-in-chief of a periodical, and we accepted articles in just about any format, because we had filters that transformed the input into the publishing package we used. One of the formats we accepted was HTML, and the filter of course fixed up wrong input as it had to. Once we had published the paper version of the periodical, we would then transform the articles from the publishing package into a website. One of the authors complained that the links in his article on the website weren’t working, and asked me to fix them. The problem turned out that his HTML was incorrect, the input filters were fixing it up, but in a slightly different way to how his browser had been doing it. And I had to put work in to deal with this problem.
Another example was in a publishing pipeline where one of the programs in the pipeline was producing HTML that was being fixed up but in a way that broke the pipeline later on. Our only option was to break open the pipeline, feed the output into a file, edit the file by hand, and feed it into the second part of the pipeline.
Usability is where you try to make people’s lives better by easing their task: make the task quicker, error-free, and enjoyable. By this definition, the HTML attempt to be more usable completely failed me in this case.
The relevance of XHTML also starts with the statement that not everything is a browser. A lot of producers of XHTML use it because they have a long XML-based tool pipeline that spits out XHTML at the end, because it is an XML pipeline. Their databases talk XML, their production line produces and validates XML, and at the end out comes XML, in the form of XHTML. They just want browsers to render their XHTML, since that is what they produce. That is why I believe it is perfectly acceptable to send XHTML to a browser using the media type text/html. All I want is to render the document, and with care there is nothing in XHTML that breaks the HTML processing model.
But there is more. The design of XML is to allow distributed markup design. Each bit of the markup story can be designed by domain experts in that area: graphics experts, maths experts, multi-media experts, forms experts and so on, and there is an architecture that allows these parts to be plugged together.
SVG, MathML, SMIL, XForms etc. are the results of that distributed design, and if anyone else has a niche that they need a markup language for, they are free to do it. It is a truly open process, and there are simple, open, well-defined ways that they can integrate their markup with the existing markups. (One of the problems with the current HTML5 process is that it is being designed as a monolithic lump, by people who are not experts in the areas they need to be experts in.)
So anyway, the reason behind the need for XHTML is that the XML architecture needs the hypertext bit to plug in. Many misunderstood XHTML 1.* as offering next to no new functionality; the new functionality was SVG, SMIL, MathML and so on.
And my poster child for that architecture was Joost (alas no longer available) which combined whole bunches of those technologies to make an extremely functional IP TV player and you just didn’t realise it was actually running in a browser (Mozilla in that case).
Anyway, out on the intranets, there are loads of companies using that architecture to do their work, and then having to do extra work to push the results out to the world’s browsers by making the results monolithic again.
So in brief, XHTML is needed because 1) XML pipelines produce it; 2) there really are people taking advantage of the XML architecture.
Our next meetup:
Date: Wednesday, 1st July 2009
Time: 7:30pm (can come earlier to chit-chat, we’ve booked the room from 7pm)
Place: Seminar Room 4.2 @ School of Economics & Social Sciences, SMU (it’s the building nearer the National Museum)
Speakers and Topics
This meetup we’ll delve into processes: how we make things work.
Website Design and Development Processes
Singapore web standards pioneer Nick Pan (@nickpan) will kick off the session with a presentation on common methodologies used to take a concept through development to the launch of a website. Nick brings with him a huge amount of experience, having traversed the journey to and from code monkey, entrepreneur and project manager.
This presentation will be an open one, so feel free to conjure up your own deck of slides and take the stage if you think your approach to web development is something you’d like to share with the rest of us.
Thanks to a request over Twitter, yours truly will try to give an insight on the learning experience of revamping the Ministry of Education’s website. It’ll be a glimpse of working within large organisations, and hopefully you’ll walk away with a few tips on how to deal with Galactica-sized setups.
See you guys there?
We’ll need to know how many are coming so we can get a bigger room if necessary. So drop a comment if you’re coming, yah?
1st TEDxParis. Photo from Rodrigo Sepùlveda, used under CC license
I was invited Thursday evening to participate in the first edition of TEDx Paris, an independently organized event where talks previously recorded on video at the TED conference are shown and discussed with the crowd. I was eager to debate Kevin Kelly's talk about The next 5000 days of the Web.
In short, Kevin Kelly tries to measure the size of the Internet in a meaningful way, and the closest comparison he comes up with is the human brain: in several ways, the Internet is as complex as a human brain. But the Internet doubles in size every 2 years, he says. Kevin Kelly also considers the Net to be a single, very big distributed machine, on which we're relying more and more every day.
This leads to the question "what will it be like in 2040?". This is a question that I am often asked by reporters. Frankly, I don't know. I tend to answer by quoting Alan Kay: "the best way to predict the future is to invent it". I don't know what the future will be, and having a huge always-on computer that each of us relies on is half exciting and half scary. How can we make sure that the Internet of the future is more exciting than scary? I'm just submitting a couple of ideas, which you can discuss in the comments, and which I'll discuss here on the Standblog:
- Users should be able to invent what one can do with it without having to ask permission
- Users should not be banned from it (unless they decide not to participate, of course)
- Users should control their experience.
I know this sounds very vague for now, but this is just the beginning of the conversation. Mitchell Baker and Mark Surman are also discussing this on their respective blogs, and they are ahead of me. Go read them! (You could also read the Mozilla Manifesto.)
In a more concrete way, I think that Open Source / Free Software is the way to go, and one should be able to host one's own instances of the services one uses. In this regard, projects like Weave or Laconi.ca are – in my opinion – the way to go.
My colleague Gen Kanai and John Lilly have pointed me to an interesting article on Wired: The New Socialism: Global Collectivist Society Is Coming Online.
Of course, the author is not really using the word socialism in the way we would use it to refer to Eastern Europe 30 years ago, and I'm not sure that resorting to such a loaded word really helps start a discussion, because we have to clarify so many things before the conversation can begin. However, there is indeed material for an interesting discussion:
We're (...) applying digital socialism to a growing list of wishes—and occasionally to problems that the free market couldn't solve—to see if it works. So far, the results have been startling. At nearly every turn, the power of sharing, cooperation, collaboration, openness, free pricing, and transparency has proven to be more practical than we capitalists thought possible. Each time we try it, we find that the power of the new socialism is bigger than we imagined.
There are a couple of key differences between Eastern European socialism and this new collectivist society.
- The old socialism was a story used by the elite to dominate the people. The new collectivism, on the other hand, is something done on a daily basis by the people, without any authority trying to impose it, and without necessarily giving it a name.
- The old socialism took place in the real world, ruled by the economy of things, while the new collectivism is taking place online, ruled by the economy of ideas. This makes a huge difference, summed up by this sentence:
In the economy of things, sharing means dividing. In the economy of ideas, sharing means multiplying.
In short, this new digital collectivism may work where the old socialism failed, just because in the online world it's much easier to be generous and give things away as you're not deprived of them.
edit: Mozilla lives in this world where sharing means multiplying. When you understand this, you realize that the utopia of what we do (building software given away for free) suddenly makes a lot more sense.
One of the many cool things that Firefox 3.5 will bring is Open Video. What is it? It's native video in the browser, using the HTML 5 video element combined with the non-proprietary Ogg Theora codec. This means that video can now become a first-class citizen on the Web (it's native, so you don't have to resort to an external plug-in), and thanks to Ogg Theora, you can play it using free software, without paying royalties for a patent-encumbered format.
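For the curious, embedding an Ogg Theora file natively looks something like this (the file URL here is purely illustrative):

```html
<!-- Native HTML 5 video: no plug-in required (the .ogv URL is made up) -->
<video src="http://example.com/clip.ogv" width="480" height="270" controls>
  <!-- Fallback shown by browsers without native video support -->
  Your browser can't play this natively;
  <a href="http://example.com/clip.ogv">download the video</a> instead.
</video>
```

With the controls attribute present, the browser provides its own playback controls.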
This is very cool news, but the skeptics will certainly complain that we're facing a chicken and egg situation: Browser vendors won't put Ogg Theora in their products until there is significant content in this format, while video publishers won't use the format until enough browsers support it.
Well, we're doing our part at Mozilla, and thanks to the upcoming Firefox 3.5 release, we should soon see close to 300 million people with an Ogg Theora-enabled browser.
But what about content? That's the real scoop for today! Dailymotion.com is publishing 300,000 of its most popular videos in Ogg Theora, using the HTML 5 video element. This comes on top of very cool Websites such as Wikipedia and the Internet Archive, which are doing similar things.
Of course, Open Video is not yet mainstream, but we have the beginning of an egg, and a young chicken. The future is brighter than ever for Open Video! I'd like to personally thank Sébastien Adgnot, Web developer at Dailymotion, for calling me after he had read an article on my blog about Open Video; this is how it all got started! Also a great thanks to the Dailymotion exec team, including Sylvain Brosset, for supporting an idea that looked a bit funky at first sight. Hat tip to Chris Blizzard (just because I can) and to Paul Rouget for helping with most of the tech stuff behind the scenes...
A few links if you want to learn more about this:
Dear Jon Stewart,
I am an adoring fan who has followed your career since you first started doing talk shows on the Comedy Channel some 15 or more years ago.
You are a funny, well-educated, articulate man whom I consider to be a true American Hero.
All the fangirl worship aside, can you use your political influence to improve The Daily Show Web site? While HTML might not be your forte, oh holy moly, this code is so filled with bad, bad things that it requires immediate diplomatic assistance.
And now, the code
Now, dear readers, what do you think of this lovely embed video code? I mean, really.
<table style=’font:11px arial; color:#333; background-color:#f5f5f5′ cellpadding=’0′ cellspacing=’0′ width=’360′ height=’353′><tbody><tr style=’background-color:#e5e5e5′ valign=’middle’><td style=’padding:2px 1px 0px 5px;’><a target=’_blank’ style=’color:#333; text-decoration:none; font-weight:bold;’ href=’http://www.thedailyshow.com/’>The Daily Show With Jon Stewart</a></td><td style=’padding:2px 5px 0px 5px; text-align:right; font-weight:bold;’>M – Th 11p / 10c</td></tr><tr style=’height:14px;’ valign=’middle’><td style=’padding:2px 1px 0px 5px;’ colspan=’2′><a target=’_blank’ style=’color:#333; text-decoration:none; font-weight:bold;’ href=’http://www.thedailyshow.com/video/index.jhtml?videoId=225113&title=the-stockholm-syndrome’>The Stockholm Syndrome</a></td></tr><tr style=’height:14px; background-color:#353535′ valign=’middle’><td colspan=’2′ style=’padding:2px 5px 0px 5px; width:360px; overflow:hidden; text-align:right’><a target=’_blank’ style=’color:#96deff; text-decoration:none; font-weight:bold;’ href=’http://www.thedailyshow.com/’>thedailyshow.com</a></td></tr><tr valign=’middle’><td style=’padding:0px;’ colspan=’2′><embed style=’display:block’ src=’http://media.mtvnservices.com/mgid:cms:item:comedycentral.com:225113′ width=’360′ height=’301′ type=’application/x-shockwave-flash’ wmode=’window’ allowFullscreen=’true’ flashvars=’autoPlay=false’ allowscriptaccess=’always’ allownetworking=’all’ bgcolor=’#000000′></embed></td></tr><tr style=’height:18px;’ valign=’middle’><td style=’padding:0px;’ colspan=’2′><table style=’margin:0px; text-align:center’ cellpadding=’0′ cellspacing=’0′ width=’100%’ height=’100%’><tr valign=’middle’><td style=’padding:3px; width:33%;’><a target=’_blank’ style=’font:10px arial; color:#333; text-decoration:none;’ href=’http://www.thedailyshow.com/full-episodes/index.jhtml’>Daily Show<br/> Full Episodes</a></td><td style=’padding:3px; width:33%;’><a target=’_blank’ style=’font:10px arial; color:#333; text-decoration:none;’ 
href=’http://www.thedailyshow.com/tagSearchResults.jhtml?term=Clusterf%23%40k+to+the+Poor+House’>Economic Crisis</a></td><td style=’padding:3px; width:33%;’><a target=’_blank’ style=’font:10px arial; color:#333; text-decoration:none;’ href=’http://www.thedailyshow.com/tagSearchResults.jhtml?term=Republicans’>Political Humor</a></td></tr></table></td></tr></tbody></table>
Did you enjoy that as much as I did? Knew you would.
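For contrast, here is a rough sketch of what a saner version of that snippet could look like, with all the presentation moved into an external stylesheet (the class name is invented; only the links and the media URL come from the original):

```html
<!-- All styling hangs off one class in an external stylesheet,
     instead of being repeated inline on every table cell -->
<div class="tds-embed">
  <p><a href="http://www.thedailyshow.com/">The Daily Show With Jon Stewart</a></p>
  <object type="application/x-shockwave-flash"
          data="http://media.mtvnservices.com/mgid:cms:item:comedycentral.com:225113"
          width="360" height="301">
    <param name="allowFullScreen" value="true"/>
  </object>
</div>
```

Same video, a fraction of the markup, and no layout tables in sight.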
As you may have seen, Mozilla Labs has recently announced Jetpack:
Jetpack is an API for allowing you to write Firefox add-ons using the web technologies you already know.
In short, the goal of Jetpack is to enable Web developers to create extensions for Firefox. There are already roughly 8,000 extension developers who have built 12,000 add-ons. It's a lot, but it could be a lot more if we could find ways to enable people who build Websites to create more extensions.
What does Jetpack have to do with Generativity and generative technologies? Well, the Web is already a generative technology. When it was invented, Tim Berners-Lee and his team did not envision what it would become. People who have invented innovative Web sites and services did not have to ask them permission to invent them. Actually, the Web was invented on top of the Internet and the IP protocol. Tim Berners-Lee did not seek permission from those who invented the Internet nor those who deployed it (ISPs and network operators). This is exactly what makes the Net and the Web generative technologies: people could invent new things on top of them without having to ask permission.
Firefox add-ons are of the same nature: if you want to have a different browser, you don't have to ask Mozilla to build a specific version of Firefox for you. You can build your own add-on. Now building an add-on is quite easy compared to contributing core Gecko code, but it can be made easier. That's what Jetpack is aiming at. In short, enabling more people to hack.
In my recent article about generativity, I quoted Jonathan Zittrain about the 5 things that make technologies generative. #3 was ease of mastery, and this is exactly where Jetpack is good. It's actually lowering the barrier to entry.
It's also acting on item #5, transferability, as explained by the Jetpack announcement:
from a user perspective, Jetpack will allow new features to be added to the browser without a restart or compatibility issues, resulting in little to no disruption to the online experience.
By making the user experience better (no restart needed, fewer compatibility issues), Jetpack is making Firefox better and more generative, because innovations built with it will be more transferable.
Let's also talk about the 3 remaining items listed by Zittrain:
- 1 : leverage: Jetpack leverages the existing Web technologies (HTML, CSS and JS+DOM) and applications (via their APIs)
- 2 : adaptability: add-ons are already used in many different fields, and I expect Jetpack extensions to be used in just the same way
- 4 : accessibility: to use these technologies, all you need is an Internet-connected computer running Firefox (on Windows, OS X or Linux), which is a free download.
To sum things up, Jetpack is yet another demonstration of what Mozilla does to make the Web Browser even more generative.
I've been travelling a fair bit. Some notes and thoughts...
There is a key concept about what we do at Mozilla, which is kind of familiar to most of us. It's the notion of Generativity. I know, it's not even a word! However, it looks like this concept is not so well understood by people who are not spending most of their time building the Web or a browser or similar things. So I figured I should spend some time explaining what it is about. Then I'll blog on why Generativity is central to the Mozilla project and the Mozilla Manifesto.
Let's start by asking Wikipedia about Generativity. Here is the definition (I have removed the part about epistemology to avoid unnecessary headaches and added emphasis where needed):
Generativity describes in broad terms the ability of a self-contained system to provide an independent ability to create, generate or produce content without any input from the originators of the system. (...) Technological generativity generally describes the quality of the Internet and modern computers that allows people unrelated to the creation and operation of either to produce content in the form of applications and in the case of the Internet, blogs. Jonathan Zittrain has expressed concern that many recent technologies such as DVR and GPS have moved away from the generative, two-way aspects of the personal computer and the Internet.
Generativity is a system's capacity to produce unanticipated change through unfiltered contributions from broad and varied audiences. Terms like "Openness" and "free" and "commons" evoke elements of it, but they do not fully capture its meaning, and they sometimes obscure it.
The author then describes the five principal factors that make something generative:
- Leverage: how extensively a system or a technology leverages a set of possible tasks
- Adaptability: how well it can be adapted to a range of tasks
- Ease of mastery: how easily new contributors can master it
- Accessibility: how accessible it is to those ready and able to build on it
- Transferability: how transferable any changes are to others – including (and perhaps especially) non-experts.
I see the combination of PCs and the Internet as a wonderfully generative tool. A PC connected to the Internet is amazingly leverage-able, adaptable, quite easy to master, affordable, and the innovations produced with it can be easily transferred to other people. One could say that the connected PC is the ultimate generative technology: it enables people to invent new stuff, to do things that no-one had imagined before. Remember 20 years ago? The Internet was still used by scientists and the Web was still to be invented. Now let's think about things that were not possible at the time (I am sure I forgot tons of examples, of course):
- Publishing your own magazine. It's now called a blog. There are hundreds of millions of them today.
- Instant access for free to an amazing encyclopedia you can update with your own knowledge? It's now called Wikipedia. The English version is approaching 3 million articles. It exists in 265 different languages for a grand total of 13 million articles...
- Accessing maps of the world instantly, along with a satellite view? It's called Google Maps.
- Instantly accessing a fantastic wealth of information? It's called a search engine.
- Reuniting with high-school friends? Use social networks.
- Sharing pictures with friends, family and the world? Flickr.com and cohorts of similar sites. Videos? YouTube and Dailymotion. Short messages? Twitter and Identi.ca.
- Working together as a community with people from all over the world to produce software to access all of this? It's called Open Source / Free Software. (Or Mozilla.) Distributing these software products to ordinary people who enjoy them? Firefox now has 270 million active users in the world.
I hope that I have succeeded in explaining what Generativity is. In future posts, I'll discuss its pros and cons, along with its relationship with Mozilla. Stay tuned! In the meantime, Zittrain's book is available for download, and you can read its review by Cory Doctorow.
You’ve heard it’s coming in 2012. Or maybe 2022. It’s certainly not ready yet, but some parts are already in browsers now so for the standards-savvy developers, the future is worth investigating today. Ian “Hixie” Hickson, editor of the HTML 5 specification, hopes that the spec will go to Last Call Working Draft in October this year.
Accessibility Task Force member Bruce Lawson interviews Hixie on how the specification for the next generation of the Web's markup language is shaping up. Disclosure of affiliations: both work for browser vendors, Bruce for Opera and Hixie for Google (previously, Opera and Netscape).
The spec now known as HTML 5 began with a "guerilla" group called WHATWG. How and why did the WHATWG begin?
The short answer is the W3C told us to.
The long answer: Back in 2003, when XForms was going through its final stages (the "Proposed Recommendation" vote stage), the browser vendors were concerned that it wouldn't take off on the Web without being made a part of HTML, and out of that big discussion (which unfortunately is mostly hidden behind the W3C's confidentiality walls) came a proof of concept showing that it was possible to take some of XForms' ideas and put them into HTML 4. We originally called it "XForms Basic", and later renamed it "WebForms 2.0". This formed the basis of what is now HTML 5.
In 2004, the W3C had a workshop, the "The W3C Workshop on Web Applications and Compound Documents", where we (the browser vendors) argued that it was imperative that HTML be extended in a backwards-compatible way. It was a turning point in the W3C’s history—you could tell because at one point RedHat, Sun, and Microsoft, arch-rivals all, actually agreed on something, and that never happens.
The outcome of that workshop was that the W3C concluded that HTML was still dead, as had been decided in a workshop in 1998, and that if we wanted to do something like HTML 5, we should go elsewhere. So we announced a mailing list, and did it there.
At the time I was working for Opera Software, but "we" in this case was Opera and Mozilla acting together (with Apple cheering us from the sidelines).
How did you become editor?
I was at the right place at the right time and everyone else was too busy.
How do you personally go about editing the spec and incorporating feedback? What are your processes?
This has varied over the years, as we’ve gone from a nascent organisation with a few dozen people to a well-established project with a mailing list with 900+ subscribers. Mostly it’s all down to managing e-mail. When someone writes feedback on the spec, whether by sending an e-mail to one of the mailing lists I’m on, or by blogging somewhere, or twittering, I log their feedback in a folder on my IMAP server. Feedback gets categorised into either feedback I can work on right away, or feedback that I can’t deal with yet for whatever reason. An example of the latter would be requests relating to mutation events, because I’m waiting for DOM3 Events to update how mutation events work.
Then, I just go through all the feedback I have, e-mail by e-mail, more or less in the order that I received them, sending replies and fixing the spec to address the issues that were raised.
This has some disadvantages, for example there’s a big delay in between when someone spots an error and when I fix it. It also has some really important advantages. If I respond to feedback on something I wrote straight after writing it, I sometimes find that I have an attachment to that section, so if someone suggests a total replacement, I tend to not like their idea. But if I have a delay, I find my attachment has gone away, and I’m eager to replace my old stupid idea with their better one. (Assuming it’s better, anyway!)
What’s the hardest thing to do?
There are a few things that are hard. One is saying "no" to people who have clearly spent the time to come up with a good idea. The sad truth is that I reject almost everything that I and anyone else thinks of, because if I didn’t, the spec would be a thousand times more bloated than it is now. We get proposals for all kinds of things, and we have to have a very high bar for what goes in. There’s also the danger that if we add too many things to the spec too quickly, the browser vendors will each implement their own bit and it’ll be a big mess that won’t help Web authors.
So I have to make judgements about what is worth adding and what isn’t, and that’s hard. I’ve upset a lot of people by rejecting their ideas, because they take it personally. On the other hand, some of the most productive members of the community now are people who’ve had many of their ideas rejected, but they stuck around long enough to see a few of their ideas make it in. The best way to get an idea into the spec is to find something in the spec that’s just clearly wrong, which is something that a lot of the most active people do a lot, too!
Something else that’s hard is making up new features. The bulk of HTML 5 is actually just defining how browsers already do things, which, although complicated and sometimes unbelievably arcane, is, at the end of the day, pretty easy to spec: you test the browsers, and you write what they do. Rinse, repeat, until the spec covers every possible case.
Making up new features, though, means actually thinking about what should happen, what is the most understandable solution, figuring out how things should fit together, and so on. It’s often tempting to make something that is theoretically neat, but which doesn’t fit in with the rest of the language, too. After all, that’s where all this came from—we don’t want to create a new XForms, a really well-designed technology that doesn’t fit into the way people write pages.
What's in the spec?
You've said that HTML 5 is in "direct competition with other technologies intended for applications deployed over the Web, in particular Flash and Silverlight". Why is it so important to compete, and isn't it a lost cause, given that those technologies are already out there while HTML 5 is not yet complete?
HTML 4 is also in direct competition with proprietary technologies, and it’s winning, hands-down. HTML5 is just continuing the battle, because if we don’t keep up, then the proprietary technologies will gain ground.
What are the main philosophies of HTML 5?
Backwards-compatibility, incremental baby steps, defining error handling. Those are the main philosophies.
What else did WHATWG try to achieve with this new iteration of HTML?
We started from trying to put features from XForms into HTML 4, and we quickly also took the opportunity to fix some of the things in HTML 4 that were either too vague or disagreed with reality (that is, where the browsers all did one thing but the spec said another). It turns out that HTML 4 is so vague that this is a pretty big task; it even involved defining the whole HTML parsing model, including error handling, which is a huge job (it took me the better part of a month to write the first draft, and we were tweaking it for about a year before it became more or less stable).
Something else we've tried to do is make things simpler. We've simplified the syntax (e.g. the rules about what can be quoted, and what strings are valid). We've also added convenience features: for example, authors can now write autofocus="" to focus a form field when the page loads, instead of using control.focus(), which allows the browser to do clever things like not actually focus the control if the user is already typing elsewhere.
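The difference in practice, sketched in markup (a minimal illustration):

```html
<!-- Declarative: the browser may decline to move focus, e.g. if the
     user is already typing elsewhere -->
<input type="text" name="q" autofocus="">

<!-- The old imperative way: steals focus unconditionally -->
<input type="text" name="q" id="q">
<script>document.getElementById("q").focus();</script>
```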
Does HTML 5 legitimise tag soup? Does "paving the cowpaths" perpetuate bad markup?
No, HTML 5 actually makes the rules for markup even stricter than HTML 4 in many ways, both for authors (the rules are simpler, but stricter, than HTML 4’s) and for implementers (gone are the days where they can just do whatever they want when handling parse errors, now every browser has to act the same).
Hopefully, we’ve managed to make the rules on what is valid syntax more understandable, which should help with getting more good markup. We’ve also made it possible to write clearer validators, so I have high hopes.
I didn't know about a message about separating behaviour and structure; I must have missed that memo! HTML 5 takes a pretty hard line on separating style and presentation from structure and semantics; there are no more font tags. Separating the logic and behaviour from the structure and semantics of an HTML document isn't as important, generally, as far as I can tell.
The main advantage of defining the HTML DOM APIs and the HTML elements in the same specification is that we don't let stuff fall through the cracks. In practice, browsers implement the HTML elements as DOM nodes; there's no difference. When we separate the two in the specs, therefore, we introduce a conceptual gap where there isn't one in reality. The DOM2 HTML spec, for instance, doesn't say what happens when you change the type attribute of a checkbox on the fly, and the HTML 4 spec doesn't mention that changing attributes on the fly is possible, so in the HTML 4 / DOM2 HTML era, there's a big hole there. In HTML 5, this is all defined together, so we can tighten things up and make sure there are no gaps.
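The kind of on-the-fly change in question looks like this (a minimal, hypothetical illustration):

```html
<input id="field" type="text" value="hello">
<script>
  // DOM2 HTML never defined what this does to the element's state;
  // HTML 5 does, because elements and their DOM APIs live in one spec
  document.getElementById("field").type = "checkbox";
</script>
```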
Why no native support for microformats/ RDFa in HTML 5?
Microformats are natively supported in HTML 5, just as they were in HTML 4, because Microformats use the built-in extension mechanisms of HTML.
We considered RDFa long and hard (in fact this is an issue that’s a hot topic right now), but at the end of the day, while some people really like it, I don’t think it strikes the right balance between power and ease of authoring. For example, it uses namespaces and prefixes, which by and large confuse authors to no end. Just recently though I proposed something of a compromise which takes some of RDFa’s better ideas and puts them into HTML 5, so hopefully that will take care of the main needs that caused people to invent RDFa. We’ll see.
Do the browser makers have too much influence on the spec?
The reality is that the browser vendors have the ultimate veto on everything in the spec, since if they don’t implement it, the spec is nothing but a work of fiction. So they have a lot of influence—I don’t want to be writing fiction, I want to be writing a spec that documents the actual behaviour of browsers.
Whether that’s too much, I don’t know. Does gravity have too much influence on objects on earth? It’s just the way it is.
One of the chairs of the W3C working group is a Microsoft employee. Is that giving too much power to one browser vendor, or a good thing, given that Microsoft’s browsers still dominate and their buy-in on any spec is therefore essential?
Personally I would like Microsoft to get more involved with HTML 5. They’ve sent very little feedback over the years, far less than the other browser vendors. Even when asking them about their opinion on features they are implementing I rarely get any feedback. It’s very sad. If I e-mail them a question about how I can best help them, I usually get no reply; at best I’ll get a promise that they’ll get back to me, but that’s it.
There has been a lot of spirited debate (ahem) about accessibility in the development of HTML 5. How does the spec deal with the requirements of people with disabilities?
Universal access—the requirement that anyone be able to use information on the Web—is a fundamental cornerstone of HTML’s design, just like security, privacy, and so on. In general, we try to design features so that they Just Work for everyone, regardless of how you are accessing the Web. For example, in HTML 5 we’ve added new input controls like calendars. These will Just Work with screen readers once browsers support them, authors don’t have to do anything special.
Does your personal support of humanitarian eugenics affect your opinion of giving extra "help" for people with disabilities?
You’ve been reading too much of our pet troll’s blog! ;-)
[Bruce's note: this refers to Mr Last Week, mysterious author of the blog Last Week in HTML 5, which lampoons the HTML 5 Working Group in very funny, frequently foul-mouthed manner.]
People with disabilities are just as important to me in my work on HTML 5 as is anyone else.
You wrote to ask screenreader vendors to participate in the specification process. Did they ever reply?
A couple did, but only to say they had little time for the standards process, which was quite disappointing. Since then, though, Apple has ramped up their efforts on their built-in Mac OS X screen reader software, and we do get a lot of feedback from Apple. So at least one screen reader vendor is actively involved.
When there's a built-in way to do something, using that is the simplest and most reliable solution. So for example, if you want a checkbox, using the input element with its type attribute set to checkbox is preferable to using divs, scripting your own controls, and so forth.
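Side by side, the two approaches look like this (a minimal sketch; a real scripted version would need considerably more work):

```html
<!-- Built-in control: keyboard access and assistive-technology support
     come for free -->
<label><input type="checkbox" name="subscribe"> Subscribe</label>

<!-- Scripted stand-in: needs ARIA attributes, key handling and focus
     management before it behaves comparably -->
<div role="checkbox" aria-checked="false" tabindex="0">Subscribe</div>
```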
Can we expect ARIA-specific constructs which have no equivalent in HTML 5, such as live regions, to be allowed under the rules of HTML 5 so it will all validate?
Yes, the plan is to make sure ARIA and HTML5 work well together. Right now I’m waiting for ARIA to be complete (there are a number of last call comments that they haven’t yet replied to), and for the ARIA implementation rules to be clearer (it’s not yet obvious as I understand it what should happen when ARIA says a checkbox is a radio button, for instance). Once that is cleared up, I expect HTML 5 will give a list of conformance criteria saying where ARIA attributes can be used and saying how they should be implemented in browsers.
Why, when, how, who?
Why would we content authors want to move to HTML 5? What’s in it for us?
Today is probably too early to start using HTML 5.
Long term, content authors will find a variety of new features in HTML 5. We have a bunch of new structural elements, like footer. We have new elements for embedded media, like audio. We have new input controls, like the calendars I mentioned, but also fields for URLs, e-mail addresses, telephone numbers, and color selection. We have control over autocomplete values in text fields, as well as field validation so that you can say which fields are required. We have context menus, pushState() so you can update the URL in Ajax applications, and offline application cache manifests so that your users can take your applications offline. The list goes on.
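A few of those features sketched in markup (the names and URLs here are illustrative, and browser support was still partial at the time of this interview):

```html
<form action="/signup" method="post">
  <!-- New input types; older browsers fall back to plain text fields -->
  <input type="email" name="address" required>  <!-- validated as e-mail -->
  <input type="url" name="homepage">
  <input type="date" name="birthday">           <!-- the calendar control -->
  <input type="submit" value="Sign up">
</form>
<script>
  // Update the address bar from script, without a page load
  history.pushState({step: 2}, "Step 2", "/signup/step2");
</script>
```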
There are also the benefits that come from using an HTML 5 validator. HTML 5 is much more precise about many things than HTML 4, so the validators will be more useful in catching real errors. The embed element is no longer invalid.
Are there advantages for end-users, too?
A more powerful HTML means more powerful Web applications. Just like XMLHttpRequest resulted in more interactive apps, HTML 5 will result in a richer and more consistently reliable experience. I hope!
What's the timeline? When can we start using HTML 5?
The plan is to have the spec mostly finished by October 2009. A lot depends on the browser vendors, though. I don't know when things will be implemented widely enough that authors can use them reliably everywhere. Some features, like video, are getting implemented in most browsers as we speak. Others will take longer.
What can standards-savvy WaSP readers do to get involved with the specification process?
There are a number of ways of taking part. What we need most of all these days is technical review of the specification text, calling out places where I screwed up, where the spec defines something that’s not easy to use for Web authors, where the spec contradicts itself, typos, spelling mistakes, grammar errors, errors in examples, you name it.
I posted a blog entry recently detailing how people can send feedback. You can join the W3C HTML Working Group or the WHATWG. There are also lots of other things people can do—write demos, write tutorials, edit other related specs, write articles introducing parts of the spec on the blog, write test cases… Anyone who wants to help out but doesn’t know where to start should drop me an e-mail at firstname.lastname@example.org.
Will there ever be an HTML 6, or is it a convenient fiction to park out-of-scope discussions?
I’m sure there will be an HTML 6, and 7, and 8, and probably many more, until someone comes up with something so radically better that we stop evolving the Web as we know it.
I expect work on HTML 6 will start even before HTML 5 is completely done, in fact. Putting the finishing touches on HTML 5 will be a long and tedious job involving writing a massive test suite. HTML 4 never had a serious test suite created (it was too vague as a specification to really be properly tested), so we have to start from scratch with HTML 5. The HTML 6 team will at least be able to build on what we’ve done with HTML 5, I’m jealous!
Actually if it was up to me, after HTML 5 I would probably transition HTML to an incremental model. Once we have a basic spec that is well-defined and has been proven, instead of releasing a frozen snapshot every few years, I’d prefer a model where we can slowly evolve the language, call it "HTML Current" or something, without having to worry about versioning it. To some extent that’s what we’re doing with HTML 5, but I think formalising it would really help.
Having versions of specs doesn’t make sense when you have multiple implementations that are all evolving as well. No browser is ever going to be exactly HTML 5, they’ll all be subsets or supersets. So why bother with versioning the spec?
It’s a very unusual idea in the standards world, so I don’t expect us to do this. But I do think it’d be the best way forward.
Would you like to be the HTML 6 editor?
Too early to tell! It’s been a lot of fun working on HTML 5, it’s quite challenging and you have to deal with all kinds of issues from the deeply technical to the highly political. I might want a change of pace when we’re done with HTML 5, though.
What’s your fave feature that didn’t get into HTML 5 that you’d put into HTML 6?
In-window modal dialogs or dialog box—the kind of prompt you get when the computer asks you a question and won’t let you do anything else until you answer the question. For instance, the window that comes up when you say "Save As…" is usually a modal dialog.
Right now people fake it with divs and complicated styles and script. It would be neat to just be able to say "make this section a modal dialog". Like showModalDialog(), but within the page instead of opening a new window with a new page.
I’d add it to HTML 5, but there are so many new features already that we need to wait for the browsers to catch up.
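The div-based fake Hixie mentions usually looks something like this (a bare-bones sketch; real implementations also trap keyboard focus inside the dialog):

```html
<!-- Overlay that visually blocks the rest of the page -->
<div style="position:fixed; top:0; left:0; width:100%; height:100%;
            background:rgba(0,0,0,0.5);">
  <!-- The fake 'dialog' itself -->
  <div role="dialog" style="width:300px; margin:10% auto;
                            background:#fff; padding:1em;">
    <p>Save your changes?</p>
    <button>Save</button> <button>Cancel</button>
  </div>
</div>
```

A native construct would let the browser handle the blocking, focus and accessibility semantics itself.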
Finally, is it true that you and Mr Last Week are the same person, like Edward Norton and Brad Pitt in "Fight Club"?
Oh, no. Our pet troll is a phenomenon all to himself.
Thanks for your time.
Work is well and truly underway to get WaSP InterAct translated into multiple languages. With an army of over thirty volunteers working in eighteen languages, we hope to get localized versions of the Curriculum into schools, colleges and universities near you soon.
It’s a huge project and we’re looking for as many volunteers as possible. If you’d like to help translate or help with localizing content for your local education system email the ILG leads and we’ll put you in touch with other volunteers.
Full details about how to get involved can be found in the Internationalizing and Translating InterAct forum.
Thank you to everyone who’s involved so far!
We took some time at the WebSG meetup to discuss possible avenues for improving the government's online efforts. While Singapore has won accolades for its drive for eGovernment, participants at the meetup highlighted a few steps the Singapore government could take to improve its services. I will be passing these suggestions on to the relevant folks in the government.