It’s the beginning of the month. The couple stands in their empty living room, expectant. The movers have finally arrived with their furniture.

In a sort of lumbering choreography, two guys bring in a sofa. One turns his head to the couple and raises an eyebrow. “Put it against that wall,” says the wife.
They do, and there’s a pause. The husband says, pointing to the opposite wall, “Sorry, if you don’t mind, could you put it over there instead?”
They do, but almost as soon as it touches the floor the wife pipes up, gesturing toward a third wall. “No, that’s not working…let’s try it over there.”
The sofa’s designer, fabric manufacturer, master furniture craftsman, and even the furniture store associate who sold them the piece are long removed from the sofa’s eventual integration into this household. Their visions and intentions for the sofa have long since evaporated. What’s left are the couple, who now own the article and must quickly figure out how to maximize the sofa’s effect in this new environment, and the poor movers, whose backs will be a bit achier at the end of this prototyping exercise.
That’s Perfect! Now Get Rid Of It.
The concept of rapid prototyping in software is not a new one. Known also as “throwaway prototyping”, it describes a modelling method that assumes the first several drafts of an application may be functional, and may allow for user testing, but will be discarded before the final, delivered version. The benefit of throwaway prototyping is the ability to quickly create something that the user and the developer alike can test, which leads to faster improvements in architecture and interface design.
This is particularly important in the era of mobile computing, where the user’s experience is shared across many more form factors than were originally conceived by the application’s inventor, and where the expected pace of new feature release is vastly accelerated.
Development of stand-alone computer applications (such as Microsoft Word or Adobe Photoshop) starts with some manner of a software development kit, whether it’s any of Adobe’s SDKs, the Apple SDK, or the Windows SDK. Historically, the benefit of a stand-alone application (as opposed to a web site) is that the application is purpose-built; it is more efficient in its data handling and can access and leverage functional components of the host machine unique to the application’s purpose.
The trade-off to this enclosed ecosystem is compatibility: each operating system needs its own native version of the application. The kicker – in this new era of mobile computing – is that most of these desktop applications cannot run on mobile devices at all.
Today, however, the improbable confluence of four phenomena – pervasive high-speed connectivity; data architectures designed for lightness and speed; the increasing homogeneity of features in today’s mobile devices; and adoption of HTML5 as a new web browser standard – combines to deliver a new, better way to bring early functional versions of web applications to life faster and less expensively than before. As many already know, HTML5 is a standard markup language that is enhanced to provide the ability to manage the presentation layer as well as leverage some of the functional components of the devices where it is running. In this way, HTML5 represents a true (if still imperfect) “write-once-run-anywhere” platform. It transcends the boundaries of operating system compatibility by leaving that to the browser itself, which also means that sites coded in HTML4 will continue to display properly in HTML5-enabled browsers.
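A minimal sketch of what that looks like in practice – the new semantic elements and the doctype come from the HTML5 specification, while the page content here is invented purely for illustration:

```html
<!DOCTYPE html> <!-- this short line is the entire HTML5 doctype declaration -->
<html>
<body>
  <!-- New HTML5 semantic elements describe the page's structure -->
  <header><h1>Our Prototype</h1></header>
  <article>
    <!-- Legacy HTML4 markup inside renders exactly as before -->
    <p>This paragraph, and the <b>tags</b> within it, need no changes.</p>
  </article>
  <footer><p>About us</p></footer>
</body>
</html>
```

Browsers that predate HTML5 generally treat the unfamiliar elements as generic containers, which is a large part of why existing pages keep working.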
A quick search engine trawl will yield a lot of information about HTML5, which we will skip for purposes of this article. Neither should this quick overview be viewed as a call to arms against standalone applications. Rather, I view HTML5 as an important new platform that aids in the web and mobile application development process. While not exactly a panacea for all web development ills, HTML5 is an opportunity for the teams working on the development of all applications to benefit from an earlier, low-cost, high-impact proof-of-concept phase, which helps with concept sell-through and decision validation.
A Camera Is A Camera Is A Camera
When developing an interactive project, we are focused on what the end user will need to do most. Increasingly, we must also account for the end user likely not being in front of a computer most – if not all – of the time. This necessarily changes the way we approach creating the best online experience.
The value of HTML5 as a new prototyping toolkit is that it does several things well that are incredibly useful for developers and designers alike:
- Semantic markup: across all devices, common page structures and device features have standard names, unifying their definitions in the code.
- Leveraging native device functions: as an extension of the semantic markup bullet above, the markup for reaching a device’s built-in capabilities is streamlined thanks to that consistency of reference in the code.
- Works on all (compatible) browsers: the current leaders in mobile web browsers are largely uniform in their adoption of HTML5. Microsoft – long a web browser compatibility outlier – is not yet a significant factor in mobile computing. Avoiding Microsoft is not a benefit in itself, except when it comes to uniformity of web browser behavior.
- No SDK or compiling needed: build and run in a standard text editor.
- Familiar to current web developers: fully backwards compatible; your HTML 4 code won’t break.
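The first two bullets can be made concrete with a short sketch. The file-input `capture` attribute and the geolocation API are both part of the HTML5-era web platform, though support varies by device; the snippet is illustrative rather than production-ready:

```html
<!-- A camera is a camera: on many phones this input opens the camera directly -->
<input type="file" accept="image/*" capture>

<!-- Native device access from plain markup and script: the browser asks the
     user for permission, then hands back the device's location -->
<script>
  if (navigator.geolocation) {
    navigator.geolocation.getCurrentPosition(function (position) {
      console.log(position.coords.latitude, position.coords.longitude);
    });
  }
</script>
```

The same markup and script run unmodified on any HTML5-capable phone or desktop browser; there is nothing to compile and no per-platform SDK involved.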
Don’t Look Back In Anger
Like everything in interactive development and publishing, there is grumbling from the developer audience. Generally speaking, behaviors or media deliveries that are easy to knock out in HTML5 (managing video playback, for example) are hard to make consistent for non-HTML5-compliant browsers. For this reason alone, the rosy, perfect world of HTML5 harmony does not – and cannot – exist.
When considering HTML5 as the central platform for a large-scale interactive release, it is worth remembering that the world simply is not yet ready for it: older browser vintages have not been flushed out of use yet, video formats aren’t yet fully standardized, and there are still many broadly accepted toolkits that execute Flash-like operations better than HTML5…like Flash, for instance.
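Video illustrates both sides of the argument. A sketch of the fallback chain HTML5 makes possible – the filenames here are placeholders:

```html
<video controls width="480">
  <!-- The browser plays the first source format it can decode -->
  <source src="demo.mp4" type="video/mp4">
  <source src="demo.webm" type="video/webm">
  <!-- Browsers without HTML5 video fall through to whatever sits inside
       the tag: a Flash embed, or simply a plain download link -->
  <a href="demo.mp4">Download the video</a>
</video>
```

The markup itself is trivial, but because the formats aren’t standardized, covering today’s browsers still means encoding the same clip multiple times – exactly the kind of friction that keeps the older toolkits in business.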
Where HTML5 is tactically valuable is in prototyping more complex user experiences. With HTML5, a web developer can create a good working model of an application and rapidly deploy different executions in pursuit of the ideal user experience. Various iterations of the interface, and of the way the application manages the flow of data, can be modeled and refined in near-real time. The result can be a significantly faster, more cost-effective development process.
Perhaps best of all, while the notion of throwaway prototyping may apply, it doesn’t have to: the mobile browser execution of the application can stay in HTML5 and be released as production-ready.
Whether HTML5 is the solution to all the world’s problems is not the question. Standards in interactive technologies take time to be fully adopted and cannot be imposed even by 500-lb. gorillas: Adobe’s Flash took over 10 years to get to where it is, Microsoft has 3 browsers with 3 different approaches to standards adherence in circulation with a 4th on the way, Google has less than 7% share, and does anybody even remember Netscape?
Now, Where Should We Put The Coffee Table?
What’s most important in the rapidly changing technology landscape that attends mobile computing and the wide use of smartphones is the ability to imagine a user experience and quickly bring it to life. Getting a concept into beta faster and less expensively is what allows for the maturation of the idea, and of the way the idea works when users get their hands on it. With technologies like HTML5 the tools to do this are more broadly available, and can be more rapidly put into action.
After all, it’s the movers who know best that it’s easy to tweak the room’s layout if someone else is doing the heavy lifting.