Early Thoughts About the iPad
It's more than just a big iPod touch or iPhone. Also a look at what makes a smartphone different from other devices.
I've had an Apple iPad for about three days now. During that time I've used it for several hours, both by myself and while showing it to others. In addition, I traveled to Dallas to give a dinner speech on "Evaluating Innovation" to the local chapter of the Society for Information Management (senior information officers at leading corporations, academia, etc.). Since the talk came just two days after the much-hyped iPad started shipping, I felt that I needed to address the device in my talk, too. At the beginning of the talk I explain how I came up with various rules of thumb for evaluating new technology by telling the story of VisiCalc and what I learned from others. The event organizers told me it would be good if I looked at the past but also discussed the future. That forced me to come up with somewhat concrete statements, and to look at the iPad in light of some of the other things I was talking about. (The date also meant I had to make sure I had used an iPad by then, so I ordered mine within a few minutes of when it went on sale weeks ago.) What I said there led to this essay.

First, a little bit about smartphones as background.

I've spent the last few years developing web applications using JavaScript and Perl. Before that I developed or helped lead the development of various desktop applications. I've been an iPhone developer for several months now (producing a popular app called Note Taker). When you look closely at the UI guidelines Apple very strongly encourages you to follow (at the risk of having your app submission rejected from the App Store if you don't), at many other applications, and at the physical capabilities of the iPhone itself (similar to other devices like the Droid and Nexus One), you get a feel, as a product designer, for where things differ from platforms like a Windows or Mac desktop or laptop. As a happy user of a netbook and a longtime user of older smartphones, I find that certain things about this new genre of smartphone stand out.

While they are called smartphones, a phone is only a small part of what they are. They include camera input, GPS, Wi-Fi and 3G connectivity, an accelerometer, a powerful CPU with lots of memory and permanent storage, a very capable graphics processor (GPU), a screen that is large enough to display lots of information as well as UI controls, good finger-touch capability, an easy and inexpensive channel for distributing applications, and access through the Internet to extensive need-it-now data on demand. All this together makes the smartphone capable of running applications that take advantage of any combination of these technologies. In addition, the small size, and the fact that it is also your phone so you carry it with you all the time, bring this computing power to places and times in your life that it couldn't be brought to before.

This combination of technology and where it is used is really good for lots of applications: games and other forms of entertainment, search, maps, weather, reference material, data collection, and general use of the web, just to name a few of the most obvious and popular. As I pointed out in my "When The Long Tail Wags The Dog" essay, if you know that a product can be used for a wide range of needs, you will have less uncertainty about it being useful for something you may need in the future, and therefore value it more than you would a more limited product. Apple worked really hard, through lots of advertising and the help of thousands of external developers, to have people believe that the iPhone was a very general-purpose tool, even if those people didn't understand what it was about the iPhone that made that possible (since they didn't have an iPhone themselves to get a feel for what it was like). The "stories" with the punch line "There's an app for that" served the purpose of giving them comfort. Once you actually have such a device and sample a few apps, you learn that yourself.

The importance of the combination of screen size, GPU, and touch cannot be overstated. The inexpensive, low-power GPU, the result of Moore's Law being aggressively applied because of the needs of gaming, animation, video, and data visualization, gives UI designers simple access to quality effects that we could previously only dream of for mundane parts of a program. "Fluid" feedback as the user does "input" by touching the screen with a finger and then moving it is now possible. In the early days of GUI mouse-based computing, we were lucky that the systems were powerful enough to do simple scrolling, smooth tracking of a mouse cursor, and eventually dragging of objects on the screen (first outlines and then all of the pixels). Only recently, with the ubiquity of simple GPUs in desktop and laptop systems, are we seeing sophisticated smooth visual transformations used in system operations, but not as much in regular applications, which use UI controls that hark back to the days of Xerox PARC and early Visual Basic. This isn't because we couldn't or didn't want to, but rather because of compatibility with various existing hardware and software. Making more and more of the interface fluid has been a natural progression.

The old tablet computers of the 1990s started out being designed around handwriting recognition, so they used a pen-style touch screen to get the resolution and feel needed to be "just like pen and paper." The pen has almost the positioning and targeting accuracy of a mouse, so you could piggyback the pen as an input device onto existing traditional mouse-based apps, development environments, and APIs. (The pen is also much, much better than a mouse for comfortable drawing and writing.) With direct finger-touch technology it is not as easy to target things as quickly, accurately, and repeatably as the existing GUI controls assume, so you need to use different layouts, UI styles, and often different APIs. There are ways to get more precise positioning with a finger, such as magnifiers and/or handles with click-stops, but they aren't as "natural" or easy for the user to discover.

The early smartphones used pens because their screens were so small that the only way to fit enough control-space on them was with the precise positioning of a pen (or fingernail). They also needed to use gestures (like Palm's Graffiti) to enlarge the number of distinct data items (such as characters as well as simple editing commands) you could enter.

The larger screens we can affordably have today, together with the feedback made possible through use of the CPUs and GPUs, open up rich UIs on portable devices that can be controlled without the need for the fine point of a stylus.

Now let's look at the iPad. It's not a smartphone. It's something else. It isn't "just a big iPod touch" any more than a car is just a big motorcycle.

It has most of the technical capabilities of the typical smartphone, less the camera and (on the inexpensive units) 3G connectivity (and without the phone). In addition, though, it has a much larger screen plus a larger black border around the screen you can hold without touching the screen itself. It also has a "lock orientation" switch instead of a "be quiet" switch.

The non-GUI/windows UI, with its bare-bones smartphone-style windowing ("view" objects), means that it doesn't need to "waste" pixels keeping different application windows apart, sizeable, titled, etc. This effectively gives many applications (including browsing) more area to display information - the black frame around the screen provides a nice border if you need it. Portrait orientation lets you see more of the pages you read than you usually get on a laptop with similar screen resolution. (Laptops are hard to hold in portrait mode and Tablet PCs are expensive.) Quick zooming with a simple gesture lets you easily crop out superfluous material when viewing and then go back to full size for navigation or context. As a result, 1024x768 is actually a lot more space than it seems it should be. This was a nice surprise.

The plain iPad is quite slippery, with metal and curves on most of the surfaces that touch a table. It's hard to prop up just a bit on a table. It would have been nice to have rubber ridges like the ones the Droid seems to have. The standard Apple iPad case helps, but it has an overly sharp edge. Case design will be important here, and we will each be doing a little bit of a Goldilocks search for the "just right" design -- an early-adopter "tax".

The most obvious difference from a smartphone is that the screen is much bigger -- about the size of the main printed area in a book or a magazine like Make Magazine. This makes it much better for reading. It is also much better for UIs for manipulating what you are looking at. There is more space to have multiple things on the screen. This space can be used for just-in-time training, handles to manipulate, status or detail information, and more. This means that deeper applications are more appropriate here than on smartphones. The iPhone-size screens have room for mainly one major UI-control cluster plus a small toolbar or two -- when the keyboard is up there is hardly room for anything other than a small view into what you are entering, with little context. The iPad does not have such severe limitations. There will be a different class of applications on it, not just larger-format "broadcast" material like video and news.

I've found that the "use it on the run" nature of smartphones, together with the limited space to provide a discoverable UI and the few standard controls, forces developers to create either very simple applications or very tedious-to-use ones (with lots of drill-down). The more sit-down nature of a tablet, together with the bigger screen occupying a large, immersive part of your field of view, lends it to richer, more powerful applications.

A nice surprise is how natural it is for multiple people to use the iPad at once. A smartphone pretty much cuts you off from others - you hold it in your hand facing you, look down at it, and appear to be ignoring the people around you. A laptop faces you and sits between you and the person you are facing. A bright tablet like the iPad can be used flat or held far enough away that others around you can participate. It also has a screen that is readable even off-axis. This was especially true when some friends and I were having fun looking at some vacation locations in the Maps application (a Google maps app) in satellite view. Multiple people can easily take turns controlling it, zooming, panning, etc., without anyone moving themselves or the device. The Scrabble game is a very explicit example of this, too. I'm sure you'll see multi-touch used as multiple separate controllers, instead of as one combined gesture like pinch.
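
To make that last point concrete, here is a rough sketch of how the touch APIs let a view track each finger as its own independent controller rather than folding everything into a single gesture. This is my own illustration in Swift (the class and variable names are assumptions, not anyone's shipping code), just to show the shape of the pattern:

    import UIKit

    // Sketch: treat each finger as its own independent "controller"
    // instead of combining all touches into one gesture like pinch.
    class MultiControllerView: UIView {
        // Each active touch is tracked separately.
        private var activeTouches: [UITouch: CGPoint] = [:]

        override init(frame: CGRect) {
            super.init(frame: frame)
            isMultipleTouchEnabled = true  // allow several simultaneous fingers
        }

        required init?(coder: NSCoder) {
            super.init(coder: coder)
            isMultipleTouchEnabled = true
        }

        override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
            for touch in touches {
                // A new finger becomes a new, independent controller.
                activeTouches[touch] = touch.location(in: self)
            }
        }

        override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
            for touch in touches {
                activeTouches[touch] = touch.location(in: self)
                // Here each finger could drag its own game piece, slider,
                // or map marker without affecting the other fingers.
            }
        }

        override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
            touches.forEach { activeTouches.removeValue(forKey: $0) }
        }

        override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
            touches.forEach { activeTouches.removeValue(forKey: $0) }
        }
    }

With something like this, each person around the table can get their own handle on the screen, which is exactly the kind of shared use the Scrabble board invites.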

You become very sensitive to responsiveness. The use of the GPU gets you spoiled (plus I use it on Wi-Fi for fast connectivity). Any delay or non-smoothness feels wrong. Too much damping or momentum seems bad, too.

When you use it propped up vertically, with or without an external keyboard, you find that you want a touchpad or mouse of some sort, especially if you are used to the Apple Magic Mouse or something like the Wacom Bamboo touch & pen tablets. You'd like to have a temporary "cursor" on the screen at that point, like in the iPhone software development system's simulator, and then do touch gestures with your hand lying flat rather than lifted up.

As with a netbook, having a feel for the long battery life matters, so that you don't feel like you are rationing use. You feel like there is a lot of battery life in reserve - that you aren't pushing things. This was important for the original Palm Pilot and the Kindle, too.

Some apps, like the Pages word processor, use the orientation of the device as a way to switch between different layouts with different tools shown or not. I think this can be a problem with the dock and other stands. Needing to twist the device to do things that aren't really related to the twisting feels intrusive and wrong to me.
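
For developers curious how that kind of orientation-driven layout switching is typically wired up, here is a minimal sketch. It's my own illustration in Swift, not Apple's Pages code, and the two layout methods are hypothetical placeholders:

    import UIKit

    // Sketch: switch between two layouts when the device is rotated.
    // showLandscapeTools/showPortraitLayout are hypothetical placeholders.
    class LayoutSwitchingViewController: UIViewController {
        override func viewWillTransition(to size: CGSize,
                                         with coordinator: UIViewControllerTransitionCoordinator) {
            super.viewWillTransition(to: size, with: coordinator)
            if size.width > size.height {
                showLandscapeTools()   // e.g. reveal the full toolbar
            } else {
                showPortraitLayout()   // e.g. hide tools, show more of the page
            }
        }

        private func showLandscapeTools() { /* reveal editing tools */ }
        private func showPortraitLayout() { /* hide tools for reading */ }
    }

Whether a feature should hang off the physical rotation at all, rather than off a visible control, is exactly the design question I'm raising here.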

I found it a problem that the Help function in the Pages app requires Web browsing - especially since it's a writing application that doesn't need the Internet, and I was using it on a plane where I wasn't paying for connectivity. (The Wi-Fi on the plane thought I had a smartphone rather than a laptop anyway, and didn't give me the right interface.)

I think that's enough for now.

-Dan Bricklin, 6 April 2010

© Copyright 1999-2010 by Daniel Bricklin
All Rights Reserved.