Behavioral Blips

Elizabeth S. Bennett   December 21, 2011

Your trompe l’oeil is ringing (image via A Look Askance)

The Breakdown: As digital becomes a more constant part of our everyday lives, Elizabeth Bennett observes that sometimes our brains get confused about what mode we’re operating in.

Last week, I was awakened by a sound I thought was a forcefully vibrating cell phone. It was in fact a fog horn in New York Harbor. I frequently find myself trying to swipe the pages of a physical book if I have recently been reading on my tablet and, as hard as I try, the ATM screen still won’t respond to my unconscious finger swipes.

Most of us have experienced some version of this kind of disjointedness, which has emerged from the liminal space we’re living in. We hop back and forth between machines and the physical world, constantly dividing our attention between three-dimensional and digital interfaces. Computers and TVs have converged. Phones and computers have converged. The avenues for information and content consumption seem to multiply tenfold on a monthly basis. (See my colleague Jake Keyes’s post on how businesses are trying to bridge the gap between the physical and digital world.)

We are also deeply influenced by all the new tools and tech that we encounter, to the point where we experience the world in ways we wouldn’t have imagined just a few years ago.

I polled some colleagues and friends to find out about the quirks they’ve observed when their brains and bodies can’t quite catch up to the demands and realities of the moment. These anecdotes don’t all fall into the same category, but they represent a nice sampling of the goofy ways we respond and react to our technology, even when it’s not in the room.

Any of these sound familiar?

“I feel my phone vibrating in my pocket – even when it isn’t in my pocket.”

“I often get frustrated with my in-car navigation. I’m so used to the pinch-to-zoom interaction on my phone and iPad that I often grab the screen, frustrated that it doesn’t zoom.”

“My daughters think that everything digital is touch interactive. TVs, screens in cars, anything. They are shocked when nothing happens when they touch “non-touch” devices.”

“When I’m on a computer keyboard, I tap the spacebar twice to make a period because that’s how I do it on my phone.”

“After listening to a lengthy voice mail from a friend, I momentarily forgot it was a recording and responded out loud.”

“It’s so annoying when I’m about to take a great photo and somebody calls my camera.” – @JordanRubin

We’d love to hear about your liminal behavioral blips. Please add them to this post so we have a record of this zany modern moment.

Talking Siri

Jake Keyes   October 17, 2011

Siri may be the first voice interface to get it right. (image via Marc Wathieu)

Will Siri, the new voice-recognition interface of the iPhone 4S, live up to Apple’s promises? It’s probably too soon to tell. The digital assistant demos well on stage, but the idea of talking aloud to your phone may prove too weird for most people to do in public. Regardless, Siri is evidence that Apple has realized something fundamental about voice interaction: the way we talk is essentially different from the way we read. And the same interface can’t work for both.

We’re all accustomed to visual and written interfaces, in which a user is asked to choose from a collection of words or symbols. Given lots of items, interface designers and content strategists tend to arrange those items in a hierarchy. Which makes sense: we can only focus on a few things at once, and without a hierarchy we’d have to deal with thousands of options laid out with equal weight in a flat and endless grid. To solve the problem of navigating this complexity, visual interfaces start at a high, general level, and work down to the specific. This holds true in print as well, in dictionaries, restaurant menus, and so on.

Many audio interfaces adopt this structure, too. Take automated phone support, and the typical menu system: “Welcome. Please choose from the following options…” On the surface, an audio hierarchy is appealing, for the same reasons as a visual one. How are you supposed to know what options are available without being prompted, or without having your choices constrained? Using a hierarchy should feel streamlined, necessary, and clean. But it doesn’t. People hate it. They hit 0 without listening, trying to break out of the command tree. Look no further than GetHuman.com for evidence of how passionately people resist hierarchical voice command systems.

The fact is, audio interaction is essentially different from visual interaction. In day-to-day speech, no hierarchy is necessary. We can pluck any sentence from the bottomless pool of possible sentences. There’s no need to tell your roommate, “Hey, New Reminder. Payments > House and Home > Utilities > Electricity > Pay Bill.” That certainly isn’t a natural way to speak, and navigating such a hierarchy successfully would take real user training. We want our computers to understand commands without that kind of scaffolding. In other words, we want audio interfaces to be flat.

For years we’ve struggled. Technology seems to be the bottleneck — computers are bad at interpreting the human voice, so we’ve had to significantly constrain the number of things a person can say. Search engines approximate something like an audio interface, in that they take a single input and provide possible solutions, but there’s a key difference: search engines don’t take our input and execute an action, or bring us directly to a single URL. Some refinement and judgment is always required on the part of the user. (If all we had was “I’m Feeling Lucky,” Google would be a strange and frustrating service.) We’ve never really achieved true flatness in any interface.

This is why the concept of Siri is so important. The virtual assistant is a promising step toward a true natural language interface. There are still constraints; for instance, Siri’s capabilities are limited, and she gets confused from time to time. But the concept is there, the idea of a perfectly efficient system for achieving an action: one command, one response.

This morning I asked Siri for directions home. She understood, and spent a moment thinking about my question. And then, something eerie happened. She pulled up my empty contact information from my address book. “I don’t know what your home address is, Jake.” Immediately, without thinking, I tapped in my information and saved it. I did exactly as I was told.

Autofail: How Apple’s Autocorrect Teaches Bad English

Robert Stribley   October 7, 2011

Lost in translation. (image via benjamin.krause)

The Breakdown: Apple just released the iPhone 4S, which incorporates voice recognition to intelligently interpret your voice commands. Robert Stribley explains how one of the iPhone’s existing features, however, ain’t so genius.

This week, Apple released the iPhone 4S, which promises, via the wonder of Siri technology, to respond intelligently to voice commands. The innovation may turn out to be ground-breaking, but it was greeted with somewhat muted applause, as Apple’s well-trained audience had been expecting the advent of the iPhone 5. Probably unfairly, the 4S ended up sounding like a way station on the road to the big event. Still, it’ll be interesting to see whether the adoption of Siri’s intelligent assistant feature can mitigate one of the iPhone’s most intermittently annoying features: Autocorrect. Probably not.

I’m hardly the first to notice that Apple’s Autocorrect feature often fails to live up to its name. Many have noted that the program actually proves pretty poor at correcting your spelling, sometimes even inserting an embarrassing substitute for what you intended. There’s a popular blog that capitalizes on the more amusing instances of this behavior. What I haven’t seen is anyone articulate all the different ways Autocorrect actually performs abysmally. It’s not just that it corrects poorly: in fact, it fails in three key areas. And it fails in ways that arguably teach its users bad English. As a public service, then, allow me to codify the ways in which Autocorrect fails.

Autocorrect Substitutes Misspelled Words with Words That Make No Sense

The best-known issue with Autocorrect is its sometimes comical tendency to replace misspelled words with something that makes little or no sense or to create a new meaning the writer didn’t intend. The reason this frustrates people so much is that the misspelling is often not too far from the correct spelling. Yet, Autocorrect often manages to suggest something completely different.

Therefore, “making brownies” becomes “making babies,” “Disney” becomes “divorce,” “sinus infection” becomes “dinosaur infection,” and much hilarity and/or awkwardness ensues. And these are some of the tamer examples, you understand.

Autocorrect supposedly works by analyzing the keys near the ones you actually selected to estimate which ones you intended to select. Then it replaces your word with its best guess based on those nearby letters. To improve, Autocorrect would have to incorporate some higher-order artificial intelligence, some fuzzy logic, so it would recognize when a word it wants to substitute seems absurd or inappropriate in a particular context. It would need to base its intelligence upon a nuanced, contextual understanding of language, instead of the much more limited contextual understanding of the layout of a QWERTY keyboard. If Apple is able to apply Siri’s semantic capabilities to texting, it could make some great strides in this area.
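
To make the keyboard-proximity idea concrete, here’s a minimal sketch in Python – my own illustration, not Apple’s code, using a hand-rolled adjacency map and a toy dictionary. It covers only the “letters nearby” step: generating candidate words by swapping each typed letter for a physical neighbor. Deciding which candidate to keep, and knowing to leave a perfectly valid word like “hell” alone, is exactly where the higher-order, language-aware judgment would have to come in.

    # Hypothetical sketch of keyboard-proximity correction (not Apple's algorithm):
    # swap each typed letter for a QWERTY neighbor, keep candidates found in a
    # toy dictionary.

    # Partial QWERTY adjacency map: letters physically near each key.
    NEIGHBORS = {
        "a": "qwsz", "b": "vghn", "c": "xdfv", "d": "serfcx", "e": "wsdr",
        "f": "drtgvc", "g": "ftyhbv", "h": "gyujnb", "i": "ujko", "j": "huikmn",
        "k": "jiolm", "l": "kop", "m": "njk", "n": "bhjm", "o": "iklp",
        "p": "ol", "q": "wa", "r": "edft", "s": "awedxz", "t": "rfgy",
        "u": "yhji", "v": "cfgb", "w": "qase", "x": "zsdc", "y": "tghu",
        "z": "asx",
    }

    # A toy dictionary standing in for the phone's word list.
    DICTIONARY = {"hell", "he'll", "shell", "sinus", "dinosaur", "brownies", "babies"}

    def candidates(typed):
        """Return dictionary words reachable by swapping one letter for a neighbor."""
        found = set()
        if typed in DICTIONARY:
            found.add(typed)
        for i, ch in enumerate(typed):
            for alt in NEIGHBORS.get(ch, ""):
                variant = typed[:i] + alt + typed[i + 1:]
                if variant in DICTIONARY:
                    found.add(variant)
        return found

    print(candidates("sinys"))  # {'sinus'} -- a sensible, nearby fix
    print(candidates("hell"))   # {'hell'} -- proximity alone gives no reason to change it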

Autocorrect Teaches Bad Spelling & Punctuation

Yes, perhaps most infuriating of all, Autocorrect sometimes corrects words that are spelled correctly, actually rendering them incorrectly. For instance, if you type “Being simplistic is its problem,” Autocorrect will change “its” to “it’s,” which means your sentence now technically reads “Being simplistic is it is problem.” Many college graduates have difficulty distinguishing between these two spellings (one a possessive pronoun, one a contraction) as it is. Now Autocorrect is drumming the exact wrong spelling further into their craniums.

Another issue is probably often overlooked: when you use an ellipsis in a sentence, Autocorrect automatically capitalizes the first letter of the next word, apparently assuming the last of the three marks to be a period. But a capital letter does not necessarily have to follow an ellipsis. Ellipses are employed to show that words have been omitted – words, not necessarily sentences – and also, perhaps informally, to show the passage of time. Since Autocorrect cannot know whether a capital letter needs to follow an ellipsis, it shouldn’t automatically create one. Otherwise, it may be teaching bad punctuation and capitalization.
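
A toy sketch makes the failure mode plain (again, my own illustration, not Apple’s implementation): a naive rule that capitalizes after any period-plus-space also fires on the last dot of an ellipsis, while a version that checks for the preceding dots leaves it alone.

    import re

    def naive_autocapitalize(text):
        """Capitalize the first letter after any period followed by whitespace."""
        return re.sub(r"(\.\s+)([a-z])",
                      lambda m: m.group(1) + m.group(2).upper(), text)

    def ellipsis_aware_autocapitalize(text):
        """Same rule, but skip a period that is itself preceded by another period."""
        return re.sub(r"(?<!\.)(\.\s+)([a-z])",
                      lambda m: m.group(1) + m.group(2).upper(), text)

    sample = "He paused... then kept typing. the end."
    print(naive_autocapitalize(sample))           # He paused... Then kept typing. The end.
    print(ellipsis_aware_autocapitalize(sample))  # He paused... then kept typing. The end.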

Autocorrect Bowdlerizes Your Writing

Perhaps it’s inadvertent, but Autocorrect also appears to censor or Bowdlerize your writing on occasion. If I write that “Autocorrect doesn’t do a hell of a good job,” the program renders “hell” as “he’ll.” Oddly enough, however, it doesn’t correct “helluva,” as in “helluva good job.” That indicates that if you type in “hell,” Autocorrect assumes you probably meant “he’ll.” That correction, however, takes about six keystrokes to undo. Of course, it’s only one extra touch to dismiss the little bubble that comes up suggesting “he’ll” – if you happen to see it, which I seldom do. But the point is, “hell” ain’t a misspelling, so Autocorrect shouldn’t correct it. Think I’m being picky? That, chances are, Autocorrect is generally correct on that one? Well, Autocorrect doesn’t change “shell” to “she’ll,” so it’s not even consistent. And I’m far more likely to say “hell” on any given day than “shell.” But that’s just me.

Additionally, there are at least two other words Autocorrect doesn’t recognize as spelled correctly, despite their being canonical English curse words. I mean, it’s hard to misspell a four-letter word. Yet Autocorrect fails even to recognize these words and primly places a neat red line beneath them. Since this is a family publication, I’ll not post the words here. I’ll let you discover them on your own. Of course, I only discovered them myself while thoroughly researching this article.

Let’s also note that Autocorrect highlighted the word “Bowdlerizes” above as a misspelling. Leave my writing the he’ll alone, Autocorrect!

Postscript 10/07/11: With the utmost sincerity: Rest in Peace, Steve Jobs, Visionary


Instagram Beyond the Numbers

Natalie Rodic Marsan   October 4, 2011
Brooklyn rush hour in an instant. (image via Natalie Rodic)

The Breakdown: Natalie Rodic Marsan tells us why Instagram has been embraced so broadly since its launch less than a year ago. Read on to see why it’s not just about taking compelling photos.

Have you tried Instagram yet? I installed the app the first week it launched, and it’s been a gradual progression from dabbling to completely hooked.

At least I know I’m not alone. The sheer number of Instagram users and the volume of their posts are phenomenal. The mobile photo-sharing app boasted 100,000 “Mobile Photo Addicts” less than one week after its public launch in November 2010. The most recent count is that 1.3 million photos are uploaded every day. Users have shared a whopping total of 150 million photos on the platform in only nine months. The top users are garnering enough attention to threaten the world of professional photography. And early-adopter businesses, some big names even, are utilizing the app for community building around their own brands. Best practices for business engagement are even emerging. All this for an app that only runs on one operating system: iOS.

For all its success, Instagram is not an isolated case. It belongs to a new, exploding category of applications focused on mobile photography and mobile photo sharing, which are collectively changing the way we think about photography. With the device we carry with us constantly, we can capture quality shots of practically any subject, then choose from a plethora of free or cheap apps to edit or filter those shots to enhance the feeling of the moment. These photos can then be shared instantly on any of our social networking sites (and syndicated across platforms if so desired).

It is the next step in the democratization of content. Instagram (also referred to as IG by members) enables anyone to be a content creator, and a narrator of his or her world via images. As we begin to understand and relate to our world increasingly on a visual level, soaking up information by way of data visualization, infographics, and digital images, anyone who chooses to engage can be empowered by this technology.

A September 2010 Wired article, “The Web Is Dead,” argued that content distribution and engagement are moving “to simpler, sleeker services that just work.” Why didn’t Flickr, or even Facebook, currently the largest repository of social photos, foster the same kind of rich interaction around images that Instagram has in such a short time? Most likely because of a lack of attention to the mobile user experience: their mobile apps aren’t laser-focused on sharing photos socially, nor do they make it easy to do so. Instagram, conversely, is a singularly focused service on your iPhone that works well, without endless options to pull you in a million directions and make you lose sight of what you’re there for — to share images, the context of those images, and to connect with others around the subject (or even simply the beauty of that content). It is what IG member and community manager Rachael King (@rachaelgk) called “so peaceful, like the ‘going fishing’ place in Social Media Land.”

There is something unique in this design and intent. Other widely adopted social networks have brought us closer to those we already know (Facebook), or have enabled us to build our professional networks (LinkedIn). Others (like Twitter) have enabled us to find new people who share information we find interesting or helpful. But Instagram is bringing us closer to people all over the world whom we’ve never met, but whose take on the world and aesthetic choices resonate with us. Just as the Norwegian artist Edvard Munch followed the bohemian intellectual Hans Jæger’s advice to discover and tell the story of his life with his paintings in the late 19th century, so does each Instagram user in the 21st century.

Being the narrator of one’s world through imagery is an intimate experience. Because of this, real-life Instagram communities are forming all over the globe. To understand this next step of interaction, the in-real-life aspect, I attended an NYC Instawalk hosted by Postagram one recent Sunday morning. The group was roughly 20 people of all backgrounds and persuasions, iPhones in hand, and eyes wide open.

Walking from Union Square to the High Line, we shot pictures that encapsulated the moment: an elderly lady with a walker skillfully navigating her way through our swarm, reflections in windows, other Instagrammers taking photos. We shared knowledge on photo-editing apps, and got to know one another. The saying that you belong to New York as much in five minutes as you do in five years could also be true of Instagram: having an active account and showing up at an event like this means immediate inclusion.

Seeing this thriving community in real life hit home the raison d’être of Instagram. The proliferation of this mobile photo app and its focused, slick mobile interface have created the grounds for great content and human interaction. The unprecedented rate of adoption, the popularity factor, is impressive – it proves something is working. But ultimately all this exists for an unquantifiable next step: the fostering of community and relationships. The IG community is one any community builder would do well to emulate. And it is only getting started.

Natalie is Founder of Broken Open Media, where she consults on building communities and creating social media strategies. She is currently managing the Razorfish Idea Tank community amongst others. She can be found on Instagram, Twitter, and Tumblr as @rodicka.

Refocusing on a New Mobile Content Landscape

Matt Geraghty   July 8, 2011

Web Content 2011 hits the street in Chicago. (Image via Ciccioetneo)

Content professionals gathered in early June at Web Content 2011 in Chicago to discuss the state of content strategy for mobile. Over the course of two days, more than 20 speakers gave workshops and presentations, and while many highly engaging topics and common themes arose, the takeaway was clear: the future is wide open.

For content strategists and consumers of mobile content, this is just the beginning. We are tasked with developing world-class mobile experiences, and the future is ours to seize. It is not, however, without challenges. For those willing to experiment and find bold new ways to provide compelling content true to the brand and on any device, opportunity abounds.

Here are some of our favorite quotes from the featured speakers – and be sure to explore the videos from Web Content 2011.

“By the end of the year, over half of the people in this country are going to be using their mobile device to connect with content. That’s something that even 2 years ago seemed like a far-off milestone.” – Mark Donovan, comScore, Inc.

“Mobile means we now need to focus much more on creating experiences that are useful and appropriate to that target audience in their context and appropriate to the needs of that brand.” – Margot Bloomstein, Appropriate, Inc.

“We have to start thinking in an abstract sense — if I only have this much space, what am I going to do? It’s not only about designing the iPhone version, but rather what am I going to do to present content across any device in the future. That’s a hard transition for people. When it comes down to priorities, the conversation will be around what are the fundamental tasks that the user performs.” – Karen McGrane, Bond Art and Science

“As content people working in mobile, I think it’s really important to think like designers. Being able to have these hybrid skills is really valuable for mobile content development.” – Erin Scime, HUGE

“This day has been about raising the alarm that context-aware platforms are on the horizon. They are coming and becoming an increasingly important part of how we will deliver content to our audiences — no matter how big or no matter how small. We need to start applying that filter of context to how we’re handling our content strategy.” – Robert Rose, Big Blue Moose

“What’s the best technology solution to go and deliver content across different devices? There is no one-size-fits-all environment. There are lots of choices and options. You need to make the trade-offs and understand what your business requirements are, what your content strategy is, and what your organization’s threshold is for complexity and for cost.” – Bryan House, Acquia

“This is the beginning of very, very, very big changes, and many companies are going to fail; some of them will go under, some of them will be purchased at a much cheaper value by bigger companies that will munch them up and take their business assets, which will be their content.” – Scott Abel, The Content Wrangler

Did you attend Web Content 2011? Let us know about your experience.

The Web-Wide World

Jake Keyes   June 7, 2011

In your computer and IRL (photo by Junhao)

A few weeks ago, Google announced Google Wallet, a payment service that will allow customers to use the Android phone itself as a kind of cloud-connected credit card. There is a lot to talk about here: the privacy concerns, the PayPal lawsuit, whether Apple will follow Google into the Near Field Communication arena, and so on. But from the digital consumer’s point of view, Google Wallet hints at something broader: the boundary between the digital and the physical has started to disappear.

It started with smartphones and data plans. Or maybe it started with the discovery of electricity. In any case, the border is blurring. As Internet-connected devices have become effortlessly portable, and their cameras and touchscreens have become increasingly good at letting us throw content up into the cloud, the content we generate has been freed of its tethers. A piece of content — say, your paycheck — can start its life as a physical object, be scanned and parsed by a bank app, and end up a figure in a checking account, stored in a database. Then, with a service like Google Wallet, it can be summoned back down into the physical world, with a swipe at a cash register and a merchant-side cash withdrawal. The essence of the check, the content it represents, can be transformed easily from physical object to digital object, and back again.

A range of apps and web services serve as gateways for these digital-physical transitions. Take Postagram, an app that allows you to “make your Instagram come true” by sending a $0.99 postcard, printed with an image you select from your iPhone. Remember the early days of the camera phone, when the photos you took would often die with the device that took them? Those boundaries are long gone.

Or take location-based social networks. Foursquare and others leverage ubiquitous GPS tech to give transmittable social value to, say, my being at a certain restaurant at a given time. Watch one of your Foursquare-using friends for a few minutes. You get the sense that there’s a hidden digital world layered over our own, with its own invisible system of status and reward.

We’re only beginning to imagine the specific economic possibilities. Pepsi is experimenting with “social vending” soda machines, which let customers send gift codes to friends or strangers, redeemable at other Pepsi social soda machines. Codes can be redeemed, or passed along to someone else. The gaming industry, too, is testing the digital-physical divide, with the Wii, Kinect, and PlayStation Move. A few years ago, Nintendo even put out a kind of Pokémon pedometer, which allowed players to accrue redeemable game points by walking around in the real world.

What does this all mean for people who create and curate content? Well, for one thing, the dream of a paperless ecosystem is inching closer. Taxes, visa applications, ID cards, deeds, and so on may soon have no reason to exist on paper — at least in any permanent way. And more than ever the physical location of things is less relevant: points of sale, documents, and consumer identities are portable, sharable, and extremely easy to touch.

But this is what’s most important: digital content is coming into its own as a consumable, trade-able, valued good. The exchange rate is evening out. We’re moving closer to the point where an object is universally useful, whether it lives on a desk or on a screen. 

First Impressions: The Daily

Rachel Lovinger   February 16, 2011

Delivered to your door every day. (image via Rob Gallop)

The Breakdown: Publishers have been wondering if the tablet is going to save journalism. News Corp recently put a stake in the ground with their launch of The Daily. It’s way too early to tell whether this experiment is going to be a success or a failure, but we’ll let you know what we think of it so far.

When The Daily, the first publication created exclusively for the iPad, had been out for a week, I sat down with Beth Lind (@bethl), the head of the Media & Entertainment practice in Razorfish’s NYC office, to discuss what we liked and what we thought still needed work.

Beth had seen a preview presentation of the prototype, and the first thing she commented on was that they had launched with all of the features they demonstrated at the preview – and that’s no small achievement these days. Then, as she pulled up the app, it announced that there was a new version, but she had to uninstall the old version before updating. This had the unfortunate side-effect of clearing out all of her saved articles. Here are some of our other initial observations:

What we liked

Beth is excited that someone is finally using the tablet to present content in ways that go beyond static pages. The layout is still pretty magazine-like, but it uses interactivity in some subtle and fun ways.

  • 360-degree images
  • Images that can be zoomed in and out
  • Animated design elements
  • Graphs that build as the page loads
  • Seamless use of inline video
  • Embedded polls
  • Audio functionality reads the news to you!

What we didn’t like

Oh, it’s always easier to criticize, isn’t it? Here are the things that fell short of our expectations.

  • It crashes, it freezes, it takes a long time to load
  • The interface of the app is confusing and inconsistent. I often found myself clicking on things that seemed like they should do something, but they didn’t.
  • It was wise for them to include the ability to share, but in order to make it work the content has to be mirrored on the web (which makes it not-exactly-iPad-only). This is a shortcoming of the platform, not the app, but whose fault is it that when you share a link it offers unhelpfully vague messages like “Check out this article from the Daily” (on Twitter) and “I want to share the web version of an article from The Daily, the tablet-based original news publication.” (on Facebook)?
  • The audio feature is awkward and only applies to some of the articles
  • It’s a walled garden. We think it must be aimed at that super-select segment of early adopters who want to get all their news from a single source. If you want to read other points of view on a story, you still have to visit any of the thousands of other news sources online.

Of course, we’re sympathetic about these shortcomings – these aren’t easy problems to solve. But the bigger question is the value equation: Does The Daily bring something valuable enough to the table to make people look past the bugs and pay for a weekly subscription? A lot of these issues will be fixed sooner or later, but halfway into the 2-week free trial period, neither of us was convinced that it was something we wanted to pay for. The coverage is not unique enough, and the features are not quite there. We’ll keep an eye on it though and see how The Daily develops. 

Robosketching for the People

Robert Stribley   October 15, 2010


Don’t feel boxed in when it comes to your digital sketching options. (Image via Banksy.)

As the jokes about the iPad being an outsized iPhone recede and sales continue to skyrocket, many of us are finding ways to incorporate an iPad into our working lives. Early buzz suggested that iPads were great as communications and reading devices, but not so hot for any sort of genuine professional productivity. Au contraire. As a cursory review of the landscape reveals, there are quite a few apps that enable you to sketch your ideas into reality, whether you’re sitting on the subway, sipping at a café or spacing out in a meeting. So, no need to reach for that cocktail napkin anymore – simply reach for your iPad.

I can’t claim to have reviewed all the sketching apps competing for your attention, but here’s some info on the few I have had the opportunity to use and can recommend.

Adobe Ideas – This app bears the twin virtues of being free and extremely simple to use. It may not offer much in the way of brushes or stencils, but it renders nicely and, unlike some more expensive apps, it doesn’t begin to pixelate your sketch when you zoom in. Perfect for jotting down a quick sketch when you’re not at your desk. Did I mention it’s free?

Autodesk SketchBook Pro – By far the most robust of the sketching apps I’ve tried, SketchBook Pro includes only simple shapes but a myriad of different brushes and tools. It also allows layering. Coming from the same folks who brought us AutoCAD, yet priced at $7.99, it’s remarkably affordable for all the features it includes. But wait, there’s more! It’s also currently on sale for $3.99.

Penultimate – Described as a note-taking app, Penultimate actually performs the role of a digital Moleskine notebook. It gives you plain, lined or graph “paper” to sketch or write on and allows you to save multiple notebooks, not just pages. And at $3.99 it’s quite affordable. Though there are limited colors and no brushes or stencils, Penultimate may win you over with its old-school elegance anyway.

Additionally, I’ve heard great things about OmniGraffle’s iPad app, but I haven’t yet felt compelled to plonk down the $50 to try it out. I understand it’s reasonably robust, with access to plentiful stencils as well as line and text tools. And as you’d hope, the exported files can be opened in OmniGraffle on your desktop. On the minus side? The only two reviewers over at iTunes agree that the price is pretty inflated for what you get.

Got an app? Now, grab yourself a Pogo sketching stylus (or if you’re the DIY type, make one yourself) and you’re ready to sketch.

This listing isn’t exhaustive; it simply reflects those apps your intrepid reporter has heard others speak highly of and so felt compelled to procure. Feel free to make your own recommendations in the comments.

And, of course, if you can’t afford all this stuff, a yellow legal pad and a Sharpie can still work wonders.

 

Content Strategy and the iPad: Part 2

Doug Bolin   September 24, 2010

CS and the iPad. Are you experienced? (Image via Patricio Villarroel)

What is the iPad?

In the last post, we started to address five questions related to content strategy for the iPad. Here are three more questions to consider:

1) What are the content strategy and user experience best practices for content being experienced on the iPad?

2) Same question, but for iPad apps: what are the content strategy and user experience best practices for app content?

It’s probably out there somewhere, or already in the works, but so far I haven’t been able to find a single article or book that seeks to address even a piece of these questions.

Everything is about the iPhone, or, as a marketing ploy, the title includes the iPad as an afterthought, “… for the iPhone and iPad.”

Do you agree that the iPad is not just a big iPhone or an iPod touch on steroids? Have you found or written anything that is really about best practices for iPad content strategy and user experience? Can you post a link?

These aren’t specifically about the iPad, but I’m posting them as thought provokers towards iPad best practices for content:

Unleashing the Power of Digital Signage: Content Strategies for the 5th Screen
Keith Kelsen is the author, and no one knows more about content strategy for digital out-of-home (aka digital signage) than he does. Networked, dynamic in real time, multi-channel and multi-zone, interactive, touch, gestural and GPS-powered, digital out-of-home (DOOH) has gone far beyond billboards and signs. It has a lot in common with the iPad, which is a new addition to the list of “5th screens”.

Designing the iPhone User Experience
By Suzanne Ginsburg, this book lays out an application of basic UX practices to developing iPhone apps. Nothing about the iPad or content strategy, but it is a start.

Tapworthy: Designing Great iPhone Apps
This is a fun, clear and well-written book about iPhone apps by Josh Clark. If only there were a similar resource for iPad content! At least the title evokes the touch interface.

3) Is the iPad itself already a content strategy?

Huh?

This isn’t my idea or phrase, but I’m including it because it captures the unique nature of the iPad when it comes to content. It is from a Strange Attractor blog post by Suw and Kevin Charman-Anderson.

The emotionally charged discussion that follows is actually more interesting than the post itself and worth checking out.

Here’s a quote from the discussion thread that somewhat explains the premise:

“Kevin’s post is about how Apple’s design strategy for the iPad was content-focused rather than tech-focused. … The iPad will live or die because of the content one can access through it, not because of the technical spec – that’s why it’s a content strategy, not a tech strategy.”

Let’s continue this discussion on Scatter Gather with a slightly different twist. Do you think the iPad’s design and functionality embodies and creates an implicit content strategy? Or is it just a delivery system, a platform, for traditional content?

Content Strategy and the iPad

Doug Bolin   September 13, 2010

Blurring the lines between old media and new media. (Image via Shakespeare Monkey)

As content strategists in our never-ending quest to extend the practice of content strategy to emerging digital interactions, we have now come face-to-screen with the iPad. Plus, rumor has it that Apple has filed a patent on a similar OS and interface for desktop computers. So, it’s time to leverage our skills and experience to develop a body of thought and practice around content strategy for it.

Right? Maybe not.

Maybe, as content strategists, we will need a fundamentally different approach to the iPad. So the goal of this post is to start generating some discussion around content strategy for the iPad.

A proposal: let’s start in two parts, answering the following five questions:

1. What is unique about the iPad experience?
2. What makes content strategy for the iPad more than the sum of the content components?
3. What are the content strategy and user experience best practices for content being experienced through the iPad?
4. What are the content strategy and user experience best practices for app content?
5. Is the iPad itself already a content strategy?

What is unique about the iPad experience?

Most of what is being written and said about the iPad these days concerns what it is not. For example, it is not just:

A big iPhone … an iPod Touch on steroids … portable digital out of home … a netbook … a way to do email … a way to surf the web … an eBook reader … a big portable game player … digital signage … its apps … a great screen for watching movies and television … Microsoft Surface … a kiosk … Apple TV … a virtual collaboration tool … and so on.

We seem to be developing a pretty good idea of what the iPad isn’t, but not much about what it really is and how to do CS for it.  Rich Jaroslovsky nailed the challenge facing us:

“A far better name would be iWonder. As in, it certainly is a consumer-tech wonder. And also as in, I wonder if the content providers (read content strategists) who may determine its success are prepared to take full advantage of it?”

As a content strategist, what do you think makes the iPad experience unique? What do we need to do to take full advantage of it?

What makes content strategy for iPad more than the sum of its content parts?

In other words, is content strategy for the iPad merely a discrete collection of the thinking about the different content types and interactions it delivers?

I’d argue that it isn’t.

The fact that the iPad can potentially integrate virtually every digital experience, app and content type into one experience means we need to do the same. Plus, it uses a touch interface with gestural overtones. It’s fast, it’s mobile and it’s dynamic. All of this changes and extends the content experience in ways we have barely begun to explore. We won’t explore them if we keep thinking about the iPad as a delivery device for traditional content types and traditional interactions.

One thing is particularly discouraging: most material currently available on “Creating Content for the iPad” or similar themes turns out to be about getting traditional content onto, or into, the iPad.

Thoughts?

Please tune in again next week for Questions 3 – 5.


What is this site, exactly?

Scatter/Gather is a blog about the intersection of content strategy, pop culture and human behavior. Contributors are all practicing Content Strategists at the offices of Razorfish, an international digital design agency.


This blog reflects the views of the individual contributors and not necessarily the views of Razorfish.

What is content strategy?

Oooh, the elevator pitch. Here we go: There is content on the web. You love it. Or you do not love it. Either way, it is out there, and it is growing. Content strategy encompasses the discovery, ideation, implementation and maintenance of all types of digital content—links, tags, metadata, video, whatever. Ultimately, we work closely with information architects and creative types to craft delicious, usable web experiences for our clients.

Why "scatter/gather"?

It’s an iterative data clustering operation that’s designed to enable rich browsing capabilities. “Data clustering” seems rather awesome and relevant to our quest, plus we thought the phrase just sounded really cool.
