The Communo-editron™ 2000

Rachel Lovinger   October 20, 2010
Robot arm, writing text. When will robots start writing all the copy for us? (Image via Mirko Tobias Schaefer).

Yesterday I spoke at the Smart Content Conference. In one of the morning talks, Jeff Fried of BA-Insight emphasized that analytics, semantics, and machine learning are powerful technologies, but not perfect ones. As such, he advises business innovators to be realistic about what these technologies can do.

Over the course of the day, there was much talk about the interaction of these technologies with social information, and about how these tools could be used to help people (such as content creators and call center reps) to fulfill their responsibilities more efficiently. At the end of my presentation on semantics and publishing (based largely on the Nimble report) someone asked, among other things, which analytic or semantic tools could serve to automate the creation of ongoing stories.

My answer:  this is not a task that can be 100% automated. To be fair, she may not have meant 100%, but I wanted to reinforce this point. There are tools that can help – semantic media monitoring tools for research and tracking, machine-assisted tagging tools for more thorough metadata, and many others – but these are still just tools. For optimal results, they should still be wielded by a person.

In fact, by the end of the day-long conference, I had started thinking about how to combine the content creation efforts of machines, experts, and crowds to benefit from the strengths and overcome the limitations of each. Maybe it will be the topic of one of my future conference presentations.

There’s No Semantic Web Without Content and Data

Rachel Lovinger   June 23, 2010
 

The breakdown: In this post we hear from Rachel Lovinger, reporting from the front lines of content strategy and the semantic web. She explains how the semantic web fits into the larger content strategy discipline and brings new context to the conversation.

I’ve been thinking, speaking, and writing about the semantic web for several years. It seemed like there was a natural affinity with my work as a content strategist, but for some reason the two worlds remained separate. In the past year or so I’ve seen these areas of interest finally start to converge, but sometimes I hear content strategists express concern that with the onset of the semantic web everything will be automated and there won’t be any need to do the kind of work we do.

I think that couldn’t be further from the truth, so I put together this presentation explaining what the semantic web is, how people are using it, and what it means for people practicing content strategy. I’ve presented this three times – in Paris, in Dallas, and in Chicago. It’s a lot of information to absorb, but hopefully it helps put this set of emerging technologies in perspective for people who just want to design interesting and useful experiences with digital content.

This week, at the Semantic Technology Conference in San Francisco, I’ll be getting more specific with a presentation about the findings from the Nimble Report.

Announcing the Nimble Report

Rachel Lovinger   June 1, 2010

Nimble Report

The Breakdown: Announcing Nimble, a Razorfish report on publishing in the digital age. Rachel provides a description of the report that she wrote for Razorfish’s Media & Entertainment practice, with support from research partner Semantic Universe.

Last week I mentioned being busy. One of the things that has been keeping me occupied for the past several months is writing and producing a report called Nimble. It’s aimed at content producers that are moving from traditional media distribution to digital, and finding themselves facing new challenges.

Most magazines, newspapers, TV shows, etc. have a website at this point, but that doesn’t mean they’re making the most of the digital experiences they’re creating for their audience. The report looks at three major areas of interest to content companies – how they attract and retain their audience, how they deliver content across new channels, platforms, and devices, and how they remain profitable in the new digital economy.

The key is: Content needs to be free. Not necessarily free-of-charge, but free to be accessed wherever and whenever the consumer wants it. And to truly be free, content needs to be “Nimble.” Content becomes nimble by being well-structured and having meaningful metadata.

The report discusses the types of structure that can set content free, and how this approach will change the role of the editor, the way content companies make money, the way they deliver content, and the way they attract an audience. It also includes information about emerging technologies and tools that can help digital content publishers move into this nimble world.
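To make the idea of “nimble” content a little more concrete, here’s a minimal sketch. The field names and rendering functions are hypothetical illustrations, not taken from the report: the point is simply that content stored as structured fields with meaningful metadata, rather than one opaque blob of presentation-ready markup, can be reassembled for any channel or device without re-editing the source.

```python
# Hypothetical example of "nimble" content: structured fields plus
# metadata, instead of a single blob of pre-formatted HTML.
# (All field names and functions here are illustrative only.)

article = {
    "headline": "Content Needs to Be Free",
    "summary": "Well-structured content can travel anywhere.",
    "body": "Not necessarily free-of-charge, but free to be accessed...",
    "metadata": {
        "author": "Rachel Lovinger",
        "published": "2010-06-01",
        "topics": ["publishing", "metadata", "content strategy"],
    },
}

def render_for_mobile(a):
    """Reuse the same structured content for a small screen."""
    return f"{a['headline']}\n{a['summary']}"

def render_for_feed(a):
    """...or for a syndication feed, without touching the source."""
    return f"<item><title>{a['headline']}</title></item>"

print(render_for_mobile(article))
print(render_for_feed(article))
```

The same structured article feeds both renderers; adding a new channel means adding a renderer, not re-authoring the content.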

Read or download the entire report at http://nimble.razorfish.com and follow us on Twitter (@NimbleRF) for interesting developments and updates. I’ll be presenting the report at the Semantic Technology Conference on June 23rd, and we’ll be doing a lot more with this material in the coming months.

Are you my Elvis?

Rachel Lovinger   September 9, 2009

Some president guy meets a singer dude. (image via Chronophobic)

The breakdown: How does the New York Times’ announcement that it will make its massive digital index available to the public change the landscape for reliable content topics and metadata? Rachel Lovinger explores why Wikipedia shouldn’t be our one-stop shop when it comes to significant events.

A few months ago the New York Times announced their intention to make their entire index available, in a structured digital format. The Index was first published in bound volumes in 1913 and has grown to include over 500,000 terms that have been used to tag articles going all the way back to 1851. That’s 500,000 significant people, places, things, organizations, and concepts. To be clear, the Index includes the tagged terms, not the articles themselves.

Ok, so that’s a big list of words, but why does it matter? As we move towards a more data-driven digital world, there’s a strong need for online services to have a reliable, accurate, common frame of reference that covers all the major topics, people & things of interest. Let’s say you’re a big fan of the movie Up and you want to subscribe to a service that pulls in any news, media, and conversations about the animated movie. In order to be sure that content is related to the film, and not all the many other uses of the word “up,” automated services will need to use some kind of unique identifier. This can be an alphanumeric code (like an AMG ID, licensed from All Media Guide) or a URL (like http://www.imdb.com/title/tt1049413/), but it has to be something that the service and the content providers can both share.
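The disambiguation problem described above can be sketched in a few lines. The item data below is invented for illustration; only the IMDb URL comes from the example in the text. A keyword match on “up” pulls in everything, while matching on a shared unique identifier pulls in only the items that are actually about the film:

```python
# Matching content by a shared unique identifier instead of a bare
# word. (The item data is made up for this sketch; the identifier is
# the IMDb URL mentioned above.)

UP_THE_MOVIE = "http://www.imdb.com/title/tt1049413/"

items = [
    {"text": "Up wins award for animation", "id": UP_THE_MOVIE},
    {"text": "Stock markets are up today", "id": None},
    {"text": "Review: Up soars", "id": UP_THE_MOVIE},
]

# Naive keyword matching can't tell the film from the adverb...
keyword_matches = [i for i in items if "up" in i["text"].lower()]

# ...but an identifier shared by the service and the content
# providers pinpoints exactly the items about the movie.
id_matches = [i for i in items if i["id"] == UP_THE_MOVIE]

print(len(keyword_matches))  # all three mention "up"
print(len(id_matches))       # only the two about the film
```

This is the whole bargain: the identifier is only useful if both sides of the exchange agree on it.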

Many experimental projects have tried using Wikipedia as this kind of database of knowledge. In some ways, this makes sense. If you strip out the content of the pages, you’re left with a taxonomy of nearly 3 million page names. This list of terms is well-structured, because of Wikipedia’s use of links and categories, and it covers a huge body of human knowledge.

But one could argue that Wikipedia has an unhealthy emphasis on pop culture and internet memes. How valuable are those 3 million page names when they include a huge number of topics like The Hampster Dance (an animation of rodents dancing), Chrismukkah (a blending of Christmas and Hanukkah, popularized by a TV show called The O.C.), Brfxxccxxmnpcccclllmmnprxvclmnckssqlbb11116 (a name given to a Swedish child born in 1991), More cowbell (a popular phrase from a Saturday Night Live sketch starring Christopher Walken), and nearly 500 pages devoted to the creatures of Pokémon (a media franchise about battling monsters)? Suppose you mention Elvis: does Wikipedia know if you mean Elvis Presley, Élvis Alves Pereira, the TV miniseries, the album, the film, the TV special, the text editor, the comic strip, the character in the movie Cars, the pinball machine, the helicopter, or the other album?

The New York Times Index would offer the Web of Data another option for a structured, digital, open representation of human knowledge. One that comes from a trusted brand that’s known for its depth and breadth of coverage. Coverage that’s been researched and fact-checked by professionals.

IA Summit Goes Semantic

Rachel Lovinger   April 2, 2009
Image via Marius Watz

This was my first year attending the IA Summit. I’m not an Information Architect, so I had no idea what I would find there, or how much of it would be applicable to me. I went because I was participating in the pre-conference Content Strategy Consortium (agenda here), and I decided to stay for the entire conference because several of my coworkers were speaking and the program looked promising. (Note: I’ve tried, where possible, to link to the presentations online. Some of these talks may be a little hard to understand just from the slides, but at some point Boxes & Arrows will post podcasts of the audio.)

There were many talks on content-related issues. Andrew Hinton led a workshop and gave a talk about the importance of Context. Dan Brown, whose talks I missed because the schedule was too packed with amazing things, spoke about data-driven sites in Modeling Concepts and Designing Rules. From the comments coming through on Twitter, I got the sense that a lot of Content Strategy type issues were addressed. Colleen Jones, one of my fellow consortiumists, gave a very practical and entertaining talk about Usable, Influential Content.

But the thing I was most excited about was the prevalence of talks about the Semantic Web and what it means for the future of IA. I’ve been trying to address this same issue in my own work for several years now, so I was looking forward to seeing what the IA community would bring to the discussion. Here’s a brief rundown of the talks I saw on this subject (with more detailed accounts of each talk over on my own blog, Meaningful Data):

· In A Fundamental Disruption, Peter Sweeney and Robert Barlow-Busch of Primal Fusion posed the question “How do IAs design for information that’s self-organizing?”

· Chiara Fox set out to introduce the audience to The Semantic Web: What IAs Need to Know About Web 3.0

· Richard Ziade and Tim Meaney, of arc90, focused on the data-sharing aspect of the Semantic Web in their talk, Discovering & Mining the Everyday

· In The Facets of Faceting, Kristoffer Dykon and Helle Hoem presented some case studies on taxonomy and ontology structures used for navigation

· Chris Thorne, of the BBC, focused on the architecture of URIs in his talk, Ubiquitous Information Architecture: Building for change and web 3.0

While I’m excited that the topic was so pervasive, I was a little disappointed that the level of discussion has not advanced very far beyond “What is the Semantic Web?” We’re talking about the questions that need to be asked, but not about realistic, practical answers. Hopefully, now that people are being exposed to these ideas at a rapid rate, it won’t be long before IAs and Content Strategists put their heads together and start coming up with some elegant approaches to designing semantic solutions that address user and business needs.
